GAO_GAO-01-786T
Background

Annual vaccination is the primary method for preventing influenza, which is associated with serious illness, hospitalizations, and even deaths among people at high risk for complications of the disease, such as pneumonia. Senior citizens are particularly at risk, as are individuals with chronic medical conditions. The Centers for Disease Control and Prevention (CDC) estimates that influenza epidemics contribute to approximately 20,000 deaths and 110,000 hospitalizations in the United States each year. Here in Oregon, and throughout the nation, influenza and pneumonia rank as the fifth leading cause of death among persons 65 years of age and older. Producing the influenza vaccine is a complex process that involves growing viruses in millions of fertilized chicken eggs. This process, which requires several steps, generally takes at least 6 to 8 months, from January through August each year. Each year’s vaccine is made up of three different strains of influenza viruses, and, typically, one or two of the strains are changed each year to better protect against the strains that are likely to be circulating during the coming flu season. The Food and Drug Administration (FDA) and its advisory committee decide which strains to include based on CDC surveillance data, and FDA also licenses and regulates the manufacturers that produce the vaccine. Only three manufacturers—two in the United States and one in the United Kingdom—produced the vaccine used in the United States during the 2000-01 flu season. Like other pharmaceutical products, flu vaccine is sold to thousands of purchasers by manufacturers, numerous medical supply distributors, and other resellers such as pharmacies. These purchasers provide flu shots at physicians’ offices, public health clinics, nursing homes, and less traditional locations such as workplaces and various retail outlets. CDC has recommended October through mid-November as the best time to receive a flu shot because the flu season generally peaks from December through early March. However, if flu activity peaks late, as it has in 10 of the past 19 years, vaccination in January or later can still be beneficial. To address our study questions, we interviewed officials from the Department of Health and Human Services (HHS), including CDC, FDA, and the Health Care Financing Administration (HCFA), as well as flu vaccine manufacturers, distributors, physician associations, flu shot providers, and others. We surveyed 58 physician group practices nationwide to learn about their experiences and interviewed health department officials in all 50 states.

Manufacturing Problems Caused Temporary Shortages and Spikes in Price

Although the eventual supply of vaccine in the 2000-01 flu season was about the same as the previous year’s—about 78 million doses—production delays of about 6 to 8 weeks limited the amount that was available during the peak vaccination period. During the period when supply was limited and demand was higher, providers who wanted to purchase vaccine from distributors with available supplies often faced rapidly escalating prices. By December, as vaccine supply increased and demand dropped, prices declined.

Most Vaccine Was Not Ready During Period of Peak Demand

Last fall, fewer than 28 million doses were available by the end of October, compared with more than 70 million doses available by that date in 1999. Two main factors contributed to last year’s delay.
The first was that two manufacturers had unanticipated problems growing one of the two new influenza strains introduced into the vaccine for the 2000-01 flu season. Because manufacturers must produce a vaccine that includes all three strains selected for the year, delivery was delayed until sufficient quantities of this difficult strain could be produced. The second factor was that two of the four manufacturers producing vaccine the previous season shut down parts of their facilities because of FDA concerns about compliance with good manufacturing practices, including issues related to safety and quality control. One of these manufacturers reopened its facilities and eventually shipped its vaccine, although much later than usual. The other, which had been expected to produce 12 to 14 million doses, announced in September 2000 that it would cease production altogether and, as a result, supplied no vaccine. These vaccine production and compliance problems did not affect every manufacturer to the same degree. Consequently, when a purchaser received vaccine depended to some extent on which manufacturer’s vaccine it had ordered. Purchasers that contracted only with the late-shipping manufacturers were in particular difficulty. For example, health departments and other public entities in 36 states, including Oregon, banded together under a group purchasing contract and ordered nearly 2.6 million doses from the manufacturer that, as it turned out, experienced the greatest delays from production difficulties. Some of these public entities, which ordered vaccine for high-risk people in nursing homes or clinics, did not receive most of their vaccine until December, according to state health officials.

Limited Availability During Peak Demand Created Temporary Price Spikes

Because supply was limited during the usual vaccination period, distributors and others who had supplies of the vaccine had the ability—and the economic incentive—to sell their supplies to the highest bidders rather than filling lower-priced orders they had already received. Most of the physician groups and state health departments we contacted reported that they waited for delivery of their original lower-priced orders, which often arrived in several partial shipments from October through December or later. Those who purchased vaccine in the fall found themselves paying much higher prices. For example, one physicians’ practice in our survey ordered flu vaccine from a supplier in April 2000 at $2.87 per dose. When none of that vaccine had arrived by November 1, the practice placed three smaller orders in November with a different supplier at the escalating prices of $8.80, $10.80, and $12.80 per dose. On December 1, the practice ordered more vaccine from a third supplier at $10.80 per dose. The four more expensive orders were delivered immediately, before any vaccine had been received from the original April order.

When More Vaccine Became Available, Demand Had Already Dropped

Demand for influenza vaccine dropped as additional vaccine became available after the prime period for vaccinations had passed. In all, roughly one-third of the total distribution was delivered in December or later. Part of this additional supply resulted from actions taken by CDC in September, when it appeared there could be a shortfall in production. At that point, CDC contracted with one of the manufacturers to extend production into late December for 9 million additional doses.
Despite efforts by CDC and others to encourage people to seek flu shots later in the season, providers still reported a drop in demand in December. The unusually light flu season also probably contributed to the lack of interest. Had a flu epidemic hit in the fall or early winter, the demand for influenza vaccine would likely have remained high. As a result of the waning demand, manufacturers and distributors reported having more vaccine than they could sell. Manufacturers reported shipping about 9 percent less than in 1999, and more than 7 million of the 9 million additional doses produced under the CDC contract were never shipped at all. In addition, some physicians’ offices, employee health clinics, and other organizations that administered flu shots reported having unused doses in December and later.

Distribution of Vaccine Does Not Ensure Priority to High-Risk Individuals

In a typical year, there is enough vaccine available in the fall to give a flu shot to anyone who wants one. However, when the supply is not sufficient, there is no mechanism currently in place to establish priorities and distribute flu vaccine first to high-risk individuals. Indeed, last year, mass immunizations in nonmedical settings, normally undertaken to promote vaccinations, created considerable controversy as healthy persons received vaccine in advance of those at high risk. In addition, manufacturers and distributors that tried to prioritize their vaccine shipments encountered difficulties doing so.

Availability of Vaccine for Mass Immunization Campaigns Created Controversy

Flu shots are generally widely available in a variety of settings, ranging from the usual physicians’ offices, clinics, and hospitals to retail outlets such as drugstores and grocery stores, workplaces, and other convenience locations. Millions of individuals receive flu shots through mass immunization campaigns in nonmedical settings, where organizations, such as visiting nurse agencies under contract, administer the vaccine. The widespread availability of flu shots may help increase immunization rates overall, but it generally does not lend itself to targeting vaccine to high-priority groups. The timing of some of the mass immunization campaigns last fall generated a great deal of controversy. Some physicians and public health officials were upset when their local grocery stores, for example, were offering flu shots to everyone when they, the health care providers, were unable to obtain vaccine for their high-risk patients. Examples of these situations include the following:

- A radio station in Colorado sponsored a flu shot and a beer for $10 at a local restaurant and bar—at the same time that the public health department and the community health center did not have enough vaccine.
- One grocery store chain in Minnesota participated in a promotion offering a discounted flu shot for anyone who brought in three soup can labels.
- Flu shots were available for purchase to all fans attending a professional football game.

CDC took some steps to try to manage the anticipated vaccine delay by issuing recommendations for vaccinating high-risk individuals first. In July 2000, CDC recommended that mass immunization campaigns, such as those open to the public or to employee groups, be delayed until early to mid-November.
CDC issued more explicit voluntary guidelines in October 2000, which stated that vaccination efforts should be focused on persons aged 65 and older, pregnant women, those with chronic health conditions that place them at high risk, and health care workers. The October guidelines also stated that while efforts should be made to increase participation in mass immunization campaigns by high-risk persons and their household contacts, other persons should not be turned away. Some organizations that conducted mass immunizations said they generally did not screen individuals who came for flu shots in terms of their risk levels. Some said they tried to target high-risk individuals and provided information on who was at high risk, but they let each person decide whether to receive a shot. Their perspective was that the burden lies with the individual to determine his or her own level of risk, not with the provider. Moreover, they said that the convenience locations provide an important option for high-risk individuals as well as others. Health care providers in both traditional and nontraditional settings told us that it is difficult to turn someone away when he or she requests a flu shot.

Manufacturers and Distributors Reported Difficulty Determining How to Get Vaccine to High-Risk Individuals

The manufacturers and distributors we interviewed reported that it was difficult to determine which of their purchasers should receive priority vaccine deliveries in response to CDC’s recommendations to vaccinate high-risk individuals first. They did not have plans in place to prioritize deliveries to target vaccine to high-risk individuals because there generally had been enough vaccine in previous years and thus there had been little practical need for this type of prioritization. When they did try to identify purchasers serving high-risk individuals, the manufacturers and distributors often found they lacked sufficient information about their customers to make such decisions, and they also were aware that all types of vaccine providers were likely to serve at least some high-risk individuals. As a result, manufacturers reported using various approaches in distributing their vaccine, including making partial shipments to all purchasers as a way to help ensure that more high-risk persons could be vaccinated. Others made efforts to ship vaccine first to nursing homes, where high-risk individuals could be identified, and to physicians’ offices. All of the manufacturers and distributors we talked to said that once they distributed the vaccine it would be up to the purchasers and health care providers to target the available vaccine to high-risk groups. Immunization statistics are not yet available to show how successful these ad hoc distribution strategies may have been in reaching high-risk groups, but there may be cause for concern. Some state health officials reported that nursing homes often purchase their flu vaccine from local pharmacies, and some distributors considered pharmacies to be lower priority for deliveries. In addition, many physicians reported that they felt they did not receive priority for vaccine delivery, even though nearly two-thirds of seniors—one of the largest high-risk groups—generally get their flu shots in medical offices. The experience of the 58 physicians’ practices we surveyed seemed consistent with this reported lack of priority: as a group, they received their shipments at about the same delayed rate that vaccine was generally available on the market.
Additional Actions Needed to Prepare for Future Vaccine Delays and Shortages

Ensuring an adequate and timely supply of vaccine, already a difficult task given the complex manufacturing process, has become even more difficult as the number of manufacturers has decreased. Now, a production delay or shortfall experienced by even one of the three remaining manufacturers can significantly affect overall vaccine availability. Looking back, it was fortunate that the 2000-01 flu season arrived late and was less severe than normal, because the vaccine was not available last October and November to prepare for it. Had the flu hit early with normal or greater severity, the consequences could have been serious for the millions of Americans who were unable to get their flu shots on time. This raises the question of what more can be done to better prepare for possible vaccine delays and shortages in the future. We need to recognize that flu vaccine production and distribution are private-sector responsibilities, and, as such, options are somewhat limited. HHS has no authority to directly control flu vaccine production and distribution, beyond FDA’s role in regulating good manufacturing practices and CDC’s role in encouraging appropriate public health actions. Working within these constraints, HHS undertook several initiatives in response to the problems experienced during the 2000-01 flu season. For example, the National Institutes of Health, working with FDA and CDC, conducted a clinical trial on the feasibility of using smaller doses of vaccine for healthy adults. If smaller doses offer acceptable levels of protection, this would be one way to stretch limited vaccine supplies. Final results from this work are expected in fall 2001. In addition, for the upcoming flu season CDC and its advisory committee extended the optimal period for getting a flu shot until the end of November, to encourage more people to get shots later in the season. HHS is also working to complete a plan for a national response to a severe worldwide influenza outbreak, called a pandemic. While the plan itself would likely be applied only in cases of public health emergencies, we believe that the advance preparations by manufacturers, distributors, physicians, and public health officials to implement the plan could provide a foundation to assist in dealing with less severe problems, such as those experienced last year. We believe it would be helpful for HHS agencies to take additional actions in three areas. Progress in these areas could prove valuable in managing future flu vaccine disruptions and targeting vaccine to high-risk individuals. First, because vaccine production and distribution are private-sector responsibilities, CDC needs to work with a wide range of private entities to prepare for potential problems in the future. CDC can take an ongoing leadership role in organizing and supporting efforts to bring together all interested parties to formulate voluntary guidelines for vaccine distribution in the event of a future vaccine delay or shortage. In March 2001, CDC co-sponsored a meeting with the American Medical Association that brought together public health officials, vaccine manufacturers, distributors, physicians, and other providers to discuss flu vaccine distribution, including ways to target vaccine to high-risk groups in the event of a future supply disruption.
This meeting was a good first step, and continued efforts should be made to achieve consensus among the public- and private-sector entities involved in vaccine production, distribution, and administration.
Until the 2000-01 flu season, the production and distribution of influenza vaccine generally went smoothly. Last year, however, many people reported that they wanted flu shots but could not get them. In addition, physicians and public health departments could not provide shots to high-risk patients in their medical offices and clinics because they had not received the vaccine they ordered many months in advance, or because they were being asked to pay much higher prices for vaccine in order to get it right away. At the same time, there were reports that providers in other locations, even grocery stores and restaurants, were offering flu shots to everyone—including younger, healthier people who were not at high risk. This testimony discusses the delays in production, distribution, and pricing of the 2000-01 flu vaccine. GAO found that manufacturing difficulties during the 2000-01 flu season resulted in an overall delay of about 6 to 8 weeks in shipping vaccine to most customers. This delay created an initial shortage and temporary price spikes. There is no system in place to ensure that high-risk people have priority for receiving flu shots when supply is short. Because vaccine purchases are made mainly in the private sector, federal actions to help mitigate any adverse effects of vaccine delays or shortages need to rely to a great extent on collaboration between the public and private sectors.
GAO_GAO-16-551
Background Telework Enhancement Act of 2010 and Agency and OPM Roles and Responsibilities In drafting the Telework Enhancement Act of 2010, Congress recognized that telework was an important tool and that legislation was needed to help agencies overcome their resistance to telework. The act established a framework of requirements for executive agencies to meet in implementing telework. These requirements include notifying all employees of their eligibility to telework and establishing agency telework participation goals. The act also requires each executive agency to designate a telework managing officer (TMO) who develops telework policy, serves as an advisor for agency leadership, and is a resource for managers and employees. The act assigns OPM major leadership responsibilities including (1) providing policy and policy guidance for telework; (2) assisting each agency in establishing appropriate qualitative and quantitative measures and teleworking goals; (3) identifying best practices and recommendations for the federal government and reviewing the outcomes associated with an increase in telework, including effects on energy consumption, job creation and availability, urban transportation patterns, and the ability to anticipate the dispersal of work during periods of emergency; and (4) submitting an annual report to Congress addressing the telework program of each executive agency that includes an assessment of each agency’s progress in meeting outcome goals that the agency may have established, such as the impact of telework on recruitment and retention and energy use, among others. The act also requires each executive agency to submit an annual report on the agency’s efforts to promote telework to the Chair and Vice Chair of the Chief Human Capital Officers (CHCO) Council. In addition, the act requires OPM to consult with the CHCO Council in submitting its annual report to Congress addressing the telework programs of each executive agency. The CHCO Council receives updates from OPM on agencies’ annual telework reports and discusses their implications and promising practices. Telework was discussed at the February 2016 meeting. Telework Participation As shown in figure 1, three key areas of telework participation have increased, according to OPM’s 2014 annual report. OPM reported that, from 2011 to 2012, the number of employees eligible for telework increased from 684,589 to 1,020,034 (an increase of about 49 percent), and the number of employees that had telework agreements increased from 144,851 to 267,227 (an 84 percent increase). While we have previously reported on data limitations related to OPM’s telework report, OPM’s report provides useful context about the status of telework in the federal government and is the most comprehensive source of information on telework in the executive branch. Selected Agencies Identified Benefits and Costs Associated with Telework, but Generally Lacked Supporting Data Figure 2 shows telework benefits we identified associated with federal agency telework programs, based on a literature review and the experiences of the six selected agencies whose telework programs we examined. 
These benefits included reduced employee absences, improved work/life balance, improved recruitment and retention, maintaining continuity of operations (COOP) during designated emergencies or inclement weather, reduced commuting costs/transit subsidies, increased productivity, reduced real estate costs, reduced utilities, and positive environmental impacts, such as reduced greenhouse gas emissions.

All Selected Agencies Identified Benefits

All six selected agencies identified benefits associated with their telework programs. Specifically, all six selected agencies identified human capital (improved recruitment/retention), improved work/life balance, and increased productivity, and five of them identified reduced utilities, reduced commuting costs/transit subsidies, and reduced employee absences as benefits (see table 1). For example, USDA officials reported that the agency highlights telework as an agency benefit during hiring events to recruit and attract veterans and persons with disabilities. The officials also said they have been able to retain staff who, because they can telework, choose to relocate from their established duty stations and continue working at the agency rather than retire. In addition, FDIC officials reported that their telework program contributes to improved work/life balance for their employees due to reduced commuting time. Four of the six agencies identified COOP, reduced real estate use, and positive environmental impact as benefits of their telework programs (see table 1). For example, an FDIC official said that FDIC was able to reduce the amount of office space it leased because eligible teleworkers opted to relinquish their dedicated office space and telework from home or at an approved alternate work site when not working at an insured depository institution.

Government-wide Telework-Related Outcome Goals Include Benefits That We Identified

In 2011, OPM also began collecting data on agency progress in setting and achieving outcome goals, including telework benefits that we identified in our inventory, such as employee recruitment and retention. The number of agencies government-wide that set and assessed the progress of their telework-related outcome goals substantially decreased between 2012 and 2013, according to OPM’s 2014 annual report (see figure 3). Fewer agencies reported setting a goal for emergency preparedness—83 agencies in 2012 compared to 41 in 2013. Likewise, fewer agencies reported setting employee recruitment as a telework-related outcome goal—62 agencies in 2012 compared to 26 in 2013. OPM officials noted in the 2014 report that agencies set ambitious telework-related outcome goals in the early implementation of the act and that, as agency telework programs matured, agencies began to identify and set fewer telework-related outcome goals to track and assess progress.

Costs Associated with Agencies’ Telework Programs

Figure 4 shows the costs associated with telework that we identified for federal agency telework programs based on a literature review and the experiences of the six selected agencies in our review. Unlike benefits, OPM’s annual report does not include information on costs associated with agency telework programs. Agencies may incur one-time costs for implementing their telework program and ongoing costs to maintain their telework program. One-time costs may include program planning, initial information technology (IT) setup, or employee outfitting costs.
Ongoing costs may include personnel costs associated with required training and administrative costs of staff managing the telework program. Five of the six selected agencies identified ongoing costs associated with their telework programs, including personnel and technology related costs. The cost of personnel was the most frequently identified ongoing cost associated with these five agency telework programs. Personnel costs can include salaries for telework coordinators or employee training costs. For example, EPA identified employee training as an ongoing personnel cost because all employees are required to participate in telework training to remain telework eligible. Managers who supervise teleworkers also receive training. In addition, USDA officials reported ongoing costs to purchase additional remote access software to accommodate annual increases in teleworkers to the network and maintain the required licenses annually. MSPB did not identify any costs associated with its telework program. None of the six selected agencies identified one-time costs associated with implementing their telework programs. Selected Agencies Had Little Supporting Data for Benefits and Costs The act does not require agencies to provide supporting data to OPM for benefits or costs incurred. We defined supporting data as having both a data source and a corresponding methodology. Supporting data can be quantitative or qualitative. For example, emissions reductions connected to telework can be measured in metric tons of carbon emissions avoided. Cost savings can include reduced spending on transit subsidies or utility bills. Qualitative support for benefits might include responses from survey questions or results from focus groups indicating that telework has improved work/life balance. Selected Agencies Had Supporting Data for Some Benefits Supporting data for benefits from all of the selected agencies are shown in figure 5. Specifically, all of the selected agencies had supporting data for 1 to 7 of the benefits that they identified. DOT had supporting data for 1 of the 10 benefits (reducing environmental impact) that it identified. The agency reported avoiding approximately 21.7 million kg of carbon dioxide emissions in fiscal year 2014, which is equal to 1.7 kg on average per employee, per day. EPA had supporting data for 1 of the 9 benefits it identified: reduced environmental impact. EPA reported avoiding 10,791 telework-related metric tons of carbon dioxide emissions in 2014 as compared to 2011. FDIC had supporting data for 4 of the 8 benefits it identified. FDIC conducted a telework survey of managers and employees in 2008 which suggests that telework contributed to retaining employees, work/life balance, and increased productivity. GSA had supporting data for 5 of the 12 benefits it identified: work/life balance, transit subsidies, environmental impact, reduced paid administrative leave, and increased job satisfaction/employee morale. To calculate costs savings for reduced transit subsidies from teleworking, GSA officials obtained a transit subsidy participation list from DOT for fiscal year 2013 through fiscal year 2015 and compared it against reported telework hours for the same period to calculate a cost savings of $926,872 in 2015 based on telework-related reduced use of transit subsidies in comparison with 2013. 
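The comparison GSA described lends itself to a simple year-over-year calculation: restrict attention to employees on the transit-subsidy participation list, then net the later, higher-telework year's subsidy outlays against the baseline year. The sketch below illustrates that arithmetic; the function name, data layout, and dollar amounts are hypothetical illustrations for this report's discussion, not GSA's actual tool, records, or results.

```python
# Illustrative sketch of a transit-subsidy savings comparison, loosely modeled
# on the approach GSA officials described: compare subsidy outlays for the same
# participants in a baseline year and a later year with more telework hours.
# All identifiers and figures are hypothetical, not GSA data.

def transit_subsidy_savings(baseline_outlays, current_outlays):
    """Return per-employee and total reductions in transit-subsidy spending.

    Both arguments map an employee ID to annual subsidy dollars paid. Only
    employees present in both years are compared, mirroring the idea of
    checking a participation list against reported telework hours.
    """
    shared = baseline_outlays.keys() & current_outlays.keys()
    per_employee = {
        emp: baseline_outlays[emp] - current_outlays[emp] for emp in shared
    }
    return per_employee, sum(per_employee.values())

if __name__ == "__main__":
    fy2013 = {"A101": 1_200.0, "A102": 980.0, "A103": 1_150.0}  # hypothetical
    fy2015 = {"A101": 640.0, "A102": 910.0, "A103": 300.0}      # hypothetical
    per_emp, total = transit_subsidy_savings(fy2013, fy2015)
    print(f"Estimated subsidy savings attributable to telework: ${total:,.2f}")
```

In practice, an agency would substitute its own participation list and payment records and would still need to judge how much of the reduction is attributable to telework rather than to other factors.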
GSA officials also reported that from fiscal year 2013 through fiscal year 2015, employees used 45,426 fewer hours of paid administrative leave during worksite closures and they teleworked about 202,886 hours more. According to GSA officials, these figures represent its emphasis on enabling and requiring employees to telework when agency worksites are closed, which previously would have resulted in the use of only paid administrative leave. MSPB officials had supporting data for 1 of the 5 benefits identified. An MSPB official cited employee satisfaction data related to the impact of telework on work/life balance from the Federal Employee Viewpoint Survey. USDA had supporting data and corresponding methodologies for 7 of the 10 benefits it identified: employee retention, work/life balance, transit subsidies, utilities, real estate, environmental impact and employee satisfaction. USDA provided cost savings from 2011 to 2014 information for 4 of these benefits, as shown in figure 6. For example, USDA identified at least 32 cases of employees who accepted full-time telework arrangements in place of retirement, allowing the agency to retain experienced employees and saving an estimated $1.5 million from reduced or eliminated commuter costs and reduced salaries due to changes in locality pay, among other things. USDA used information from multiple sources to quantify benefits and savings attributable to telework. For example, its Office of Operations provided data for real estate and utilities cost savings and National Finance Center data were used for transit subsidy savings. Officials from DOT, GSA, and MSPB reported various reasons that they did not have supporting data for some of the benefits that they identified. First, DOT and MSPB officials said that they do not track data on some identified benefits. For example, DOT officials said that they do not have tracking systems or data sources to calculate specific COOP cost savings associated with teleworking. Second, DOT and GSA officials said that in many cases, the telework benefits are not distinguishable from those of other activities. For example, DOT officials said that a number of programs contribute toward reaching goals and it is difficult to ascertain the extent to which telework contributes to their accomplishment. OPM Is Collecting Less Information on Cost Savings OPM did not include questions about cost savings associated with telework in the 2014-2015 data call on telework. OPM had asked questions about costs savings in its 2011, 2012, and 2013 telework data calls and added additional questions on the amount of cost savings and the methodology for calculating the savings in the 2013 telework data call in response to our recommendation. OPM officials told us that they had asked specific questions on cost savings in previous telework data calls as part of their effort to help agencies set and evaluate their goals. OPM officials said they streamlined the 2014-2015 data call to focus on the requirements of the act, which does not specifically require OPM to include questions on overall cost savings associated with telework programs. In addition, they said that the survey still provides opportunities for agencies to describe cost savings as agencies were asked to describe their progress in achieving each outcome goal listed in the act as well as any other outcomes goals, including the data and methodology used to assess progress. 
OPM officials said they believe that they were successful in their 3 year effort to get agencies to set outcome goals, including cost savings, and to evaluate the success in meeting their goals. In 2013, about 20 percent of agencies (17 of 89) reported achieving 29 different instances of cost savings from telework which included rent for office space, utilities, human capital (such as using telework for retention), reduced employee absences, and parking and/or transportation subsidies, according to OPM’s 2014 annual report and our analysis of the 2013 telework data call results. The report also stated that 4 agencies reported a corresponding dollar savings amount. For example, the Election Assistance Commission reported yearly rental savings of $750,000 and that it obtained the data from an existing agency real estate report. In addition, IRS reported telework enabled it to close 22 small offices and save $410,539. Each of the 4 agencies that reported a dollar savings amount also reported a corresponding methodology for determining the costs. However, from 2012 to 2013, there was a decrease in the number of cost savings reported associated with federal telework programs and the number of agencies that were planning to track cost savings, according to the 2014 OPM report. Agencies reported fewer examples of cost savings (from 66 to 29) in 2013 than in 2012 and the number of agencies reporting that planning was underway to assess cost savings decreased from 31 to 18, according to the 2014 OPM report and our analysis of the 2013 telework data call results. About 60 percent of agencies (54 of 89) reported that they were unable to track any cost savings, according to OPM’s 2014 annual report. The report noted that agencies have had difficulty establishing and linking cost savings directly to telework programs. Establishing cost savings through telework remains a work in progress and agencies also often do not track such investments, according to the report. None of the selected agencies’ officials said that they were planning to collect additional cost savings information. For example, EPA officials said that the agency might collect additional data when it identified new telework goals and measures, and USDA officials said that no additional data were necessary. The act requires an assessment of each agency’s progress in achieving established telework-related outcome goals. Outcome goals such as emergency preparedness and reduced energy use reflect the benefits agencies can achieve and, in some cases, the cost savings that relate to reduced real estate or utilities paid. We have previously found that federal agencies should establish measurable telework program goals and processes, procedures, or a tracking system to collect data to evaluate the telework program, and that complete and reliable information is vital to assessing effectiveness. Federal internal control standards also suggest that to ensure that management’s objectives are carried out, activities need to be established to monitor performance measures and indicators, including validating the integrity of the measures and indicators. However, with no information being required on cost savings as a part of OPM’s data call and agencies’ plans to reduce collection of this data in the future, agencies will have less information to assess the value of their telework programs than they currently do. In the current fiscal climate, cost savings is an important measure of the success of telework programs, according to OPM’s 2014 report. 
Two Selected Agencies Had Supporting Data for Some Costs Incurred

Of the six selected agencies we reviewed, five identified costs incurred, but only two—FDIC and GSA—provided supporting data, as shown in figure 7. For example, FDIC officials reported that they had ongoing costs from financial reimbursements to encourage employees to elect the full-time telework option and opt out of office space in field offices during field office lease expirations. The option allows FDIC to rent less real estate. FDIC officials stated that they calculated these costs by multiplying the dollar value of the one-time reimbursement for costs associated with equipment not otherwise provided by FDIC (up to $500) or the ongoing outfitting cost payment (up to $480 annually for costs associated with multiple phone lines and high-speed Internet) by the number of employees receiving it. GSA officials reported total ongoing salary costs of about $245,290 for 2 percent of the salaries of GSA’s 34 telework coordinators and 20 percent of the salary of its agency telework coordinator. GSA officials stated that they multiplied the average full-time equivalent costs by the percentage of time used by each official in a coordinator role. GSA officials also reported that 14,300 employees have completed mandatory telework training since 2011 at a salary cost of about $62 per employee, which equals about $884,600 in salary costs associated with training. Officials from DOT, EPA, GSA, FDIC, and MSPB also reported various reasons why they did not have supporting data for some costs incurred. DOT, GSA, and MSPB officials stated that telework is part of normal business operations and they cannot easily or meaningfully distinguish telework costs incurred from routine business costs. According to EPA and MSPB officials, both agencies’ telework programs evolved from bargaining units’ requests for flexible work schedules over 15 years ago. Therefore, associated costs incurred have been considered normal operating costs in some cases and thus not tracked as telework-related costs incurred. FDIC officials said that they were unable to provide information on start-up costs for their agency’s telework program, which began in 2003, because they were past the mandatory retention period for any records. In addition, according to the 2014 OPM report, obtaining data to calculate energy use and environmental impact is challenging for agencies and often may require cross-agency collaboration or data that are not currently being collected.

Agencies Identified and Partially Addressed Barriers to Telework and Risks

In the 2011 and 2013 OPM data calls, agencies reported on barriers to telework participation and steps taken to address the barriers. Our analysis shows that, government-wide, agencies identified fewer barriers to telework participation in 2013 than in 2011, as shown in figure 8. However, management resistance remains the most frequently reported barrier to telework, according to OPM. Among the six selected agencies we reviewed, DOT, EPA, MSPB, and USDA reported certain barriers to telework participation in the 2014 OPM report. DOT reported barriers, including IT security and funding issues, management resistance (for example, managers/supervisors who may not be fully comfortable managing employees working offsite), and organizational cultures. EPA reported that managers and supervisors have been uncomfortable with telework. MSPB also reported that some managers were not comfortable approving telework agreements for some job series.
Finally, USDA reported barriers including IT infrastructure and secure remote access, employee desire and ability to use telework tools, and budgetary limitations related to purchasing equipment to support telework consistently across the department. OPM’s 2014 annual report also stated that agencies are taking steps to overcome barriers to telework participation (see figure 9). Among the selected agencies, DOT, EPA, GSA, MSPB, and USDA reported addressing barriers in OPM’s 2014 report. For example, DOT reported providing training for employees and managers, marketing for telework via intranet, all hands meetings, memorandums to employees, and including a performance standard in SES performance plans in support of telework. GSA reported that its mobility and telework policy addresses unfamiliarity with telework or hesitation to participate. Officials from two of the six selected agencies (DOT and USDA) also identified potential risks associated with their telework programs. Risk assessment is the identification and analysis of relevant risks associated with achieving objectives, deciding how to manage the risk, and identifying what corresponding actions should be taken. DOT officials identified risks related to technology, IT security and IT funding, management resistance, and organizational culture. USDA officials identified risks related to management resistance and technology. DOT and USDA officials stated that they had taken steps to manage risks associated with their telework programs. EPA and GSA officials also identified activities related to risk mitigation. Officials from DOT, EPA, GSA, and USDA stated that they provided telework training for managers and employees. For example, USDA officials reported that the agency requires supervisors to complete all telework training. DOT and GSA officials stated that they provided clear messaging and information. Specifically, DOT officials said that they internally market and encourage telework as a means to continue operations (e.g., when options for “unscheduled telework” have been announced by OPM) and provide telework policy guidance to employees, supervisors, and managers on an ongoing basis. GSA officials also said that ongoing communication across GSA supports employees’ understanding of the flexibilities available to them, and their responsibilities in regard to telework participation. USDA officials said that they addressed management resistance by incorporating telework into managers’ performance plans to make managers accountable for providing employees the necessary training and equipment for effective implementation of telework. DOT and USDA stated that they addressed technology issues relating to telework. For example, DOT reported that it periodically updates its computers and remote access technologies to contend with emerging data security threats. Officials from FDIC and GSA also stated that they identified risks related to fraud, waste and/or abuse associated with their telework programs, while DOT, EPA, MSPB, and USDA officials did not. In the initial stages of implementing its telework program, FDIC identified potential risks associated with its telework program that included a possible decline in productivity, access to sensitive information off-site, and time and attendance concerns, according to FDIC officials. FDIC has taken steps to mitigate risks and help prevent fraud, waste, and abuse through a range of control activities. 
Among other actions, it issued a directive on telework that clearly delineates program guidelines and responsibilities, provided training on telework and information security to staff, and made telework participation subject to the employee/supervisor agreement, program participation eligibility, and adherence to the telework policy. Inspectors general at some of the selected agencies have noted fraud and other risks in those agencies’ telework programs. GSA officials noted six Office of the Inspector General (OIG) recommendations from a 2015 audit that related to (1) tracking telework agreements, (2) recording duty stations and using correct locality pay for all virtual employees, (3) timekeeping for teleworkers, (4) controls over transit subsidies, (5) completion of required telework training, and (6) ensuring GSA telework training addressed requirements of its telework policy. GSA officials reported they took actions to address each recommendation that included (1) implementing a tracking tool for telework agreements and updating its policy, (2) verifying official duty stations and adjusting pay appropriately, (3) enhancing timekeeping controls for teleworking, (4) reviewing transit subsidies and working with DOT to transition to an automated transit subsidy application, (5) developing tracking for telework training completion, and (6) developing updated telework training. In 2014 and 2015, EPA’s OIG reported on four cases of time and attendance fraud involving telework at the agency. First, the OIG investigated an EPA manager who entered and approved fraudulent time and attendance records for an employee who exclusively teleworked for several years, which cost the government more than $500,000. Second, the OIG found evidence that a senior executive knew about but took no action regarding an arrangement between a supervisor and employee during which the employee had been teleworking for more than 20 years with very little substantive work produced. Third, an executive prepared and approved false telework time and attendance records for an employee who was suffering from a debilitating disease and was not working. Fourth, the OIG reported that EPA fired an employee for misconduct that included falsely claiming telework hours on numerous occasions. While the OIG noted that EPA has begun to change its time and attendance policies and practices, it identified a culture of complacency among some EPA supervisors regarding time and attendance controls, and taking prompt action against employees, and the OIG recommended that the agency take measures to communicate its commitment to internal controls. Although both DOT and EPA reported that they did not identify telework risks associated with fraud, waste, or abuse, both reported taking actions to avoid these risks. While EPA did not identify telework risks associated with fraud, waste, or abuse in response to our questions, EPA officials did describe actions the agency had taken to avoid such risks. EPA adopted a new policy that requires employees to complete telework training prior to being approved to telework, to annually recertify their telework agreements, and to document their time and attendance telework status. In the event of fraud, waste, or abuse, the policy allows management to modify or terminate a telework agreement at any time. In addition, the Census and Commerce OIGs also reported on cases of abuse and waste involving, but not limited to, telework. 
First, in 2015, the Census OIG reported that employees in its Census Hiring and Employment Check office engaged in time and attendance abuse—some of which involved employees who claimed to telework a full day with evidence showing they performed little or no work at all. Second, the Department of Commerce OIG uncovered waste in 2014 at the U.S. Patent and Trademark Office’s Patent Trial and Appeal Board. At one office, the Patent Trial and Appeal Board paid the employees approximately $5 million for time in which employees were not working. DOT did not identify telework risks associated with fraud, waste, or abuse, but it reported occasionally issuing preemptive guidance to hedge against potential risks. For example, in 2014, DOT reported issuing internal guidance reminding employees and managers to be diligent in accounting for work hours while teleworking and code telework hours in the time and attendance system. Lack of Guidance Limits Agencies’ Ability to Better Determine Benefits and Costs Incurred OPM Resources for Agencies OPM provides three types of telework assistance to agencies. First, OPM offers training and webinars on responding to the telework data call. The training includes standards for setting and evaluating goals and identifies some data sources available to evaluate telework-related agency outcome goals. An OPM official stated that a large majority of officials responsible for completing the telework data call participated in the 2014- 2015 training sessions. Information presented in the training sessions is not available in other forms such as guidance or policies, but general information on setting and evaluating goals is available through OPM resources which are posted on telework.gov, according to OPM officials. Second, OPM collaborated with Mobile Work Exchange to develop and publish Measuring Telework and Mobility Return on Investment: A Snapshot of Agency Best Practices in 2014 on methodologies and guidance to measure agency return on investment (ROI) on telework programs. The report also highlights a variety of tools and best practices for measuring telework ROI across the federal government. The report is intended to be a snapshot in time and has not been updated since publication, according to Mobile Work Exchange. Third, OPM offers various services for a fee to help agencies implement or improve an existing telework program. Services include an evaluation of existing telework policies and practices, a telework satisfaction survey that establishes a baseline to track progress, telework training sessions, and a program evaluation. OPM officials stated that they also have identified ROI factors and indicators to measure telework programs. OPM officials reported that the most frequent services agencies ask for are telework training and the telework satisfaction survey, but that no agency has thus far asked for telework ROI. In addition, OPM officials said that Global Workplace Analytics conducts key telework research related to federal agencies calculating benefits and costs associated with implementing their telework programs. They stated that the Global Workplace Analytics calculator is comprehensive and based on solid research. Existing Data Collection That Can Assist Telework Benefits Calculations Some of the data many agencies may already collect for requirements under an Office of Management and Budget (OMB) memorandum and an executive order could also be of use for calculating the benefits associated with telework (see table 2). 
Agencies can apply a cost allocation approach to help calculate the amount of benefits associated with telework in instances where agencies collect data that are not directly related to telework. EPA and GSA are already using the data for this purpose. Under OMB’s Reduce the Footprint memorandum, CFO Act agencies are required to submit plans that include the efficient use of office space and to identify cost-effective alternatives to the acquisition of additional office space, such as teleworking and hoteling. Agencies are also required to specify annual reduction targets for domestic office and warehouse space. If an agency requires less office space due to teleworking and consolidates that space, this could result in agencies meeting reduction targets, disposing of surplus properties, and using their real estate more efficiently. DOT’s 2016-2020 plan identified telework as a strategy to consolidate office space and improve space management. DOT identified the introduction of full-time telework as a contributing factor to closing the Federal Highway Administration Legal Services and Resource Center office in San Francisco, resulting in a reduction of 9,804 rentable square feet. In addition, DOT anticipates that the combination of alternative work schedules, telework, and shared workspace scenarios will reduce the office workspaces designed in the future by at least 10 percent. GSA also identified telework as a contributing factor to its headquarters renovation, which resulted in a 40 percent reduction in office space and $24.6 million in annual rent savings. Executive Order 13693 on planning for federal sustainability and its implementing instructions require agencies to submit and annually update a plan focused on, among other things, specific agency strategies to accomplish greenhouse gas emissions reduction targets, including approaches for achieving the goals and quantifiable metrics for agency implementation. The data generated to measure a reduction in greenhouse gas emissions could be partially a result of telework reducing real estate use, utilities, and commuting. For example, GSA calculated the environmental impact associated with its telework program by using data generated for this order. In addition, EPA’s 2014 plan recognized that telework contributed to the agency’s reduction of greenhouse gas emissions by about 40 percent from fiscal year 2008 to fiscal year 2013 because its telework program allowed staff to work from an alternate location. The plan noted that telework decreased the greenhouse gas emissions associated with employee commuting by reducing the number of days employees commute to work each week. GSA also created the Carbon Footprint Tool (tool), which can be used to calculate greenhouse gas emissions avoided from teleworking, to help agencies meet the executive order’s requirements. The tool calculates, measures, and reports greenhouse gas emission reductions. The tool allows agencies to change the number of teleworkers to calculate the impact on greenhouse gas emissions. For example, GSA used data generated by the tool to calculate avoided emissions from telework. GSA estimated that telework in fiscal year 2013 avoided over 8,800 metric tons of carbon dioxide equivalent.

Net Savings Central to Determining the Value of Telework

OPM may be missing an opportunity to advise agencies of options that can inform the assessment of the agency’s progress in meeting outcome goals and contribute to understanding the full value of the telework program.
OPM has guidance on calculating telework benefits in various resources but none on costs incurred. OPM’s 2014-2015 telework data call includes a list of data sources and examples of measures or metrics, which include the amount of spending on transit subsidies and the percentage of employees expressing satisfaction with their jobs. In addition, the data call has information on establishing good goals, choosing a time frame, choosing a method for assessing a goal, selecting a metric/measure, and finding sources of data. OPM’s training materials on the data call also include similar information. OPM officials stated that there is also information on setting goals and evaluating telework programs, including discussion of a range of benefits associated with telework programs in several resources, in telework.gov, the Guide to Telework in the Federal Government, and webinars for agency human resource professionals. While the act requires OPM to report on the progress of agencies that have set outcome goals reflecting telework benefits, it does not require OPM to report on costs associated with telework programs. In addition, the OPM data call does not include questions on costs incurred and OPM’s data call training materials do not discuss the types of costs that agencies’ telework programs may incur. Moreover, OPM’s guidance lacks information on the existing data collection that can assist telework benefits calculations discussed previously, specifically under Executive Order 13693 and OMB’s Reduce the Footprint memorandum. Given the focus on increasing access to telework as embodied in the provisions of the act, it is essential that agencies understand the true effects of their telework programs. The act requires an assessment of each agency’s progress in meeting established telework-related outcome goals. An evaluation of benefits and costs, which would assist agencies in identifying net cost savings, provides a systematic framework for assessing telework programs. Thus, an agency’s potential to realize net cost savings depends on its ability to develop data on costs incurred from implementing the telework program. We have previously found that federal agencies should establish measurable telework program goals and processes, procedures or a tracking system to collect data to evaluate their telework programs, and that complete and reliable information is vital to assessing effectiveness. OPM officials said that it is difficult to provide government-wide guidance on evaluating telework programs as agency use of telework to achieve goals varies. Furthermore, OPM officials stated that it is difficult to identify the effects of telework because it requires extensive research design and a staff with the expertise and skills to conduct rigorous evaluations. For example, OPM officials said that agencies vary in the resources available to them to track and evaluate telework programs. In addition, OPM officials said they do not have the resources to target and assist each agency in establishing appropriate qualitative and quantitative measures and teleworking goals. OPM officials also stated that agencies may not be aware of all the available resources, including OPM’s services for a fee. However, the CHCO Council provides an additional avenue for OPM to engage agencies on telework. As mentioned, the council receives updates from OPM on agencies’ annual telework reports and discusses their implications and promising practices. Telework was discussed at the February 2016 meeting. 
While we recognize that providing this guidance could be challenging, some of the agencies we reviewed told us that they could benefit from having such guidance. DOT reported that it would be useful to have standard government-wide guidance on translating qualitative telework programmatic outcomes into quantifiable cost savings data. GSA officials also stated that, although the 2014 report on which OPM collaborated with Mobile Work Exchange, Measuring Telework and Mobility Return on Investment: A Snapshot of Agency Best Practices, is helpful for quantifying some benefits related to their telework program, no guidance or tools are available that provide a holistic solution for evaluating telework programs. Because OPM has not taken advantage of data sources that can inform benefit calculations or provided guidance on costs associated with telework, agency assessments may be less informative about the net cost savings of telework and, ultimately, the value of telework. Further, Congress will have less information to understand the full value of the telework program, which could affect its ability to oversee telework across the federal government. Conclusions Telework is a tool that has the potential to affect agencies’ performance and costs. A better understanding of the benefits achieved and costs incurred via telework can help an agency determine the value of this tool. Given that agency employees are increasingly using telework, it is important that agencies examine the impact of using this tool on their performance and cost bottom lines. Even though the act does not require agencies to report on costs incurred, Congress has a clear interest in the value of this flexibility offered to the federal workforce. Congress signaled its interest by assigning OPM a role in reporting an assessment of each agency’s progress in meeting telework outcome goals that reflect benefits, such as the impact on recruitment and retention and energy use. However, agencies continue to face challenges in quantifying the impact of telework, identifying costs incurred, and translating benefits into quantifiable cost savings. OPM can work with the CHCO Council on methods to assist agencies in assessing the benefits and costs associated with their telework programs. OPM did not ask agencies about cost savings in its 2014-2015 telework data call despite a substantial decrease in the number of reported examples of cost savings associated with telework programs and in the number of agencies planning to track telework cost savings. However, the act requires an assessment of each agency’s progress in meeting telework-related outcome goals that reflect the benefits agencies can achieve, and, in some cases, these benefits may be cost savings. In the current fiscal climate, cost savings are an important measure of the success of telework programs, and the absence of these questions will likely result in agencies reporting even more limited cost savings information than they do currently. The six selected agencies had little supporting data for either the benefits or costs associated with their telework programs. Such supporting data are important to inform decision making about the value of telework. Moreover, without data on net benefits, including cost savings associated with telework, agencies have incomplete information for assessing whether the benefits being achieved outweigh the costs incurred and, thus, for determining the value of telework.
Congress also will not have the information it needs to oversee federal telework programs as OPM will not be reporting this information to Congress. Recommendations for Executive Action We recommend the Director of OPM take the following actions: 1. To help ensure that agencies are reporting cost savings associated with their telework programs, include cost savings questions in future telework data calls. 2. To help agencies determine the value of their telework programs, working with the Chief Human Capital Officers Council, provide clarifying guidance on options for developing supporting data for benefits and costs associated with agency telework programs. For example, the guidance could identify potential data sources, such as the data generated in response to requirements under OMB Reduce the Footprint Memorandum 2015-01 and Executive Order 13693. Agency Comments and Our Evaluation We provided a draft of this report to the Acting Director of OPM, Secretary of DOT, Administrator of EPA, Chairman of FDIC, Administrator of GSA, Chairman of MSPB, Deputy Assistant Inspector General for Audit of USDA, and the Executive Director of the CHCO Council. OPM provided written comments (reproduced in appendix II). OPM concurred with our recommendation to include cost savings questions in future telework data calls beginning with the 2016 telework data call. OPM also concurred with our second recommendation and said it would work with the CHCO Council to support agency efforts to determine the value of their telework programs by developing clarifying guidance for agencies with CHCO input and by hosting a CHCO academy session focused on evaluating the benefits and costs of telework programs. None of the other agencies provided comments on the report’s findings, conclusions, or recommendations. However, three agencies (FDIC, EPA, and OPM) provided technical comments that were incorporated, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 17 days from the report date. At that time, we will send copies of this report to OPM, DOT, EPA, FDIC, GSA, MSPB, and USDA as well as interested congressional committees and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6806 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology This report identifies (1) the reported benefits and costs associated with federal agency telework programs and assesses the extent to which selected agencies have supporting data; and (2) some of the key resources that federal agencies can use to help calculate benefits and costs associated with their telework programs. For our review, we included benefits that are quantifiable such as cost savings from reduced real estate use and benefits that are non-monetized such as environmental impacts. We defined costs incurred as one-time and ongoing costs. One-time costs incurred include one-time information technology set-up, such as system software. Ongoing costs incurred include ongoing personnel costs and equipment and services, such as information technology maintenance. 
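As a simple illustration of the one-time and ongoing cost categories defined above, an agency's cost inventory might be recorded along the following lines; the line items and dollar amounts are hypothetical and are not drawn from any agency we reviewed.

```python
# Hypothetical cost inventory using the one-time/ongoing distinction described above.
telework_costs = {
    "one_time": {
        "it_system_setup": 350_000,    # e.g., remote-access software licenses
        "laptop_refresh": 500_000,
    },
    "ongoing_annual": {
        "program_personnel": 220_000,  # telework coordinators, training staff
        "it_maintenance": 180_000,
        "help_desk_support": 90_000,
    },
}

total_one_time = sum(telework_costs["one_time"].values())
total_ongoing = sum(telework_costs["ongoing_annual"].values())
print(f"One-time costs: ${total_one_time:,}; ongoing annual costs: ${total_ongoing:,}")
```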
To address both of our objectives, we reviewed our previous work on agency telework programs, the Telework Enhancement Act of 2010 (act), and guidance for setting and evaluating telework program goals. We also reviewed Office of Personnel Management (OPM) and General Services Administration (GSA) documents related to agency telework programs. In addition, we interviewed key OPM officials from the offices of Human Resource Solutions and Employee Services and received answers to written questions from the Acting Executive Director of the Chief Human Capital Officers Council on the council’s involvement with telework. We reviewed OPM’s 2012, 2013, and 2014 annual reports to Congress and agencies’ 2013 responses to OPM on telework participation, telework-related outcome goals, cost savings associated with telework programs, barriers to telework participation, and actions taken to address those barriers. To determine the reliability of the data that OPM used in the 2014 annual report to Congress, we reviewed our prior data reliability assessments conducted on OPM’s 2011 data call. In addition, we consulted knowledgeable OPM officials about OPM’s data collection methods over time for its 2011, 2012, and 2013 telework data calls since the enactment of the act. We did this to determine if significant changes had occurred that might affect the reliability of the data in the 2014 annual report. For the purposes of our review, we determined that the data are sufficiently reliable for providing contextual information on agencies reporting telework participation, telework-related outcome goals, cost savings from telework, barriers to telework participation, and methods for overcoming these barriers. To identify the selected agencies, we compiled a universe of agencies that had reported achieving cost savings or a methodology for calculating cost savings associated with telework programs in the 2013 OPM data call, had been identified by OPM as a leader, or had received a 2013 telework Mobile Work Exchange award. We then selected agencies from this universe by considering the criteria above as well as agency size and whether agencies had reported achieving agency goals using their telework programs in the 2013 OPM telework data call, had reported cost savings in multiple areas, and had core missions directly linked to telework benefits. We selected a nongeneralizable sample of six agencies: Environmental Protection Agency (EPA), Federal Deposit Insurance Corporation (FDIC), General Services Administration (GSA), Merit Systems Protection Board (MSPB), Department of Transportation (DOT), and United States Department of Agriculture (USDA). Initially, we had also selected the United States Patent and Trademark Office. However, we excluded that office because a recent inspector general report identified potential fraud related to its telework program and because our 2013 report questioned the cost savings it reported for that program. To identify and create an inventory of the reported benefits and costs associated with federal agency telework programs, we conducted a literature search encompassing public and private sector organizations’ telework programs and identified two reports that discussed the benefits and costs incurred by federal agency telework programs. We also identified a third report focused on nonfederal telework programs that provided a more detailed review of one-time and ongoing costs associated with implementing and maintaining telework programs.
In addition, we conducted another literature review to check that the inventory was not missing any key benefits or costs incurred. We also asked the selected agencies which benefits and costs were associated with their telework programs. To assess the extent to which selected agencies have supporting data for identified benefits and costs associated with their telework programs, we reviewed selected agencies’ policies, guidance, and other relevant documents related to their telework programs. In addition, we conducted semi-structured interviews with selected agency officials on benefits, costs incurred, and challenges associated with calculating benefits and costs incurred. We analyzed this information to assess the extent that the selected agencies have supporting data for the benefits and costs associated with telework programs that the agencies identified. We defined supporting data as having both a data source and a corresponding methodology. Supporting data can be quantitative, monetized, or qualitative. For example, emissions reductions connected to telework can be measured in metric tons of carbon emissions reduced or avoided. Cost savings can include spending on transit subsidies or utility bills. Qualitative support for benefits might include responses from open-ended survey questions or results from focus groups indicating that telework has improved work/life balance. We reviewed whether the selected agencies had supporting data and not the quality of the supporting data because this was outside of the scope of our review. We present the examples of supporting data for contextual purposes only. Since FDIC initiated the agency’s telework program in 2003, we included supporting data that pre-dated the 2010 act. We used our previous report that found that agencies should establish measurable telework program goals and processes, procedures or a tracking system to collect data to evaluate the telework program. We also utilized OPM’s 2013 telework data call, which includes guidance for setting and evaluating telework program goals and directs agencies to select metrics/measures and identify data sources to evaluate telework program goals. In addition, we used our federal internal controls standards, which state that activities need to be established to monitor performance measures and indicators and information should be recorded and communicated to management that enables them to carry out their internal control and other responsibilities. To review whether agencies had identified potential risks associated with telework programs and methods to address them, we reviewed the 2012, 2013, and 2014 OPM telework annual reports and agencies’ 2013 telework data call responses to OPM on barriers to telework participation. We also asked the selected agencies semi-structured interview questions on risks associated with telework programs and how the agencies had addressed the risks. In addition, we reviewed relevant reports on potential fraud, waste, or abuse related to agency telework programs. To identify available resources that federal agencies can use to help calculate benefits and costs incurred, we compiled a list of potential resources from our literature review and information from OPM and the selected agencies. We reviewed the resources and determined which ones were helpful through reviewing the documents and, in some cases, asking the relevant agency or organization clarifying questions. 
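The assessment in the next paragraphs turns on two concepts from OMB Circular A-94: discounting future amounts and presenting estimates as ranges rather than single point values. The sketch below illustrates those two concepts with hypothetical inputs; the discount rate and savings figures are placeholders, not values taken from the circular or from the calculator we reviewed.

```python
# Hedged sketch: present value of annual net telework savings under low/expected/high
# assumptions. All inputs are hypothetical placeholders, not OMB or calculator values.

def present_value(annual_amount, years, discount_rate):
    """Discount a constant annual amount over a number of years."""
    return sum(annual_amount / (1 + discount_rate) ** t for t in range(1, years + 1))

scenarios = {"low": 400_000, "expected": 900_000, "high": 1_500_000}  # annual net savings ($)
for name, annual_savings in scenarios.items():
    pv = present_value(annual_savings, years=5, discount_rate=0.07)
    print(f"{name:>8}: 5-year present value ~ ${pv:,.0f}")
```

Reporting the low and high scenarios together, rather than a single figure, is the kind of range-based presentation the guidance calls for.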
We also asked GSA follow-up questions about the Carbon Footprint Calculator and OPM about its relevant resources. To assess the Global Workforce Analytics calculator, we reviewed documents summarizing the calculator and the set of assumptions using the Office of Management and Budget’s (OMB) Circular A-94 Guidelines and Discount Rates for Benefit-Cost Analysis of Federal Programs. In addition, we interviewed the creator of the calculator and asked detailed questions about some of its assumptions. In general, we found that the creators of the calculator supported the assumptions employed by citing studies and the findings of others. However, we noted certain limitations in the literature cited. Specifically, we noted that because much of the literature supporting the assumptions was based on the private sector, there is less demonstrated applicability to the federal government. For example, the calculator claimed unused sick days due to telework as a financial benefit to the organization, given that unused sick days are forfeited at the end of the year, which is not true for the federal government (although the authors note that this can be adjusted in the calculator). In addition, the paper assumes cost savings because of a reduced footprint due to real estate savings. However, the paper does not take into account that employees may tend to telework on similar days, reducing the ability to achieve savings by sharing office space. In addition, OMB guidance on benefit-cost analysis suggests that uncertainty be incorporated into a benefit-cost estimate and estimates of outcomes presented with a range. In this way, policy makers can determine not just what the most likely outcome is, but the distribution of outcomes that are within the range of possibility. The calculator can produce high and low estimates (by using different assumptions). However, a limitation is that it does not automatically produce ranges of estimates. We conducted this performance audit from April 2015 to July 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Office of Personnel Management Appendix III: GAO Contact and Staff Acknowledgments GAO Contact: Staff Acknowledgments: In addition to the contact named above, Signora May (Assistant Director) and Maya Chakko (Analyst-in-Charge) supervised this review and the development of the resulting report. Crystal Bernard, Benjamin Bolitzer, Karin Fangman, Ellen Grady, Erik Kjeldgaard, Judith Kordahl, Steven Putansu, Robert Robinson, Stewart Small, and Colleen Taylor made key contributions to this report.
With over 1 million federal employees eligible for telework in 2013, federal agencies are fully engaged in incorporating telework as a standard human capital flexibility. GAO was asked to review the benefits and costs associated with agency telework programs. This report (1) identifies the reported benefits and costs associated with federal agency telework programs and assesses the extent to which selected agencies have supporting data; and (2) identifies some of the key resources that federal agencies can use to help calculate benefits and costs associated with their telework programs. For this review, GAO selected six agencies—DOT, EPA, FDIC, GSA, MSPB, and USDA—based on criteria that included agency size and reported cost savings from telework. GAO analyzed selected agencies' documents and interviewed agency officials to assess the extent that these agencies had supporting data. GAO also compiled and reviewed potential resources for agencies. Benefits associated with telework programs include continuity of operations and reduced employee absences, based on GAO's literature review and the experiences of six selected agencies. The benefits most frequently cited by the selected agencies—the Department of Transportation (DOT), Environmental Protection Agency (EPA), Federal Deposit Insurance Corporation (FDIC), General Services Administration (GSA), Merit Systems Protection Board (MSPB), and the United States Department of Agriculture (USDA)—were improved recruitment/retention, increased productivity, and improved work/life balance. Ongoing costs of telework programs include training and managing the telework program and one-time costs include information technology set up. The ongoing cost most frequently cited by the selected agencies was personnel costs. However, GAO found that the selected agencies had little data to support the benefits or costs associated with their telework programs. All of the selected agencies could provide some supporting documentation for some of the benefits and only two could provide supporting documentation for some of the costs. The Office of Personnel Management (OPM) collects data on telework via its annual data call and consults with the Chief Human Capital Officers (CHCO) Council about its annual telework report to Congress. However, GAO found substantial declines in agency reporting of telework cost savings to OPM. For example, in 2012, agencies reported 66 examples of telework cost savings, but a year later they reported 29 examples. Amidst this decline, OPM decided to collect less information about cost savings—a key benefit of telework. OPM asked agencies for cost savings information in 2011, 2012, and 2013 but did not in its 2014-2015 agency data request. The Telework Enhancement Act of 2010 requires an annual assessment of agencies in meeting established outcome goals. Assessments that include information on benefits, net costs savings, and costs can help decision makers in determining the overall effects of their telework programs and the progress achieved. OPM officials stated that they streamlined the annual data request to focus on the act's requirements, which do not explicitly include reporting on cost savings. However, as a result of this decision Congress will have less information to assess the value of telework. OPM provides resources to agencies to help them with their telework programs, but may be missing other opportunities to help agencies better identify the net cost savings associated with their telework programs. 
The resources OPM offers include fee-for-service assistance to help implement or improve existing telework programs and training and webinars on responding to its annual data call. However, OPM guidance lacks information about how agencies can use existing data collection efforts to more readily identify benefits of their telework programs, and OPM has not provided guidance on how agencies should calculate the costs of their programs. By not taking advantage of existing data sources or having guidance on calculating costs, agencies are limited in their efforts to evaluate the net cost savings associated with their telework programs. As a result, Congress does not have the information it needs to assess the true value of telework, which could impact its ability to provide oversight of telework across the federal government.
Background In carrying out its mission of providing services to veterans, VA’s programs are administered through its three major administrations—VHA, Veterans Benefits Administration, and National Cemetery Administration—and VA engages all of its administrations in its strategic planning process. VA’s Office of Policy is responsible for ensuring integration, collaboration, and cooperation across VA with regard to policy and strategy development. This office also leads VA’s strategic planning efforts and is involved with managing VA’s governance process. Within the Office of Policy, the Policy Analysis Services, the Strategic Studies Group, and the Strategic Planning Service support this work. In March 2014, VA published its current strategic plan, Department of Veterans Affairs FY 2014 - 2020 Strategic Plan, which identifies the department’s mission, values, and three strategic goals. Within VHA, the Office of the Assistant Deputy Under Secretary for Health for Policy and Planning supports and advises several VHA offices—the offices of the Under Secretary for Health, Principal Deputy Under Secretary for Health, and the Deputy Under Secretary for Health for Policy and Services—on the development and implementation of VHA policy, strategic planning, and forecasting. VHA’s Under Secretary for Health directs all aspects of VHA’s health care system, including its annual budget and overseeing the delivery of care to veterans, as well as the health care professionals and support staff that deliver that care. In December 2012, VHA published its current strategic plan, VHA Strategic Plan FY 2013 – 2018, which identifies its mission and vision and guides budgeting, performance management, and service alignment across VHA, and its plans for how to provide for the health care needs of veterans. (See app. I.) The plan also outlines VHA’s three goals and 17 objectives that are to be used to guide planning, budgeting, performance management, and service alignment across VHA; those three goals are to: (1) provide veterans personalized, proactive, patient-driven health care, (2) achieve measurable improvements in health outcomes, and (3) align resources to deliver sustained value to veterans. VHA Directive 1075: Strategic Planning Process is VHA’s most current strategic planning guidance, and it outlines how VHA will identify its strategic priorities and establish and execute its strategic plan, as well as identifies roles and responsibilities in this process. In recent years, VA and VHA have established several new initiatives, activities, and priorities in response to changes in internal and external factors—including VHA leadership changes, congressional concerns regarding veterans’ access to care, and the placement of VHA on our High-Risk List. Specifically, VA and VHA have developed the following new initiatives and priorities in the last 2 years: MyVA: This VA-wide initiative was launched in July 2015 and is aimed towards transforming the veterans’ experience. VA has developed five priorities for this initiative: (1) improving the veteran’s experience, (2) improving the employee experience, (3) improving internal support services, (4) establishing a culture of continuous improvement, and (5) enhancing strategic partnerships. “The Under Secretary for Health’s Five Priorities:” VHA’s Under Secretary for Health established these priorities after being appointed to his position in July 2015. 
The five priorities are to (1) improve access, (2) increase employee engagement, (3) establish consistent best practices, (4) build a high-performing network (which includes VA and non-VA providers), and (5) rebuild the trust of the American people. “The Secretary’s 12 Breakthrough Priorities:” These priorities describe focus areas for the MyVA initiative and were first presented in a January 2016 testimony by the Secretary of VA to the Senate Committee on Veterans’ Affairs. Several of these priorities—such as improving the veterans experience, increasing access to health care, and improving community care—are relevant to health care delivery in VHA. Additionally, in September 2014, VHA developed the Blueprint for Excellence, which presents strategies for transforming VHA health care service delivery in response to concerns regarding the VHA access and wait-time crisis that year. These strategies, which are linked to the goals and objectives in VHA’s current strategic plan, include operating a health care network that anticipates and meets the unique needs of enrolled veterans, in general, and the service disabled and most vulnerable veterans, and delivering high-quality, veteran-centered care that compares favorably to the best of private sector in measured outcomes, value, access, and patient experience. More recently, VHA developed a crosswalk for internal discussion documenting how the various VA and VHA initiatives, activities, and priorities—such as MyVA, Blueprint for Excellence strategies, the Under Secretary for Health’s Five Priorities, and the Commission on Care report—relate to each other. See fig. 1 for a timeline of internal and external factors that have affected or may affect VHA’s strategic goals and objectives. VHA Uses a Multi- Step Strategic Planning Process to Identify Strategic Goals and Objectives VHA conducts a strategic planning process annually and through this process also established its current strategic plan. According to officials, VHA’s current strategic plan was developed through its FY 2012 strategic planning process. VHA officials told us that they currently do not have plans to revise VHA’s current strategic plan or develop a new one, but may, however, develop an operational plan that will cascade from and operationalize VA’s strategic plan. VHA officials told us that they do plan to continue to use their current annual strategic planning process to identify VHA’s future strategic goals and objectives. According to VHA officials, VHA’s strategic planning process includes two key steps—(1) assessing the environment, which VHA refers to as environmental scanning, and (2) holding the annual NLC Strategic Planning Summit. These steps are consistent with leading practices in strategic planning. VHA conducts environmental scanning to identify and assess factors that may affect its future health care delivery. According to VHA policy, data from VHA’s environmental scan, such as the projected number of veterans to be served, are to be used by VHA in developing its goals and objectives. In addition, the results from VHA’s environmental scanning are to be used by VHA’s Office of Policy and Planning and VHA program offices, in strategic decision making. In addition to environmental scanning, the NLC Strategic Planning Summit is also key to VHA’s strategic planning process, in that it is the primary forum through which VHA leadership identifies and discusses the strategic goals and objectives for the next year. 
The NLC is responsible for recommending new or revised strategic goals and objectives and for formulating strategies to achieve them. VHA’s Office of Policy and Planning coordinates the summit and invites various stakeholders to attend, such as officials from VHA central office (including program offices); VA central office; the Veterans Benefits Administration; the National Cemetery Administration; VISNs; and representatives of veterans service organizations. Veterans Benefits Administration and National Cemetery Administration officials indicated that they have varied levels of participation. Veterans Benefits Administration officials told us that they have historically attended several days of the summit, and National Cemetery Administration officials indicated that they may listen to VHA’s general body sessions during the summit. Even though officials from both administrations indicated they believe their input and coordination with VHA regarding its strategic planning were sufficient, Veterans Benefits Administration officials noted that increased engagement in VHA’s strategic planning process would be beneficial, given the direct correlation between the veterans’ disability compensation ratings that administration assigns and the subsequent care delivered by VHA. During the course of the summit, the NLC determines if changes to VHA’s strategic goals and objectives are needed. If there are changes, VHA’s Office of Policy and Planning drafts a document, gathers additional stakeholder feedback, and presents the document to VHA’s Under Secretary for Health for approval. VHA also obtains and uses information from VA to inform its strategic planning process. For example, according to VA and VHA officials, VHA’s environmental scanning process is, and has historically been, health-care focused, and is an adjunct to the broad environmental scanning process conducted by VA. Officials noted that, though distinct, VA’s and VHA’s environmental scanning processes are interrelated. VHA officials told us that they leverage VA’s environmental scanning results in making decisions regarding VHA’s strategic goals and objectives. Additionally, VA officials have historically been invited to participate in VHA’s NLC Strategic Planning Summit, including the 2016 summit. VA’s draft report of its environmental scanning identified changes in the veteran population, including growth in the number of veteran enrollees aged 65 and older, which VA officials reiterated at the 2016 NLC summit. VISNs and VAMCs Are Responsible for Operationalizing VHA’s Strategic Goals and Objectives, but VHA Has Not Developed Adequate Strategies or an Effective Oversight Process to Ensure Operationalization VISNs and VAMCs Have Responsibility for Operationalizing VHA’s Strategic Goals and Objectives, but VHA Has Not Defined VAMCs’ Role VISNs and VAMCs have responsibility for operationalizing VHA’s strategic goals and objectives, including its strategic plan, according to VHA officials. Operationalizing involves putting strategic goals and objectives into use by an organization, and includes developing initiatives, programs, or actions that will be used to accomplish those goals and objectives. VISNs and VAMCs must allocate resources, develop day-to-day activities, and create policies as part of that process. However, we found that VHA provides limited guidance for VAMCs on how to operationalize VHA’s strategic goals and objectives.
First, VHA has not clearly identified VAMCs’ responsibilities in operationalizing its strategic goals and objectives as it has for VISNs. For example, VHA Directive 1075 states that VISN directors are to be responsible for: developing operational plans; annually tracking and reporting accomplishments in support of the VHA strategic plan; regularly updating plans to address local issues, such as geographic-specific needs; and providing input to inform future VHA goals, objectives, and strategies. However, there are no such stated responsibilities for VAMCs. According to federal internal control standards, successful organizations should assign responsibility to discrete units and delegate authority to achieve organizational objectives. VHA and VISN officials told us that it is inferred that the VAMCs are part of the overall process even though there is no specific policy or guidance for VAMCs. All three VISNs in our review reported developing operational plans or strategies to operationalize VHA’s goals and objectives at the VISN level, but two of the nine VAMCs in our review had not developed such strategies. Second, in FY 2013, VHA provided VISNs with a strategic planning guide for operationalizing the current strategic plan, but did not provide a similar guide for VAMCs. According to the guide, its purpose was to assist VISNs in outlining a multi-year plan that aligned with VHA’s current strategic plan and to provide relevant information regarding the development of strategies, the process for conducting a strategic analysis, and the time frame for providing strategic planning information, such as strategies, to VHA central office. The lack of guidance for VAMCs may hinder them from effectively operationalizing VHA’s strategic goals and objectives, and may lead to inconsistencies in time frames, documentation, and data used for the strategic planning process. For those VAMCs in our review that developed strategies to operationalize VHA’s strategic goals and objectives, for example, almost all developed local strategies on a fiscal-year cycle, which aligns with VHA’s budgeting and strategic planning processes, but one VAMC developed strategies on a calendar-year cycle. Although there is no requirement for VAMCs to conduct strategic planning on a specific timeline, per leading practices for strategic planning, organizations should align their activities, core processes, and resources to ensure achievement of the agency’s objectives. In addition, per federal internal control standards, management should effectively communicate information throughout the organization as it performs key activities in achieving its objectives. VHA Has Not Developed Adequate Strategies or an Effective Oversight Process to Ensure Operationalization of Its Strategic Goals and Objectives VHA has not developed adequate strategies or an effective oversight process to ensure that its strategic goals and objectives are effectively operationalized. Specifically, VISNs and VAMCs lack consistently developed strategies for operationalizing VHA’s strategic goals and objectives, and existing performance assessments are limited in measuring progress towards meeting these goals and objectives. Lack of consistently developed strategies. VHA has not consistently developed strategies for VISNs and VAMCs to use in operationalizing its strategic goals and objectives.
Strategies should describe how a strategic plan’s goals and objectives are to be achieved, and should include a description of the operational processes, staff skills, and use of technology, as well as the human, capital, and information resources required. Among other things, our previous work has shown that strategies should have clearly defined milestones, outline how an organization will hold managers and staff accountable for achieving its goals, and be linked to the day-to-day activities of the organization. In addition, individual strategies should be linked to a specific goal or objective. In September 2014, VHA published the Blueprint for Excellence to provide strategies for transforming VHA health care service delivery in response to concerns regarding the VHA wait-time crisis that year. However, VHA did not develop a similar document for the other strategic planning years despite the development of multiple strategic documents, such as the Under Secretary for Health’s five priorities. Because VHA has not developed adequate strategies corresponding to all of its strategic goals and objectives, the VISNs and VAMCs have limited guidance to help them operationalize VHA’s strategic goals and objectives. Moreover, the day-to-day activities and initiatives developed by VISNs and VAMCs may not appropriately align with those goals and objectives. A direct alignment between strategic goals and their associated strategies is important in assessing an organization’s ability to achieve those goals. No process for ensuring and assessing progress in meeting all of VHA’s strategic goals and objectives. As our previous work has shown, assessments can provide feedback to an organization on how well day-to-day activities and programs developed to operationalize strategic goals and objectives contribute to the achievement of those goals and objectives. Specifically, formal assessments are to be objective and measure the results, impact, or effects of a program or policy, as well as the implementation and results of programs, operating policies, and practices; they can also help in determining the appropriateness of goals or the effectiveness of strategies. However, VHA does not have an effective oversight process for ensuring that VISNs and VAMCs are meeting all of its strategic goals and objectives. According to VHA officials, there are currently two methods for assessing VHA’s performance towards meeting selected strategic goals and objectives. One method is VISN and VAMC directors’ individual annual performance plans. For FY 2016, these plans present VHA’s strategies for providing a successful health care delivery system, including those outlined in its Blueprint for Excellence. The directors’ plans include performance metrics, which VHA, as well as VISNs and VAMCs, can use to measure a VISN’s or VAMC’s demonstrated progress in meeting these strategies. However, multiple strategic goals and objectives have been communicated to the field, such as the Under Secretary for Health’s five priorities, and it is not clear how these goals align with the strategies in the current directors’ performance plans or how progress towards them can be assessed. A VHA official, who is a member of the workgroup reviewing VHA’s performance metrics, told us that over the years, performance metrics were added to the director performance plans as problems or needs arose without considering the overall purpose of the metric. VHA officials reported that they have reviewed the current plans and have revised them.
According to a VHA official, the new plans will have fewer metrics, and will be more strategically focused on VA’s and VHA’s strategic priorities, such as the Under Secretary for Health’s five priorities. According to VHA officials, implementation is planned for October 1, 2016. However, it is not clear how these metrics will be linked to the strategic goals and objectives in VHA’s current strategic plan. Veterans’ satisfaction with VA’s health care system is the second method for assessing VHA’s performance towards meeting strategic goals and objectives, according to VHA officials. VHA currently collects information from a survey of veterans that addresses two of VA’s priority goals, including improving access to health care, as experienced by the veteran. According to VHA officials, for the veterans’ access goal, there is a large degree of alignment between the department-wide goal and how VHA measures its progress towards meeting some of its access goals and objectives in its strategic plan. However, it is not clear how VAMCs and VISNs are to use veterans’ satisfaction to assess progress toward meeting other goals and objectives that have been communicated to them—such as the Under Secretary for Health’s five priorities that are not focused on access. VHA officials told us that no additional VHA-level assessments had been conducted to measure progress towards meeting strategic goals and objectives. Though VHA has performance information from VISN and VAMC directors’ performance plans and veteran satisfaction surveys, the performance of the agency toward meeting VA’s and VHA’s other strategic goals and objectives may help provide a more complete picture of overall effectiveness. In addition to a lack of adequate strategies and an effective oversight process, a large number of vacant, acting, and interim positions at some of the VISNs and VAMCs in our review have also created challenges for VHA in operationalizing its strategic goals and objectives. For example, officials from one VISN reported that acting and interim senior leadership positions within a facility in their region have had an effect on the operations of the VAMC, including the operationalization of VHA’s strategic goals and objectives. They added that the acting and interim senior leaders did not feel empowered to make long-term decisions regarding the operations of the medical center because they did not know how long they would hold the position. The Under Secretary for Health told us that one of VHA’s top priorities for 2016 is to fill 90 percent of VAMC director positions with permanent appointments by the end of the year. The Under Secretary added that filling these positions would help address the current gaps in leadership and provide stability for the VAMCs. Conclusions As the demand for health care by our nation’s veterans increases, and concerns about VA’s health care system persist, it is essential that VHA conduct the necessary strategic planning to achieve its goals and objectives. VHA has established a strategic planning process to identify strategic goals and objectives for accomplishing its mission, and VISNs and VAMCs are expected to operationalize these goals and objectives. However, VHA has not delineated a role for VAMCs in this process as it has for VISNs. Moreover, the lack of adequate strategies to operationalize VHA’s strategic goals and objectives, as well as the lack of an effective oversight process for assessing progress, may hinder the achievement of VHA’s goals and objectives. 
Without consistently developed strategies, the day-to-day activities and initiatives that are developed to operationalize VHA’s strategic goals and objectives may not appropriately align with those goals and objectives. This may result in VHA not being able to determine if it is adequately addressing top management concerns or department-wide strategic goals. Further, because VHA does not have an effective process to assess progress in meeting its strategic goals and objectives, it does not have needed information on how well the day-to-day activities and programs of VISNs and VAMCs are contributing to their achievement. Recommendations for Executive Action We recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to take the following three actions: 1. Define the roles and responsibilities of VAMCs in operationalizing VHA’s strategic goals and objectives; this could be accomplished by establishing roles and responsibilities for VAMCs similar to how VHA defines roles and responsibilities for VISNs in VHA Directive 1075 and by developing guidance for VAMCs similar to guidance developed for VISNs. 2. Consistently develop strategies that can be used by VISNs and VAMCs to operationalize VHA’s goals and objectives, ensuring that they clearly link directly to VHA’s goals and objectives. 3. Develop an oversight process to assess progress made in meeting VHA’s strategic goals and objectives, including feedback on how well activities and programs are contributing to achieving these goals and objectives. Agency Comments and Our Evaluation We provided VA with a draft of this report for its review and comment. In its written comments, reproduced in appendix II, VA concurred with our three recommendations, and described the actions it is taking to implement them by September 2017. VA described the role that community-based outpatient clinics and health care centers play as critical health care access points for veterans and commented that our draft report does not mention these access points as components of VAMC service delivery systems. While our report draft noted the role of community-based outpatient clinics and health care centers in VA’s service delivery system, we clarified that these facilities are components of VAMCs. VA also commented that our report draft does not mention the essential role that VHA program offices play in contributing to and implementing the VHA strategic plan. Our report states that program offices contribute to VHA’s strategic planning process by developing some of the programs and actions that VISNs and VAMCs use to provide health care services to veterans. VA also provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to appropriate congressional committees, the Secretary of Veterans Affairs, the Under Secretary for Health, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. 
Appendix I: VHA Strategic Plan FY 2013 – 2018 Appendix II: Comments from the Department of Veterans Affairs Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Janina Austin, Assistant Director; Kelli A. Jones, Analyst-in-Charge; Jennie Apter; and LaKendra Beard made key contributions to this report. Also contributing were George Bogart, Christine Davis, Jacquelyn Hamilton, and Vikki Porter.
Veterans' health care needs may change due to changes in veteran demographics and other factors. Strategic planning, including identifying mission, vision, goals, and objectives, and operationalizing strategies to achieve those goals and objectives are essential for VHA to establish its strategic direction to respond to these changing demands and provide care in a dynamic environment. GAO was asked to review VHA's strategic planning. This report examines (1) VHA's strategic planning process and (2) the extent to which VHA operationalizes its strategic goals and objectives. GAO reviewed VHA strategic planning documents; and interviewed officials from VA and VHA central office, three VISNs selected to provide variation in geographic location, and nine VAMCs within these VISNs selected to provide variation in factors such as geographic location and facility complexity. GAO evaluated VHA's actions against federal standards for internal control and leading practices for strategic planning. The Department of Veterans Affairs' (VA) Veterans Health Administration (VHA) uses a multi-step strategic planning process to develop its strategic goals and objectives, which includes two key steps—(1) identifying and assessing factors that may affect health care delivery, which is referred to as environmental scanning, and (2) holding the annual National Leadership Council (NLC) Strategic Planning Summit—according to officials. VHA officials told GAO that they leverage VA's environmental scanning results in making decisions regarding VHA's strategic goals and objectives and that VA's central office has historically had a role in aspects of VHA's strategic planning process—such as participating in the NLC summit. VHA relies on the VA medical centers (VAMC) that directly provide care to veterans and the Veterans Integrated Service Networks (VISN), regional entities to which the VAMCs report, to operationalize its strategic goals and objectives. However, certain limitations in VHA's processes hinder VISNs' and VAMCs' efforts in operationalizing these goals and objectives. Specifically, VHA has not specified VAMCs' role and responsibilities in its strategic planning guidance, as it has for VISNs. For example, VHA's directive for VISNs clearly states how VISN directors are to operationalize VHA's operational plans; no such directive exists for VAMC officials. Similarly, VHA provided VISNs a strategic planning guide for operationalizing its current strategic plan, but did not provide a similar guide to the VAMCs. VHA has not developed detailed strategies for VISNs and VAMCs to use in operationalizing all of its strategic goals and objectives. According to leading practices for strategic planning, strategies should describe how strategic goals and objectives are to be achieved, including a description of the operational processes, staff skills, technology and other resources required. In September 2014, VHA published the Blueprint for Excellence to provide strategies for transforming VHA health care service delivery in response to concerns regarding the VHA wait-time crisis that year. However, it did not develop similar strategy documents for other years or for the other goals and objectives in its strategic plan. VHA does not have an effective oversight process for ensuring and assessing the progress of VISNs and VAMCs in meeting VHA's strategic goals and objectives. According to VHA officials, VHA relies on two methods for assessing performance towards meeting selected strategic goals and objectives. 
Specifically, VHA uses VISN and VAMC directors' individual annual performance plans, as well as veteran survey information, to assess VHA's performance towards meeting certain metrics, such as improving veterans' access. However, it is unclear how these specific metrics are linked to assessing overall progress towards VHA's strategic goals and objectives. As a result, VHA may not know to what extent VISNs' and VAMCs' efforts to operationalize its goals and objectives are adequately addressing top management concerns or department-wide strategic goals.
Background State and local governments are the primary administrators of child welfare programs designed to protect children from abuse or neglect. Children enter state foster care when they have been removed from their parents or guardians and placed under the responsibility of a state child welfare agency. Removal from the home can occur for reasons such as abuse or neglect, though in some cases a child’s behavior may also be a factor. When children are taken into foster care, the state’s child welfare agency becomes responsible for determining where the child should live and providing the child with needed support. Federal Funding Sources for Child Welfare Programs Title IV-E of the Social Security Act authorizes federal funding to states to help cover costs associated with states’ foster care and adoption programs. Title IV-E funds, which make up the large majority of federal funding dedicated to child welfare, primarily provide financial support for the care of eligible children who have been removed from their homes due to abuse or neglect, as well as for families who adopt eligible children with special needs from the foster care system. For example, funds may be used to reimburse states for a portion of expenses to support eligible children in foster care (such as for food, clothing, and shelter), and for the costs of subsidies to parents who adopt eligible children with special needs (adoption assistance), as well as for related case management activities, training, data collection, and other program administrative costs. While Title IV-E funds are used primarily for eligible children in foster care, Title IV-B funds may generally be used for services for children and their families regardless of whether those children are living in their own homes, have been removed from their homes and placed in foster care settings, or have left the foster care system. Title IV-B funds are provided primarily through two formula grant programs. Funds may be used for case planning and review services for children in foster care and other services to families, such as parenting skills training or substance abuse treatment. Although Titles IV-B and IV-E are the primary sources of federal funding available to states for child welfare programs, states also use other federal funds, such as Temporary Assistance for Needy Families and Social Services Block Grant funds, as well as Medicaid. Oversight and Monitoring of Child Welfare Programs HHS provides oversight and monitoring of states in a variety of ways to ensure their child welfare programs are in compliance with federal law, regulations, and relevant approved state plans. For example: Twice a year, states are required to submit data on the characteristics of children in foster care. HHS compiles, validates, and reports data from state child welfare agencies on children in foster care and children who have been adopted from the child welfare system in AFCARS. HHS conducts statewide periodic assessments known as the Child and Family Services Reviews (CFSR) that involve case-file reviews and stakeholder interviews to ensure conformity with federal requirements for child welfare services. The reviews are structured to help states identify strengths and areas needing improvement within their agencies and programs. HHS conducts periodic Title IV-E foster care eligibility reviews to monitor the state Title IV-E agency’s compliance with certain requirements of the Title IV-E foster care maintenance payments program.
As part of the review, HHS examines a Title IV-E agency’s compliance with requirements related to placing a child in a licensed foster family home or child care institution, and ensuring that safety requirements are met by the child’s foster care provider. HHS also provides support and training through centers that provide states with training, technical assistance, research, and information through referral and consultation. For the purposes of collecting data from states on their foster care systems, HHS uses the two terms below to refer to non-family settings, called congregate care in this report: Group home: a licensed or approved home providing 24-hour care for children in a small group setting that generally has from 7 to 12 children. Institution: a child care facility operated by a public or private agency and providing 24-hour care and/or treatment for children who require separation from their own homes and group living experience. For example, these facilities may include child care institutions, residential treatment facilities, or maternity homes, according to HHS. Although states report data to HHS on the number of foster care children placed in two types of congregate care settings, states do not necessarily use the same terminology and may vary in the way they classify and/or describe similar facilities. For detailed information on the types of congregate care facilities used by the states we visited, see appendix I. HHS has proposed revising its AFCARS regulations to collect more detailed information from states on the types of congregate care used, although the proposed changes have not yet been finalized. Placement Decisions and Congregate Care When children are removed from their homes, the child welfare agency may place the child in a foster home of a relative or non-relative, or in a congregate care setting, depending on the child’s needs. Children generally remain in foster care until a permanent suitable living arrangement can be made, either by addressing the issues that led to the child’s removal and returning the child to his or her family, or in cases where this is not possible in a timely manner, through adoption, guardianship, placement with a relative, or another planned permanent living arrangement. In some cases, the child reaches adulthood before leaving foster care, commonly referred to as “aging out of foster care”. HHS’s Title IV-E regulations require that each child’s case plan include a discussion of how it is designed to achieve a safe placement for the child in the least restrictive (most family-like) setting available and in close proximity to the home of the parent(s) when the case plan goal is reunification, and a discussion of how the placement is consistent with the best interests and special needs of the child. However, states have flexibility and discretion to make decisions for each child on a case-by-case basis to ensure that the most appropriate placement is made and the individual needs of the child are met. HHS issued a report on congregate care in May 2015 that stated that in addition to federal law, child development theory and best practices confirm that children should be placed in family-like settings that are developmentally appropriate and least restrictive. The report also stated that congregate care stays should be based on the specialized behavioral and mental health needs or clinical disabilities of children and should last only as long as needed to stabilize them so they can return to a family-like setting.
Furthermore, the report noted that congregate care should not be used as a default placement setting due to a lack of appropriate family-based care, but as part of a continuum of foster care settings. Young children need family-like settings to form healthy attachments to adults, and older children need family-like settings to allow them to develop autonomy, according to research. This is also in keeping with changes in the field of congregate care, which is increasing its focus on stays in residential centers as treatment interventions to meet specific needs rather than as a placement of last resort for foster children. However, a recent HHS study using AFCARS data on states' use of congregate care found that for all children who entered foster care for the first time in 2008 (first-time entry cohort focusing on first episodes), an estimated 38,205 of these children experienced congregate care at some point during a 5-year follow-up period. Of these children, 31 percent were aged 12 or younger when they experienced congregate care at some point during the 5-year follow-up. While one-fifth of these young children who experienced time in congregate care were in these settings for less than a week, 24.1 percent were there for longer than a year. Of those aged 13 years or older who experienced some time in congregate care during that time period, about 40 percent were identified as entering foster care due to a child behavior problem and no other clinical or mental disability, highlighting the need for a thorough assessment to ensure children are placed in the least restrictive settings to meet their needs. Additionally, of the children in care as of September 30, 2013, HHS found that the overall total time in foster care was longer for children in congregate care settings, with an average of 27 months in foster care compared to 21 months for children placed in other types of out-of-home settings. National Trends Over the past 10 years, the number of children and youth in the foster care system declined by 21 percent, from 507,555 at the end of fiscal year 2004 to 402,378 at the end of fiscal year 2013, according to data reported to HHS by the states. HHS reported that there were fewer entries into foster care, an increase in exits, and shorter lengths of stay during this time period; it did not attribute the decline to any particular factor. The number of children in congregate care also declined, and at a greater rate than the number of children in foster care overall: 37 percent compared to 21 percent. According to the most recent data available, nationally, 14 percent of children in foster care were in congregate care placements at the end of fiscal year 2013, although the rates of congregate care use varied among the states (see fig. 1). Eight Selected States Reduced the Use of Congregate Care Substantially Using Multiple Approaches, but the Rates of Use Varied Widely The Eight Selected States Averaged a 47 Percent Reduction in Congregate Care Use, Although the Current Rates of Use Still Varied among These States From September 30, 2004, to September 30, 2013, the share of all foster care children in congregate care in the eight states we reviewed declined 47 percent on average, with reductions ranging from 7 to 78 percent, according to the most recent data available from HHS. This decline outpaced these states' average decline of 26 percent in the number of foster children overall.
However, the states' percentages of congregate care placements ranged from approximately 5 percent in Washington to 34 percent in Colorado (see fig. 2). Nationwide, congregate care placements are declining, and this trend is reflected in the eight selected states. Selected States' Efforts to Reduce Congregate Care Included Expanding Services, Developing Alternative Placements, and Revising How They Used Congregate Care The eight selected states reported a variety of efforts they took to help reduce their use of congregate care for foster children. In some cases, reform efforts were intended to reduce the number of children removed from their homes or to improve the state's overall child welfare system, while others focused specifically on reducing congregate care. Based on our analysis, we categorized these efforts into three areas: expanding services that may prevent entry into foster care, increasing availability of family-based placements in foster care, and revising how congregate care is used. These are discussed in more detail below. Expanding services to avoid the need to remove the child in the first place and to support children in family-based settings. When sufficient resources are available and circumstances warrant it, caseworkers may decide to provide services for at-risk families in the home to help stabilize the family rather than remove the child from the home, as we found in our previous work. In addition, other resources can help ease the transition from congregate care to a family-based setting, whether in the foster care system or the home from which the child was removed. Increasing the availability of family-based placement options. Increased efforts to find relatives who can care for children who are removed from their homes can help children remain in family settings, according to one child welfare official. In addition, caseworkers may recruit or train foster families to serve as treatment or therapeutic foster families. These terms generally refer to a model of care that attempts to combine elements of traditional foster care with clinical treatment of a child's serious emotional, behavioral, and medical problems in a specialized foster home. One state child welfare official told us that in the past, children and youth with significant behavioral or other problems were often placed in congregate care because foster families or relatives with the requisite skills to help the child were not always available, nor were adequate supports available in the community. Revising how congregate care is used for foster children. When congregate care is considered as a placement by a caseworker, specific policies can affect the level at which the final decision is made, what criteria are used, the duration of stay, and whether a plan for transitioning out of the congregate care setting is established. One child welfare official told us her state agency's efforts were often meant to ensure that all other placement options had been exhausted before congregate care could be considered, that the length of stay in congregate care was as short as possible, and that the child received appropriate treatment while in care. In addition, one congregate care provider noted that the provider had developed new service delivery models, which in some cases included providing services when the child returned to the home and community. See table 1 for examples of selected state efforts as described by state officials.
The eight states used a combination of policies and practices noted above in their efforts to reduce or limit the use of congregate care. Because child welfare systems are complex with many interrelated features, states' efforts often resulted in the need to transform several features of their systems at the same time, as described in the summaries below. Washington had the smallest proportion of its foster care caseload placed in congregate care of the eight states we reviewed, as well as the smallest reduction from the end of fiscal years 2004 through 2013. According to officials, intensive family searches to locate family members to care for youth have been a successful effort used by caseworkers in the state to help reduce congregate care. The state and local child welfare officials and service providers we spoke with placed emphasis on placements with available family members or foster homes, even for youth with a high need for treatment or other services. One official noted that the emphasis on family placements first has been a longstanding policy preference in the state. In addition, about 15 years ago, Washington changed its model of care for how services are delivered by congregate care providers. Officials said that the state changed its contract with providers from a structure with a set number of beds and service levels to a contract for an array of services which could be delivered in multiple settings, such as congregate care, treatment foster homes, regular foster homes, and family or relative homes. Kansas had the second lowest percentage of foster children in congregate care of the eight states we reviewed, with 5 percent as of September 30, 2013. Officials attributed a 31 percent decline in their congregate care population over the 9-year period to several factors. In 1996, according to officials, the state began contracting with private nonprofit organizations to provide family preservation, foster care, and adoption services. State officials told us that prior to establishing these contracts, up to 40 percent of their foster children were in congregate care settings. Officials also cited as contributing factors the method of payment to contractors and the practice of holding foster care providers accountable, through monetary penalties, for meeting outcome goals established by the state to place children in a family-like setting when possible. New Jersey began reforming its child welfare system about 10 years ago, and, according to state officials, these reforms have resulted in reductions in the state's overall foster care population and congregate care. Officials explained that the state adopted a new family model of care that included extensive recruitment of foster, adoption, and kinship caregivers—referred to as resource families—that helped to reduce the overall foster care population and congregate care placements. In this model, these resource families are provided with extensive training, and a resource worker is assigned to help provide services to the child in the home. One official told us that this is a new paradigm of care that is very intensive. They work with the family and bring in as many community resources as possible to keep children in their homes, which has been effective in reducing the number of children entering foster care overall. Louisiana officials told us that following Hurricane Katrina in 2005, state officials worked with the Annie E. Casey Foundation to improve performance in key areas of child welfare.
Hurricane Katrina caused widespread destruction and displacement of youth. Many of the state's foster children were temporarily displaced, and child welfare officials did not have current emergency contact information, which made it difficult for them to find the foster families that had to evacuate. According to officials, over a 2-year period, they reduced the number of children in congregate care settings by approximately 200 youth through various efforts, including: (1) focusing efforts on stepping down youth placed in residential levels of care into less restrictive placements; (2) recruiting foster/adoptive homes that could accept placement of youth stepping down when relative resources were not available, as well as homes that could provide placement to children and youth entering care without relative resources; and (3) increasing the availability of in-home services so that when youth were stepped down, services would be in place to assist in supporting the placement. To support foster home recruitment, dedicated recruiters were hired and placed in all nine regions of the state with the sole task of recruiting homes. Another effort officials described during this time was the revision of the licensing regulations for residential facilities and child placing agencies. Maryland launched a statewide initiative in 2007 called "Place Matters" that greatly affected the state's child welfare system and improved outcomes for all children in the state, including those in congregate care, according to state officials. The goals of the "Place Matters" initiative include: (1) providing more in-home support to help maintain children with their families; (2) placing children in family settings (either with relatives or family-based care); and (3) reducing the length of stay in foster care and increasing the number of reunified families. By 2014, Maryland officials reported a reduction of over 50 percent in the number of children in out-of-home care and a reduction of almost 60 percent in the number of children placed in congregate care. Maryland officials also described changes in the placement and review process that they said have helped reduce the number of children in congregate care. For example, a placement protocol was instituted to ensure that family settings were ruled out before children could be placed in congregate care settings. According to officials, several layers of review have also been added to ensure that more restrictive placements are warranted and necessary based on the child's needs. Maryland also instituted a statewide initiative that included an extensive search for relatives of a foster child, according to officials. Minnesota continues to explore alternatives to group settings for children in foster care who need specialized services, such as for behavioral and mental health needs that a foster family may not be capable of providing, according to state officials. The state is currently in the process of developing intensive treatment foster care services, as provided for under a Minnesota statute enacted in 2013, according to officials. These include intensive treatment services that will be provided within a foster family setting to help reduce the need for congregate care placements. In addition, in January 2015, the state implemented Northstar Care, a program intended to help children who cannot return home to find other permanent families.
Officials expect that with the implementation of Northstar Care and other services, like treatment foster care services, the number of children in congregate care will continue to decline. Connecticut officials told us that the primary impetus for their focus on reducing congregate care was a change in leadership that occurred in 2011. At that time, the newly appointed head of the state child welfare agency set a goal of reducing the percentage of foster children in congregate care from 23 percent to 10 percent. Connecticut officials described going through the case files of all youth in foster care and working, in consultation with the youth, to identify possible options for a home for the youth that may include family members or close friends. Through this process, Connecticut officials told us they were able to place some children into a home and out of a congregate care setting. According to officials, targeted family outreach, along with engaging people who are not related by birth or marriage but have an emotionally significant relationship with a child, has also resulted in a significant reduction in the number of children coming into foster care in general. Officials believe that this shift in attitude around connecting youth to their families and communities is leading to better outcomes for youth. Other efforts described by officials included increasing the availability of community-based supports across the state to help prevent children from coming into care. Specifically, officials said the state modified its contracts with health care providers to increase access to emergency psychiatric services for anyone in the state, including those who are not currently in foster care. Colorado had one of the higher percentages of youth in congregate care among our eight states, according to HHS data. The state is currently working with Casey Family Programs and the Annie E. Casey Foundation to improve placements for children in congregate care by finding creative ways of placing children into family homes. Colorado state officials described changes and new ways of working with the congregate care provider community to develop models of care that are more treatment-oriented to help children transition back into community settings. For example, state officials held two forums with providers in their state to educate them on how to adjust their services and service delivery expectations as the state shifts toward using providers more for treatment than simply for placement. State officials said they are also working with the judicial system to identify alternative options, such as in-home services, because, according to these officials, some judges are used to ordering that children be placed into a congregate care facility, often as a consequence of behavioral issues. Stakeholders Cited Challenges in Developing Alternatives to Congregate Care and HHS Has Begun Efforts to Help States Developing a Sufficient Supply of Appropriate Family Placements and Needed Services While Transitioning to a More Treatment-Based Model of Congregate Care Poses Challenges Stakeholders we interviewed described challenges involved as efforts were made to reduce reliance on congregate care where appropriate, or, as one child welfare foundation says, to "right-size" states' use of congregate care. From this information, we identified four areas that posed challenges in the selected states and that may inform other states' efforts to reduce the role of congregate care in their child welfare systems.
Building capacity for family placements. While developing alternative family placements is a part of states' efforts to reduce congregate care, stakeholders we spoke with said that doing so posed challenges. Several stakeholders told us that too few foster families were available generally, and that traditional foster families can be overwhelmed by the needs of some foster children and youth, such as those with behavior problems. Officials in one state also told us that building capacity in appropriate family placements to replace congregate care placements requires recruitment and training of specialized foster families and training to change caseworkers' existing practices. A few stakeholders also told us that this can require additional resources or a redirection of existing resources. In addition, because congregate care placements typically cost more than traditional foster family placements, less use of congregate care should free up state resources for developing more foster families with the training and skills to support children and youth with greater needs, according to an expert we spoke with. A few stakeholders we spoke with agreed that a shift away from congregate care must be planned and implemented carefully to ensure that children are placed with families adequately prepared to meet their needs and to avoid unintended consequences. For example, if a child with significant needs that require more attention is placed in a traditional foster family without adequate supports, the result may be multiple unsuccessful placements, inappropriate medications to manage a youth's behavior, or entry into the juvenile justice system, according to some of the officials we spoke with. One expert said that, based on her observation, one state had rushed to reduce congregate care without first putting sufficient supports in place for foster families, which resulted in unintended consequences, such as unsuccessful placements. Addressing shortages of needed services. In addition, several stakeholders noted the shortage of services that can help bolster supports for at-risk children and families before the child or youth is removed from home or during foster care to help avoid or reduce the length of a congregate care stay. This is consistent with the findings from our 2013 report in which we reported that local child welfare systems use existing community resources, which are sometimes in short supply, leading to gaps in areas such as substance abuse treatment, assistance with material needs, and mental health services. One stakeholder noted there is a lack of more holistic support systems in some communities, including access to behavioral and mental health services; crisis support 24 hours a day, 7 days a week; housing; and education that would facilitate more use of family settings rather than congregate care. However, Title IV-E funds are generally not available for services for children and families not in the foster care system. Improving assessments. Having accurate information on a child or youth's physical and mental health needs is a factor in identifying what, if any, treatments and services may be needed, and officials in the eight states we reviewed told us they had assessment processes in place. While we did not review the types or quality of the assessment processes in these states, two experts we spoke with raised concerns about the variation in types and quality of assessments performed nationwide.
This is due in part to insufficient caseworker training and large workloads in states and localities generally, as we have also found in our previous work. More specifically, one of these experts said that some child welfare assessments may result in an incorrect diagnosis due to lack of understanding of trauma-based conditions and treatments. In this expert's opinion, children in congregate care were sometimes diagnosed with other conditions, such as bipolar disorder, and were overmedicated to contain the issue rather than treat it. In our previous work, we have found that foster children may receive psychotropic drugs at higher rates than children not placed in foster care. We found in the five states analyzed that the higher rates do not necessarily indicate inappropriate prescribing practices, as they could be due to foster children's greater exposure to traumatic experiences and the unique challenges of coordinating their medical care. However, experts that we consulted during that work explained that no evidence supports the concomitant use of five or more psychotropic drugs in adults or children, yet hundreds of both foster and non-foster children were prescribed such a medical regimen. Retaining capacity for congregate care. State child welfare officials in all of the eight states told us that even though they have worked to reduce congregate care placements, they believed that they still required some amount of congregate care for children and youth with specific treatment needs and that retaining sufficient congregate care capacity might be difficult. In Washington, with its already relatively low use of congregate care, some officials were concerned about retaining enough congregate care capacity to meet the needs of children and youth they thought would require some time in a group setting. One stakeholder noted that adjusting to an appropriate level of congregate care can be challenging, as congregate care providers generally need to be assured of a sufficient level of "beds filled" to continue their operations. He added that some providers have long-standing relationships with a state or county and have an interest in continuing their operations. This stakeholder said that, in his view, the number of "beds" or openings in a congregate care setting may have factored into the determination of where a child or youth is placed in some situations. In such cases, he noted, the supply of available beds may have driven the placement rather than the needs of the child. However, according to a few stakeholders, congregate care providers are beginning to diversify their services, which could include providing care in a group setting as well as supports and services in a family setting. Two congregate care providers told us that their business model had changed in recent years, from predominantly caring for children residing in their facilities to providing services to children in their foster or original homes, and also planning for service provision when a child or youth left congregate care. A few stakeholders we spoke with confirmed that providers are re-evaluating their relationships with the states as states are moving toward offering a continuum of services to help youth stay out of or transition out of congregate care as quickly as possible.
HHS Has Recently Taken Steps to Encourage States to Examine Their Use of Congregate Care, but Could Enhance Its Support to States HHS's Administration for Children and Families (ACF) recently took steps to examine how states were using congregate care and, as previously mentioned, issued a report in May 2015 to help inform states and policymakers about the use of congregate care for foster children. HHS officials told us that the report was their initial effort to understand congregate care as a placement option for foster children because the agency had not taken a national look at congregate care previously. In the report, HHS raised concerns about some of its findings—which we discussed earlier—about the use of congregate care for children aged 12 or younger and about placements of youth who do not appear to have high clinical needs and might be better served in appropriate family settings. In addition, while the report stated that the decline in the percentage of children placed in congregate care nationwide suggested that child welfare practice is moving toward more limited use of congregate care, it also noted that the depth of improvement is not consistent across states. In addition to its findings in the report, HHS included a legislative proposal in its fiscal year 2016 budget request to increase monitoring of congregate care use and support family-based care as an alternative to congregate care. More specifically, the proposal would, among other things, amend Title IV-E to require (1) documentation to justify the use of congregate care as the least restrictive setting to meet a child's needs, and (2) judicial review every 6 months while a child is in that placement to confirm that the placement remains the best option. It also would provide support for a specialized case management approach for caseworkers with reduced caseloads and specialized training for caseworkers and foster parents to address the needs of children. HHS estimated that these changes would increase costs in the first few years of the proposal going into effect, and that overall they would result in a reduction in the costs of Title IV-E foster care maintenance payments. More specifically, HHS estimated that this proposal would increase fiscal year 2016 funding by $78 million and reduce foster care maintenance costs by $69 million over 10 years. Based on our discussions with stakeholders, we identified other areas in which state efforts could benefit from additional HHS support, independent of the legislative proposal. One stakeholder noted that the information HHS currently collects does not focus on congregate care, and there is a wide variation in state experiences, which our review of AFCARS data and HHS's own May 2015 report confirm. However, without more information on states' efforts to reduce their use of congregate care, HHS is unable to fully understand states' activities in this area, including relevant changes in the states' use of congregate care and their effect on state child welfare systems. Although HHS conducted some initial research in its May 2015 report, HHS has the opportunity to further enhance its understanding of state efforts, for example, by leveraging its CFSR process, its AFCARS database, and future research activities. Internal control standards for the federal government call for agencies to have the information needed to understand program performance.
Similarly, stakeholders noted that, given the relative recency of some of the state efforts and the potential for unintended consequences, HHS's support in sharing best practices and providing technical assistance would be helpful to the states as they make changes to their systems. For example, consistent with the challenges we identified, states could benefit from HHS's assistance in the areas of increasing capacity for specialized foster family placements and working with congregate care providers to diversify their services. As an HHS study has noted, system changes in the child welfare area can be difficult and require leadership, stakeholder involvement, and capacity building, among other things, as well as time and sustained attention to succeed. In addition, our previous work has identified similar key practices that facilitate successful transformations, including leadership from the top, focus on and communication of key priorities, and monitoring progress, particularly because transformations may take a long time to complete. HHS officials told us they did not currently have plans to provide additional support for states related to congregate care, although with a new Associate Commissioner of Children, Youth, and Families in place as of August 2015, they may consider additional actions. Conclusion States' foster care systems are responsible for some of the most vulnerable children in the nation. This includes responsibility for placing children removed from their homes in the most family-like settings that meet their needs. The eight states we reviewed reflect the downward trend in the use of congregate care nationwide, which could be seen as a sign of progress in states' "right-sizing" of congregate care. At the same time, the wide variation in the percentage of foster children in congregate care among our eight—and all 50—states suggests that more progress could be made. HHS has taken an important first step by issuing its report on congregate care and recognizing that additional information is needed on how states use congregate care and what changes are appropriate. It is important that HHS continue to progress in its understanding of the national landscape of congregate care so that it can be better positioned to support states through their transitions. Significant changes in child welfare programs require thoughtful leadership, relevant information, and sustained attention. HHS's continued leadership and support will be needed, particularly by states facing challenges in developing alternatives to congregate care, to make progress nationwide. Recommendation for Executive Action We recommend that HHS take steps to enhance its support of state actions to reduce the use of congregate care as appropriate. These steps could include: collecting additional information on states' efforts to reduce their use of congregate care; and identifying and sharing best practices with the states and providing technical assistance that states could use to address challenges in the areas of building capacity for family placements, addressing shortages of needed services, improving assessments, and retaining sufficient numbers of congregate care providers, or other areas as needed. Agency Comments and Our Evaluation We provided a draft of this report to the Secretary of Health and Human Services for review and comment. HHS provided general comments that are reproduced in appendix II. HHS also provided technical comments, which we incorporated as appropriate.
HHS concurred with our recommendation, stating that it was consistent with its current approach for supporting states. HHS stated that federal law and policy make it clear that children who come into care should be placed in the least restrictive setting possible. However, it noted that states have the flexibility and discretion to make decisions for a child on a case-by-case basis to ensure that the best placement is made and the individual needs of the child are met. HHS also noted that to assist states in reducing their use of congregate care, the fiscal year 2016 President's budget request includes a proposal to amend Title IV-E to provide support and funding to promote family-based care for children with behavioral and mental health needs as well as provide oversight of congregate care placements, as we noted in the report. Additionally, HHS stated that it offers individualized technical assistance to help child welfare agencies build capacity and improve outcomes for children and families, and it has recently begun providing tailored services to two public child welfare agencies working to reduce their use of congregate care through a Title IV-E waiver demonstration program. HHS also stated it will continue to explore research opportunities as well as how to build state capacity for family placements. We encourage HHS to identify and take additional steps to assist states with reducing their use of congregate care. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to the appropriate congressional committees, the Secretary of Health and Human Services, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Types of Congregate Care Facilities Used by Four States We Visited Connecticut Acute Inpatient Psychiatric Hospital: Inpatient treatment at a general or psychiatric hospital; stabilization of psychiatric symptoms. Psychiatric Residential Treatment Facilities (PRTF): Community-based inpatient facility for children with treatment needs that require a structured 24-hour setting. Less restrictive than a hospital, but more restrictive than a residential treatment center. Residential Treatment Center: Integrated therapeutic services, education, and daily living with individually tailored treatment plans. Therapeutic Group Home: A small, four to six bed program in a neighborhood setting with intensive staffing and services. Preparing Adolescents for Self Sufficiency (PASS) Group Home: A 6-10 bed education program located in a neighborhood staffed with non-clinical paraprofessionals. Level 1 Non-Clinical Group Home: A 6-12 bed program in a neighborhood staffed with non-clinical paraprofessionals. These may have a special focus, such as a transitional living apartment program or a maternity program. Short Term and Respite Home: Homes provide temporary congregate care with a range of clinical and nursing services. Also used for respite. Safe Home: Temporary service providing 24-hour care for children.
To engage, stabilize, and assess each child, generate a level-of-care recommendation, and transition to an appropriate placement. Louisiana Psychiatric Residential Treatment Facilities (PRTF): Highest level of care for youth ages 8 to 17 with severe behavioral and emotional issues. Therapeutic Group Homes: Community-based care in a home-like setting, generally for children and youth. Homes are less restrictive than PRTF, have no more than eight beds, and are run under the supervision of a psychiatrist or psychologist. Non-Medical Group Homes: Generally serve older youth who are not able to be placed in a lower level of care and do not meet the eligibility requirements for higher-level care facilities. These homes have no more than 16 beds. Maryland Alternative Living Unit: Small homes (limited to three beds) that are specifically focused on children with developmental disabilities. Diagnostic Evaluation and Treatment Program: For children with significant needs, but the needs do not meet the requirements for placement in a residential treatment facility. Group Home (also known as Residential Child Care Facilities): Traditional group homes for children with low-end needs. Medically Fragile: Similar to an alternative living unit. Therapeutic Group Home/High Intensity: Homes with a lower staff-to-child ratio, on-call social workers, and on-site licensed mental health professionals. Washington Licensed Group Home: Typically stand-alone (6-8 bed) residential home programs in a community setting. There are a few settings where multiple programs and services are delivered on site. Licensed Staff Residential Home: Typically a smaller residential home of less than six beds. These homes are in community settings and have a rotating 24-hour staff. Appendix II: Comments from the Department of Health & Human Services Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Gale Harris (Assistant Director), Anjali Tekchandani (Analyst-in-Charge), and Vernette G. Shaw made significant contributions to this report. Also contributing significantly to this report were Sarah Cornetto, Kirsten Lauber, Amber Sinclair, Greg Whitney, and Charlie Willson.
About 14 percent of the more than 400,000 children in foster care nationwide lived in congregate care at the end of fiscal year 2013, according to Department of Health and Human Services (HHS) data. Given the importance of family-based care to foster children's well-being, GAO was asked to review state use of congregate care. This report examines (1) how selected states have reduced their use of congregate care and (2) challenges with reducing congregate care placements and the efforts HHS has taken to help states reduce congregate care. GAO analyzed child welfare data from HHS; reviewed relevant federal laws, regulations, and documents; and interviewed state child welfare officials in eight states—Connecticut, Colorado, Kansas, Louisiana, Maryland, Minnesota, New Jersey, and Washington. In four of these states, GAO also visited and spoke with local child welfare officials and congregate care providers. The selected states varied in their use of congregate care and geographic location, but their experiences cannot be generalized nationwide. GAO also spoke with child welfare experts. The eight states GAO reviewed had a variety of efforts under way to help ensure they placed foster children in family-based settings rather than in group homes or institutions, also known as congregate care. Federal law requires that foster children have a case plan designed to achieve placement in the least restrictive (most family-like) and most appropriate setting available, consistent with their needs. States' efforts to ensure appropriate placements included more oversight of decisions to place children in congregate care and the length of stay; enhanced recruiting and training for specialized foster families to care for children with serious emotional, behavioral, or medical problems; and increased supports for families in crisis. Officials in the eight states generally credited these efforts with declines in their use of congregate care—on average a 47 percent decline from fiscal years 2004 through 2013, based on the most recent available data from HHS. However, these states' percentages of foster children in congregate care still ranged from 5 percent to 34 percent, mirroring the variation nationwide in fiscal year 2013. Selected stakeholders (state officials, service providers, and experts) cited challenges to more appropriate use of congregate care, such as providing specialized training to foster families, addressing shortages in mental health and other community services, and working with congregate care providers to focus more on providing services in family settings. In a May 2015 report, HHS said that states' progress in reducing congregate care was inconsistent and recognized that additional information was needed. HHS also proposed some relevant legislative changes. Stakeholders identified other HHS actions, such as additional data analysis and sharing of best practices, that would help states facing challenges to transform their use of congregate care. HHS currently does not have plans to take further actions to support states.
GAO/NSIAD-96-17
Background Active-duty military personnel are not covered by Title VII of the Civil Rights Act of 1964, as amended, or the implementing governmentwide equal employment opportunity and affirmative action regulations and guidelines of the Equal Employment Opportunity Commission. However, the Secretary of Defense has established a separate equal opportunity program with similar requirements for these personnel. In 1969, the Secretary of Defense issued a Human Goals Charter that remains the basis for the Department of Defense's (DOD) equal opportunity program. It states that DOD is to strive to provide everyone in the military the opportunity to rise to as high a level of responsibility as possible based only on individual talent and diligence. The charter also states that DOD should strive to ensure that equal opportunity programs are an integral part of readiness and to make the military a model of equal opportunity for all, regardless of race, color, sex, religion, or national origin. To help ensure equal opportunity in the services, a 1988 DOD directive and related instruction require that the services prepare annual Military Equal Opportunity Assessments (MEOA). In preparing their MEOAs, the services collect, assess, and report racial and gender data in 10 categories. The Deputy Assistant Secretary of Defense for Equal Opportunity (DASD(EO)) is primarily responsible for monitoring the services' equal opportunity programs, including preparing written analyses of the services' MEOAs and a DOD summary. As recently as March 1994, the Secretary of Defense reaffirmed DOD's equal opportunity goals, stating that equal opportunity is a military and an economic necessity. While noting that DOD has been a leader in equal opportunity, the Secretary stated that it can and should do better. He initiated several measures, including a major DOD study looking at ways to improve the flow of minorities and women into the officer ranks from recruitment through high-level promotions. MEOAs Can Be Improved According to DASD(EO)'s Director of Military Equal Opportunity, MEOAs are the primary source of information for monitoring the services' equal opportunity programs. While MEOAs provide some useful information, the analyses of this information did not consistently identify and assess the significance of possible racial or gender disparities. In addition, data for 9 of the 10 MEOA reporting categories was reported inconsistently among the services. For the promotion and separation categories, some key data that would be helpful in understanding the progression of minorities and women through the ranks was not required to be reported. DOD Is Not Consistently Identifying the Significance of Possible Disparities In analyzing the outcomes of an organization's personnel actions for possible racial or gender disparities, Equal Employment Opportunity Commission guidance recommends using the racial and gender composition of the eligible pool as a basis for comparison. All other things being equal, the racial and gender makeup of persons selected for a particular action should—over time—reflect the racial and gender composition of the eligible pool. In other words, the likelihood or odds of a particular outcome occurring for a minority group should be about the same as for the majority or dominant group in the long run. When the actual odds are less and the difference is statistically significant, and patterns or trends are identified, further analysis would be necessary to determine the cause(s) of the disparity.
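To make this comparison concrete, the short sketch below computes the odds of selection for a minority group and for the dominant group from an eligible pool and tests whether the resulting odds ratio differs significantly from one. The counts are hypothetical, and the two-sided z-test on the log odds ratio shown here is one standard large-sample test; appendix I describes the odds ratio approach we used, but this particular test is offered only as an illustration.

```python
# Minimal sketch of an odds-ratio disparity check (hypothetical counts).
from math import log, sqrt
from statistics import NormalDist

# Hypothetical eligible pool and selections for one personnel action.
minority_selected, minority_eligible = 40, 1_000
dominant_selected, dominant_eligible = 900, 15_000

# Odds = number selected divided by number not selected.
odds_minority = minority_selected / (minority_eligible - minority_selected)
odds_dominant = dominant_selected / (dominant_eligible - dominant_selected)
odds_ratio = odds_minority / odds_dominant  # 1.0 means equal odds

# Large-sample z-test on the log odds ratio (one common choice of test).
a, b = minority_selected, minority_eligible - minority_selected
c, d = dominant_selected, dominant_eligible - dominant_selected
se_log_or = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
z = log(odds_ratio) / se_log_or
p_value = 2 * (1 - NormalDist().cdf(abs(z)))

print(f"odds ratio = {odds_ratio:.2f}, z = {z:.2f}, p = {p_value:.4f}")
print("statistically significant at the 5 percent level" if p_value < 0.05
      else "not statistically significant at the 5 percent level")
```

With these hypothetical counts, the minority group's odds of selection are about two-thirds of the dominant group's, and the p-value falls below 5 percent, so the disparity would be flagged for further analysis rather than attributed to random chance.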
Seven of the 10 MEOA reporting categories lend themselves to comparing the odds of a minority group member being selected to the odds of a dominant group member being selected. However, the DOD directive and the related instruction do not require such an analysis, and none was done by the services. The services did make some comparisons to the group average; that is, they compared a minority group selection rate to the overall selection rate for all groups (minority and majority). But because the minority group was usually so small compared to the total group, disparities in the minority group selection rate compared to the overall group rate often were not detected or appeared insignificant. Also, this approach is not helpful in identifying trends or patterns. Statistical significance testing can provide a basis to determine if a disparity in the odds of being selected for a minority group compared to the odds of the majority group is due to random chance. Statistical significance testing, over time, can also assist in identifying trends or patterns in equal opportunity data that may warrant further analysis. In the fiscal year 1993 MEOAs (the latest available), only the Army routinely reported statistical significance testing results. The Marine Corps and the Navy reported some statistical significance testing. The Air Force did not report any statistical significance testing. While the DOD instruction on preparing MEOAs encourages the use of statistical significance testing, its use is not required, and instructions on how to conduct such tests are not provided. All four of the officials responsible for preparing the MEOAs for their respective service said they did not have prior experience in analyzing equal opportunity data and that DOD’s instruction was not particularly helpful. Services Reported MEOA Data Differently In analyzing the services’ 1993 MEOAs, we found that the MEOA reporting requirements were addressed differently by one or more of the services in 9 of the 10 categories. Only the promotion category appeared to be consistently reported. In most instances, definitions and interpretations of what is called for were not consistent among the services. In some cases, one or more of the services did not comply with the DOD instruction. Following are examples of some of the inconsistencies we found: The Army specifically reported accessions for its professional branches, such as legal, chaplain, and medical. The other services did not. The Air Force, the Army, and the Navy reported on officers who had been separated involuntarily. But the Army did not separately report officers who had been separated under other than honorable conditions or for bad conduct. The Marine Corps did not report any separation data for officers. The Air Force, the Army, and the Navy provided enlisted and officer assignment data by race and gender. The Marine Corps combined into one figure its data on selections to career-enhancing assignments for its O-2 through O-6 officers for each racial and gender category and did not provide any information on its enlisted members. The Air Force, the Marine Corps, and the Navy reported discrimination or sexual harassment complaints by race and gender. The Army did not identify complainants by race and gender. The Army reported utilization of skills data by each racial category and for women. The Air Force reported skills data for blacks, Hispanics, and women. 
The Marine Corps and the Navy combined the racial categories into one figure for each skill reported and did not report on women. The Air Force and the Army included officers in their reports on discipline. The Marine Corps and the Navy did not. Certain Useful Data Is Not Required for Two MEOA Categories Two important factors in analyzing the progression of minorities and women in the services are how competitive they are for promotions and whether they are leaving the services at disproportionate rates. These factors have been of concern in the officer ranks. In March 1994, the Secretary of Defense directed that a study of the officer “pipeline” be conducted. This study is still underway but is addressing ways to improve the flow of minorities and women through the officer ranks. Although DOD’s MEOA guidance requires reporting on promotions and separations, it does not require the services to report racial and gender data for all promotions or voluntary separations. The guidance requires the services to report racial and gender data in their MEOAs for promotions that result from a centralized servicewide selection process. For enlisted members, this includes promotions to E-7, E-8, and E-9; for officers, this includes promotions to O-4, O-5, and O-6. For the most part, promotions at the lower ranks are not routinely assessed. In addition, the MEOA data for officers in each of the services and enlisted members in the Marine Corps is limited to those promotions that occurred “in the zone.” We noted that about 900, or about 8 percent, of the services’ officer promotions and about 500, or about 19 percent, of Marine Corps enlisted promotions in fiscal year 1993 were not reported and were from either below or above the zone. Without routinely assessing promotions in the lower ranks and in each of the promotion zones for possible racial or gender disparities, the services’ ability to identify areas warranting further analysis is limited. The services are also required to report in their MEOAs racial and gender data on involuntary separations, such as for reduction in force or medical reasons, but are not required to report on the great majority of separations that are for voluntary reasons. In fiscal year 1993, about 163,500 enlisted members and about 16,400 officers voluntarily left the services for reasons other than retirement. Analyzing this data for racial or gender disparities could increase the services’ understanding of who is leaving the services and help focus their efforts in determining why. DASD(EO) Has Not Analyzed the MEOAs DASD(EO) and his predecessors have not provided the services with analyses of their MEOAs and have prepared a DOD summary only on 1990 data, even though both have been required annually since fiscal year 1988. Although one Marine Corps official recalled receiving the summary, she said that it was not helpful or constructive. In addition, some of the service officials responsible for their service’s MEOAs said the assessments were done primarily to satisfy the DOD requirement. They noted that, except for the promotion category, MEOAs generally received little attention outside the services’ equal opportunity offices. Although DASD(EO) acknowledges these problems, they continue. The DOD instruction calls for the services to submit their MEOAs for the prior fiscal year by February 1 each year and for DASD(EO) to complete its analyses within 90 days. The 1993 MEOAs were not all received by DASD(EO) until May 1994. 
As of the end of June 1995, DASD(EO) had not provided its 1993 MEOA analyses to the services, and the 1994 MEOAs had not been completed by all the services. Analysis Shows Some Statistically Significant Disparities To identify possible disparities, we analyzed three MEOA categories—accessions, assignments, and promotions—for fiscal years 1989 through 1993. We compared each minority group—American Indian, Asian, black, and Hispanic—to the dominant white group and compared females to males. The analytical approach we used is one of several methods for analyzing and identifying trends in equal opportunity data. It compares the odds of selection from a particular racial or gender group to the odds of selection from the dominant group for a particular outcome. Used as a managerial tool, this methodology is especially well suited to analyzing various outcomes for racial and gender groups of very different sizes and selection rates. Appendix I contains a more detailed explanation of our methodology, including our rationale for using this approach rather than alternative approaches. Our analysis showed some racial or gender disparities, although the number of disparities varied considerably among the MEOA categories, across the services, and by race and gender. Appendix II presents our detailed results. Conclusions about DOD's personnel management practices cannot be based solely on the existence of statistically significant disparities. Further analysis would be necessary to determine why the disparities occurred. Certain job criteria or selection procedures may have an adverse impact on one or more groups, but if the criteria or procedures can be shown to accurately measure required job skills, the impact could be warranted. Additionally, a group's social characteristics may lead to disparities; for example, a group's low interest or propensity to serve in the military could help explain its lower odds of entering the services. Accessions MEOAs did not report information on the eligible pools for accessions. At the suggestion of the DOD Office of Accession Policy, we used certain data from the Defense Manpower Data Center for the eligible pools. For enlisted accessions, we used the gender and racial makeup of persons who had taken the Armed Forces Qualification Test. This meant the individual had expressed interest in the military and had taken the time and effort to take the initial tests for entrance into the services. Because comparable eligible pool data for officers was not available, the DOD Office of Accession Policy suggested we use civilian labor force data for college graduates between 21 and 35 years old as the eligible pool. This data provides a comparison to the overall racial and gender composition of this portion of the U.S. population but does not account for an individual's interest or propensity to serve in the military, which may vary by race and gender. Using these eligible pools, we found statistically significant racial and gender disparities that may warrant further analysis. For example, in all the services, Asians had statistically significantly lower odds of entering as either an enlisted member or officer in nearly all the years examined; the odds of blacks and Hispanics entering the Air Force as either an enlisted member or officer were statistically significantly lower than those for whites in most of the years we examined; and in the Army, Hispanics had statistically significantly lower odds than whites of entering the officer corps.
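A screening of this kind is typically repeated for each group and each fiscal year to surface the patterns or trends noted earlier. The sketch below loops over hypothetical yearly accession and eligible-pool counts, computes each group's odds ratio relative to the dominant group, and flags the statistically significant results; the counts, group labels, and the log-odds-ratio z-test are illustrative assumptions, not the actual data or the exact procedure used in our analysis.

```python
# Sketch: flag statistically significant odds ratios across groups and years
# (all counts are hypothetical).
from math import log, sqrt
from statistics import NormalDist

def log_or_test(sel_g, pool_g, sel_ref, pool_ref):
    """Odds ratio of a group vs. the reference group and a two-sided p-value."""
    a, b = sel_g, pool_g - sel_g
    c, d = sel_ref, pool_ref - sel_ref
    odds_ratio = (a / b) / (c / d)
    se = sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    p = 2 * (1 - NormalDist().cdf(abs(log(odds_ratio) / se)))
    return odds_ratio, p

# data[year][group] = (accessions, eligible pool); "white" is the reference group.
data = {
    1992: {"white": (9_000, 60_000), "black": (1_500, 12_000), "asian": (120, 1_500)},
    1993: {"white": (8_500, 58_000), "black": (1_400, 12_500), "asian": (110, 1_600)},
}

for year, groups in data.items():
    sel_ref, pool_ref = groups["white"]
    for group, (sel, pool) in groups.items():
        if group == "white":
            continue
        odds_ratio, p = log_or_test(sel, pool, sel_ref, pool_ref)
        flag = "SIGNIFICANT" if p < 0.05 else "not significant"
        print(f"{year} {group:>6}: odds ratio {odds_ratio:.2f} vs. whites, p={p:.3f} ({flag})")
```

Repeating the same test year after year makes it easier to see whether a flagged disparity is an isolated result or part of a pattern that warrants further analysis.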
Assignments For the eligible pool for career-enhancing assignments, we used the numbers of enlisted members and officers eligible for such assignments reported in each of the services' MEOAs. In the three services we examined, we found that the odds of enlisted and officer minorities being selected for these assignments were not statistically significantly different from those of whites in most instances. An exception, however, was Asian officers in the Navy. As a group, they had statistically significantly lower odds than whites of being selected for most assignments. In addition, the odds of Air Force and Navy women officers being selected for many of the assignments in the years we examined were statistically significantly lower than the odds of selection for their male counterparts. Promotions As with assignments, we used the eligible pool data for promotions reported in the services' MEOAs. In about 37 percent of the enlisted (E-7, E-8, and E-9) and officer (O-4, O-5, and O-6) promotion boards we examined, one or more minority groups had statistically significantly lower odds of being promoted than whites. We found statistically significantly lower odds of minorities being promoted compared to whites most often (1) for blacks, (2) at the E-7 and O-4 levels, and (3) in the Air Force. On the other hand, the odds of females being promoted were not statistically significantly different or were greater than the odds for males in nearly all the enlisted and officer boards we examined. Recommendations To help make the services' MEOAs more useful in monitoring the services' equal opportunity programs, we recommend that the Secretary of Defense direct DASD(EO) to do the following: Devise methodologies for analyzing MEOA data that would more readily identify possible racial and gender disparities than current methods permit and establish criteria for determining when disparities warrant more in-depth analyses. The Secretary may wish to consider the methodology we used in this report, but other methods are available and may suit the purposes of MEOAs. Ensure that the services (1) use comparable definitions and interpretations in addressing the MEOA categories and (2) provide complete information for each of the MEOA categories. Prepare the analyses of the services' annual MEOAs and the DOD summary, as required. Agency Comments and Our Evaluation In commenting on a draft of this report, DOD concurred with the report and stated that it has already initiated several efforts to make the recommended improvements. DOD's comments are reproduced in appendix III. Scope and Methodology To evaluate whether MEOAs provided DASD(EO) with sufficient information to effectively monitor the services' equal opportunity programs, we reviewed the services' MEOAs for fiscal years 1989 through 1993. In addition, we analyzed the services' fiscal year 1993 MEOAs—the latest available at the time of our review—for reporting completeness and consistency. We reviewed the DOD directive and instruction governing the military's equal opportunity program. We discussed preparation of MEOAs with cognizant officials in the services and DASD(EO)'s Office of Military Equal Opportunity. To determine whether possible racial or gender disparities in selection rates existed, we analyzed military accessions, assignments, and promotions for active-duty enlisted members and officers. We chose to analyze these categories because relatively large numbers of servicemembers were involved and, for the most part, the necessary data was readily available.
For accessions, we used data from the Defense Manpower Data Center. For assignments and promotions, we used data from the services’ MEOAs. We did not independently verify the accuracy of the data. We performed our review from January 1994 to April 1995 in accordance with generally accepted government auditing standards. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Armed Services Committee and the Senate and House Committees on Appropriations; the Chairman, House Committee on National Security; the Secretaries of Defense, the Air Force, the Army, and the Navy; and the Commandant of the Marine Corps. Copies will also be made available to others upon request. Please contact me at (202) 512-5140 if you or your staff have any questions concerning this report. Major contributors to this report are listed in appendix IV. Odds Ratio Methodology The Equal Employment Opportunity Commission has established policies and procedures for federal agencies to collect and analyze data on civilian personnel actions such as hiring, assignments, and promotions to determine whether selection procedures adversely affect any race, sex, or ethnic group. Although these policies and procedures do not apply to active-duty military personnel, the Department of Defense (DOD) directive and instruction related to its military equal opportunity program set forth similar requirements. We chose not to use the “four-fifths” rule described in the Commission’s guidance for determining whether adverse impact may have occurred. As pointed out by the Commission, the four-fifths rule is a “rule of thumb” and has limitations. For example, when the relevant groups are very large—as in the military—differences in the ratio of the two selection rates greater than four-fifths may be statistically significant; that is, areas of possible adverse impact may not be detected if just the four-fifths rule is used. Therefore, to determine whether possible racial or gender disparities existed in the military services’ personnel actions that we examined, we used an “odds ratio” methodology. This methodology is especially well suited to analyzing various outcomes for racial and gender groups of very different sizes and selection rates. Use of this methodology also enabled us to do analyses that are more sensitive to changes in the relative numbers of women and minorities than the more traditional method, which compares selection rates (the number selected divided by the total number eligible). The odds of a particular group member being selected for an outcome is determined by dividing the number of individuals selected by the number not selected. An “odds ratio” is the odds of one group member being selected divided by the odds of another group member being selected for that same outcome. If the odds of being selected for both group members are equal, the ratio will be one. When the ratio is not equal to one, the methodology allows us to determine whether the difference is statistically significant, that is, whether it is likely due to random chance or not. For purposes of this report, we use the term statistically significant to denote those instances where the likelihood of the outcome having occurred randomly is less than 5 percent. Reducing the Number of Calculations The odds ratio methodology is relatively straightforward but can involve a large number of calculations and comparisons. 
If we had calculated odds ratios for each racial and gender group for each personnel action outcome in the three Military Equal Opportunity Assessment (MEOA) categories we examined—accessions, career-enhancing assignments, and promotions—almost 3,000 odds ratios would have been needed. Instead of performing all these calculations, we used “modeling” techniques to determine how race and gender affected the reported outcomes for the three sets of data. Once we understood the effect race and gender had on the outcomes, we had to calculate and analyze only the odds ratios that significantly affected the actual outcomes. For each personnel action, we considered five different models, as follows: Model one assumed that race and gender had no effect on the outcome of accessions, assignments, or promotions. Model two assumed that only gender had an effect—that is, all racial groups would have equal odds of being selected for the outcome, but males and females would not. Model three assumed just the opposite—males and females would have equal odds of being selected, but the racial groups would not. Model four assumed that both race and gender affect the odds of selection independently of one another. In other words, the odds ratios indicating the difference between males and females in one racial group would be the same as the corresponding ratios in the other groups. Model five assumed that both race and gender had an effect and that the two factors operated jointly. That is, the odds ratios describing racial differences varied by gender, and the odds ratios describing gender differences varied by racial group. Determining which model to use required two steps. First, using statistical software, we created a hypothetical database for each model essentially identical to the actual data but modified to reflect the assumptions we made. For example, the hypothetical database created for the third model assumed that the odds of males and females being selected would be equal (that is, the odds ratio would be 1.0). Second, the hypothetical odds ratios were compared to the actual odds ratios for each of the personnel actions. If there were significant differences, we rejected the model’s assumptions. In virtually all instances, model four was the most appropriate and preferred way to present the results. Its overall results were not significantly improved upon by any of the other models. This meant that for the personnel actions we analyzed, we only needed to calculate the odds ratios for each racial and gender group compared to whites and males, respectively (see app. II). We did not have to calculate the odds ratios for males and females within each racial group because, according to the model, the gender difference was the same across racial groups. Results of Racial and Gender Disparity Analysis This appendix presents the odds ratios we calculated for each of the three MEOA categories we examined—accessions, assignments, and promotions. Some ratios are much less than 1 (less than three one-thousandths, for example) or much greater (over 16,000, for example). Such extremes occurred when the percentage of persons selected from a small-sized group was proportionately very low or very high compared to the percentage selected from the dominant group. Our tests of statistical significance, however, took group size into account. Therefore, although many odds ratios were less than one (some much less), the disparity was not necessarily statistically significant. 
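The point that an extreme odds ratio from a very small group need not be statistically significant can be illustrated with a short sketch. The counts below are hypothetical, and Fisher's exact test is used simply as one standard test for a 2x2 table; it is not necessarily the test used for the analyses in this report.

```python
from scipy.stats import fisher_exact

# Hypothetical small group: 1 of 6 members selected, versus 500 of 1,000
# members of the dominant group selected.
table = [[1, 5],       # small group: selected, not selected
         [500, 500]]   # dominant group: selected, not selected

odds_ratio, p_value = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.2f}")
# The odds ratio is far below one (0.20), but with only six people in the
# small group the p-value is well above 0.05, so the disparity would not be
# treated as statistically significant.
```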
In the tables in this appendix, we have shaded the odds ratios that indicate possible adverse impact; that is, the ratios are less than one and statistically significant. A more in-depth analysis would be warranted to determine the cause(s) of these disparities. As discussed in appendix I, we compared the odds for females with those for males and the odds of minority racial groups with those for whites. To help the reader remember the relationships in our tables, we have labeled the top of each column listing odds ratios with the gender or racial group and symbols of what the proper comparison is. For example, F:M means the ratio compares the odds of females to males and B:W means the ratio compares the odds of blacks to whites for the particular outcome being analyzed. The odds ratios can also be used to make certain comparisons within and among the services and identify trends whether they are statistically significant or not. If the objective, for example, is to increase the representation of a particular minority group vis-a-vis whites, the odds ratio should be greater than one. When it is not, it means whites are being selected in proportionately greater numbers than the minority group. Accessions Tables II.1 and II.2 present the odds ratios for enlisted and officer accessions, respectively. We compared gender and racial data for those entering the military to the gender and racial composition of selected eligible pools. In determining what to use for the eligible pool, we conferred with officials in DOD’s Office of Accession Policy. For the enlisted member eligible pool, we used those men and women who had taken the Armed Forces Qualification Test and scored in the top three mental categories during the respective fiscal year. These were generally high school graduates who had been initially screened by the recruiter for certain disqualifying factors such as a criminal record or obvious physical disabilities. Using test takers as the eligible pool also took into account the propensity to serve in the military, since the men and women taking the test had to make the time and effort to do so. Moreover, this data was readily available from the Defense Manpower Data Center. For officers, determining a relevant eligible pool was not as precise. Officers primarily come from Reserve Officers’ Training Corps programs, officer candidate schools, and the military academies, but no information was reported on the racial and gender makeup of the programs’ applicants in the services’ MEOAs, nor was it available from the Defense Manpower Data Center. At the suggestion of DOD’s Office of Accession Policy, we used national civilian labor force gender and racial statistics for college graduates 21 to 35 years old as the eligible pool. This data was readily available from the Defense Manpower Data Center, and nearly all officers have college bachelor’s degrees and are in this age group when they enter the service. We could not account for an individual’s propensity or desire to serve as a military officer using civilian labor force data. While our analyses highlight those racial groups that entered the services’ officer corps at lower rates or odds compared to whites based on their representation in the civilian labor force, further analyses would be necessary to determine why this occurred. In both tables we present the odds ratios for females compared to males. 
In each of the 5 years we reviewed and across the services, the odds of women entering the services were statistically significantly lower than for men. This fact is not surprising considering that women's roles in the military are limited and they may, as a group, have less interest or propensity to serve in the military than men. Even in recent years when the restrictions have been loosened, the services have not reported accessing more than about 14 percent of women for the enlisted ranks and about 19 percent for the officer ranks, compared to over 50 percent representation in the civilian labor force. Nevertheless, we present the data to illustrate the disparities among the services. For example, in fiscal year 1993, the odds of women in our eligible pool entering the Marine Corps as officers were less than one-tenth the odds for men. In contrast, for the same year, the odds of women entering the Air Force as officers were about one-third the odds for men. Shaded areas in tables II.1 and II.2 indicate ratios that are less than one and statistically significant. Tables II.3 through II.6 present the odds ratios for enlisted and officer career-enhancing assignments as identified by the services in their respective MEOAs. For the gender and racial makeup of the eligible pools and of who was selected, we used data reported in the MEOAs. As previously noted, the Marine Corps data for officer assignments is an accumulation of all its officers in the ranks O-2 through O-6. Although we calculated the odds ratios for this data and they are presented in table II.5 (Odds Ratios for Marine Corps Officer Career-Enhancing Assignments, Fiscal Years 1989-93), more detailed analysis by more specific assignments may be appropriate before any conclusions are drawn. In addition, the Marine Corps did not report any assignment data for its enlisted personnel. For several of the assignments, the MEOA data was insufficient for our analysis; these instances are indicated as "no data." In others, no minority candidates were in the eligible pool, and these instances are indicated as "none" in the appropriate odds ratio column. Finally, in the Navy, combat exclusion laws prohibit women from serving aboard submarines, and this is so noted in the chief of the boat assignment for E-9s. Shaded areas in tables II.3 through II.6 indicate ratios that are less than one and statistically significant. Tables II.7 and II.8 present the odds ratios for enlisted and officer promotion boards, respectively, for each of the services. For the gender and racial makeup of the eligible pools and of who was selected, we used data reported in the MEOAs. In several instances, no promotion boards were held, or data was not reported in the service's MEOA for a particular rank, service, and year; these are noted as appropriate.
In other instances, no minority group candidates were in the eligible pool for promotion to a particular rank; we have indicated these as "none" in the appropriate ratio column. Shaded areas in table II.7 and in table II.8 (Odds Ratios for the Services' Officer Promotion Boards, Fiscal Years 1989-93) indicate ratios that are less than one and statistically significant. Comments From the Department of Defense Major Contributors to This Report National Security and International Affairs Division, Washington, D.C. General Government Division, Washington, D.C. Douglas M. Sloane, Social Science Analyst
Pursuant to a congressional request, GAO reviewed the services' Military Equal Opportunity Assessments (MEOA) to determine whether certain active-duty personnel data reflect racial and gender disparities within the services. GAO found that: (1) MEOA do not consistently identify and assess the significance of possible racial and gender disparities within the services; (2) MEOA categories are reported differently by each service, since the services are not required to report data on low rank promotions and voluntary separations; (3) the Deputy Assistant Secretary of Defense for Equal Opportunity (DASD EO) has not analyzed the services' MEOA or Department of Defense (DOD) summaries for fiscal year 1993; (4) there are significant statistical disparities in the number of minorities considered for accessioning, career enhancement, and promotion; and (5) some of these disparities can be attributed to job-related and societal factors.
GAO_GAO-04-344
Background The Office of the Under Secretary of Defense for Intelligence (OUSD (I)) was created in 2002 with the passage of the Bob Stump National Defense Authorization Act for Fiscal Year 2003. Among the responsibilities of OUSD (I) are the coordination and implementation of DOD policy for access to classified information. At the time of our earlier review on the clearance process, these responsibilities belonged to the Office of the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence (OASD (C3I)). Classified information is categorized into three levels—top secret, secret, and confidential—to denote the degree of protection required for information according to the amount of damage that unauthorized disclosure could reasonably be expected to cause to national defense or foreign relations. The degree of expected damage that unauthorized disclosure could reasonably be expected to cause is “exceptionally grave damage” for top secret information, “serious damage” for secret information, and “damage” for confidential information. To retain access to classified information, individuals must periodically go through the security clearance process. The time frames for reinvestigation are every 5 years for top secret, 10 years for secret, and 15 years for confidential. DOD’s personnel security clearance process has three stages: preinvestigation, which includes determining if a requirement for access exists and submitting an investigation request; the actual personnel security investigation; and adjudication, a determination of eligibility for access to classified information (see fig. 1). Since 1997, all federal agencies have been subject to a common set of personnel security investigative standards and adjudicative guidelines for determining whether service members, government employees, industry personnel, and others are eligible to receive a security clearance. In 1998, DOD formally incorporated these standards and guidelines into its regulations governing access to classified information. The security officer begins the preinvestigation stage of the clearance process by determining whether a position requires access to classified information. If so, the current or future job incumbent completes a personnel security questionnaire that asks for detailed information about a wide range of issues. The impetus for an investigation request could be a need to (1) appoint, enlist, or induct an individual into the military; (2) staff a new program or contract with an individual who has a clearance; (3) replace a cleared job incumbent with someone else; (4) raise an existing clearance to a higher level; or (5) reinvestigate a previously cleared job incumbent whose clearance is due for reinvestigation. In the investigation stage, investigative staff members seek information pertaining to the subject’s loyalty, character, reliability, trustworthiness, honesty, and financial responsibility. The level of clearance is the primary determinant of the types and sources of information gathered. For example, an investigation for a top secret clearance requires much more information than does the type of investigation required to determine eligibility for either a secret or confidential clearance. 
The types or sources of information might include an interview with the subject of the investigation, national agency checks (e.g., Federal Bureau of Investigation and immigration records), local agency checks (e.g., municipal police and court records), financial checks, birth date and place, citizenship, education, employment, public records for information such as bankruptcy or divorce, and interviews with references. In the adjudication stage of the security clearance process, government employees in 10 DOD central adjudication facilities use the information gathered at the investigation stage to approve, deny, or revoke eligibility to access classified information. Once adjudicated, the security clearance is then issued up to the appropriate eligibility level, or alternative actions are taken if eligibility is denied or revoked. DOD Unable to Estimate the Size of Its Clearance Backlog DOD did not know the size of its personnel security clearance backlog and has not estimated the size of the backlog since January 2000. DOD was unable to estimate the size of its backlog for overdue reinvestigations that have not yet been submitted, but our estimates for overdue submitted investigation requests and overdue adjudications were roughly 270,000 and 90,000 cases, respectively, at the end of September 2003. These estimates are not based on a consistent set of DOD-wide definitions and measures; instead, the time limits for defining and measuring the backlog varied from agency to agency. DOD Unable to Estimate the Number of Overdue Reinvestigations Not Yet Submitted DOD could not estimate the number of personnel who had not requested a reinvestigation, even though their clearances exceeded the governmentwide time frames for reinvestigation (see fig. 2). As we mentioned earlier, the governmentwide time frames for renewing clearances are 5, 10, or 15 years depending on an individual's clearance level. We, therefore, defined this portion of the backlog as any request for reinvestigation that had not been submitted within those time frames. In our 2000 report, we indicated that DOD estimated its overdue but-not-submitted reinvestigation backlog at 300,000 cases in 1986 and 500,000 cases in 2000. Our 2000 report also noted that the 500,000-case backlog estimate was of questionable reliability because of the ad hoc methods used to derive it. Between 2000 and 2002, DOD took a number of steps to reduce this backlog, including mandating the submission of requests and requiring senior service officials to provide monthly submission progress reports. On February 22, 2002, DOD concluded this backlog reduction effort by issuing an OASD (C3I) memorandum directing that “By September 30, 2002, if a clearance is not based upon a current or pending investigation, or if the position does not support a requirement for a clearance, the clearance must be administratively terminated or downgraded without prejudice to the individual.” DOD is unable to show that the overdue but-not-submitted reinvestigations backlog was eliminated by these actions. Roughly 270,000 Submitted Requests for Investigations Overdue for Completion At the end of September 2003, the investigative portion of the backlog consisted of roughly 270,000 submitted requests for either reinvestigation or initial investigation that had not been completed within a prescribed amount of time. We calculated this estimate from information provided in response to the data requests that we made to DSS and OPM.
This number represents an estimated 163,000 cases at DSS and 107,000 cases contracted to OPM that had not been completed within the time limits. In our August 2000 report, DOD stated that a vast majority of 94,000 submitted requests for reinvestigation were overdue for completion, and those cases were not part of DOD's estimate of 500,000 overdue but-not-submitted reinvestigations discussed in the prior section. At that time, DOD had not included either submitted reinvestigations or initial investigations that exceeded specified time limits as part of the DOD-wide backlog. An estimate of the initial investigations exceeding the time limit was not a focus of that work. The existence of varying sets of time limits for completing investigations makes it difficult to develop accurate estimates of the size of DOD's investigative backlog. DSS's performance goals are 120 days for a periodic reinvestigation for a top secret clearance, 90 days for an initial top secret clearance, and 75 days for either a secret or confidential clearance being issued initially. In addition, some requests for investigations receive priority over other requests. OPM has timeliness categories that DOD and other agencies use to request various types of investigations. The timeliness categories are 35 days for priority investigations, 75 days for accelerated investigations, 120 days for standard investigations, and 180 days for extended service agreements. The lack of a standard set of time limits is a long-standing problem. In 1994, the Joint Security Commission reported on this issue, and among other things (1) found there was no performance standard for timeliness in completing investigations and adjudications, (2) stated it repeatedly heard from the customer community that 90 days is an appropriate standard for completing an average investigation and adjudication, and (3) recommended that “standard measurable objectives be established to assess the timeliness and quality of investigations, adjudications, and administrative processes and appeals performed by all such organizations within DOD and the Intelligence Community.” OPM's issuance of closed pending cases—investigations sent to adjudication facilities without one or more types of source data—presents another ambiguity in defining and accurately estimating the backlog. In our October 1999 report, we found that DSS had similarly delivered incomplete investigations to DOD adjudicators. After we recommended that DOD adjudication facility officials grant clearances only when all essential investigative work has been done, DSS monitored the cases returned from the adjudication facilities and identified reasons for the returns. Overall, about 10 percent of the 283,480 DOD cases fully closed by OPM in fiscal year 2002 were initially delivered to central adjudication facilities as closed pending cases. When measuring the timeliness of its contractors' performance, OPM defines completed investigations as cases that (1) have the complete information required for the type of investigation, (2) are closed pending, or (3) have been discontinued. If the investigations have not been fully completed within OPM-contracted time limits, we believe that closed pending cases should be included in the investigative portion of the backlog.
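The sensitivity of backlog counts to the chosen time limits can be sketched briefly. The pending cases below are hypothetical; the day limits are the DSS performance goals and OPM's standard timeliness category described above, and closed pending cases are not excluded from the count, consistent with the position stated above.

```python
# Days allowed to complete an investigation under two sets of time limits
# described above: DSS's performance goals and OPM's "standard" category.
DSS_GOALS = {
    "initial top secret": 90,
    "top secret reinvestigation": 120,
    "initial secret or confidential": 75,
}
OPM_STANDARD_DAYS = 120

# Hypothetical pending cases: (investigation type, days since submission,
# delivered as closed pending?). Closed pending cases are not excluded.
pending_cases = [
    ("initial top secret", 140, False),
    ("initial secret or confidential", 80, False),
    ("top secret reinvestigation", 110, True),
]

def overdue_count(cases, limit_for_type):
    """Count cases that exceed the applicable completion time limit."""
    return sum(1 for kind, days, _closed_pending in cases
               if days > limit_for_type(kind))

print("Overdue under DSS goals:   ",
      overdue_count(pending_cases, lambda kind: DSS_GOALS[kind]))    # 2
print("Overdue under OPM standard:",
      overdue_count(pending_cases, lambda kind: OPM_STANDARD_DAYS))  # 1
# The same three cases yield different backlog counts depending on which
# set of time limits is applied.
```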
DOD-Wide Estimate of Adjudicative Backlog Exceeds Roughly 90,000 Cases Central adjudication facilities’ responses to our request for adjudicative backlog estimates as of September 30, 2003, indicated that roughly 90,000 completed investigations had not been adjudicated within prescribed time limits (see table 1). Differences in the sizes of the backlog at the various central adjudication facilities are due to a combination of factors. For example, the military service departments generally perform more adjudications than do DOD agencies; some facilities have increased their staffing of government employees to decrease the backlog; and some facilities have contracted for support services to decrease the backlog. We later discuss the large number of requests that have resulted over the last few years. DOD officials attributed this extra adjudicative workload to, among other things, increased operations related to the war on terrorism. An ambiguous picture of the adjudicative backlog size is present because the central adjudication facilities use different time limits to define when cases become part of the backlog. Applying the backlog criteria of one central adjudication facility to the completed investigations waiting adjudication at another facility could increase or decrease the estimated size of the DOD-wide adjudicative backlog. For instance, the Defense Industrial Security Clearance Office’s goals for completing adjudications are 3 days for initial investigations and 30 days for periodic reinvestigations, and any cases exceeding these amounts are considered a backlog. In contrast, the Defense Office of Hearings and Appeals’ goal is to maintain a steady workload of adjudicating 2,150 cases per month within 30 days of receipt, and it considers a backlog to exist when the number of cases on hand exceeds its normal workload. Thus, if the Defense Industrial Security Clearance Office’s stricter time limit were applied to the initial investigations awaiting adjudication at the Defense Office of Hearings and Appeals, the latter office’s backlog would be larger than that currently reported. Multiple Impediments Slow DOD’s Progress in Eliminating Its Backlog and Generating Accurate Backlog Estimates We have identified four major impediments that have slowed DOD’s progress in eliminating its clearance backlog and two impediments that have hindered its ability to produce accurate backlog estimates. Large Number of Clearance Requests, Limited Staffing, Existing Backlog, and No Strategic Plan for Information- Access Problems Slow DOD’s Efforts to Eliminate the Backlog In our review of documents and discussions with officials from DOD, OPM, industry associations, and investigator contractors, we identified four major impediments that have hampered DOD’s ability to eliminate its current security clearance backlog. These are: (1) the large number of new requests for clearances, (2) inadequate investigator and adjudicator workforces, (3) the mere size of the existing backlog, and (4) the lack of a strategic plan for overcoming problems by government and contractor investigators in gaining access to information from state, local, and overseas sources. Large Number of Requests for Clearances The large number of requests for security clearances hinders DOD’s efforts to draw down the number of cases in its current clearance backlog. In fiscal year 2003, the Secretary of Defense annual report to the President and Congress noted that defense organizations annually request more than 1 million security checks. 
These checks include investigations that are part of the personnel security clearance process as well as other investigations such as those used to screen some new recruits entering the military. Other federal agencies are also requesting a growing number of background investigations, according to OPM. In our November 2003 report on aviation security, we noted that OPM had stated that (1) it had received an unprecedented number of requests for background investigations governmentwide since September 2001 and (2) the large number of requests was the primary reason for delayed clearance processing. Historically, almost all of DOD's requests for investigations were submitted to DSS. Starting in 1999, DOD contracted with OPM to complete a large number of requests for investigations as part of DOD's effort to expand its investigative capacity and decrease its investigative backlog. OUSD (I) estimated that DOD spent over $450 million for the investigations submitted to DSS and OPM in fiscal year 2003. As table 2 shows, OUSD (I) reported that the actual number of requests submitted for investigations was approximately 700,000 in fiscal year 2001, more than 850,000 in fiscal year 2002, and more than 775,000 in fiscal year 2003. In fiscal year 2003, DSS had responsibility for a larger percentage of the total DOD investigations workload than it had in the prior 2 fiscal years. DSS supplemented its federal workforce with contracts to three private-sector investigations firms. As table 2 also indicates, the number of targeted submissions versus the actual number of submissions that DOD received varied considerably from year to year. In fiscal year 2001, DOD received fewer requests than it had expected (82 percent), and in fiscal years 2002 and 2003, it received more requests than projected (119 and 113 percent, respectively). DOD personnel, investigations contractors, and industry officials told us that the large number of requests for investigations could be attributed to many factors. For example, they ascribed the large number of requests to heightened security concerns that resulted from the September 11, 2001, terrorist attacks. They also attributed the large number of investigations to an increase in the operations and deployments of military personnel and to the increasingly sensitive technology that military personnel, government employees, and contractors come in contact with as a part of their job. While having a large number of cleared personnel can give the military services, agencies, and industry a large amount of flexibility when assigning personnel, the investigative and adjudicative workloads that are required to provide the clearances further tax DOD's already overburdened personnel security clearance program. A change in the level of clearance being requested also increases the investigative and adjudicative workloads. A growing percentage of all DOD requests for clearances is at the top secret level. For example, in fiscal years 1995 and 2003, 17 percent and 27 percent, respectively, of the clearance requests for industry personnel were at the top secret level. This increase of 10 percentage points in the proportion of investigations at the top secret level is important because top secret clearances must be renewed twice as often as secret clearances (i.e., every 5 years versus every 10 years). According to OUSD (I), top secret clearances take eight times the investigative effort needed to complete a secret clearance and three times the adjudicative effort to review.
The doubling of frequency along with the increased effort to investigate and adjudicate each top secret reinvestigation adds costs and workload for DOD. Cost. In fiscal year 2003, the costs of investigations that DOD obtained through DSS were $2,640 for an initial investigation for a top secret clearance, $1,591 for a periodic reinvestigation of a top secret clearance, and $328 for the most commonly used investigation for a secret clearance. The cost of getting and maintaining a top secret clearance for 10 years is approximately 13 times greater than the cost for a secret clearance. For example, an individual getting a top secret clearance for the first time and keeping the clearance for 10 years would cost DOD a total of $4,231 in current year dollars ($2,640 for the initial investigation and $1,591 for the reinvestigation after the first 5 years). In contrast, an individual receiving a secret clearance and maintaining it for 10 years would cost a total of $328 ($328 for the initial clearance that is good for 10 years). Time/Workload. The workload is also affected by the scope of coverage in the various types of investigations. Much of the information for a secret clearance is gathered through electronic files. The investigation for a top secret clearance, on the other hand, requires the information needed for the secret clearance as well as data gathered through time-consuming tasks such as interviews with the subject of the investigation request, references in the workplace, and neighbors. Inadequate Investigative and Adjudicative Workforces Hinder Efforts to Eliminate Backlog Another impediment to eliminating the large security clearance backlog is the inadequate size of the federal and private-sector investigative workforces relative to the large workloads that they face. The Deputy Associate Director of OPM’s Center for Investigations Services estimated that roughly 8,000 full-time-equivalent investigative personnel would be needed by OPM and DOD (together) to eliminate backlogs and deliver investigations in a timely fashion to their customers. The rough estimate includes investigators and investigative technicians. However, changes in the numbers or types of clearance requests, different levels of productivity by investigators, and other factors could greatly affect this estimated workforce requirement. As of December 2003, we calculated that DOD and OPM have around 4,200 full-time-equivalent investigators available as federal employees or currently under contract. Of this number, DSS indicated that it has about 1,200 investigators and 100 investigative technicians. In addition, DSS has the equivalent of 625 full-time investigative staff, based on 2,500 mostly part-time investigators, from its three contractors. DSS equates four of the part-time investigators to one full-time investigator. Finally, although OPM has almost no investigative staff currently, its primary contractor has approximately 2,300 full-time investigators. OPM reported that its primary contractor is adding about 100 investigators per month, but turnover is about 70 employees per month. We believe that DSS’s estimate of the number of full-time-equivalent investigators working for its contractors is imprecise because (1) an investigator may work part-time for more than one contractor and (2) the amount of time devoted to conducting investigations can vary substantially. 
These part-time investigators work different amounts of time each month, according to both their own preference and the number of assignments they receive from investigation contractor(s). Sometimes they are unavailable to work for one contractor because they are conducting investigations for another contractor. Officials from DSS's investigations contractors told us that they intend to continue relying largely on staff employed on an as-needed basis. Some of the private-sector officials stated that they would incur additional financial risks if they were to use full-time investigators. Inadequate adjudicator staffing also causes delays in issuing eligibility-for-clearance decisions. Since we issued our report on DOD adjudications in 2001, the number of eligibility-for-clearance decisions has risen for reasons such as an increase in the number of completed investigations stemming from DOD's contract with OPM and the improved operation of DSS's Case Control Management System. Central adjudication facilities with adjudicative backlogs have taken various actions to eliminate their backlog. The Defense Office of Hearings and Appeals hired 46 additional adjudicators on 2-year term appointments, contracted for administrative functions associated with adjudication, and is seeking permission from the Office of the Secretary of Defense to hire some of its term adjudicators permanently. The Navy's central adjudication facility contracted with three companies to provide support and hired an additional 27 full-time-equivalent civilian and military adjudicators, which helped the Navy eliminate much of its adjudicative backlog that had grown to approximately 60,000 cases by December 2002. Because the DOD Office of Inspector General is examining whether the Navy adjudicative contracts led the contractor's staff to perform an inherently governmental function—adjudication—it is unclear whether the Army and Air Force central adjudication facilities will be able to use similar contracting to eliminate their backlogs. The 10 DOD central adjudication facilities are funded by different agencies and operate independently of one another. As a result, OUSD (I) cannot transfer backlogged cases from one facility to eliminate an adjudicative backlog at another facility. In our April 2001 report on DOD adjudications, we noted that studies issued by the Defense Personnel Security Research Center, the Joint Security Commission, and the DOD Office of the Inspector General between 1991 and 1998 had concluded that the decentralized structure of DOD's adjudication facilities had drawbacks. Two of the studies had recommended that DOD consolidate its adjudication facilities (with the exception of the National Security Agency because of the sensitive nature of its work) into a single entity. Currently, OUSD (I) is exploring the possibility of assigning all industry adjudications to the Defense Industrial Security Clearance Office instead of having it share this responsibility with the Defense Office of Hearings and Appeals. Size of Existing Backlog Impedes Prompt Opening of New Requests for Investigation The current size of the investigative backlog impedes DOD's ability to process new security clearance requests within the prescribed time limits. A new request might remain largely dormant for months in the investigations queue until other requests that were received earlier have been completed.
This point can be illustrated by examining the results of miscommunications between OASD (C3I) and DSS regarding assigning priorities to investigations between March 2002 and March 2003. During that period, DSS placed a higher priority on completing new—versus old—requests. From March through September 2002, DSS averaged 97 days to open and complete initial investigation requests for top secret clearances; 100 days, for top secret reinvestigation; 43 days, for secret; and 44 days, for confidential. For three of the four types of investigations, DSS's average completion times were faster than its time-based goals (120 days for a periodic reinvestigation for a top secret clearance, 90 days for an initial top secret clearance, and 75 days for either a secret or confidential clearance being issued initially). Starting in March 2003, DSS again assigned a higher priority to older requests. However, during those 12 months, from March 2002 to February 2003, the average age of the older cases increased, and it is impossible to say how much of the increase was due to the miscommunication regarding priorities, a change in the number of requests that DSS received, or some other factor. DSS staff told us that the delays in starting investigations could lead to additional delays in processing the case, particularly for military personnel who were being deployed or were moving. Therefore, DSS instituted a procedure to attempt to meet with individuals requesting an investigation before they deploy or go on extended training. Delays in starting investigations can result in extra investigative work to find the individuals at their new addresses or additional delays if investigators wait for the individuals to return from deployment or training. In some cases, however, DOD commands, agencies, and contractors have been able to obtain some investigations quickly by assigning higher priorities to certain individual investigations or types of investigations. No Strategic Plan for Overcoming Information-Access Problems Slows Investigative Process The absence of a strategic plan for overcoming problems in gaining access to information from state and local agencies also slows the speed of personnel security clearance investigations and, thereby, impedes reducing the size of the backlog. Investigators face delays in conducting background checks because of the lack of automated records in many localities, state and local budget shortfalls that limit how much time agency staff have to help investigators, and privacy concerns (e.g., access to conviction records from the courts instead of the preferred arrest records from law enforcement). This problem of accessibility to state and local information was identified in an October 2002 House Committee on Government Reform report. The report recommended that the Secretary of Defense and the Attorney General jointly develop a system that allows DSS and OPM investigators access to state and local criminal history information records. In addition, representatives from one investigations contractor noted that the Security Clearance Information Act gives only certain federal agencies access to state and local criminal records, and therefore private-sector investigators are put at a disadvantage relative to federal investigators. Another barrier to the timely closure of an investigation is a limited investigative capacity overseas, which causes delays in obtaining information from overseas investigative sources.
DSS, OPM, and private-sector investigations contractors do not maintain staffs overseas to investigate individuals who are currently or were formerly stationed overseas, who have traveled or lived overseas, or who have relatives living in foreign countries. Officials at DSS and the central adjudication facilities told us that they typically ask overseas-based DOD criminal investigations personnel or State Department and Central Intelligence Agency employees to supply this type of investigative information as a collateral duty. DOD has no strategic plan for overcoming access-to-information problems and the delays that result, but DOD has made efforts to address selected aspects of the access problem. For example, DOD supplied us with draft legislation proposing to provide access to a central repository for driver licensing records. DOD proposed that this information be used in personnel security investigations and determinations as well as personnel investigations with regard to federal employment security checks. Also, an OUSD (I) official noted that DOD proposed a legislative change for the fiscal year 2001 authorization bill to allow easier access to records of criminal history information. Oversight Problems and Automation Delays Hinder Accurate Monitoring of Backlog Size OUSD (I) and its predecessor OASD (C3I) have not provided the oversight needed to monitor and accurately estimate the various parts of the backlog that are present throughout DOD. Also, as we documented earlier, backlog estimates are not based on a consistent set of DOD-wide definitions and measures. Knowing the accurate size of the backlog is an important step towards effectively managing and eventually eliminating the backlog. When we asked for all investigative backlog reports produced since 2000, OUSD (I) supplied January 2000 estimates as its most recent report, and the report included only reinvestigations. This finding regarding the infrequency of reporting contradicts DOD's concurrence with our October 1999 recommendation for OASD (C3I) to improve its oversight of the investigations program and our August 2000 recommendation to design routine reports to show the full extent of overdue reinvestigations. Our April 2001 report similarly concluded that OASD (C3I) needed to provide stronger oversight and better direction to DOD's adjudication facility officials. After a review of DOD's personnel security investigations program, an October 2002 report by the House Government Reform Committee recommended, "The Secretary of Defense should continue to report the personnel security investigations program including the adjudicative process as a material weakness under the Federal Managers' Financial Integrity Act to ensure needed oversight is provided to effectively manage and monitor the personnel security process from start to finish." DOD concurred with our October 1999 recommendation to declare its investigations program as a material weakness to ensure that needed oversight is provided and that actions are taken. For fiscal years 2000 through 2003, DOD listed the personnel security program as a systemic weakness, which is a weakness that affects more than one DOD component and may jeopardize the department's operations. Delays in implementing the joint adjudication system, JPAS, have greatly inhibited OUSD (I)'s ability to monitor overdue reinvestigations and generate accurate estimates for that portion of the backlog.
Among JPAS’s intended purposes are to consolidate DOD’s security clearance data systems and provide various levels of near real-time input and retrieval of clearance-related information to OUSD (I), investigators, adjudicators, and security officers at commands, agencies, and industrial facilities. The DOD Chief Information Officer identified JPAS as a critical mission system. When we reported on the reinvestigations backlog in August 2000, the Assistant Secretary of Defense for Command, Control, Communications, and Intelligence stated that JPAS would be fully implemented in fiscal year 2001 and would be capable of providing recurring reports showing the accurate number of cleared personnel requiring a periodic reinvestigation by component and type of investigation. In early December 2003, an OUSD (I) official said current plans are to have JPAS fully operational by January 2004. The delays are caused by problems such as loading adjudicative data from each central adjudication facility’s internally developed database and historical data from the Defense Clearance and Investigation Index. Backlogs and Poor Estimates May Lead to Increased National Security Risks, Higher Costs, and Workload Uncertainties DOD’s failure to eliminate its backlog of overdue reinvestigations may heighten the risk of national security breaches. Also, backlog-related delays in issuing initial security clearances may raise the cost of doing classified work for the U.S. government. In addition, DOD’s inability to accurately determine the actual size of its clearance backlog and project the number of clearances needed results in inaccurate budget requests and staffing plans. Continued Failure to Eliminate Backlog May Lead to Increased National Security Risks and Unnecessary Costs Delays in completing reinvestigations caused by the backlog and other impediments may lead to a heightened risk of national security breaches. Such breaches involve the unauthorized disclosure of classified information, which can have effects that range from exceptionally grave damage to national security for top secret information to damage for confidential information. In 1999, the Joint Security Commission reported that delays in initiating reinvestigations create risks to national security because the longer individuals hold clearances the more likely they are to be working with critical information systems. Delays in completing initial security clearances may have an economic impact on the cost of performing classified work within or for the U.S. government. Although estimates of the total economic costs of delays in granting clearances are dated, they reflect the extent of an ongoing problem. In a 1981 report, we estimated that the DOD investigative backlog could cost nearly $1 billion per year in lost productivity. More than a decade later, the Joint Security Commission report noted that the costs directly attributable to investigative delays in fiscal year 1994 could be as high as several billion dollars because workers were unable to perform their jobs while awaiting a clearance. While newer overall cost estimates are not available, the underlying reasons—the backlog and clearance delays that prevent the employment—for the costs still exist within DOD. For instance, DSS reported that the average time required to complete an initial investigation for a top secret clearance was 454 days for fiscal year 2002 and 257 days for October 2002 through February 2003. 
Delays in completing initial clearances also affect industry, which relies on DOD to provide clearances for its employees. Representatives from one company with $1 billion per year in sales stated that their company offers a $10,000 bonus to its employees for each person recruited who already has a security clearance. Such operating costs are then passed on to government customers in the form of higher bids for contracts. In turn, the recruit's former company may need to back-fill a position, as well as possibly settle for a lower level of contract performance while a new employee is found, obtains a clearance, and learns the former employee's job. Also, industry representatives discussed instances where their companies gave hiring preference to already-cleared personnel who could do the job but were less qualified than other candidates who did not possess a clearance. The chair of the interagency Personnel Security Working Group noted that a company might hire an employee and begin paying that individual, but not assign any work to the individual until a clearance is obtained. Also, the head of the interagency group noted that commands, agencies, and industry might incur lost-opportunity costs if the individual chooses to work somewhere else rather than wait to get the clearance before beginning work. Poor Estimates Can Result in Inadequate Budget and Staffing DOD's inability to accurately project its personnel security clearance workload requirements has created budgeting and staffing difficulties for DOD units involved in the clearance process. For example, in fiscal year 2000, the services and defense agencies had to limit the number of overdue reinvestigations that they submitted for investigation because they had not budgeted the additional funds needed to cover the costs of the increased workload. Differences between the targeted and actual number of investigations for fiscal years 2001 to 2003 (see table 2) also document problems with the current procedures used to project clearance requirements. Inaccurate projections of personnel security clearance workloads may have also caused the backlog to be bigger than it might otherwise be because DSS and the central adjudication facilities did not adequately plan for increases in workloads. Status Update on the Authorized Transfer of DSS Investigative Functions and Personnel to OPM In December 2003, advisors to the Director of OPM recommended that the congressionally authorized transfer of DSS investigative functions and personnel to OPM not occur—at least for the rest of fiscal year 2004—due primarily to concerns about the financial risks associated with the transfer. The advisors recommended an alternative plan that is currently being discussed by DOD and OPM officials. The alternative plan proposes that DSS investigative functions and employees stay in DOD; use the OPM case management system, which according to a DOD official would save about $100 million in costs associated with continuing to update and maintain DOD's current case management system; and receive training to use that system from OPM. As of December 16, 2003, the Secretary of Defense had not provided Congress with the certifications required before the transfer can take place.
Background of the Authorized Transfer On February 3, 2003, a DOD news release announced that the Deputy Secretary of Defense and the Director of OPM had signed an agreement that would allow DOD to divest its personnel security investigative functions and OPM to offer positions to DSS investigative personnel. The proposal for the transfer of functions and personnel was included in DOD's The Defense Transformation for the 21st Century Act when that legislative proposal was submitted to Congress on April 10, 2003. Also, the Secretary of Defense's annual report to the President and to Congress for 2003 cited the transfer as an effort to “reengineer the personnel security program by seeking statutory authority to transfer the personnel security investigation function currently performed by the Defense Security Service to the Office of Personnel Management, thus streamlining activities and eliminate redundancy.” The savings were projected to be approximately $160 million over the fiscal year 2004 to 2009 time frame. On November 24, 2003, the National Defense Authorization Act for Fiscal Year 2004 authorized the transfer of DSS's personnel security investigative functions and its 1,855 investigative employees to OPM. Before the transfer can occur, the Secretary of Defense must certify in writing to the House and Senate Armed Services Committees that the following five conditions have been met: OPM is fully capable of carrying out high-priority investigations required by the Secretary of Defense within a time frame set by the Secretary of Defense; OPM has undertaken necessary and satisfactory steps to ensure that investigations performed on DOD contract personnel will be conducted in an expeditious manner sufficient to ensure that those contract personnel are available to DOD within a time frame set by the Secretary of Defense; DOD will retain capabilities in the form of federal employees to monitor and investigate DOD and contractor personnel as necessary to perform counterintelligence functions and polygraph activities of the department; The authority to adjudicate background investigations will remain with DOD, and the transfer of DSS personnel to OPM will improve the speed and efficiency of the adjudicative process; and DOD will retain within DSS sufficient personnel and capabilities to improve DOD industrial security programs and practices. The Director of OPM may accept the transfer, but such a transfer may be made only after a period of 30 days has elapsed from the date on which the defense committees receive the certification. Current Status of the Authorized Transfer Senior OPM officials recommended that the Director of OPM should not accept the transfer of DSS's investigative functions and personnel, at least for the rest of fiscal year 2004. The OPM officials reported that OPM is not currently prepared to accept DSS's investigative functions and staff because of concerns about financial risks associated with the authorized transfer. OPM stated that under its current system of contracting out all investigations, the contractor assumes all financial risk for completing investigations at agreed-upon prices. OPM does not believe that current productivity data for DSS staff is sufficient to indicate whether DSS staff could provide the services at the price that OPM charges its customers.
Also, OPM believes that the documentation for the financial costs of automobile leases, office space, and so forth is not currently adequate to provide OPM with the assurance that it needs to accept 1,855 personnel into an agency that currently has about 3,000 employees—more than a 60 percent growth in the number of OPM employees. In a memorandum of understanding that is being finalized at OPM and DOD, OPM is offering an alternative plan for DSS’s investigative functions and staff. While we were not provided a copy of the document, OPM officials described its contents to us orally. Among other things, the plan—if approved—would include the following: DSS’s investigative functions and staff would remain part of DOD; DSS’s investigative staff would receive training from OPM on the use of OPM’s investigative procedures and OPM’s investigations management system; and OPM would allow DOD to use OPM’s investigations management system and thereby eliminate the need for DSS’s investigations management system, which an OUSD (I) official indicated could cost about $100 million to update and maintain over the next 5 years. A senior OPM official with whom we spoke was optimistic that the alternative plan will go to DOD for review and signature before the end of December 2003. If DOD proposes changes, the plan will need to undergo re-staffing at OPM and possibly DOD. OPM’s position described above was verified with the same OPM official on December 16, 2003. After learning of the alternative plan and the draft memorandum of understanding, we discussed both with an OUSD (I) official who has been a key negotiator with OPM. The official verified that OPM had voiced the concerns regarding risk and was preparing an alternative plan. That DOD official is optimistic that DOD will be able to provide the assurances that are needed for the authorized transfer to occur before the end of fiscal year 2004. DOD’s position was verified with the same OUSD (I) official on December 16, 2003. Conclusions DOD continues to have a personnel security clearance backlog that probably exceeds roughly 360,000 cases by an unknown amount. This situation may increase risks to national security and monetary costs associated with delays in granting clearances. DOD faces many impediments as it attempts to eliminate its backlog, and these impediments are obstacles to the prompt completion of clearance requests at all stages of the personnel security process. The large number of clearance requests being submitted may be the impediment that is least amenable to change. As we mentioned earlier, worldwide deployments, contact with sensitive equipment, and other security requirements underpin the need for personnel to be cleared for access to classified information. Other impediments to eliminating the backlog are formidable, but more tractable. Shortages of investigative and adjudicative staff prevent DOD from quickly completing cases in the existing backlog as well as the hundreds of thousands of new clearance requests that have been submitted during each of the last 3 years. Based on rough estimates provided by an OPM official, the shortage of over 3,500 full-time-equivalent investigative staff illustrates one area in the clearance process where the supply of personnel is inadequate to meet the demand for services.
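To make the composition of the roughly 360,000-case figure cited above explicit, the arithmetic below is a minimal sketch that combines the investigative and adjudicative estimates reported elsewhere in this report with the unknown number of overdue reinvestigations that have not yet been submitted; that unknown term is the reason the total can only be described as exceeding roughly 360,000 cases.

```latex
\[
\text{total backlog} \;\approx\;
\underbrace{270{,}000}_{\text{investigative}}
\;+\;
\underbrace{90{,}000}_{\text{adjudicative}}
\;+\;
\underbrace{u}_{\substack{\text{overdue reinvestigations}\\ \text{not yet submitted (unknown)}}}
\;=\; 360{,}000 + u .
\]
```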
DOD has not developed a strategic plan for overcoming problems in accessing information locally, at the state level, and overseas during investigations, and this lack of a strategy hinders DOD efforts to quickly complete cases and efficiently eliminate the clearance backlog. Basic to designing an efficient means for overcoming the impediments is obtaining and using accurate information regarding the backlog. A clear picture of the backlog’s size will continue to be elusive if components continue to use varying backlog definitions and measures. The presence of a backlog of an imprecise size and impediments throughout the clearance process suggest systemic weaknesses in DOD’s personnel security clearance program. Key to generating regular reports on the backlog is the implementation of the overdue JPAS, with its ability to track when reinvestigations are due. Recommendations for Executive Action Because of continuing concerns about the size of the backlog and its accurate measurement and the personnel security clearance program’s importance to national security, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Intelligence to take the following four actions: Identify and implement steps to match the sizes of the investigative and adjudicative workforces to the clearance request workload; Develop a strategic plan for overcoming problems accessing data locally, at the state level, and overseas; Develop DOD-wide backlog definitions and measures, and monitor the backlog at each of the three clearance-process stages using the DOD-wide measures; and Complete the implementation of the Joint Personnel Adjudication System. Agency Comments and Our Evaluation In written comments on a draft of this report, OUSD (I) concurred with three of our four recommendations and partially agreed with our recommendation to match workforces with workload. OUSD (I) noted that (1) DOD is developing tools to predict and validate investigative requirements; (2) staffing, budgeting, and management of the investigative and adjudicative resources are the purview of the affected DOD component and investigative providers; and (3) growing a capable workforce takes time. We agree with these points, but they do not change the fact that DOD has historically had a backlog and that these issues must be addressed in a timely and effective manner to eliminate the backlog. As our report points out, implementation delays—such as the delay with JPAS—hamper efforts to accurately estimate the backlog and eliminate it. While it is true that the resources provided by DOD components play an important role in eliminating the backlog, OUSD (I) also has a critical leadership role because of its responsibility for the coordination and implementation of DOD policy for accessing classified information. Finally, the historical and continuing gap between workload demand and capacity suggests that OUSD (I) needs to take supplemental steps to grow capable investigative and adjudicative workforces, as we have recommended. DOD’s comments are reprinted in appendix III. DOD also provided technical comments that we incorporated in the final draft as appropriate. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days from its issue date. At that time, we will send copies of this report to the Secretary of Defense, the Office of Management and Budget, and the Office of Personnel Management.
We will also make copies available to appropriate congressional committees and to other interested parties on request. In addition, the report will be available at no charge at the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-5559 or [email protected]. Key staff members contributing to this report were Jack E. Edwards, Robert R. Poetta, Frank Bowen, and Nancy L. Benco. Appendix I: Recent Recommendations Related to DOD’s Personnel Security Clearance Process This appendix lists the personnel clearance process recommendations found in recent reports from GAO, the DOD Office of Inspector General, and the House Committee on Government Reform. These verbatim recommendations are arranged according to the issuance dates of the reports. At the end of each set of recommendations, we provide comments on whether DOD concurred with the recommendations and the rationale for nonconcurrences. Recommendations from U.S. General Accounting Office, DOD Personnel: Inadequate Personnel Security Investigations Pose National Security Risks, GAO/NSIAD-00-12, Washington, D.C., October 27, 1999. Because of the significant weaknesses in the DOD personnel security investigation program and the program’s importance to national security, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense (Command, Control, Communications, and Intelligence) to report the personnel security investigation program as a material weakness under the Federal Managers’ Financial Integrity Act to ensure that the needed oversight is provided and that actions are taken to correct the systemic problems in the Defense Security Service personnel security investigation program; improve its oversight of the Defense Security Service personnel security investigation program, including approving a Defense Security Service strategic plan; and identify and prioritize overdue reinvestigations, in coordination with other DOD components, and fund and implement initiatives to conduct these reinvestigations in a timely manner. 
In addition, we recommend that the Secretary of Defense instruct the Defense Security Service Director, with oversight by the Assistant Secretary of Defense (Command, Control, Communications, and Intelligence) to develop a corrective action plan as required under the Federal Managers’ Financial Integrity Act that incorporates corrective actions and milestones for addressing material weaknesses in the Defense Security Service personnel security investigative program and performance measures for monitoring the progress of corrective actions; establish a strategic plan that includes agency goals, performance measures, and procedures for tracking progress in meeting goals in accordance with sound management practices and the Government Performance and Results Act; conduct analyses needed to (1) determine an appropriate workload that investigators and case analysts can manage while meeting federal standards and (2) develop an overall strategy and resource plan to improve the quality and timeliness of investigations and reduce the number of overdue reinvestigations; review and clarify all investigative policy guidance to ensure that investigations comply with federal standards; establish a process for identifying and forwarding to the Security Policy Board suggested changes to policy guidance concerning the implementation of the federal standards and other investigative policy issues; establish formal quality control mechanisms to ensure that Defense Security Service or contracted investigators perform high-quality investigations, including periodic reviews of samples of completed investigations and feedback on problems to senior managers, investigators, and trainers; establish a training infrastructure for basic and continuing investigator and case analyst training that includes formal feedback mechanisms to assess training needs and measure effectiveness, and as a high priority, provide training on complying with federal investigative standards for investigators and case analysts; and take steps to correct the case management automation problems to gain short-term capability and develop long-term, cost-effective automation alternatives. Further, we recommend that the Secretary direct all DOD adjudication facility officials to (1) grant clearances only when all essential investigative work has been done and (2) regularly communicate with the Defense Security Service about continuing investigative weaknesses and needed corrective actions. DOD concurred with all of the recommendations and described many actions already planned or underway to implement the recommendations. Recommendations from U.S. General Accounting Office, More Actions Needed to Address Backlog of Security Clearance Reinvestigations, GAO/NSIAD-00-215, Washington, D.C., August 24, 2000. To improve the management of DOD’s personnel security reinvestigation program, we recommend that the Secretary of Defense direct the Assistant Secretary of Defense (Command, Control, Communications, and Intelligence) to design routine reports with key data from the Joint Personnel Adjudication System database to show the full extent of overdue reinvestigations, including those overdue but not yet submitted for update and those in process and develop appropriate incentives to encourage agency security managers to keep information in the database current and to submit reinvestigation requests on time. Changes in existing regulations, policies, and procedures may be necessary to provide such incentives. DOD concurred with all of the recommendations. 
In its comments, DOD stated that those personnel who have not had a request for their periodic reinvestigation submitted to the Office of Personnel Management or the Defense Security Service by September 30, 2002, would have their security clearances downgraded or canceled. Recommendations from Department of Defense, Office of Inspector General Audit Report, Security Clearance Investigative Priorities, Report No. D-2000-111, Arlington, Va., April 5, 2000. We recommend that the Assistant Secretary of Defense (Command, Control, Communications, and Intelligence) establish an Integrated Process Team to Develop criteria for determining the highest priority mission-critical and high-risk positions based on their impact on mission-critical programs. The criteria must also include a review of the special projects at the Defense Security Service. Develop a process for relating specific clearance requests to mission-critical and high-risk positions. This process must identify specific individuals as they are submitted for initial investigations and periodic reinvestigations. The process should continually adjust the highest priority mission-critical and high-risk positions to actions that may impact them. We recommend that the Director, Defense Security Service, establish the process and metrics to ensure expeditious processing of personnel security clearance investigations in accordance with established priorities. The Assistant Secretary of Defense (Command, Control, Communications, and Intelligence) non-concurred with the first recommendation, stating that the recommendations are beyond the scope and ability of his Office to implement, especially in the near future. However, the Defense Security Service concurred with the intent of the recommendation. DOD and DSS concurred with the second recommendation. Recommendations from U.S. General Accounting Office, DOD Personnel: More Consistency Needed in Determining Eligibility for Top Secret Security Clearances, GAO-01-465, Washington, D.C., April 18, 2001. To provide better direction to DOD’s adjudication facility officials, improve DOD’s oversight, and enhance the effectiveness of the adjudicative process, GAO recommends that the Secretary of Defense direct the Assistant Secretary of Defense (Command, Control, Communications, and Intelligence) to establish detailed documentation requirements to support adjudication decisions, including all significant adverse security conditions and the mitigating factors relevant to each condition; require that all DOD adjudicators use common explanatory guidance, such as that contained in the Adjudicative Desk Reference; establish common adjudicator training requirements and work with the Defense Security Service Academy to develop appropriate continuing education opportunities for all DOD adjudicators; and establish a common quality assurance program to be implemented by officials in all DOD adjudication facilities and monitor compliance through annual reporting. DOD concurred with all of the recommendations and described the actions it planned to take to improve its guidance, training, and quality assurance program. Recommendations from Department of Defense, Office of Inspector General Audit Report, Tracking Security Clearance Requests, Report No. D-2000-134, Arlington, Va., May 30, 2000. We recommend that the Director, Defense Security Service, track all security clearance requests from the time they are received until the investigative cases are opened.
Security clearance requests that are not opened to investigative cases, and those investigative cases that are opened without electronic requests should be included in the tracking process. Post, weekly, the names and social security numbers of all cases in process on the Extranet for Security Professionals. This entry for each name should include, at a minimum, the date that the request was loaded into the Case Control Management System, the date that the investigative case was opened, and the date that the case was closed. DOD and DSS concurred on these recommendations. Recommendation from Department of Defense, Office of Inspector General Audit Report, Program Management of the Defense Security Service Case Control Management System, Report No. D-2001-019, Arlington, Va., December 15, 2000. We recommend that the Assistant Secretary of Defense (Command, Control, Communications, and Intelligence) and the Director, Defense Security Service, prior to making further decisions on the future system architecture, analyze whether the investment for the Case Control Management System and the Enterprise System provides the best business solution when compared to alternative solutions for opening, tracking and closing personnel investigation cases. DOD and DSS concurred with this recommendation. Recommendations from U.S. House of Representatives, Committee on Government Reform, Defense Security Service: The Personnel Security Investigations Backlog Poses a Threat to National Security, Report 107-767, Washington, D.C., October 24, 2002. The Secretary of Defense should continue to report the personnel security investigations program including the adjudicative process as a material weakness under the Federal Managers’ Financial Integrity Act to ensure needed oversight is provided to effectively manage and monitor the personnel security process from start to finish. The Secretary of Defense should set priorities and control the flow of personnel security investigation requests for all DOD components. The Secretary of Defense should closely monitor the interface between JPAS and CCMS to ensure effective management of investigative and adjudicative cases and avoid further backlogs. The National Security Council should promulgate Federal standards for investigating and adjudicating personnel security clearances in a timely manner. The Secretary of Defense and the Attorney General jointly should develop a system, which allows DSS and OPM investigators access to state and local criminal history information records. DOD indicated that it does not plan to respond to these recommendations. Appendix II: Scope and Methodology To estimate the size and accuracy of the Department of Defense-wide (DOD) personnel security clearance backlog, we obtained separate estimates of the investigative and adjudicative backlogs from the Defense Security Service (DSS), the Office of Personnel Management (OPM), and DOD’s central adjudication facilities. Also, we obtained some DOD-wide information from the Office of the Under Secretary of Defense for Intelligence (OUSD (I)). As part of the estimation process, we observed the steps used to capture and process investigative information at DSS and OPM. 
We obtained additional information regarding issues such as the number of days required to complete an investigation or adjudication, time limits (i.e., criteria) for completing investigations and adjudications, and data reliability from DSS, OPM, and the central adjudication facilities during site visits, through questionnaires, and by interviews. We conducted this work at OUSD (I), Washington, D.C.; DSS, Fort Meade, Maryland; OPM, Washington, D.C., and Boyers, Pennsylvania; Army, Navy, Air Force, National Security Agency, Defense Intelligence Agency, Joint Staff, and Washington Headquarters Services central adjudication facilities located in the Washington, D.C., metropolitan area; the Defense Industrial Security Clearance Office, Columbus, Ohio; and the Defense Office of Hearings and Appeals, Arlington, Virginia, and Columbus, Ohio. We did not request data from the National Reconnaissance Office central adjudication facility because of the sensitive nature of its operations. Reviews of GAO, House Government Reform Committee, and Joint Security Commission reports provided a historical perspective for the report. Additional context for understanding DOD’s personnel security program was obtained through a review of DOD regulations (e.g., DOD 5200.2-R), federal investigative standards, and federal adjudicative guidelines. To identify the factors that impede DOD’s ability to eliminate its backlog and accurately estimate the backlog size, we reviewed prior GAO, DOD Office of Inspector General, House Government Reform Committee, Defense Personnel Security Research Center, and Joint Security Commission reports. DSS and OPM provided procedural manuals, discussed impediments while demonstrating their automated case management systems, and provided other information, such as workload data, in responses to written questions and in interviews. Interviews regarding impediments were also held with officials from OUSD (I); nine central adjudication facilities; the Defense Personnel Security Research Center; the Chair of the Personnel Security Working Group of the National Security Council, Washington, D.C.; investigations contractors at their headquarters: US Investigations Services, Inc.; ManTech; and DynCorp; and associations representing industry: Aerospace Industries Association, Information Technology Association of America, National Defense Industrial Association, and Northern Virginia Technology Council. Our General Counsel’s office supplied additional context for evaluating potential impediments through its review of items such as the Security Clearance Information Act and Executive Order 12968, Access to Classified Information. To identify the potential adverse effects of the impediments to eliminating the backlog and accurately estimating its size, we reviewed prior GAO and Joint Security Commission reports. We supplemented this information with recent data from DSS and OPM regarding the number of days that it took to complete various types of investigations. Also, an interview with the Chair of the Personnel Security Working Group of the National Security Council provided a governmentwide perspective on the effects of delays and backlogs. Industry representatives cited above provided other perspectives on the economic costs of delays in obtaining eligibility-for-clearance determinations.
For our update on the status of the authorized transfer of DSS’s investigative functions and staff to OPM, we reviewed the National Defense Authorization Act for Fiscal Year 2004 and GAO reports on DSS and OPM operations. In addition, we reviewed planning documents such as those describing the various transfer-related action teams that OPM and DOD created; these teams included one that sought to reconcile differences in the procedures used to conduct personnel security investigations. We also conducted interviews in December with officials representing DOD and OPM to obtain up-to-date perspectives from both agencies regarding the authorized transfer. We conducted our review from February 2003 through December 2003 in accordance with generally accepted government auditing standards. Appendix III: Comments from the Department of Defense Appendix IV: Related GAO Products DOD Personnel: More Consistency Needed in Determining Eligibility for Top Secret Clearances. GAO-01-465. Washington, D.C.: April 18, 2001. DOD Personnel: More Accurate Estimate of Overdue Security Clearance Reinvestigation Is Needed. GAO/T-NSIAD-00-246. Washington, D.C.: September 20, 2000. DOD Personnel: More Actions Needed to Address Backlog of Security Clearance Reinvestigations. GAO/NSIAD-00-215. Washington, D.C.: August 24, 2000. DOD Personnel: Weaknesses in Security Investigation Program Are Being Addressed. GAO/T-NSIAD-00-148. Washington, D.C.: April 6, 2000. DOD Personnel: Inadequate Personnel Security Investigations Pose National Security Risks. GAO/T-NSIAD-00-65. Washington, D.C.: February 16, 2000. DOD Personnel: Inadequate Personnel Security Investigations Pose National Security Risks. GAO/NSIAD-00-12. Washington, D.C.: October 27, 1999. Military Recruiting: New Initiatives Could Improve Criminal History Screening. GAO/NSIAD-99-53. Washington, D.C.: February 23, 1999. Background Investigations: Program Deficiencies May Lead DEA to Relinquish Its Authority to OPM. GAO/GGD-99-173. Washington, D.C.: September 7, 1999. Executive Office of the President: Procedures for Acquiring Access to and Safeguarding Intelligence Information. GAO/NSIAD-98-245. Washington, D.C.: September 1998. Privatization of OPM’s Investigations Service. GAO/GGD-96-97R. Washington, D.C.: August 22, 1996. Cost Analysis: Privatizing OPM Investigations. GAO/GGD-96-121R. Washington, D.C.: July 5, 1996. Personnel Security: Pass and Security Clearance Data for the Executive Office of the President. GAO/NSIAD-96-20. Washington, D.C.: October 19, 1995. Privatizing OPM Investigations: Perspectives on OPM’s Role in Background Investigations. GAO/T-GGD-95-185. Washington, D.C.: June 14, 1995. Background Investigations: Impediments to Consolidating Investigations and Adjudicative Functions. GAO/NSIAD-95-101. Washington, D.C.: March 24, 1995. Security Clearances: Consideration of Sexual Orientation in the Clearance Process. GAO/NSIAD-95-21. Washington, D.C.: March 24, 1995. Personnel Security Investigations. GAO/NSIAD-94-135R. Washington, D.C.: March 4, 1994. Nuclear Security: DOE’s Progress on Reducing Its Security Clearance Work Load. GAO/RCED-93-183. Washington, D.C.: August 12, 1993. DOD Special Access Programs: Administrative Due Process Not Provided When Access Is Denied or Revoked. GAO/NSIAD-93-162. Washington, D.C.: May 5, 1993. Personnel Security: Efforts by DOD and DOE to Eliminate Duplicative Background Investigations. GAO/RCED-93-23. Washington, D.C.: May 10, 1993.
Administrative Due Process: Denials and Revocations of Security Clearances and Access to Special Programs. GAO/T-NSIAD-93-14. Washington, D.C.: May 5, 1993. Security Clearances: Due Process for Denials and Revocations by Defense, Energy, and State. GAO/NSIAD-92-99. Washington, D.C.: May 6, 1992. Due Process: Procedures for Unfavorable Suitability and Security Clearance Actions. GAO/NSIAD-90-97FS. Washington, D.C.: April 23, 1990.
Terrorist attacks and espionage cases have heightened national security concerns and highlighted the need for a timely, high-quality personnel security clearance process. However, GAO's past work found that the Department of Defense (DOD) had a clearance backlog and other problems with its process. GAO was asked to address: (1) What is the size of DOD's security clearance backlog, and how accurately is DOD able to estimate its size? (2) What factors impede DOD's ability to eliminate the backlog and accurately determine its size? (3) What are the potential adverse effects of those impediments to eliminating DOD's backlog and accurately estimating the backlog's size? GAO was also asked to determine the status of the congressionally authorized transfer of Defense Security Service (DSS) investigative functions and personnel to the Office of Personnel Management (OPM). DOD did not know the size of its security clearance backlog at the end of September 2003 and has not estimated the size of the backlog since January 2000. DOD cannot estimate the size of its backlog of overdue reinvestigations that have not been submitted for renewal, but prior estimates of this portion of the backlog suggest it was sizeable. Using September 2003 data from DSS, OPM, and nine adjudication facilities, GAO calculated the size of investigative and adjudicative portions of the backlog at roughly 270,000 and 90,000 cases, respectively. Because these estimates were made using time-based goals that varied from agency to agency, the actual backlog size is uncertain. Several impediments hinder DOD's ability to eliminate--and accurately estimate the size of--its clearance backlog. Four major impediments slowing the elimination of the backlog are (1) the large numbers of new clearance requests; (2) the insufficient investigator and adjudicator workforces; (3) the size of the existing backlog; and (4) the lack of a strategic plan for overcoming problems in gaining access to state, local, and overseas information needed to complete investigations. Two other factors have hampered DOD's ability to develop accurate estimates of the backlog size. DOD has failed to provide adequate oversight of its clearance program, including developing DOD-wide backlog definitions and measures and using the measures to assess the backlog regularly. In addition, delays in implementing its Joint Personnel Adjudication System have limited DOD's ability to monitor backlog size and track when periodic reinvestigations are due. DOD's failure to eliminate and accurately assess the size of the backlog may have adverse effects. Delays in updating overdue clearances for command, agency, and industry personnel who are doing classified work may increase risks to national security. Slowness in issuing new clearances can increase the costs of doing classified government work. Finally, DOD's inability to accurately define and measure the backlog and project future clearance requests that it expects to receive can adversely affect its ability to develop accurate budgetary and staffing plans. In December 2003, advisors to OPM's Director recommended that the authorized transfer of DOD's investigative functions and personnel to OPM should not occur for at least the rest of fiscal year 2004. That recommendation was based on uncertainties over financial risks that OPM might incur. An alternative plan being discussed by DOD and OPM calls for leaving investigative staff in DSS and giving them training for, and access to, OPM's case management system. 
A DOD official estimated that using the OPM system, instead of DOD's current system, would avoid about $100 million in update and maintenance costs during the next 5 years. Also, as of December 16, 2003, the Secretary of Defense had not provided Congress with certifications required prior to any transfer.
Background DOD Counternarcotics Strategy and Activities According to DOD’s Counternarcotics Strategy developed in fiscal year 2009, the department seeks to disrupt the market for illegal drugs by helping local, state, federal, and foreign government agencies address the drug trade and narcotics-related terrorism. DOD achieves this mission through three goals—detecting and monitoring drug trafficking, sharing information on illegal drugs with U.S. and foreign government agencies, and building the counternarcotics capacity of U.S. and foreign partners. DASD-CN&GT, with oversight from the Under Secretary of Defense for Policy, exercises management and oversight of DOD’s counternarcotics activities and performance measurement system. DASD-CN&GT’s responsibilities include ensuring DOD develops and implements a counternarcotics program with clear priorities and measured results. Programs, Resources, and Assessments, a division within DASD-CN&GT, is the lead office for the development of counternarcotics resources and plans. Among other activities, this office directs and manages the planning, programming, and budgeting system of the DOD counternarcotics program and is responsible for updating and disseminating guidance on DOD’s counternarcotics performance measurement system. DOD’s counternarcotics activities are implemented through DOD’s combatant commands, military departments, and defense agencies. According to DOD, these organizations provide assets, such as aircraft and patrol ships, military personnel, and other assistance, to support U.S. law enforcement agencies and foreign security forces in countering narcotics trafficking. In support of its counternarcotics activities, DOD reported resources totaling approximately $7.7 billion from fiscal year 2005 to fiscal year 2010, including more than $6.1 billion appropriated to its Counternarcotics Central Transfer Account and more than $1.5 billion in supplemental appropriations (see table 1). Of these resources, DOD estimated that approximately $4.2 billion were in support of its international counternarcotics activities from fiscal years 2005-2010. Previous GAO Reporting and Legislation Related to DOD’s Counternarcotics Performance Measures DOD efforts to develop performance measures for its counternarcotics activities are long-standing. We reported in December 1999 that DOD had not developed a set of performance measures to assess the impact of its counternarcotics operations, but had undertaken initial steps to develop such measures. In January 2002 and November 2005, we found that DOD was in the process of developing performance measures focused on its role of detecting and monitoring the trafficking of illegal drugs into the United States. In November 2005 we recommended that DOD, in conjunction with other agencies performing counternarcotics activities, develop and coordinate counternarcotics performance measures. In December 2006 Congress directed ONDCP—the organization that establishes U.S. counternarcotics goals and coordinates the federal budget to combat drugs—to produce an annual report describing the national drug control performance measurement system that identifies the activities of national drug control program agencies, including DOD. In May 2007 ONDCP issued guidance requiring DOD and other national drug control program agencies to annually submit to the Director of ONDCP a performance summary report including performance measures, targets, and results.
In addition, ONDCP officials stated that they have recommended improvements to DOD’s performance measures, both in correspondence and in meetings with DOD staff. DOD Has Not Developed a System to Effectively Track the Progress of Its Counternarcotics Activities, but Continues to Work to Improve Its Efforts DOD does not have an effective system for tracking the progress of its counternarcotics activities; however, it continues efforts to improve the system. We have found that measuring performance provides managers a basis for making fact-based decisions. DOD has established performance measures for its counternarcotics activities and a database to collect performance information. However, these measures lack a number of attributes that we consider key to successful performance measures and, therefore, do not provide a clear indication of DOD’s progress toward its counternarcotics goals. Recognizing the need to update and improve its measures, in May 2010, DOD issued new guidance for its counternarcotics performance measurement system. However, DOD officials noted the department will face challenges implementing the guidance. DOD Has Developed Performance Measures and a Database for Its Counternarcotics Activities We have previously reported that effective performance measurement systems include steps to measure performance, such as establishing performance measures and collecting data. In response to ONDCP’s 2007 guidance, DOD developed performance measures for its fiscal year 2007 counternarcotics activities and established a centralized database within its performance measurement system to collect data on those performance measures. The counternarcotics performance measurement system database, maintained by DASD-CN&GT, requires DOD components to submit performance information at specified intervals during the fiscal year, such as results for performance measures, the mechanisms used to collect results data, and future performance targets. For fiscal year 2009, DOD guidance required that all projects funded by its Counternarcotics Central Transfer Account have a performance measure. As a result, DOD reported it had 285 performance measures for its fiscal year 2009 counternarcotics activities. Of those, 239 were performance measures related to DOD’s mission of supporting U.S. agencies and foreign partners in countering narcotics trafficking. (See table 2 for examples of DOD’s counternarcotics performance measures.) DOD’s Fiscal Year 2009 Counternarcotics Performance Measures Exhibit Some, but Not All, Key Attributes of Successful Performance Measures DOD’s current set of counternarcotics performance measures varies in the degree to which it exhibits key attributes of successful performance measures. Prior GAO work has identified nine attributes of successful performance measures. Table 3 shows the nine attributes, their definitions, and the potentially adverse consequences of not having the attributes. Our analysis found that DOD’s counternarcotics performance measures lack several of the key attributes of successful performance measures. Based on our analysis of a generalizable sample of DOD’s fiscal year 2009 performance measures, we found the attributes of core program activities and linkage were generally present, but other attributes such as balance and limited overlap were missing, and attributes including governmentwide priorities, reliability, objectivity, clarity, and measurable targets were present in varying degrees.
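As an illustrative aid only (DOD's actual database schema and GAO's analysis tools are not described in this report), the following minimal Python sketch shows the kind of entry the database is described as collecting and the kind of measure-type tally that underlies the balance assessment discussed below. Field names, example measures, and type assignments are hypothetical.

```python
from collections import Counter
from dataclasses import dataclass

# Hypothetical layout for one database entry; field names are illustrative,
# not DOD's actual schema.
@dataclass
class MeasureEntry:
    goal: str          # counternarcotics goal the measure links to
    objective: str     # objective under that goal
    measure: str       # exactly one performance measure per entry
    methodology: str   # mechanism used to collect results data
    target: str        # future performance target
    result: str        # reported result
    measure_type: str  # "input", "process", "output", "outcome", or "impact"

# Example entries; measure wording is loosely based on measures quoted in
# this report, and the measure_type assignments are illustrative only.
entries = [
    MeasureEntry("Build partner capacity", "Provide counterdrug training",
                 "number of attendees at basic counterdrug intelligence course",
                 "course attendance rosters", "200", "215", "output"),
    MeasureEntry("Detect and monitor", "Provide radar coverage",
                 "number of sensors providing reliable radar data to JIATF-S",
                 "sensor status reports", "12", "11", "output"),
    MeasureEntry("Share information", "Support partner interdictions",
                 "annual percentage increase in interdictions supported",
                 "partner-nation interdiction reports", "5", "7", "outcome"),
]

# Tally the distribution of measure types, which is the comparison used to
# judge whether a set of measures is balanced.
distribution = Counter(entry.measure_type for entry in entries)
total = sum(distribution.values())
for measure_type in ("input", "process", "output", "outcome", "impact"):
    share = 100 * distribution.get(measure_type, 0) / total
    print(f"{measure_type:<8} {share:5.1f}% of measures")
```

By DOD's criteria, a distribution concentrated in the input, process, and output categories and empty in the impact category would indicate an unbalanced set.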
We found that the attribute of core program activities was identified in the set of measures, while balance and limited overlap did not appear to be present. Core program activities. We estimate that all of DOD’s counternarcotics performance measures cover the department’s core program activities. We have previously reported that core program activities are the activities that an entity is expected to perform to support the intent of the program, and that performance measures should be scoped to evaluate those activities. For the measures we reviewed, DOD divides its core counternarcotics activities across its 3 goals and 13 objectives (see table 2). In our analysis, we found at least one performance measure covering each of DOD’s counternarcotics objectives. Therefore, we determined that DOD’s core program activities were covered. Balance. DOD’s set of performance measures lacks balance. We have previously reported that balance exists when a set of measures ensures that an organization’s various priorities are covered. According to DOD, performance measures best cover its priorities when five measurable aspects of performance, as defined by DOD—input, process, output, outcome, and impact—are present in its performance measures. As an example, “number of attendees to basic counterdrug intelligence course” is, in our determination, a measure of output, as it measures the services provided by DOD. We estimate that 93 percent of DOD’s fiscal year 2009 performance measures are input, process, or output measures, while 6 percent are outcome measures and 0 percent are impact measures. Therefore, given that DOD’s set of measures is highly skewed towards input, process, and output measures and contains no impact measures, we determined that the set is not balanced by DOD’s criteria. Performance measurement efforts that lack balance overemphasize certain aspects of performance at the expense of others, and may keep DOD from understanding the effectiveness of its overall mission and goals. Limited overlap. We determined that there is overlap among DOD’s performance measures. We found instances where the measures and their results appeared to overlap with other measures and results. When we spoke with DASD-CN&GT officials concerning this, they stated that the set of measures could be conveyed using fewer, more accurate measures. We have reported that each performance measure in a set should provide additional information beyond that provided by other measures. When an agency has overlapping measures, it can create unnecessary or duplicate information, which does not benefit program management. Of the remaining six attributes of successful performance measures, only one attribute—linkage—was present in almost all of the measures, while the other five attributes—governmentwide priorities, reliability, objectivity, clarity, and measurable targets—appeared in varying degrees (see figure 1). DOD’s counternarcotics performance measures demonstrate linkage. We estimate that 99 percent of DOD’s measures are linked to agencywide goals and mission. DOD’s counternarcotics performance measurement system database requires that for each performance measure entered into the database, a goal and related objective of DOD’s counternarcotics mission be identified. Our analysis found that in all but one instance, linkage between DOD’s goals and performance measures is easily identified. However, DOD’s counternarcotics performance measures did not fully satisfy five attributes. Governmentwide priorities.
We estimate that 41 percent of the measures we analyzed cover a broader governmentwide priority, such as quality, timeliness, efficiency, cost of service, or outcome. We determined, for example, that the governmentwide priority of “quality” was reflected in the measure “number of sensors integrated and providing reliable and dependable radar data to JIATF-S and/or host nations,” because it measures the reliability and dependability of detection services. In the majority of instances, however, measures did not address a governmentwide priority. For example, the measure “number of trained military working dog teams trained” was determined not to cover a governmentwide priority because it does not measure the quality or efficiency of training provided. When measures fail to cover governmentwide priorities, managers may not be able to balance priorities to ensure the overall success of the program. Reliability. We estimate that 46 percent of DOD’s performance measures have data collection methods indicated in the database that generally appear reliable. Reliability refers to whether a measure is designed to collect data or calculate results such that the measure would be likely to produce the same results if applied repeatedly to the same situation. For each entry in the database, users are directed to enter, among other information, one performance measure and its associated methodology, target, and result. However, in numerous instances the system contained multiple performance measures entered into fields that should contain only one measure. Such entries could result in errors in collecting, maintaining, processing, or reporting the data. Additionally, some measures did not provide enough information on data collection methods or performance targets to assure reliability. For example, a measure in the database states “continuous U.S. Navy ship presence in the SOUTHCOM area of responsibility.” The performance target listed for this measure is “3.5,” but to what 3.5 refers—such as days, number of ships, or percentage points—is not explained. Moreover, the methodology in the database for this measure is entered as “not applicable.” Therefore, the measure’s methodology does not provide insight into how DOD could measure whether or not it reached its target of 3.5. As a result, we determined that this measure did not have data collection methods to gather reliable results. We have previously reported that if errors occur in the collection of data or the calculation of their results, it may affect conclusions about the extent to which performance goals have been achieved. Objectivity. We estimate that 59 percent of DOD’s performance measures for its counternarcotics activities are objective. We have previously reported that to be objective, measures should indicate specifically what is to be observed, in which population or conditions, and in what time frame, and be free of opinion and judgment. We estimate that 41 percent of DOD’s measures are not objective and could therefore face issues of bias or manipulation. For example, a measure in the database is “percent of inland waterways controlled by Colombian Marine Corps forces.” For this measure, no criteria for “controlled” are provided, and it is not clear how the Colombian government reports the percentage of waterways under its control and over what time frame this control will occur. Clarity. We estimate that 65 percent of DOD’s performance measures exhibit the attribute of clarity.
A measure achieves clarity when it is clearly stated and the name and definition are consistent with the methodology used for calculating the measure. However, we estimate that 35 percent of DOD’s measures are not clearly stated. For example, one of DOD’s measures linked to the objective of sharing information with U.S. and partner nations is “identify and establish methodology for implementation.” For this measure, no associated methodology is identified, and it is unclear what is being implemented. We have previously reported that a measure that is not clearly stated can confuse users and cause managers or other stakeholders to think that performance was better or worse than it actually was. Measurable target. We estimate that 66 percent of DOD’s measures have measurable targets. Where appropriate, performance goals and measures should have quantifiable, numerical targets or other measurable values. Some of DOD’s measures, however, lacked such targets. For example, one performance measure identified its target as “targets developed by the local commander.” As it is not quantifiable, this target does not allow officials to easily assess whether goals were achieved because comparisons cannot be made between projected performance and actual results. DOD Is Working To Improve Its Counternarcotics Performance Measures, but Implementation Challenges Exist DOD officials have acknowledged that weaknesses exist in the department’s current set of counternarcotics performance measures. In May 2010 DOD issued revised guidance for its counternarcotics performance measurement system to guide users in establishing performance measures that more accurately capture the quantitative and qualitative achievements of DOD’s activities. To do this, the guidance states that performance measures should be, among other attributes, useful for management and clearly stated. The guidance describes different types of performance measures that can be used to monitor DOD’s contribution to its strategic counternarcotics goals, such as those that measure DOD’s efficiency, capability, and effectiveness at performing its activities. Additionally, according to the guidance, DOD components should provide evidence of the quality and reliability of the data used to measure performance. However, DOD officials noted four specific challenges that the department faces in developing performance measures consistent with its revised guidance. Creating performance measures that assess program outcomes. Some DOD officials noted that, because DOD acts as a support agency to partner nations and other law enforcement entities—and the actual interdiction of drugs is conducted by other entities—measuring the outcome of DOD’s performance is difficult. While developing outcome measures can be challenging, we have found that an agency’s performance measures should reflect a range of priorities, including outcomes. Moreover, we have found that methods to measure program outcomes do exist. For example, agencies have applied a range of strategies to develop outcome measures for their program, such as developing measures of satisfaction based upon surveys of customers. In addition, officials from EUCOM, AFRICOM, and JIATF-S stated that while developing outcome performance measures can be difficult, developing such measures for support activities is possible and is done at other federal agencies. 
For example, EUCOM indicated it could track the outcome of the support it provides to partner nations by tracking the annual percentage increase in interdictions and arrests related to illicit trafficking. Additionally, JIATF-W indicated that it conducts quarterly command assessments of current programs, which focus on aligning resources provided by JIATF-W to the outcomes of its law enforcement partners. Implementing revisions in a timely manner. DOD officials noted that implementing revisions to the department’s performance measures in a timely fashion will be difficult given that such revisions are resource- and time-intensive. Further, while including dates for submission, DOD’s revised guidance does not clearly specify a time frame by which DOD components should revise the counternarcotics performance measures that are to be submitted to the database. We have previously reported that establishing timetables for the development of performance measures can create a sense of urgency that helps ensure the effort is taken more seriously. DASD-CN&GT officials noted that time frames by which DOD’s measures would be revised are being discussed. However, these officials do not expect new performance measures to be established in fiscal year 2010, and said that fiscal year 2011 would be the earliest year of full implementation of the guidance. Ensuring adequate resources are available. DOD officials noted that ensuring adequate resources—such as expertise and training in performance management—are available to develop performance measures at both DASD-CN&GT and the combatant commands will be a challenge. These officials noted that DOD employees tasked with developing performance measures and tracking the progress towards achieving goals are not sufficiently trained to design and monitor outcome performance measures. We have previously reported that access to trained staff assists agencies in their development of performance measures. Ensuring reliable data. DOD officials noted that ensuring data used to measure DOD performance are reliable is challenging. To measure the performance of its counternarcotics activities, DOD officials told us they rely heavily on external sources of data, such as U.S. law enforcement agencies and foreign government officials. This challenge can pose issues for DOD regarding data verification and ensuring that proper information is recorded for performance measures. DOD Rarely Uses the Performance Information Contained in Its Performance Measurement System to Manage Its Counternarcotics Activities and Has Applied Few Practices to Facilitate Its Use DOD makes limited use of its performance measurement system to manage its counternarcotics activities and has applied few practices to facilitate its use. We have found that the full benefit of collecting performance information is realized only when managers use the information to inform key decisions. While DOD has applied some practices to facilitate the use of the performance information in its system, it does not utilize certain key practices, such as frequently and effectively communicating performance information. Absent an effective performance management system, DOD lacks critical information that it could use to improve the management and oversight of its counternarcotics activities.
Agencies Can Use Performance Information to Manage for Results We have previously reported that, in addition to measuring performance, effective performance measurement systems include steps to use information obtained from performance measures to make decisions that improve programs and results. We identified several ways in which agencies can use performance information to manage for results, including using data to (1) identify problems and take corrective actions, (2) develop strategy and allocate resources, and (3) identify and share effective approaches. DOD Submits Performance Reports to ONDCP, But Makes Limited Use of the Information in Its Performance Measurement System to Manage and Oversee Its Counternarcotics Activities DOD officials representing DASD-CN&GT, AFRICOM, CENTCOM, EUCOM, NORTHCOM, SOUTHCOM, JIATF-S, and JIATF-W told us they rarely use information from DOD’s counternarcotics performance measurement system to manage counternarcotics activities. Specifically, they rarely use the system to: Identify problems and take corrective actions. Agencies can use performance information to identify problems or weaknesses in programs, to try to identify factors causing the problems, and to modify a service or process to try to address problems. DOD officials representing DASD-CN&GT and SOUTHCOM told us that they currently make limited use of the performance information in DOD’s performance measurement system to manage counternarcotics activities. Officials from DASD-CN&GT stated that they use data from the performance measurement system to produce reports for ONDCP, which may include information identifying problems in the implementation of DOD’s counternarcotics activities. However, in reviewing these documents, we found that the reports did not include a clear assessment of DOD’s overall progress toward its counternarcotics goals. For instance, the report submitted to ONDCP for fiscal year 2009 contained detailed information on 6 of DOD’s 285 counternarcotics performance measures, but did not clearly explain why the results of these 6 measures would be critical to the success of DOD’s counternarcotics program. Moreover, according to ONDCP, DOD’s reports for fiscal years 2007, 2008, and 2009 did not fulfill the requirements of ONDCP’s guidance because the reports were not authenticated by the DOD-IG. Further, officials from AFRICOM, CENTCOM, EUCOM, NORTHCOM, JIATF-S, and JIATF-W told us they do not use DOD’s performance measurement system to manage counternarcotics activities. While these officials indicated that they submitted performance information to the system’s database as required by DOD guidance, they stated they tend to manage programs using information not submitted to the system (see table 4). For example, CENTCOM officials told us information obtained in weekly program meetings regarding the timeliness and cost of counternarcotics projects, not data sent to the system’s database, is most often used to help them identify problems and make program adjustments. Recognizing the need to improve the information in the system’s database, officials from DASD-CN&GT told us that for fiscal year 2011 they are working with DOD components to integrate performance information into the system’s database that can be more useful for decision making. Officials from several combatant commands stated they could integrate performance information obtained from outside sources into the counternarcotics performance measurement system.
Officials from JIATF-S, for example, told us they collect and analyze a variety of data on counternarcotics activities that they do not input into DOD’s counternarcotics performance measurement system. On a daily basis, JIATF-S collects information on “cases”—that is, boats or planes suspected of illegal trafficking. In addition to tracking the number of cases, JIATF-S compiles information as to whether or not a particular case was targeted, detected, or monitored, and whether or not those actions resulted in interdictions or seizures of illegal drugs. By compiling this information, officials at JIATF-S told us they can better identify program outcomes, areas in which their efforts are successful, and ways to take corrective actions. Develop strategy and allocate resources. Agencies can use performance information to make decisions that affect future strategies, planning, budgeting, and resource allocation. DASD-CN&GT’s role includes both defining the strategic goals and managing the budgeting system of the DOD counternarcotics program. DOD’s counternarcotics guidance states that information from the counternarcotics performance measurement system will inform strategic counternarcotics plans, but it does not clearly state how the system will be used to inform decisions to allocate resources. Moreover, officials from DASD-CN&GT told us that the office does not currently link performance information from the counternarcotics performance measurement system’s database directly to budget allocation decisions. In addition, our analysis of DOD’s fiscal year 2011 Drug Interdiction and Counterdrug Activities Budget Estimates—which provides details on DOD’s fiscal year 2011 budget request for its counternarcotics activities—identified no clear link between budget allocation decisions and performance information in the system’s database. DOD officials told us they plan to incorporate performance information from the counternarcotics performance measurement system into future budget requests provided to Congress. Identify and share effective approaches. We have reported that high-performing organizations can use performance information to identify and increase the use of program approaches that are working well. According to DOD’s counternarcotics performance measurement system guidance, DASD-CN&GT will use performance information submitted to the system’s database to compile reports for ONDCP, which DASD-CN&GT has done. However, DASD-CN&GT officials told us they do not currently use the system to produce reports for DOD components, which could assist in identifying and sharing effective approaches among DOD’s components. While indicating performance reports could be a useful tool, officials from several DOD components told us they had not received such reports from DASD-CN&GT. DOD’s May 2010 guidance does not state whether the system will be used to produce such reports in the future. DOD Has Applied Few Practices to Facilitate the Use of Its Counternarcotics Performance Measurement System We have found that agencies can adopt practices that can facilitate the use of performance data. These include (1) demonstrating management commitment to results-oriented management; (2) aligning agencywide goals, objectives, and measures; (3) improving the usefulness of performance data to better meet management’s needs; (4) developing agency capacity to effectively use performance information; and (5) communicating performance information within the agency frequently and effectively.
As part of its role overseeing DOD's counternarcotics activities, DASD-CN&GT manages the DOD counternarcotics performance measurement system. DASD-CN&GT applies some practices to facilitate the use of its counternarcotics performance measurement system. For example, DASD-CN&GT has recently taken steps to demonstrate management commitment by issuing revised guidance emphasizing the development of improved performance measures and, according to DASD-CN&GT officials, by conducting working groups with some DOD components to assist them in revising performance measures. Moreover, DASD-CN&GT officials told us they are taking steps to increase staffing to better oversee the performance measurement system. We have found that the commitment of agency managers to results-oriented management is critical to increased use of performance information for policy and program decisions. Further, DASD-CN&GT has created a results framework that aligns agencywide goals, objectives, and performance measures for its counternarcotics activities. As we have previously reported, such an alignment increases the usefulness of the performance information collected by decision makers at each level, and reinforces the connection between strategic goals and the day-to-day activities of managers and staff. However, DASD-CN&GT has not applied certain key practices to facilitate the use of data, such as improving the usefulness of performance information in its performance measurement system, developing agency capacity to use performance information, and communicating performance information frequently and effectively. Furthermore, DOD officials told us they face challenges using DOD's performance measurement system to manage their activities due to (1) the limited utility of the performance measures and data currently in DOD's counternarcotics database, (2) insufficient capacity to collect and use performance information, and (3) infrequent communication from DASD-CN&GT regarding performance information submitted to the database. For instance, DOD's guidance emphasizes the development of performance measures that are, among other attributes, useful for management and supported by credible data. However, DOD officials from several combatant commands told us that the performance measures and targets currently in the system are of limited utility and will need to be revised. Moreover, officials from several DOD components emphasized the need to build additional capacity to use performance data, such as receiving training on how to revise performance standards and measures. We have found that the practice of building analytical capacity to use performance information—both in terms of staff trained to do analysis and the availability of research and evaluation resources—is critical to an agency using performance information in a meaningful way. Finally, DOD components told us that they received little feedback or direction from DASD-CN&GT regarding performance information they submitted to the system. We have previously reported that improving the communication of performance information among staff and stakeholders can facilitate the use of performance information in key management activities. For more information, see table 5.

Conclusions

DOD reported more than $1.5 billion in fiscal year 2010 for its counternarcotics activities, but it has not yet developed an effective performance measurement system with which to readily gauge progress toward its counternarcotics goals.
We have previously reported that effective performance measurement systems include steps both to measure performance in order to gauge progress and to use the information obtained to make key management decisions. DOD acknowledges weaknesses in its performance measurement system and has taken steps to improve the system, such as revising its guidance for the development of performance measures and holding working groups with DOD components. However, its current set of measures lacks key attributes of successful performance measures, such as balance, objectivity, and reliability. Moreover, DOD infrequently uses the information presently in its counternarcotics performance measurement system and has yet to fully apply key practices to facilitate its use. Absent an effective performance measurement system, DOD lacks critical performance information with which to improve its management decisions, eliminate wasteful or unproductive efforts, and conduct oversight of its activities.

Recommendations for Executive Action

To improve DOD's performance measurement system to manage and oversee its counternarcotics activities, we recommend that the Secretary of Defense take the following two actions:

1. To address weaknesses identified in DOD's counternarcotics performance measurement system, we recommend that the Secretary of Defense direct the Deputy Assistant Secretary for Counternarcotics and Global Threats to review the department's performance measures for counternarcotics activities and revise the measures, as appropriate, to include the key attributes of successful performance measures previously identified by GAO.

2. To address factors associated with the limited use of DOD's counternarcotics performance measurement system, we recommend that the Secretary of Defense direct the Deputy Assistant Secretary for Counternarcotics and Global Threats to apply practices that GAO has identified to facilitate the use of performance data.

Agency Comments and Our Evaluation

We provided a draft of this report to DOD and ONDCP for their review and comment. We received written comments from DOD, which are reprinted in appendix II. DOD concurred with our recommendations and stated that it has developed and begun to implement a plan to improve the quality and usefulness of its counternarcotics performance measurement system. ONDCP did not provide written comments. We received technical comments from DOD and ONDCP, which we have incorporated where appropriate. We are sending copies of this report to interested congressional committees, the Secretary of Defense, and the Director of the Office of National Drug Control Policy. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4268 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III.

Appendix I: Objectives, Scope, and Methodology

Section 1016 of the National Defense Authorization Act for Fiscal Year 2010 directed GAO to report on the Department of Defense's (DOD) performance measurement system used to assess its counternarcotics activities.
In response to this mandate, we examined the extent to which (1) DOD's counternarcotics performance measurement system enables DOD to track progress and (2) DOD uses performance information from its counternarcotics performance measurement system to manage its activities. Our work focused on the efforts of DOD to develop an effective counternarcotics performance measurement system. Within DOD, we spoke with officials from several relevant components involved in the management, oversight, and implementation of DOD's counternarcotics activities, including the Office of the Deputy Assistant Secretary of Defense for Counternarcotics and Global Threats (DASD-CN&GT), U.S. Africa Command (AFRICOM), U.S. Central Command (CENTCOM), U.S. European Command (EUCOM), U.S. Northern Command (NORTHCOM), U.S. Southern Command (SOUTHCOM), the Joint Interagency Task Force-South (JIATF-S), the Joint Interagency Task Force-West (JIATF-W), and the DOD Inspector General (DOD-IG). We also discussed DOD efforts with officials from the Office of National Drug Control Policy (ONDCP), the organization that establishes U.S. counternarcotics goals and coordinates the federal budget to combat drugs. To examine the extent to which DOD's counternarcotics performance measurement system enables the department to track its progress, we analyzed DOD strategy, budget, and performance documents, such as DOD's Counternarcotics Strategy, Drug Interdiction and Counterdrug Activities Budget Estimates, and Performance Summary Reports. We reviewed relevant DOD and ONDCP guidance on performance measures, such as DOD's Standard Operating Procedures for the Counternarcotics Performance Metrics System and ONDCP's Drug Control Accounting circular. Further, we evaluated a generalizable random sample of DOD's fiscal year 2009 counternarcotics performance measures (115 of 239 measures) to assess the extent to which these measures adhered to GAO criteria on the key attributes of successful performance measures. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample's results as a 95 percent confidence interval (e.g., plus or minus 6 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. To evaluate the sample, two analysts independently assessed each of the performance measures against nine attributes of successful performance measures identified by GAO. The analysts then met to discuss and resolve any differences in the results of their analyses, and a supervisor reviewed and approved the final results of the analysis. In conducting this analysis, we analyzed information contained in DOD's counternarcotics performance measurement system database and spoke with DOD officials responsible for managing counternarcotics activities and entering information into the database. We did not, however, review supporting documentation referenced but not included in the system's database, nor did we assess other databases that might exist at the DOD component level. We also discussed DOD's performance measures with cognizant officials from ONDCP and several DOD components, including DASD-CN&GT, AFRICOM, CENTCOM, EUCOM, NORTHCOM, SOUTHCOM, JIATF-S, JIATF-W, and the DOD-IG.
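For context on the sampling precision described above, the margin of error for an estimated proportion from this kind of sample can be approximated with the standard formula, adjusted by a finite population correction because the 115 sampled measures are a large share of the 239-measure population. The following calculation is an illustrative reconstruction rather than a formula taken from the report; the exact margin varies with the proportion being estimated.

```latex
% Illustrative 95 percent margin of error with a finite population correction,
% using N = 239 measures, n = 115 sampled measures, and z = 1.96.
\[
  \mathrm{MOE} \;=\; z \sqrt{\frac{p(1-p)}{n}} \sqrt{\frac{N-n}{N-1}}
           \;=\; 1.96 \sqrt{\frac{p(1-p)}{115}} \sqrt{\frac{239-115}{238}}
\]
\[
  \text{For example, } p = 0.3 \Rightarrow \mathrm{MOE} \approx 0.060, \qquad
  p = 0.5 \Rightarrow \mathrm{MOE} \approx 0.066 .
\]
```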
To evaluate the extent to which DOD uses performance information from its counternarcotics performance measurement system to support its mission, we held discussions with officials from DOD components—including DASD-CN&GT, AFRICOM, CENTCOM, EUCOM, NORTHCOM, SOUTHCOM, JIATF-S, and JIATF-W—to determine the ways in which these components use information from DOD's system, as well as other sources of performance information. We also examined DOD's Performance Summary Reports and fiscal year 2011 Drug Interdiction and Counterdrug Activities Budget Estimates to assess the extent to which these materials reported that DOD used performance information from its counternarcotics performance measurement system database. Further, we analyzed the extent to which DOD applies key management practices previously identified by GAO to facilitate the use of performance information from its counternarcotics performance measurement system. We also traveled to Tampa, Miami, and Key West, Florida, where we visited CENTCOM, SOUTHCOM, and JIATF-S. In these visits, we met with DOD officials responsible for the management and implementation of counternarcotics activities to discuss DOD's use of performance data to support its counternarcotics mission. To determine the completeness and consistency of DOD funding data, we compiled and compared data from DOD with information from cognizant U.S. agency officials in Washington, D.C. We also compared the funding data with budget summary reports from ONDCP to corroborate their accuracy. Although we did not audit the funding data and are not expressing an opinion on them, based on our examination of the documents received and our discussions with cognizant agency officials, we concluded that the funding data we obtained were sufficiently reliable for the purposes of this report. We conducted this performance audit from December 2009 to July 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Comments from the Department of Defense

Appendix III: Contact and Staff Acknowledgments

Contact

Staff Acknowledgments

In addition to the individual named above, Juan Gobel, Assistant Director; Elizabeth Curda; Martin de Alteriis; Karen Deans; Mark Dowling; Justin Fisher; Richard Geiger; Eileen Larence; Marie Mak; Christopher Mulkins; John Pendleton; Elizabeth Repko; and Mark Speight made key contributions to this report.
The Department of Defense (DOD) leads detection and monitoring of aerial and maritime transit of illegal drugs into the United States in support of law enforcement agencies. DOD reported resources of more than $1.5 billion for fiscal year 2010 to support its counternarcotics activities. Congress mandated GAO report on DOD's counternarcotics performance measurement system. Specifically, this report addresses the extent to which (1) DOD's counternarcotics performance measurement system enables DOD to track progress and (2) DOD uses performance information from its counternarcotics performance measurement system to manage its activities. GAO analyzed relevant DOD performance and budget documents, and discussed these efforts with officials from DOD and the Office of National Drug Control Policy (ONDCP). DOD does not have an effective performance measurement system to track the progress of its counternarcotics activities; however, it continues efforts to improve the system. GAO has previously reported that measuring performance provides managers a basis for making fact-based decisions. DOD has established performance measures for its counternarcotics activities and a database to collect performance information, including measures, targets, and results. However, these measures lack a number of attributes, such as being clearly stated and objective, which GAO considers key to successful performance measures. In May 2010, DOD issued new guidance for its counternarcotics performance measurement system. However, DOD officials noted the department will face challenges implementing the guidance. These challenges include creating performance measures that assess program outcomes and ensuring adequate resources, such as expertise in performance management, are available to develop measures. DOD rarely uses the information in its performance measurement system to manage its counternarcotics activities and has applied few practices to facilitate its use. GAO has found that the full benefit of collecting performance information is realized only when managers use it to inform key decisions. However, DOD officials responsible for counternarcotics activities throughout the department told us they rarely use data submitted to the system to manage activities. Rather, they tend to manage programs using data not submitted to the system, such as information obtained in weekly program meetings regarding the cost and timeliness of projects. Moreover, officials responsible for oversight of DOD's activities stated they use the system to develop reports for ONDCP, but not to allocate resources. While DOD has applied some practices to facilitate the use of the performance information in its system, it does not utilize certain key practices identified by GAO, such as frequently and effectively communicating performance information. Absent an effective performance management system, DOD lacks critical information to use to improve the management and oversight of its counternarcotics activities.
Background DOD operates a worldwide health care program, through which it provides medical care and assistance to 9.6 million active duty service members, their families, and other eligible beneficiaries. Its health care operations are significant, involving approximately 135,000 personnel in approximately 700 Army, Navy, and Air Force medical facilities in 12 domestic regions, as well as European, Pacific, and Latin American regions. The department’s fiscal year 2010 budget for providing health care services was about $49 billion. DOD’s health care program is a responsibility of the Office of the Undersecretary of Defense for Personnel and Readiness. Within the Office of the Undersecretary is the Office of the Assistant Secretary of Defense for Health Affairs, which is responsible for the department’s Military Health System (MHS) program. MHS has two missions: wartime readiness (maintaining the health of service members and treating wartime casualties) and peacetime care (providing for the health care needs of the families of active-duty members, retirees and their families, and survivors). The Assistant Secretary of Defense for Health Affairs establishes policy regarding health care for all DOD beneficiaries and also plans and budgets for health care operations and maintenance. At the same time, each military service has its own medical department that operates medical facilities (referred to as military treatment facilities) and recruits and funds military medical personnel. Currently, the military treatment facilities include 59 military hospitals and 650 medical and dental clinics. DOD provides about half of MHS services through these military facilities, supplementing this by contracting for health services with civilian contract providers. Active- duty members are required to obtain care at military treatment facilities if such care is available; in contrast, retirees and dependents may obtain care at either military facilities or through civilian contract providers. History of DOD’s Electronic Health Record System To facilitate the delivery of medical services, in 1988, DOD initiated the acquisition of an electronic health record system to support all of its hospitals and clinics. This system, the Composite Health Care System (CHCS), was intended to be the primary medical information system deployed worldwide to support the department’s hospitals and clinics. DOD envisioned that it would provide automated support for patient administrative functions (such as registrations, admission, and disposition); ordering and retrieving results of laboratory and radiology procedures; ordering and recording prescriptions; and patient appointment scheduling. CHCS was deployed in 1993; however, it was supported by numerous stand-alone medical information systems, such as the department’s Ambulatory Data System, Preventive Health Care Application, and Nutrition Management Information System, and was not designed to facilitate the exchange of information from one system or military treatment facility to the next. Specifically, CHCS was facility-centric, in which each facility stored only its own medical information for patients using different data standards. Therefore, if a medical provider wanted to obtain complete information about a patient, a query would have to be made to each of the CHCS locations—a time- and resource-intensive activity. 
Additionally, when a patient moved to another region, the electronic records did not transfer across the CHCS locations because of the different data standards at each location. The lack of an integrated system perpetuated the reliance on paper-based records, leading DOD to pursue a comprehensive electronic health care record. To this end, in 1997, the department initiated the CHCS II program to address the need for a comprehensive, lifelong, computer-based health care record for every service member and their beneficiaries. The vision for CHCS II was to provide access to a patient’s health care information with a single query by providers in military treatment facilities. Specifically, with this system, DOD planned to provide worldwide access to outpatient, inpatient, dental, and vision records, and to make them available 24 hours a day, 7 days a week. This new system was to be accomplished with the use of a centralized repository of all health care information derived using common data standards. The system was to build on capabilities of existing systems, subsuming their functionality over time, while adding new functionality to meet mission needs. CHCS II’s architecture was to be an open system, client-server design of three levels: the user (client) workstation at various DOD locations, the DOD computers’ (servers’) operating system and storage hardware and software, and a clinical data repository at a remote computing center where the information would be stored. The department had planned to connect all workstations at an installation’s hospital or clinic to the servers through the installation’s local or wide area network. It had planned to divide the system acquisition into seven software releases to be delivered incrementally by June 2006 at an estimated cost of $4.3 billion (in 1998 dollars). The department’s original plan had called for deploying a prototype system in October 1998 and beginning deployment of the initial version in about April 1999. However, the department did not meet its schedule to deliver initial CHCS II system capabilities and associated mission benefits by April 1999; it reported that the initial deployment was delayed by 6 months because of a failure to meet initial performance requirements and changes in system requirements. In July 2000, the department redefined its plans for the system to include adopting a new technical architecture, establishing a means for controlling changes to requirements, and committing to the incremental release of system capabilities. It also delayed the decision date for deploying the initial system capabilities (for outpatient documentation) to January 2001—21 months later than its original commitment for the system. However, the department did not meet this commitment, and subsequently established a new plan that called for incrementally deploying functionality to achieve the system’s full operational capability. Delivery of the system was to commence in July 2003 and was to be completed by September 2007, yielding four blocks of capabilities that would incrementally populate the system’s electronic health record at a revised estimated life-cycle cost of $3.8 billion through 2017. 
Block 1 was to make outpatient information available worldwide on a continuous basis through the electronic health record system (as opposed to CHCS legacy functionality, which only made records available at a single location), provide encounter documentation, aid in order entry/results retrieval, assist in encounter coding support, provide alerts and reminders (such as drug interaction alerts and special duty status), facilitate role-based security, and establish a health data dictionary and a master patient index. Block 2 was to provide automated clinical practice guidelines, optometric documentation, and dental documentation. Block 3 was to replace CHCS ancillary functionality for results retrieval and order entry for outpatient encounters, such as laboratory and anatomic pathology, pharmacy, and radiology. Block 4 was to provide for inpatient order entry and management, including inpatient clinical and critical care documentation. When delivered, the system was to allow users to create and store computer-based patient records using workstation- and computer-based software packages. Each facility's workstations and servers were to be connected via each installation's local or wide area networks. Further, each installation was to be connected through a wide area network to a defense computing center, where the patient records would be stored in a database known as the clinical data repository. DOD intended that medical providers would ultimately be able to access a patient's computer-based record from any military treatment facility, no matter where the patient was being or had been treated. According to program documentation, the department began worldwide deployment of Block 1 in January 2004 and completed the deployment of this block in December 2006. However, program officials stated that users experienced numerous performance problems with the capabilities that were delivered, which affected the system's usability, speed, and availability. Specifically, the department reported experiencing the following problems with the delivery of Block 1:

Usability. The system did not support varied clinical workflow to meet the needs of various types of practitioners, had missing or incomplete clinical capabilities (e.g., consult and referrals management, ancillaries, specialty workflow support), did not support fully unified or user-customizable patient data, and did not have a user-friendly interface.

Speed. The system did not have the speed or performance to efficiently support clinicians' workflow in certain environments and was affected by problems, such as coding and infrastructure issues, that degraded its speed.

Availability. The system was not reliable on a 24-hour-a-day, 7-day-a-week basis; it had no backup for disaster recovery; and the data repository experienced system shutdowns and functional interruptions.

As a result of the system problems associated with Block 1, DOD set a new date for system completion—September 2011—and increased the projected life-cycle cost of the system to approximately $5 billion, which it attributed primarily to the need for increased operations and maintenance for Block 1. The department also took a number of other steps with regard to the initiative. Specifically, in May 2005, it terminated plans for deploying the Block 4 inpatient functionality with the intent of moving this functionality into Block 3.
However, due to continuing performance problems with the functionality that had been delivered, and because the Block 3 deployment had exceeded the department's 5-year limit for achieving initial operational capability by January 2008, DOD terminated Block 3 (laboratory, radiology, and pharmacy) as well. This action left only one of the four planned blocks—Block 2—for implementation. Although the department reduced the scope of the initiative to only two blocks, the estimated life-cycle costs were revised back to the original $3.8 billion (through 2021). However, the department encountered performance problems with the Block 2 dental module as well and, in December 2009, MHS senior leadership implemented a strategic pause in its further deployment. Beyond these actions, the department took other steps over the course of the initiative. Specifically, in November 2005, the Assistant Secretary of Defense for Health Affairs announced a change in the name of the system from CHCS II to AHLTA, but did not give a specific reason for doing so. Further, as part of its attempt to improve the system, DOD awarded several contracts between fiscal year 2006 and fiscal year 2009 for a total of approximately $40 million to address performance problems and implement software enhancements. The contractors began deployment of these software enhancements (which DOD referred to as AHLTA 3.3) in December 2008.

DOD's Acquisition Process for Its Electronic Health Record

To acquire its electronic health record system, DOD used several contractors and types of contracts. These included fixed-price, time-and-materials, and cost-plus-fixed-fee contracts, each of which involved a different level of cost or performance risk for the government. The prime developer and lead integrator for CHCS II, Integic (acquired by Northrop Grumman in 2005), was awarded a time-and-materials contract for about $65.4 million in 1997 and was tasked to perform systems engineering, requirements analysis, architecture evaluation, software design and development, engineering and development testing, test and evaluation, maintenance, site installation and implementation, and training. Contracts for system development and integration continued through fiscal year 2009. DOD also used noncompetitive contracts for the development of the system. According to the program office, 11 noncompetitive contracts and task or delivery orders, totaling approximately $44.6 million, were awarded for the system from fiscal year 2004 through fiscal year 2012. Program officials stated that the noncompetitive contracts were awarded on the basis that (1) DOD's need for the supplies or services was so urgent that providing each awardee under a multiple award contract a fair opportunity would have resulted in unacceptable delays; (2) only one awardee was capable of providing the supplies or services at the required level of quality because the supplies or services ordered were unique or highly specialized; or (3) an order was a logical follow-on to an order already issued under the contract. According to AHLTA program documentation, the system acquisition was guided by the defense acquisition system, which is documented in DOD Instruction 5000.02. The defense acquisition system consists of five key program life-cycle phases and three related milestone decision points that major acquisitions must meet in order to proceed to the next phase of the acquisition.
At each milestone point, the program is reviewed by a milestone decision authority to determine whether it can move to the next life-cycle phase. The five phases of the defense acquisition are as follows: 1. Materiel solution analysis: The purpose of this phase is to assess, through an analysis of alternatives, potential solutions to satisfy an approved capability need. 2. Technology development: The purpose of this phase is to determine and mature the appropriate set of technologies to be integrated into the investment solution by iteratively assessing the viability of the various technologies while simultaneously refining user requirements. To enter this phase, a program must have an approved analysis of alternatives and pass milestone A. To exit this phase, the acquisition must demonstrate affordable technology. 3. Engineering and manufacturing development: The purpose of this phase is to develop a system or an increment of capability, and demonstrate integrated system design through developer testing to show that the system can function in its target environment. To enter this phase, a program must have approved requirements and pass milestone B. To exit this phase, the acquisition must meet performance requirements in the intended environment. 4. Production and deployment: The purpose of this phase is to achieve an operational capability that satisfies the mission needs, as verified through independent operational test and evaluation, and to implement the system at all applicable locations. To enter this phase, a program must have completed development testing and pass milestone C. To exit this phase, the system must be deployed and ready to operate for all users. 5. Operations and support: The purpose of this phase is to operationally sustain the system in the most cost-effective manner over its life cycle. DOD criteria do not require that the milestone decision authority conduct milestone reviews during the period after a system has been deployed and stabilized. For the purpose of conducting milestone reviews, AHLTA was assigned the highest level of oversight for DOD information system acquisitions. As such, oversight was provided within the Office of the Secretary of Defense. Management Structure for AHLTA Various DOD units were involved in acquiring and deploying AHLTA. As the principal advisor to the Assistant Secretary of Defense for Health Affairs and to the DOD medical leaders on all matters related to information management and information technology, the MHS chief information officer (CIO) has primary responsibility for overseeing the acquisition, development, testing, and deployment of AHLTA to the military treatment facilities. Key offices within the Office of the MHS CIO perform critical information management and information technology functions to support AHLTA, including the Joint Medical Information Systems Office, which is responsible for the testing, implementation, training, fielding of system components, operations, maintenance, and ultimate disposal of system components. Also within MHS, the Composite Health Care System (CHCS) II Program Office was established in January 1997 to provide direct management of the project; it had operational responsibility for the acquisition and deployment of the electronic health record, as well as the migration of the numerous standalone clinical information systems. In fiscal year 2000, the CHCS II program office was renamed the Clinical Information Technology Program Office (CITPO). 
In 2008, with the merger of CITPO and the MHS Theater Medical Information Program Office—Joint, the combined office was renamed the Defense Health Information Management System (DHIMS). To provide oversight in accordance with DOD's defense acquisition system, the Assistant Secretary of Defense for Networks and Information Integration, within the Office of the Secretary of Defense, was designated the milestone decision authority responsible for deciding at each acquisition cycle milestone whether the project could proceed to the next milestone. The project also received oversight from several other bodies, including the Human Resources Management Investment Review Board, headed by the MHS CIO, and the Overarching Integrated Project Team, which evaluated project performance in accordance with DOD 5000 and approved acquisition program baselines and acquisition decision memorandums. Table 1 summarizes the assignment of responsibilities for AHLTA among the various DOD units.

Previous Reviews of DOD's Electronic Health Record Initiatives Highlighted Management Deficiencies and Risks

DOD's Inspector General and we have previously reported on the department's actions toward acquiring its new health care information system and have noted the need for improvement in key management areas, such as project management, contract management, and risk management. In reporting on the department's efforts in January 1999, the Inspector General noted that the project management system for the acquisition (called CHCS II at the time of the report) was not complete. While finding that DOD had taken positive actions to manage the acquisition, the report noted that the department had not established a project management control system to evaluate and measure the program's performance. In addition, the report stated that the program's funding visibility was limited because DOD was combining funding for sustaining the system with modernization funding for CHCS and other clinical business area automated systems. The Inspector General made recommendations related to designing and implementing a project management control system, the reporting of funding for the system, and providing milestone exit criteria that demonstrated the level of performance, accomplishments, and progression. Further, in May 2006, the Inspector General conducted an evaluation of the project's program requirements, the related acquisition strategy, and system testing to determine whether the system was being implemented to meet cost, schedule, and performance requirements. While the report found that the program management office was using risk mitigation techniques, such as risk management, lessons learned, and performance monitoring, the program remained at high risk because of the complexities of integrating commercial, off-the-shelf software into the existing program. In particular, the report noted that the program office had not identified any mitigation strategies to reduce and control program risk related to the integration of commercial, off-the-shelf software for the third block of functionality. As a result, the Inspector General concluded that the program was vulnerable to continued increases in cost, extended schedules for implementation, and unrealized goals in performance from underestimating the difficulties of integrating commercial, off-the-shelf products.
Subsequently, the program office developed mitigation strategies, but the Inspector General reported that they were inadequate and did not follow risk management guidance, including identifying significant activities and milestones. Accordingly, the Inspector General recommended that the program office develop more robust mitigation strategies in accordance with the program office's risk management plan. We have also reported on DOD's management of the system acquisition, noting the need for improvements. For example, in 2002, we reported that, because the department had not estimated the cost of delivering the initial system capabilities, it had lacked a cost commitment against which to measure progress. In addition, we noted that program benefits were in question since measurements had not yet begun and that costs were about two-and-a-half times the 1998 estimate. Further, DOD had initially identified a single economic justification for the entire project, which had been used as the basis for its system releases, and had not treated the releases as separate investment decisions. Finally, DOD had not followed performance-based contracting practices, resulting in the risk that the system would take longer to acquire and cost more than necessary. Accordingly, we recommended that DOD expand its use of best practices in managing the system by (1) modifying the project's investment strategy to justify investment in each system release before beginning development and measuring return on investment and (2) employing performance-based contracting practices where possible on all future delivery orders. The department agreed with these recommendations and took actions to update and validate its life-cycle cost estimate in September 2002, which it used to approve the deployment of the system release. Also, the department employed performance-based contracting practices, such as using performance standards, quality assurance plans, and contractor incentives on CHCS II delivery orders.

AHLTA Has Limited Capabilities and Continues to Experience Performance Problems

Despite having obligated approximately $2 billion over the 13-year life of its initiatives to acquire and operate an electronic health record system, as of September 2010, DOD continued to experience performance problems with the one block of AHLTA functionality (Block 1) that it had fully deployed and with a second block of functionality (Block 2) that it had partially deployed. Further, after having terminated its plans for deploying the two other blocks of functionality (Block 3 and Block 4) that were intended to be part of the system, the department has identified April 2011 as the date by which it now expects to achieve full operational capability of the scaled-back AHLTA system. Program officials told us they are taking steps to stabilize the existing system capabilities through 2015, as the department proceeds with plans to pursue yet another new electronic health record system. In deploying Block 1, the department reported that it achieved all of the planned outpatient capabilities for direct patient care, including encounter documentation, order entry and results retrieval, encounter coding support, consult tracking, and alerts and reminders. According to the department, it deployed the AHLTA outpatient documentation capability worldwide, providing 77,000 clinicians with the ability to document over 148,000 outpatient encounters daily.
The department stated that medical providers can access the patient's computer-based record from any military treatment facility. Also, DOD currently shares a significant amount of patient information with the Department of Veterans Affairs, including outpatient pharmacy data, laboratory results, and radiology results on shared and separated service members. In addition, with the deployment of Block 2, including enhancements to Block 1, dental capabilities were provided to 73 of 375 dental treatment facilities, allowing graphical dental charting, order entry and results retrieval, and automated dental readiness classification. In this regard, the capabilities were deployed to 46 Air Force dental facilities, 25 Navy facilities, and 2 Army facilities. Further, program officials stated that in October 2009, because of technical and functionality upgrades made over time to the legacy Spectacle Request Transmission System, funding was discontinued for optometric capabilities for Block 2. The department stated that it plans to achieve full operational capabilities by April 2011. Table 2 shows the capabilities planned and delivered for Blocks 1 and 2. Nonetheless, program officials, as well as users of the system, acknowledged that problems with the system's performance have persisted. During a demonstration of the system's operation in April 2010, medical providers discussed problems with AHLTA, including limitations in its availability and usability. For example, the providers participating in the demonstration stated that it is time-consuming to document encounters using AHLTA because of the time required to enter information and navigate through the application screens. Thus, they sometimes must document portions of an outpatient encounter after the patient leaves; in their experience, using the system at the time of the encounter would take attention away from the patient for unacceptable periods of time. Also, they stated that when system downtime occurs, providers can neither access patient data nor electronically document care; in these instances, medical notes are recorded manually and later entered in the system after it returns to operation—an inefficient process. As noted in the earlier discussion, since fiscal year 2006 the department has been taking steps to address performance problems and enhance existing system capabilities. DOD is proceeding with what it refers to as a "stabilization effort" to continue making improvements to the system and provide ongoing capabilities until a new system is acquired. According to DOD officials, the estimated cost of this effort for fiscal year 2010 through fiscal year 2015 is $826.3 million. The stabilization effort is expected to improve the speed, availability, and usability of the system; moreover, according to officials in the Office of the Deputy Secretary of Defense, it is expected to allow the department to meet its near-term needs and implement additional enhancements to support its future system.

DOD Has Initiated Planning Activities for the EHR Way Ahead

Because AHLTA has consistently experienced performance problems and has not delivered the full operational capabilities intended, DOD has initiated plans to develop a new electronic health record system. This new initiative is called the Electronic Health Record (EHR) Way Ahead.
As with AHLTA, department officials stated that the new electronic health record system is expected to be a comprehensive, real-time health record for active and retired service members, their families, and other eligible beneficiaries. They added that the new system is being planned to address the capability gaps and performance problems of previous iterations, and to improve existing information sharing between DOD and the Department of Veterans Affairs and expand information sharing to include private sector providers. Thus far, the department has taken several steps to launch its acquisition of the new system. Specifically, in February 2010 it established the EHR Way Ahead Planning Office to identify options for the future electronic health record system. The planning office currently resides within the MHS Joint Medical Information Systems Program Executive Office under the Office of the CIO. In May 2010, the department approved plans to assess solutions for the new electronic health record system. In this regard, the planning office began conducting an analysis of alternatives to provide guidance on selecting a technical solution. According to planning officials, efforts to develop the analysis of alternatives are being supervised by the Office of the Assistant Secretary of Defense for Health Affairs, and this analysis is expected to define and evaluate reasonable alternatives for meeting the capability requirements. The analysis is currently scheduled to be completed by December 2010. To facilitate the analysis of alternatives, planning officials stated that they had identified system capabilities needed to meet the department’s medical mission. They added that a list of the “top 10” priority capabilities for a new system had been developed based on the gaps identified in prior iterations of their electronic health systems. (These priorities are summarized in table 3.) According to planning documents, following completion of the analysis, DOD expects to select a technical solution and to develop and release a delivery schedule. DOD’s fiscal year 2011 budget request includes $302 million for the EHR Way Ahead initiative. For fiscal year 2012, the department intends to submit an updated budget request and the schedule for delivery of the EHR Way Ahead based on the results of the analysis of alternatives. AHLTA Performance Was Hindered by Weaknesses in Key Acquisition Management and Planning Processes The success of a large information technology project such as AHLTA is dependent on an agency possessing capabilities to effectively plan and manage acquisitions, design the associated systems, define and manage system requirements, and use effective measures to gauge user satisfaction. In the case of AHLTA, weaknesses in these key management areas contributed to DOD delivering a system that provided fewer capabilities than originally expected, experienced persistent performance problems, and ultimately, did not fully meet the needs of its intended users. Alleviating these areas of weakness will be essential to the success of further initiatives, including the AHLTA stabilization effort and the EHR Way Ahead, that the department undertakes in pursuit of its electronic health record system capabilities. 
Project Plan Was Incomplete and Not Maintained Program management principles and best practices emphasize the importance of having a project management plan in place that, among other things, establishes a complete description that ties together all program activities and evolves over time to continuously reflect the current status and desired end point of the project. An effective plan is comprised of a description of the program’s scope, cost, lines of responsibility and authority, management processes, and schedule. Such a plan incorporates all the critical areas of system development and is to be used as a means of determining what needs to be done, by whom, and when. Other guidance, such as our Information Technology Investment Management framework, states that effective program oversight of IT projects and systems, including those in operation and maintenance, involves maintaining approved project management plans that include expected cost and schedule milestones and measurable benefit and risk expectations. However, officials did not follow best practices in developing a project management plan to guide the department’s electronic health record system. Although the department established a project management plan, it did not include several standard components such as the project’s scope, a requirements management plan, cost estimates and baseline, a schedule, and a staffing management plan. In addition, although DOD identified the plan as a keystone document for guiding the project, the plan was last revised in 2005 and was not updated during subsequent development work and the operations and maintenance phase to reflect significant changes to the program. These changes included termination and postponement of planned capabilities, and revisions to the acquisition processes used to guide the AHLTA program. As a result, a plan was not in place to effectively guide the program throughout these changes. Moreover, there is no such plan to guide current activities associated with the stabilization effort, which, as discussed previously, involves attempts to address system performance problems and enhance functionality. According to program officials, the project management plan was last revised in 2005 before their focus shifted to addressing the system performance problems that occurred as a result of completing Block 1 deployment in December 2006. Nevertheless, significant changes occurred to the program’s scope, cost, and schedule after Block 1 deployment, and the agency lacked a current and complete plan to guide activities and measure program progress. Going forward, developing and maintaining a comprehensive project plan will be an essential tool for overseeing the AHLTA stabilization effort, which is to provide crucial improvements to the system and act as a bridge over the next 5 years to the deployment of the EHR Way Ahead system. Further, having a comprehensive and current project plan for the EHR Way Ahead program will help to guide the project and provide oversight of the project’s progress. Without a project management plan that reflects the status and goals of the project, DOD increases the risk that stakeholders will not have the insight into program status that is needed to exercise effective oversight of both the AHLTA stabilization effort and the EHR Way Ahead acquisition. 
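To make the completeness and currency criteria above concrete, the sketch below shows the kind of automated check an oversight body could run against a project management plan record. It is a hypothetical illustration only: the component names, the one-year staleness threshold, and the example record are invented for this sketch and are not drawn from DOD or GAO documentation.

```python
from datetime import date

# Hypothetical illustration: the required components and the example plan record
# are invented for this sketch; they are not taken from DOD documentation.
REQUIRED_COMPONENTS = {
    "scope",
    "requirements_management_plan",
    "cost_estimate_and_baseline",
    "schedule",
    "staffing_management_plan",
}

def review_plan(plan: dict, max_age_days: int = 365) -> list[str]:
    """Return findings on missing components and staleness for a plan record."""
    findings = []
    missing = REQUIRED_COMPONENTS - set(plan.get("components", []))
    if missing:
        findings.append(f"missing components: {sorted(missing)}")
    last_revised = plan.get("last_revised")
    if last_revised is None:
        findings.append("no revision date recorded")
    elif (date.today() - last_revised).days > max_age_days:
        findings.append(f"plan not updated since {last_revised.isoformat()}")
    return findings

# Example: a plan last revised in 2005 with most required components absent.
example_plan = {
    "components": ["schedule"],
    "last_revised": date(2005, 6, 1),
}
for finding in review_plan(example_plan):
    print(finding)
```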
DOD Lacked a Systems Engineering Plan to Guide the Electronic Health Record System’s Design According to industry best practices, systems engineering governs the total technical and managerial effort required to transform a set of user requirements and expectations into specific capabilities and, ultimately, into a system design that will meet users’ needs. Systems engineering practices include developing solutions for achieving system performance requirements such as system availability, and ensuring compatibility when integrating multiple systems and their components. Further, DOD guidance states that a tailored and detailed systems engineering plan is a critical tool for guiding systems engineering practices throughout the life of an acquisition program. Having such a plan is particularly important for a system characterized by significant technical complexities. DOD’s electronic health record system design reflected numerous technical complexities, such as the need to capture, manage, and share health information across a worldwide network that must be available 24 hours a day, 7 days a week, and that is to serve a transient patient population. In addition, the system design involved a network that had to be integrated with a central patient database and multiple nonstandard hardware and software platforms, such as commercial, off-the-shelf products at over 800 military treatment facilities. Nonetheless, although the program office recognized these types of system complexities as being part of the electronic health record system design, the office never established a tailored systems engineering plan to guide the acquisition, or to facilitate the resolution of the many performance problems that have plagued the system since its initial deployment. In this regard, a particularly troublesome area for the department has been in deploying enhancements to the system. For example, following Block 1 deployment in 2006, the department implemented local cache servers in an attempt to improve the system’s operational availability. According to the department, the specific purpose of the local cache servers had been to mitigate the need to access patient medical information in the central data repository during system outages. However, after the servers were deployed, DOD realized that the placement of the servers within the system architecture did not resolve the problem and created a single point of failure. Rather than yield operational improvements, department officials acknowledged that these actions resulted in additional challenges, including the need for a costly local cache server redesign, which was begun in fiscal year 2009. Program documentation noted that the local cache server effort was probably one of the most difficult engineering challenges that the program office had faced so far. Further, as various issues were faced, it became increasingly clear that detailed planning in the earlier stages was not what it could have been. In April 2010, clinicians demonstrating the system at the Bethesda Naval Medical Center stated that the servers continued to be a major contributing factor to system availability issues. The lack of a systems engineering plan to guide the program office through this type of complexity is particularly notable in light of the DOD Inspector General’s report of 2006, which stated that inadequate planning for technical complexities significantly impacts the cost, schedule, and performance of a program. 
The report further stated that the AHLTA program office had underestimated the technical complexity of integrating products with the electronic health record system and, as a result, remained at high risk for continued cost increases, schedule overruns, and unrealized performance goals. In discussing this matter, agency officials stated that a tailored systems engineering plan had not been developed to guide the design of AHLTA because such a plan was not required when the system was originally planned. Specifically, the officials stated that it was not until February 2004 that DOD issued a policy requiring that a systems engineering plan be in place for acquisition programs' milestone reviews, and all milestone reviews for AHLTA had been completed prior to this time. However, current DOD guidance emphasizes the need for a tailored systems engineering plan to guide all systems engineering practices, including those that occur after the completion of milestone reviews. Without a tailored systems engineering plan to guide the program's efforts to address long-standing system performance problems as part of the AHLTA stabilization effort, the department may continue to be challenged in achieving the desired results. Further, in planning for the acquisition of the new EHR Way Ahead system, it will be essential that the department establish a detailed and tailored plan early in the process to avoid encountering technical challenges similar to those of the AHLTA program, and thus again failing to meet users' needs.

Weaknesses in DOD's Requirements Processes Impacted AHLTA's Usability

According to recognized guidance, using disciplined processes for developing and managing requirements can help reduce the risks of developing a system that does not meet user and operational needs. Requirements should serve as the basis for establishing agreement between users and developers and a shared understanding of the system to be developed. Effective requirements development practices include, among other things, involving users in identifying requirements throughout the project's life cycle to ensure that system requirements are complete and accurately reflect their needs. Effective requirements management practices include maintaining bidirectional traceability of requirements to ensure that system-level requirements can be traced both backward to high-level operational requirements and forward to low-level system design specifications. For the AHLTA acquisition, program documentation revealed that users were not adequately involved throughout the requirements development process. According to the documentation, users did not seek involvement in the requirements development process and system developers did not seek user input when making changes to requirements. As a result, requirements were neither complete nor sufficiently detailed to guide system development, and they did not provide an adequate shared understanding between the users and developers of how the system was to be developed. Program documentation noted that requirements often were not adequately specified and did not adequately reflect user needs. In particular, the program documentation revealed that, while users were involved in developing an initial set of requirements used to make system acquisition decisions, they were largely not involved in identifying new requirements and making changes to existing ones while the system was being developed and deployed.
In certain instances, because users were involved only at the beginning and end of the requirements development process, they were able to determine that capabilities would not meet their needs only after those capabilities had already been deployed. For example, when the dental application was in the process of being deployed to Army, Navy, and Air Force sites, the MHS senior leadership voted to halt further training and implementation because users reported that the capabilities were not complete and did not address their needs. Consequently, alternate dental solutions will be explored as part of the analysis of alternatives for the EHR Way Ahead, resulting in additional costs and delays in deploying dental capabilities that will meet users' requirements. Since the initial deployment, the department has taken steps to increase user involvement in defining requirements. For example, to better involve users in the requirements process and identify issues with system usability, the program office held conferences in 2006 at which users identified over 200 new requirements for inclusion in the system. Program officials stated that the requirements identified during the conferences were used to develop the AHLTA 3.3 software release. However, our evaluation of the requirements traceability matrix used to develop the AHLTA 3.3 release showed that bidirectional traceability had not been fully established; thus, the requirements were not always linked to high-level operational requirements or to more detailed design specifications. Without adequate traceability, the department cannot ensure that all agreed-upon requirements will be developed, fully tested, and work as intended. In addition, the department's MHS Information Management/Information Technology Strategic Plan 2010–2015 includes a goal to improve the requirements management process to enable greater participation of system users. According to the plan, this will improve the value, quality, timeliness, and stakeholder ownership of the resulting system. However, because the department is in the early stages of implementing improvements for greater user participation, it is too early to determine their effectiveness. As the department proceeds with the AHLTA stabilization effort and the new EHR Way Ahead system, ensuring that user needs are met will be essential to effective and cost-efficient delivery of system capabilities. Until the department ensures that a requirements development process with adequate user involvement is in place, it will continue to lack a vital tool for ensuring the efficient and effective delivery of electronic health record system capabilities that meet the needs of its users.

Efforts to Improve User Satisfaction Were Not Guided by Effective Planning

DOD has stated that the success of AHLTA can be gauged by improvements in user satisfaction and user acceptance, among other things. In this regard, effectively managing program improvement activities to improve user satisfaction requires planning and executing such activities in a disciplined fashion. The Software Engineering Institute's IDEAL model is a recognized approach for managing efforts to make system improvements. (The Software Engineering Institute is a federally funded research and development center established at Carnegie Mellon University to address software engineering practices. IDEAL is a service mark of Carnegie Mellon University and stands for initiating, diagnosing, establishing, acting, and leveraging; for more information on this model, see IDEAL: A User's Guide for Software Process Improvement, CMU/SEI-96-HB-001.)
According to this model, user satisfaction improvement efforts should include a written plan that serves as the foundation and basis for guiding improvement activities, including obtaining management commitment to and funding for the activities, establishing a baseline of commitments and expectations against which to measure progress, prioritizing and executing activities and initiatives, determining success, and identifying and applying lessons learned. Through such a structured and disciplined approach, improvement resources can be invested in a manner that produces optimal results. (The Software Engineering Institute is a federally funded research and development center established at Carnegie Mellon University to address software engineering practices. IDEAL is a service mark of Carnegie Mellon University and stands for initiating, diagnosing, establishing, acting, and leveraging. For more information on this model, see IDEAL: A User's Guide for Software Process Improvement, CMU/SEI-96-HB-001.) However, the program office last measured user satisfaction levels in July 2007, after overall user satisfaction had declined to its lowest point in more than 2 years. Between 2005 and 2007 the program office collected user satisfaction feedback through online user surveys, and used the data to identify areas for system improvements and to measure progress toward improving satisfaction. The results of the surveys showed not only that users rated their overall satisfaction level with the system between below average and average, but that user satisfaction levels had declined to a low point by the final survey report of July 2007. Thus, as shown in figure 1, the program office was not able to improve user satisfaction during this time period. According to program officials, they have implemented a major effort toward improving user satisfaction with the AHLTA 3.3 software release. The improvements associated with this software release began as early as 2006 and include features such as improved medical coding support and increased speed of the order entry connection, as well as other changes to improve users' satisfaction with the system's performance and capabilities. Yet, program officials did not provide evidence of a plan to guide these efforts or a schedule for implementing these improvements, and it is unclear how specific capabilities of the software release will be used to address specific user concerns. The lack of a documented plan to guide user satisfaction improvement activities is of particular significance because users have continued to express their dissatisfaction with the system. Program officials stated that additional online user satisfaction surveys were not conducted after 2007 because users had grown weary of the surveys and because efforts to address user feedback from the existing survey results are ongoing. The next online survey is expected to be conducted after full deployment of AHLTA 3.3, but a schedule has not yet been set. Given the history of system performance problems and the extent to which users have not been able to effectively and efficiently use AHLTA, it is critical that the department identify and implement system improvements in a disciplined and structured fashion.
Without a documented improvement plan, efforts to improve user satisfaction, including those associated with the ongoing AHLTA stabilization effort, may be reduced to trial and error, and the office cannot adequately ensure that it is effectively investing program resources in improvement efforts that will result in a system that satisfies users. Further, since increasing user satisfaction is a key goal for the EHR Way Ahead, it is critical that a disciplined approach is established and maintained throughout the program's life cycle. MHS Lacks Assurance of a Disciplined Acquisition Management Process to Guide Its Electronic Health Record Initiative The use of disciplined processes to guide the effort of acquiring and implementing a major system has been shown to increase the likelihood of achieving intended results and reduce the risks associated with an acquisition to acceptable levels. Although there is no standard set of practices that will ever guarantee success, several organizations, such as Carnegie Mellon University's Software Engineering Institute and the Institute of Electrical and Electronics Engineers (IEEE), as well as individual experts, have identified and developed the types of policies, procedures, and practices that have been demonstrated to reduce development time and enhance effectiveness. The key to having a disciplined system development effort is to have disciplined processes in multiple areas, including project planning, requirements management, systems engineering, system testing, and risk management. Because change in a program is constant, effective processes should be implemented in each of these areas throughout the project life cycle. Implementing the disciplined processes necessary to reduce project risks to acceptable levels is difficult because a project must effectively implement several best practices, and inadequate implementation of any one may significantly reduce or even eliminate the positive benefits of the others. Recognizing weaknesses in its acquisition of systems such as AHLTA, MHS has been taking steps to institutionalize more disciplined management processes across all of its programs. In March 2008 the MHS CIO identified an approach for improving its management processes that included aligning MHS processes with best practices outlined in the Software Engineering Institute's Capability Maturity Model Integration (CMMI) for Acquisition. In support of the approach, certain program offices, including DHIMS (the program office responsible for the AHLTA acquisition), were selected for an internal evaluation to identify areas for improvement in the existing MHS processes. The assessment, which was conducted in May 2008, identified weaknesses in processes such as project management, requirements development, and project monitoring and control, among others. It also identified weaknesses in MHS's oversight of the implementation of these processes within program offices. Specifically, the assessment identified weaknesses in the area of Process and Product Quality Assurance, which is supposed to provide staff and management with objective insight into processes and associated work products. The assessment found little evidence that process evaluations were performed across the organization, that quality assurance audits were conducted, or that noncompliance issues were tracked and reported. In response to the assessment, officials stated that they established a plan for addressing the identified weaknesses.
Specifically, their goal was to achieve CMMI's "maturity level 2" for processes such as project planning and acquisition requirements development. Level 2 processes are "managed" processes, or processes that are planned and executed in accordance with policy; employ skilled people who have adequate resources to produce controlled outputs; involve relevant stakeholders; are monitored, controlled, and reviewed; and are evaluated for adherence to their process description. The department planned to conduct a formal external assessment of the maturity of its processes by December 2008. Program officials stated that they provided guidance and assistance for program offices to adopt practices associated with CMMI maturity level 2 processes. However, they have yet to perform the planned external assessment of their processes, and there is therefore little assurance that improvements have been carried out. As the department proceeds with the AHLTA stabilization effort, it is critical that it have disciplined processes in place to avoid repeating past problems in which system improvements were not delivered as planned. Further, as the department is allocating resources to and planning for the EHR Way Ahead acquisition, it is critical that it have disciplined management processes in place to avoid repeating the mistakes of the past. Until the department ensures that these disciplined and managed processes are in place, it risks delivering another system that has limited functionality, suffers from performance problems, and does not meet the needs of its users. Conclusions After over a decade of effort, DOD has not accomplished what it set out to achieve in acquiring a comprehensive electronic health record system. While it has delivered a number of outpatient capabilities, weaknesses in key management areas hindered its ability to deliver the full complement of intended capabilities and to ensure that the capabilities it has delivered meet required performance parameters. The program office did not maintain a comprehensive and current project management plan, a critical document that provides stakeholders insight into the project's plans and status. Also, despite the department's need to deliver a complex, worldwide system, it did not develop a systems engineering plan to help address the technical aspects of the project, and it continues to experience problems with system availability, speed, and usability. Further, the system requirements were too general and did not adequately reflect user needs. Although the department has collected user feedback, it did not establish a comprehensive plan for improving user satisfaction with the system. Recognizing weaknesses in acquisition management areas, the MHS CIO issued guidance for improving its management processes, but it has not performed the planned external assessment needed to certify that these improvements have been made, nor has it established a date for doing so. As DOD continues to invest significant resources in a stabilization effort to address shortcomings of AHLTA and plan for the acquisition of a new electronic health record system, it is imperative that the department take immediate steps to improve its management of the initiative. Until it does so, it risks a continuation of the problems it has already experienced, which could again prevent DOD from delivering a comprehensive health record system for serving its service members and their families.
Recommendations for Executive Action To help guide and ensure the successful completion of the AHLTA stabilization effort, we recommend that the Secretary of Defense, through the Assistant Secretary of Defense for Health Affairs, direct the MHS CIO to take the following six actions:
Develop and maintain a comprehensive project plan that includes key elements, such as the project's scope, cost, schedule, and risks, and update the plan to provide key information for stakeholders on the project's plans and status.
Develop a systems engineering plan in accordance with DOD guidance to address the technical complexities of delivering a worldwide electronic health record system.
Ensure that its requirements development process involves system users throughout the development process, to obtain an understanding of what will satisfy their needs.
Ensure the establishment of bidirectional traceability for all system requirements.
Develop and document a plan for improving user satisfaction that prioritizes improvement projects; identifies needed resources; includes schedules for improvement efforts, including future user feedback surveys; and links efforts to measurable outcomes and specific user needs.
Establish acquisition management processes in accordance with industry best practices, including identifying milestones and a completion date for the external evaluation verifying that MHS's processes are at maturity level 2 of the Capability Maturity Model Integration for Acquisition.
Further, to help ensure that the EHR Way Ahead does not have shortfalls similar to those experienced with AHLTA, we recommend that the above six management practices be implemented as part of the planning for this important initiative. Agency Comments and Our Evaluation The Deputy Assistant Secretary of Defense (Force Health Protection and Readiness), performing the duties of the Assistant Secretary of Defense (Health Affairs), provided written comments on a draft of this report. In its comments, the department agreed with our six recommendations and described actions planned to address them. Specifically, to help guide and ensure the successful completion of the AHLTA stabilization effort, DOD stated that it will develop and maintain a comprehensive project plan in accordance with our recommendation and DOD acquisition program guidelines. It also stated that it plans to develop a systems engineering plan to address the technical complexities of the project in accordance with current DOD requirements. Further, to obtain an understanding of system users' needs, the department stated that it plans to engage users and manage the requirements development process in accordance with our recommendation. The department stated that it will ensure that bidirectional traceability is performed for all system requirements. Regarding its intent to develop and document a plan for improving user satisfaction, including identifying needed resources and a schedule for improvement, the department stated that it will augment its current user feedback plan to include these and other key elements, such as measurable outcomes. Further, in response to the need to establish acquisition management processes at maturity level 2 in accordance with industry best practices, the department said it plans to establish a milestone for completing the external review in accordance with Capability Maturity Model guidelines. Finally, the department stated that it will ensure that the six recommendations are implemented as part of the future EHR Way Ahead initiative.
To the extent that the department follows through on implementing the recommendations, it should be better positioned to deliver a comprehensive electronic health care record for serving its service members and others entitled to military health care. DOD also provided technical comments on our draft report. In these comments, DOD said it took exception to several statements in the report that it considered inaccurate, misleading, and subjective. The department said that GAO's statements conflicted with the extensive volume of programmatic documentation, written responses, and consistent interview feedback provided during the course of the audit. In particular, the department believed that the report did not sufficiently reflect AHLTA's operational capabilities and its benefit to DOD's worldwide health care operations. While we agree that the department provided substantial documentation, we believe that our analysis of the information received supports our findings. Where appropriate, however, we have made revisions to statements in the report to update our discussions of AHLTA's operational capabilities and the program's management. The department's written comments are reproduced in appendix II. The department also provided technical comments, which we have incorporated in the report as appropriate. We are sending copies of the report to appropriate congressional committees, the Secretary of Defense, and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-6304 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology Our objectives were to (1) determine the Department of Defense's (DOD) status in implementing the Armed Forces Health Longitudinal Technology Application (AHLTA) system, (2) determine the department's plans for acquiring a new electronic health record system, and (3) evaluate the department's acquisition management for its electronic health record system. To determine the department's status in implementing the AHLTA system, we reviewed project status reports, acquisition decision memorandums, quarterly defense acquisition executive summaries, monthly in-progress review reports, monthly contractor performance reports, and overarching integrated project team meeting minutes. We supplemented these reviews with interviews of DOD officials in the Defense Health Information Management System (DHIMS) Program Office, including the DHIMS Program Manager, Deputy Program Manager, and Director of the Products Branch, with whom we discussed the project's cost and schedule, as well as the planning, development, and deployment of the original and current releases of AHLTA. We also attended two demonstrations of AHLTA: at the program office located in Falls Church, Virginia, and at the National Naval Medical Center in Bethesda, Maryland. At these demonstrations, we observed AHLTA system functionality and held discussions with system users. We also observed a daily technical review meeting with technical staff from the Army, Navy, and Air Force in which the discussion largely focused on the reporting of issues that caused the system to be unavailable to users at various locations for up to 24 hours.
The discussion also included identification of known root causes of the availability problems (e.g., incorrectly configured firewalls, tripped network circuits, and problems with virtual private networks) and planned actions to address the issues. To determine the department's plans for acquiring a new system, we reviewed Electronic Health Record (EHR) Way Ahead planning documents. Specifically, we reviewed the acquisition decision memorandum issued by the milestone decision authority, the Joint Requirements Oversight Council-approved Initial Capabilities Document to identify EHR needs, and the Capabilities-Based Assessment. We also reviewed the analysis of alternatives procedures for guidance on determining a technology solution for the new EHR. We also reviewed department briefings issued between 2008 and 2010, as well as a prepared statement to Congress from 2009 on preliminary plans for the EHR Way Ahead. These documents provided a high-level overview of the need and the goals for the new system, as well as plans for the system's enterprise architecture and expected capabilities. We supplemented our review by interviewing officials from the EHR Way Ahead planning office, including the department's Acting Chief Information Officer, the DHIMS Program Manager, and the DHIMS Deputy Program Manager. To evaluate the department's acquisition management for its electronic health record system initiative, we assessed key practices used by the agency against best practices. In this regard, we examined practices related to project management planning, systems engineering planning, system requirements development and management, and user satisfaction improvement planning and compared the agency's work with agency policy, guidance, and recognized best practices. Specifically: To assess DOD's project planning for AHLTA, we compared the program's project management plan against relevant guidance, including the Military Health System's project management process area description and our Information Technology Investment Management framework for assessing and improving process maturity. We assessed the agency's approach to systems engineering by comparing program documentation such as acquisition strategies and the AHLTA project management plan to systems engineering guidance from the Defense Acquisition University. We also reviewed relevant agency policies, such as DOD Instruction 5000.02, which discusses the use of systems engineering across the acquisition life cycle, and memorandums from the Office of the Under Secretary of Defense on a 2004 revision to the policy regarding use of a systems engineering plan, to determine whether the AHLTA program was guided by appropriate systems engineering planning documents such as a systems engineering plan. Regarding requirements development, we reviewed program procedures describing the processes for developing requirements and reviewed relevant external evaluations of the effectiveness of those processes against recognized guidance. Specifically, we reviewed an external evaluation of the requirements development processes, including the 2002 Carnegie Mellon External Assessment of the AHLTA program office, and the process area description for requirements management.
We also reviewed the 2008 internal assessment of requirements management; a 2009 concept of operations document for a more integrated, departmentwide requirements development process; and the 2010 Joint Requirements Oversight Council-approved Initial Capabilities Document, which identifies past challenges with the department's requirements processes. In addition, we analyzed the requirements traceability matrix for the most recent version of AHLTA to determine the extent to which bidirectional traceability had been performed. We also reviewed program documentation relative to requirements development and user community participation. In addition, we interviewed process improvement officials, including the cognizant official from the Office of the Chief Information Officer (CIO), about internal acquisition process evaluations and their results, as well as the status of plans for improving acquisition management processes. We then compared the department's current approach to requirements development and management with best practices identified in the Software Engineering Institute's Capability Maturity Model Integration for Acquisition. To assess the department's approach to improving user satisfaction, we reviewed and analyzed program documentation pertaining to the collection, analysis, and utilization of AHLTA user satisfaction feedback, such as seven survey reports and a postimplementation review produced between 2005 and 2007, and compared the agency's approach to best practices such as the Office of Management and Budget's Capital Programming Guide and Standards and Guidelines for Statistical Surveys. We also reviewed lessons learned reports from 2006 through 2008 and a user conference briefing from 2006 that identified areas of user dissatisfaction. In addition, we reviewed program office documents that identified improvement initiatives such as the AHLTA 3.3 software release and the deployment of local cache servers, which were intended to improve user satisfaction. We supplemented our review by interviewing program officials, including the DHIMS Program Manager and Deputy Program Manager, to determine the extent to which user satisfaction improvement efforts and initiatives have been guided by documented plans. We then compared the department's approach to improving user satisfaction with the Software Engineering Institute's IDEAL model, which is a recognized approach for managing process improvement efforts such as improvements to user satisfaction. We also reviewed DOD's 2008 internal assessment related to acquisition management processes, action plans, and tasks planned for process improvement. We supplemented our analysis with interviews with officials in the DHIMS office, including the Program Manager, Deputy Program Manager, and Director of Products Branch, as well as the Engineering and Resources offices. We also obtained written responses from the responsible program manager or subject matter expert for areas of our review. These responses were approved by the MHS CIO or the Program Executive Officer, Joint Medical Information Systems/Deputy MHS CIO.
We did not conduct an independent validation of the life-cycle costs and obligations provided to us by DOD. We conducted this performance audit at the DHIMS Program Office in Falls Church, Virginia, and the National Naval Medical Center, in Bethesda, Maryland, from September 2009 through October 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our objectives. Appendix II: Comments from the Department of Defense Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Cynthia J. Scott (Assistant Director); Harold Brumm, Jr.; Neil Doherty; Ronalynn Espedido; Rebecca Eyler; Nancy Glover; Joel Grossman; Linda Kochersberger; Lee McCracken; Madhav Panwar; Donald Sebers; Sylvia Shanks; Adam Vodraska; Daniel Wexler; and Robert Williams, Jr. made key contributions to this report.
The Department of Defense (DOD) provides medical care to 9.6 million active duty service members, their families, and other eligible beneficiaries worldwide. DOD's Military Health System has long been engaged in efforts to acquire and deploy an electronic health record system. The latest version of this initiative--the Armed Forces Health Longitudinal Technology Application (AHLTA)--was expected to give health care providers real-time access to individual and military population health information and facilitate clinical support. However, the system's early performance was problematic, and DOD recently stated that it intended to acquire a new electronic health record system. GAO was asked to (1) determine the status of AHLTA, (2) determine DOD's plans for acquiring its new system, and (3) evaluate DOD's acquisition management of the initiative. To do this, GAO reviewed program plans, reports, and other documentation and interviewed DOD officials. After obligating approximately $2 billion over the 13-year life of its initiative to acquire an electronic health record system, as of September 2010, DOD had delivered various capabilities for outpatient care and dental care documentation. DOD had scaled back other capabilities it had originally planned to deliver, such as replacement of legacy systems and inpatient care management. In addition, users continued to experience significant problems with the performance (speed, usability, and availability) of the portions of the system that have been deployed. DOD has initiated efforts to improve system performance and enhance functionality and plans to continue its efforts to stabilize the AHLTA system through 2015, as a "bridge" to the new electronic health record system it intends to acquire. According to DOD, the planned new electronic health record system--known as the EHR Way Ahead--is to be a comprehensive, real-time health record for service members and their families and beneficiaries. The system is expected to address performance problems, provide unaddressed capabilities such as comprehensive medical documentation, capture and share medical data electronically within DOD, and improve existing information sharing with the Department of Veterans Affairs. As of September 2010, the department had established a planning office, and this office had begun an analysis of alternatives for meeting the new system requirements. Completion of this analysis is currently scheduled for December 2010. Following its completion, DOD expects to select a technical solution for the system and release a delivery schedule. DOD's fiscal year 2011 budget request included $302 million for the EHR Way Ahead initiative. Weaknesses in key acquisition management and planning processes contributed to AHLTA having fewer capabilities than originally expected, experiencing persistent performance problems, and not fully meeting the needs of users. (1) A comprehensive project management plan was not established to guide the department's execution of the system acquisition. (2) A tailored systems engineering plan did not exist to guide the technical development of the system, an effort that was characterized by significant complexity. (3) Requirements were incomplete and did not sufficiently reflect user and operational needs. (4) An effective plan was not used to improve users' satisfaction with the system. DOD has initiated efforts to bring its processes into alignment with industry best practices. 
However, it has not carried out a planned independent evaluation to ensure it has made these improvements. Until it ensures that these weaknesses are addressed, DOD risks undermining the success of further efforts to acquire electronic health record system capabilities.
Background In the 21st century, older Americans are expected to comprise a larger share of the population, live longer, and spend more years in retirement than previous generations. The share of the U.S. population age 65 and older is projected to increase from 12.4 percent in 2000 to 19.6 percent in 2030 and continue to grow through 2050. At the same time, life expectancy is increasing. By 2020, men and women reaching age 65 are expected to live another 17 or 20 years, respectively. Finally, falling fertility rates are contributing to the increasing share of the elderly population. In the 1960s, the fertility rate was an average of three children per woman. Since the 1970s, the fertility rate has hovered around two children per woman, meaning relatively fewer future workers are being born to replace retirees. The combination of these trends is expected to significantly increase the elderly dependency ratio—the number of people age 65 and over in relation to the number of people age 15 to 64. In 1950, there was 1 person age 65 or over for every 8 people age 15 to 64. By 2000, the elderly dependency ratio had risen to 1 person age 65 or over for every 5 people of traditional working age, and by 2050 this ratio is projected to rise further to about 1 elderly person for every 3 working age people (see fig. 1). Consequently, relatively fewer workers will be supporting those receiving Social Security and Medicare benefits, which play an important role in helping older Americans meet their retirement needs. Some have suggested that, by causing a large shift in the U.S. population's age structure, the baby boom generation may affect stock and other asset markets when this cohort retires. This concern stems from hypothetical spending and saving patterns over people's lifetimes, which economists describe in the "life cycle" model. The model hypothesizes that people attempt to smooth their consumption over their lifetime. As individuals' earnings typically grow over their working life, this suggests that younger workers, with relatively low earnings, may save relatively little or borrow to finance current consumption (or to buy a house); older workers may save significantly more in preparation for retirement; and retirees may spend down their savings. The model therefore predicts that the saving rate is hump-shaped over an individual's lifetime. Over the course of their lives, individuals make decisions about not only how much to save but also how to distribute their savings among a mix of assets, such as stocks, bonds, real estate, and bank accounts. For example, older workers are expected to shift their portfolios toward less volatile assets, such as bonds or cash accounts, because they will tend to prefer assets with a more predictable flow of income since they will have less time to weather potential price declines in riskier assets such as stocks. In addition to their saving and consumption patterns, baby boomers also may affect stock returns in particular through broader macroeconomic channels. Stocks represent claims on the profits earned by firms, and in the long run the returns on these assets should reflect the productivity of the firms' capital. Generally, economic theory states that capital becomes more productive with more and better-quality labor to use that capital. Because the baby boom retirement is expected to reduce the growth rate of the U.S. labor supply, it may reduce returns to capital, which could reduce the returns to stocks.
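The link between labor supply growth and the return to capital described above can be made concrete with a stylized calculation. The sketch below assumes a textbook Cobb-Douglas production function, which is not specified in the report and is used here only as a hedged illustration of the general point that, all else equal, slower growth in the labor force lowers the marginal product, and hence the return, of capital.

    # Illustrative sketch only: a stylized Cobb-Douglas economy, Y = K**alpha * L**(1 - alpha).
    # The parameter values are hypothetical and chosen only to show the direction of the effect.
    def marginal_product_of_capital(K, L, alpha=0.3):
        """Return to capital: dY/dK = alpha * (L / K)**(1 - alpha)."""
        return alpha * (L / K) ** (1 - alpha)

    capital = 100.0
    print(marginal_product_of_capital(capital, L=100.0))  # baseline labor supply -> 0.300
    print(marginal_product_of_capital(capital, L=90.0))   # smaller labor supply  -> about 0.279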
More generally, investors may price stocks in relation to the underlying value of the firm, taking into account the value of the firm's current assets and stream of future profits. Financial Evidence from Baby Boomers and Current Retirees Does Not Suggest a Sharp Decline in Asset Prices Our analysis of national survey data indicates that the baby boom generation is not likely to precipitate a sharp and sudden decline in financial asset prices as they retire. Our analysis of the 2004 SCF shows that just 10 percent of boomers own more than two-thirds of this generation's financial assets, excluding assets held indirectly in DB pensions. These wealthiest boomers may be able to support themselves on the income from these investments without spending them down significantly. About one-third of all boomers do not own any stocks, bonds, mutual funds, or retirement accounts. As with the prior generation, baby boomers may continue to accumulate financial assets in retirement and liquidate their assets only gradually with the hope of leaving bequests. The gradual entry of the boomers into retirement over a 19-year period should further reduce the likelihood of a sudden decline in asset prices. Further, boomers have indicated that they plan to retire later than generations that retired in the recent past, with almost half not planning to leave full-time employment until age 65 or later. Many may also continue to work throughout retirement, reducing or delaying their need to sell financial assets. Housing represents a greater share of total wealth for most baby boomers than do financial assets, and therefore the housing markets present more financial risk to most boomers than the financial markets. Concentration of Financial Assets among a Minority of Baby Boomers May Lessen Their Market Effect The potential for the baby boom generation to precipitate a market meltdown in retirement may be substantially reduced by the fact that a small minority of this population holds the majority of the generation's financial assets. According to our analysis of the 2004 SCF, the wealthiest 10 percent of boomers owned over two-thirds of the approximately $7.6 trillion held by boomers in stocks, bonds, mutual funds, Individual Retirement Accounts (IRAs), and other account-type retirement savings plans in 2004. This wealthiest group held $1.2 million, on average, in these financial assets, plus over $2 million in other assets such as home equity and other investments. Figure 2 shows the concentration of financial assets among boomers. This concentration of wealth is very similar to that of current retirees and could mitigate a sharp and sudden impact on financial asset prices if wealthy boomers need not spend down their financial assets in retirement. Research on current retirees indicates that the wealthiest of these individuals tend to not sell their financial assets, contrary to what the life-cycle model would predict; instead, they choose to live on the income these assets generate. Our analysis of the 2004 SCF also found that of the wealthiest 10 percent of current retirees born before 1946, less than 16 percent spent money from their savings and investments over and above their income during the previous year. In this same group, over 65 percent responded that their income in 2003 exceeded their spending, indicating that they had accumulated more assets without having a net sale from their holdings.
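The concentration figures above can be cross-checked with back-of-the-envelope arithmetic. Roughly two-thirds of the $7.6 trillion in boomer financial assets is about $5.1 trillion; dividing that by the reported $1.2 million average holding implies on the order of 4 million households in the wealthiest decile, and therefore roughly 40 million boomer households overall. The household counts are inferences from the figures cited in the text, not numbers reported in the survey data; the sketch below simply reproduces the arithmetic.

    # Back-of-the-envelope check using figures cited in the text.
    total_financial_assets = 7.6e12    # boomer holdings in stocks, bonds, mutual funds, IRAs, etc.
    top_decile_share = 2.0 / 3.0       # "over two-thirds" held by the wealthiest 10 percent
    avg_top_decile_holding = 1.2e6     # average financial assets per top-decile household

    top_decile_assets = total_financial_assets * top_decile_share
    implied_top_decile_households = top_decile_assets / avg_top_decile_holding

    print(f"Top-decile assets: ${top_decile_assets / 1e12:.1f} trillion")                       # about $5.1 trillion
    print(f"Implied top-decile households: {implied_top_decile_households / 1e6:.1f} million")  # about 4.2 million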
The possibility of an asset meltdown is further reduced by the fact that those households that would seem more likely to need to sell their financial assets in retirement do not collectively own a large portion of the total stocks and bonds in the market. Although the majority of baby boomers hold some financial assets in a variety of investment accounts, the total holdings for all boomer households, $7.6 trillion, account for roughly one-third of the value of all stocks and 11 percent of bonds outstanding in the U.S. markets, and the wealthiest boomers own most of these assets (see figs. 3 and 4). Those households that are most likely to spend down their assets in retirement—those not in the top 10 percent by wealth—collectively hold just 32 percent of all baby boomer financial assets. As a group, the influence of these households on the market is less substantial. One-third of this group does not own any stocks, bonds, mutual funds, or retirement accounts, and among those who do, their total holdings are relatively small, with their median holdings totaling $45,900. Analysis of Current Retiree Behavior Reveals a Pattern of Continued Accumulation and Slow Spending of Assets Our analysis of national data on the investment behavior of current retirees reveals an overall slow spending down of assets in retirement, with many retirees continuing to purchase stocks. To the extent that baby boomers behave like current retirees, a rapid and mass sell-off of financial assets seems unlikely. In examining retiree holdings in stocks, using biennial data spanning 1994 to 2004 from HRS, we found that many people continue to buy stocks in retirement. More than half of retirees own stocks outside of an IRA, Keogh, or pension account and, among this group, approximately 57 percent purchased stocks at some point over the 10-year period in retirement. We found that from 2002 to 2004 the stock ownership for most of these retirees either increased or remained at the same level. Among those who owned stock, almost 31 percent reported buying stocks during this 2-year period, while just fewer than 26 percent reported selling. For the retirees who both bought and sold stocks, approximately 77 percent purchased at least as much value in stock as they sold. Additionally, although retirees might be expected to have a low tolerance for market risk and therefore to divest themselves of equities in favor of bonds, the SCF data does not suggest such a major reallocation. Comparing households' holdings in stocks and bonds by age, we found only a small difference in aggregate stock and bond allocation across portfolios. Specifically, data from the 2004 SCF shows that of total wealth among households headed by people over age 70, more is invested in stocks than in bonds. In 2004, households headed by those over age 70 had roughly 60 percent of their investments in stocks and 40 percent invested in bonds, while those households headed by someone aged 40 to 48 held 68 percent of their portfolios in stocks and 32 percent in bonds. Our finding that retirees slowly spend down assets is consistent with the results of several academic studies. One recent study that examined asset holdings of elderly households suggests there is a limited decline in financial assets as households age. Prior work also finds evidence that retirees spend down at rates that would leave a considerable portion of their wealth remaining at the end of average life expectancy and a significant number of retirees continue to accumulate wealth at old ages.
For example, a 1990 study estimated that most single women would have approximately 44 percent of their initial wealth (at age 65) remaining if they died at the average age of life expectancy. Other studies have shown that over the last several decades the elderly have drawn down their lump-sum wealth at relatively conservative rates of 1 to 5 percent per year. Retirees may spend down assets cautiously as a hedge against longevity risk. Private annuities, which minimize longevity risk, are not widely held by older Americans. As life expectancy increases and people spend more years in retirement, retirees will need their assets to last a longer period of time and, thus, should spend them down more slowly. The average number of years that men who reach age 65 are expected to live has increased from 13 in 1970 to 16 in 2005, and is projected to increase to 17 by 2020. Women have experienced a similar rise—from 17 years in 1970 to over 19 years in 2005. By 2020, women who reach age 65 will be expected to live another 20 years. Another factor that may explain the observed slow spending down of assets among retirees is the bequest motive. National survey data show that many retirees intend to leave a sizeable bequest, which may explain their reluctance to spend down their wealth. Because more than three-quarters of retirees have a bequest motive, many may never sell all of their assets. To the extent that retirees bequeath their assets instead of selling them for consumption, the result could be an intergenerational transfer rather than a mass sell-off that would negatively affect asset markets. In addition to current retirees, data from the HRS indicates that the majority of older baby boomers (those born between 1946 and 1955) expect to leave a bequest. Approximately 84 percent of these baby boomers expect to leave a bequest, while 49 percent expect the bequest to be at least $100,000. It is important to note that the baby boom generation's asset sale behavior in retirement might differ from that of recent generations of retirees. First, fewer baby boomers are covered by DB plans, which typically pay a regular income in retirement, and more boomers have DC plans, which build up benefits as an account balance. To the extent that this shift means that boomers have an increased share of retirement wealth held as savings instead of as income, this may require boomers to sell more assets to produce retirement income than did previous generations. Second, unanticipated expenses, such as long-term care and other health care costs, may make actual bequests smaller than expected. Although 2002 HRS data indicates that only 8 percent of the leading edge of baby boomers have long-term care insurance, recent projections show that 35 percent of people currently age 65 will use nursing home care. If boomers are confronted with higher than expected health care costs in retirement, they would have a greater need to spend down their assets. Defined Benefit Pension Plans Unlikely to Sell Off Large Amounts of Stocks Solely as a Response to Boomer Retirement Households are not the only holders of financial assets that might shift or draw down their holdings as the baby boomers age. DB pension plans, which promise to provide a benefit that is generally based on an employee's salary and years of service, hold assets to pay current and future benefits promised to plan participants, who are either current employees or separated or retired former employees.
According to Federal Reserve Flow of Funds Accounts data, private-sector plans as a whole owned $1.8 trillion in assets in 2005. Of this amount, plans held approximately half in stocks. According to the Employee Benefit Research Institute (EBRI), federal government DB plans contained an additional $815 billion in assets as of 2004. However, most of these DB plans invest in special Treasury securities that are non-marketable. State and local plans held an additional $2.6 trillion in assets; however, the data do not separate DB and DC assets for these plans. If DB plans hold approximately 85 percent of state and local plan assets, as is the case for federal government plans, and if these plans hold approximately half of their assets as equities, this would mean state and local DB plans held an estimated $1.1 trillion in equities. Thus, public and private DB plans held an estimated $2 trillion in stocks. Because of the number of boomers, we would expect that, as they retire, DB plans would pay out an increasing amount of benefits. This demographic shift could cause plans to sell some of their holdings to provide current benefits. Indeed, a 1994 study projected that the pension system would cease to be a source of saving for the economy in roughly 2024. We would also expect plans to convert some stocks to less volatile assets, such as cash and bonds, to better ensure that plans have sufficient money to pay current benefits. While DB plans may shift their assets in response to demographic changes, it is unclear whether they would cause major variations in stock and bond prices. First, even though DB plans hold about $2 trillion in stocks, this sum still represents a relatively small fraction of total U.S. stock wealth ($16.1 trillion, as of 2004). Further, there are reasons why DB plans may not appreciably shift their investments away from stocks. While the baby boom retirement may increase the number of persons receiving benefits, the DB participant pool had been aging long before the baby boom approached retirement. The percentage of private-sector DB participants made up of retirees has climbed steadily for the past 2 decades, from 16 percent in 1980 to over 25 percent in 2002. Over this time, we have observed little evidence of a shift in investments by private DB plans away from stocks and toward fixed-income assets. In 1993, private DB plans held just below half of their assets in stocks, about the same proportion as today; in 1999, at the recent stock market's peak, plans held about 58 percent of assets in stocks. Gradual Entry into Retirement and Subsequent Employment Plans Suggest a Cumulative Rather Than Sudden Effect on Markets The gradual transition of the baby boomers into retirement suggests that the sale of their financial assets will be spread out over a long period of time, which mitigates the risk of a shock to financial markets. The baby boom generation spans a 19-year time period—the oldest baby boomers will turn age 62 in 2008, becoming eligible for Social Security benefits, but the youngest baby boomers will not reach age 62 until 2026. Among boomers in the U.S. population in 2004, the peak birth year was 1960, as seen in figure 5, and these boomers will turn age 62 in 2022. As boomers gradually enter retirement, the share of the population age 65 and older is projected to continue increasing until about 2040, at which point it is expected to plateau, as seen in figure 6.
Thus, the aging of the baby boom generation, in conjunction with the aging of the overall U.S. population, is a cumulative development rather than a sudden change. In addition, the expected increase in the number of baby boomers working past age 62 may also reduce the likelihood of a dramatic decline in financial asset prices. An increase in employment at older ages could facilitate the accumulation of financial assets over a longer period of time than was typical for earlier generations (albeit also needing to cover consumption over a longer life expectancy). Furthermore, continuing to work for pay in retirement, often called partial or phased retirement, would reduce the need to sell assets to provide income. In fact, some degree of extended employment has already been evident since the late 1990s, as seen in figure 7. From 1998 to 2005, the labor force participation rate of men and women age 65 and older increased by 20 percent and 34 percent, respectively. Survey data show that such a trend is expected to continue: Data from the 2004 SCF indicate that the majority of boomers intend to work past age 62, with boomers most commonly reporting they expect to work full time until age 65. Almost 32 percent of boomers said they never intend to stop working for pay. Another study by the AARP in 2004 found that many baby boomers expect to go back to work after they formally retire—approximately 79 percent of boomers said they intend to work for pay in retirement. Other research has shown that about one-third of those who return to work from retirement do so out of financial necessity. These developments suggest that baby boomers may be less inclined to take retirement at age 62. However, some boomers may not be able to work as long as they expect because of health problems or limited employment opportunities. To the extent that boomers follow through on their expressed plans to continue paid work, their income from earnings would offset some of their need to spend down assets. The Role of Housing, a Key Asset for Baby Boomers in Retirement Security, Continues to Evolve Housing represents a large portion of most baby boomers' wealth, and their management and use of this asset may have some effect on their decisions to sell assets in the financial markets. For a majority of boomers, the primary residence accounts for their largest source of wealth—outstripping DC pensions, personal savings, vehicles, and other nonfinancial assets. Home ownership rates among boomers exceed 75 percent, and recent years of appreciation in many housing markets have increased the net wealth of many boomers. This suggests that a price decline in housing, a prospect that many analysts appear to be concerned about, could have a much greater impact on the overall wealth of boomers than a financial market meltdown. While research has suggested that baby boomers have influenced housing demand and, in turn, prices, assessing the potential impact of the baby boom retirement on the housing market is beyond the scope of our work. Interestingly, according to experts we interviewed, equity in the primary residence has not historically been viewed by retirees as a source of consumable wealth, except in the case of financial emergencies. Reverse mortgages, which do not require repayment until the owner moves from the residence or dies, could grow more attractive for financing portions of retirement spending, particularly for those baby boomers who are "house rich but cash poor" and have few other assets or sources of income.
For boomers who do own financial assets, an expansion of the reverse mortgage market might reduce their need to sell financial assets rapidly. However, boomers also appear to be carrying more debt than did previous generations. Our analysis of the SCF data shows that the mean debt-to-asset ratio for people aged 52 to 58 rose from 24.5 percent in 1992 to 70.9 percent in 2004. To the extent that baby boomers continue to be willing to carry debt into retirement, they may require more income in retirement to make payments on this debt. Researchers and Financial Industry Representatives Largely Foresee Little to No Impact on Financial Markets as the Baby Boomers Retire Researchers and financial industry representatives largely expect the U.S. baby boom's retirement to have little or no impact on the stock and bond markets. A wide range of studies, both simulation-based and empirical, either predicted a small, negative impact or found little to no association between the population's age structure and the performance of financial markets. Financial industry representatives whom we interviewed also generally expect the baby boom retirement not to have a significant impact on financial asset returns because of the concentration of assets among a minority of boomers, the possibility of increased global demand for U.S. assets, and other reasons. Broadly consistent with the literature and views of financial industry representatives, our statistical analysis indicates that past changes in macroeconomic and financial factors have explained more of the variation in historical stock returns than demographic changes. Variables such as industrial production and dividends explained close to half of the variation in stock returns, but changes in the population's age structure explained on average less than 6 percent. If the pattern holds, our findings indicate that such factors could outweigh any future demographic effect on stock returns. Academic Studies Largely Foresee Little to No Baby Boom Retirement Effect on the Financial Markets With few exceptions, the academic studies we reviewed indicated that the retirement of U.S. baby boomers will have little to no effect on the financial markets. Studies that used models to simulate the market effects of a baby boom followed by a decline in the birth rate generally showed a small, negative effect on financial asset returns. Similarly, most of the empirical studies, which statistically examined the impact of past changes in the U.S. population's age structure on rates of return, suggested that the baby boom retirement will have a minimal, if any, effect on financial asset returns. Simulation-Based Studies Thirteen studies that we reviewed used models of the economy to simulate how a hypothetical baby boom followed by a baby bust would affect financial asset returns. The simulation models generally found that such demographic shifts can affect returns through changes in the saving, investment, and workforce decisions made by the different generations over their lifetime. For example, baby boomers cause changes in the labor supply and aggregate saving as they progress through life, influencing the demand for assets and productivity of capital and, thus, the rates of return. Specifically, the models predicted that baby boomers cause financial asset returns to increase as they enter the labor force and save for retirement and then cause returns to decline as they enter retirement and spend their savings.
According to a recent study surveying the literature, such simulation models suggest, on the whole, that U.S. baby boomers can expect to earn on their financial assets around half a percentage point less each year over their lifetime than the generation would have earned absent a baby boom. In effect, for two investors—one of whom earns 7 percent and the other earns 6.5 percent annually over a 30-year period—the former investor would earn $6.61 for every dollar saved at the beginning of the period and the latter investor would earn $5.61 for every dollar saved. None of the simulation-based studies concluded that the U.S. baby boom retirement will precipitate a sudden and sharp decline in asset prices, and some studies presented their results in quantitative terms. One of the studies, for example, predicted that the baby boom's retirement would at worst lower stock prices below what they would otherwise be by roughly 16 percent over a 20-year period starting around 2012. This decline, however, is equivalent to around 0.87 percent each year—somewhat small in comparison to real U.S. stock returns, which have averaged about 8.7 percent annually since 1948. The study therefore concluded that the size of the decline is much too small to justify the term "meltdown." Moreover, another study predicted that baby boomers can expect the returns on their retirement savings to be about 1 percentage point below their current annual returns. The study's lower returns reflect the decline in the productivity of capital that results from fewer workers being available (due to the baby boom retirement) to put the capital to productive use. A third study's results suggest that fluctuations in the size of the different generations induce substantial changes in equity prices, but the study does not conclude that the baby boom's retirement will lead to a sharp and sudden decline in asset prices. The simulation models we reviewed, by design, excluded or simplified some factors that were difficult to quantify or involved uncertainty, which may cause the models to overstate the baby boom's impact on the markets. For example, some models assumed that baby boomers will sell their assets solely to a relatively smaller generation of U.S. investors when they retire. Some researchers have noted that if China and India were to continue their rapid economic growth, they may spur demand for the assets that baby boomers will sell in retirement. Supporting this view, other research suggests that global factors may be more important than domestic factors in explaining stock returns in developed countries. Some models assumed that individuals in the same generation enter the labor force at the same time, work a fixed amount, and retire at the same time. In reality, some may work full or part-time after reaching retirement age. Likewise, the baby boomers' children, rather than working a fixed amount, may delay their entry into the labor force and take advantage of job opportunities created by retiring baby boomers. These factors could dampen the effect of the baby boomer retirement on the markets. A few of the models neglect the possibility that some investors may be forward-looking and anticipate the potential effect of the aging baby boomers on the markets. To the extent that such investors do so, current financial asset prices would reflect, at least partially, the future effect of the baby boom's retirement and thus dampen the event's effect on asset prices when it actually occurs.
Finally, the models typically do not include a significant increase in immigration, but such an outcome would increase the labor force and be expected to raise the productivity of capital and, thus, the return on financial assets. Empirical Studies Seven empirical studies of the U.S. financial markets we reviewed suggested, on average, that the retirement of U.S. baby boomers will have a minimal, if any, impact on financial asset returns. These studies specifically tested whether changes in the U.S. population’s age structure have affected stock returns or bond yields or both over different periods, ranging from 1910 to 2003. These studies focused primarily on changes in the size of the U.S. middle age population (roughly age 40 to 64) or its proportion to other age segments of the population. People in this age group are presumably in their peak earning and saving years and, thus, expected to have the most significant impact on financial asset returns. These empirical studies are inherently retrospective. Therefore, care must be taken in drawing conclusions about a future relationship between demographics and asset performance, especially given that the historical data do not feature an increase in the retired population of the magnitude that will occur when the U.S. baby boomers retire. However, the significant shift in the structure of the population that occurred as the boomers entered the labor force and later their peak earning years should provide an indication of how demographic change influences financial asset returns. For stocks, four of the seven studies found statistical evidence implying that the past increases in the relative size of the U.S. middle age population have increased stock returns. This finding supports the simulation-model predictions that a relative decrease in the middle age population—as is expected to occur when baby boomers begin to retire— will lower stock returns. In contrast, two of the studies found little evidence that past changes in the U.S. middle age population have had any measurable effect on stock returns. Finally, the remaining study found evidence implying that a relative decrease in the U.S. middle age population in the future would increase, rather than decrease, stock returns. For the four studies whose statistical results implied that the baby boom retirement will cause stock returns to decline, we determined that the magnitude of their demographic effect, on balance, was relatively small. Using U.S. Census Bureau data, we extrapolated from three of the four studies’ results to estimate the average annual change in returns of the Standard and Poor’s (S&P) 500 Index that the studies would have attributed to demographic changes from 1986 to 2004. During this period, baby boomers first began to turn age 40 and the proportion of individuals age 40 to 64 went from about 24.5 percent of the population to about 32 percent. We found two of the studies’ results show that the increase in the middle age population from 1986 to 2004 led stock returns, on average, to increase by 0.19 and 0.10 percentage points each year, respectively. We found that the third study’s results showed a much larger average annual increase of about 6.7 percentage points from 1986 to 2004. To put these three estimates into context, the average annual real return of the S&P 500 Index during this period was around 10 percent. The last estimate, however, may exaggerate the probable impact of the baby boom retirement on stock returns. 
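One simple way to carry out this kind of extrapolation is sketched below in Python. The coefficient value is hypothetical, not taken from any of the studies, and the calculation compresses the studies’ varied methods into a single back-of-the-envelope step: multiply a study’s estimated return response per percentage-point change in the middle age population share by the average annual change in that share between 1986 and 2004.

# Back-of-the-envelope extrapolation (illustrative only)
share_1986 = 24.5   # percent of the population age 40 to 64 in 1986
share_2004 = 32.0   # percent of the population age 40 to 64 in 2004
years = 2004 - 1986

beta = 0.45         # hypothetical coefficient: percentage-point change in annual returns
                    # per percentage-point change in the middle age population share

avg_annual_share_change = (share_2004 - share_1986) / years
avg_annual_return_effect = beta * avg_annual_share_change
print(f"Implied average annual effect on returns: {avg_annual_return_effect:.2f} percentage points")

With the hypothetical coefficient shown, the implied effect is roughly 0.2 percentage points per year, the same order of magnitude as the smaller estimates discussed above; a much larger coefficient would be needed to produce an effect comparable to the third study’s 6.7 percentage points.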
The fourth study’s methodology did not allow us to use U.S. census data to estimate the effect of its results on stock returns from 1986 to 2004. Nonetheless, the study estimated that demographically driven changes in the demand for stocks can account for about 77 percent of the annual increase in real stock prices between 1986 and 1997 and predicted that stock prices will begin to fall around 2015 as a result of falling demographic demand. Besides testing for the effect of demographic shifts on stock returns, five of the seven studies included bonds in their analyses and largely found that the baby boom retirement will have a small effect or no effect on bond yields. Three studies found statistical evidence indicating that the past increase in the relative size of the U.S. middle age population reduced long-term bond yields. In turn, the finding suggests that the projected decrease in the middle age population in the future would raise yields. Extrapolating the results of one study, we find its estimates imply that the increase in the U.S. middle age population from 1986 to 2004 reduced long- term bond yields by about 0.42 percentage points each year, compared to actual real yields that averaged 3.41 percent over the same time period. The other two studies tested how the demographic shift affected long-term bond prices rather than yields, but an increase in prices would, in effect, reduce yields. We found that the results of one of the studies showed that the demographic shift from 1986 to 2004 raised bond prices by only about 0.05 percentage points each year. The other study’s methodology did not allow us to estimate the effect, but the study estimated that demographically driven changes in the demand for bonds can account for at least 81 percent of the annual increase in real bond prices between 1986 and 1997 and predicted that bond prices will begin to fall around 2015 as a result of falling demographic demand. In contrast to these studies, two studies found little statistical evidence to indicate that past changes in the middle age population have had any measurable effect on long-term bond returns. Financial Industry Representatives Do Not Expect Baby Boom Retirement to Have a Significant Financial Market Impact The financial industry representatives with whom we met generally told us that they do not expect U.S. baby boomers to have a significant impact on the financial markets when they retire. They cited a number of factors that could mitigate a baby boom induced market decline, many of which we discussed earlier. For example, some mentioned the concentration of assets among a minority of households, the long time span over which boomers will be retiring, and the possibility for many boomers to continue working past traditional retirement ages. Some also noted that baby boomers will continue to need to hold stocks well into retirement to hedge inflation and to earn a higher rate of return to hedge the risk of outliving their savings, reducing the likelihood of a sharp sell-off of stock. A number of representatives cited developments that could increase the demand for U.S. assets in the future, such as the continued economic growth of developing countries and an increase in immigration. Finally, several commented that interest rates, business cycles, and other factors that have played the primary role in influencing financial asset returns are likely to overwhelm any future demographic effect from changes in the labor force or life cycle savings behavior. 
Broad Economic Factors Will Likely Have a Greater Impact on Financial Markets Than Will Demographics Our statistical analysis indicates that macroeconomic and financial factors explain more of the variation in historical stock returns than population shifts and suggests that such factors could outweigh any future demographic effect on stock returns. In addition, factors not captured by our model were also larger sources of stock return variation than the demographic variables we selected. We undertook our own statistical analysis, because many of the empirical studies we reviewed either did not include relevant variables that influence stock returns in their models or included them but did not discuss the importance of these variables relative to the demographic variables. To broaden the analysis, we developed a statistical model of stock returns based on the S&P 500 Index to compare the effects of changes in demographic, macroeconomic, and financial variables on returns from 1948 to 2004. As shown in figure 8, fluctuations in the macroeconomic and financial variables that we selected collectively explain about 47 percent of the variation in stock returns over the period. These variables are the growth rate of industrial production, the dividend yield, the difference between interest rates on long- and short-term bonds, and the difference between interest rates on risky and safe corporate bonds—all found in previous studies to be significant determinants of stock returns. These variables are likely to contain information about current or future corporate profits. In contrast, our four demographic variables explained only between 1 percent and 8 percent of the variation in the annual stock returns over the period. These variables were based on population measures found to be statistically significant in the empirical studies we reviewed: the proportion of the U.S. population age 40 to 64, the ratio of the population age 40 to 49 to the population age 20 to 29, and annual changes in the two. Note, however, that almost half of the variation in stock returns was explained by neither the macroeconomic and financial variables nor the demographic factors we tested, a finding that is comparable to similar studies. Hence, some determinants of stock returns remain unknown or difficult to quantify. The statistical model shows that financial markets are subject to a considerable amount of uncertainty and are affected by a multitude of known and unknown factors. However, of those known factors, the majority of the explanatory power stems from developments other than domestic demographic change. Simply put, demographic variables do not vary enough from year to year to explain the stock market ups and downs seen in the data. This makes it unlikely that demographic changes, alone, could induce a sudden and sharp change in stock prices, but leaves open the possibility for such changes to lead to a sustained reduction in returns. At the same time, fluctuations in dividends and industrial production, which are much more variable than demographic changes, may obscure any demographic effect in future stock market performance. For example, a large recession or a significant reduction in dividends would have a negative effect on annual returns that would likely overwhelm any reduction in returns resulting from the baby boom retirement.
Conversely, an unanticipated increase in productivity or economic growth would be expected to increase returns substantially and likely dwarf the effect of year-over-year changes in the relative size of the retired population. Baby Boomers and Future Generations Likely to Increasingly Rely on Their Own Savings, Placing Greater Importance on Rates of Return and Financial Management Skills While the baby boom retirement is not likely to cause a sharp decline in asset prices or returns, the retirement security of boomers and future generations will likely depend increasingly on individual saving and rates of return as guaranteed sources of income become less available. This reflects the decline of coverage by traditional DB pension plans, which typically pay a regular income throughout retirement, and the rise of account-based DC plans. Uncertainties about the future level of Social Security benefits, including the possible replacement of some defined benefits by private accounts, and the projected increases in medical and long-term care costs add to the trend toward individuals taking on more responsibility and risk for their retirement. All of these developments magnify the importance of achieving rates of return on savings high enough to produce sufficient income for a secure retirement. In this environment, individuals will need to become more educated about financial issues, both in accumulating sufficient assets as well as learning to draw them down effectively during a potentially long retirement. As Baby Boomers Retire, Fewer May Receive Income from Traditional Pensions Changes in pension design will require many baby boomers and others to take greater responsibility in providing for their retirement income, increasing the importance of rates of return for them. The past few decades have witnessed a dramatic shift from DB plans to DC plans. From 1985 to 2004, the number of private sector DB plans has shrunk from about 114,000 to 31,000. From 1985 to 2002 (the latest year for which complete data are available), the number of DC plans almost doubled, increasing from 346,000 to 686,000. Furthermore, the percentage of full-time employees participating in a DB plan (at medium and large firms) declined from 80 to 33 percent from 1985 to 2003, while DC coverage increased from 41 to 51 percent over the period. The shift in pension design has affected many boomers. According to the 2004 SCF, about 50 percent of people older than the baby boomers reported receiving benefits from a DB plan, but fewer than 44 percent of baby boomers have such coverage. However, within the baby boom generation, there is a noticeable difference: 46 percent of older boomers (born between 1946 and 1955) reported having a DB plan, while only 39 percent of young boomers (born between 1956 and 1964) had a DB plan (see table 1). According to the SCF, the percentage of households age 35 to 44 with a DC plan increased from 18 percent in 1992 to 42 percent in 2001. The shift from DB to DC plans places greater financial management responsibility on a growing number of baby boomers and makes their retirement savings more dependent on financial market performance. Unlike DB plans, DC plans do not promise a specified benefit for life. Rather, DC plan benefits depend on the amount of contributions, if any, made to the DC plan by the employee and the employer, and the returns earned on assets held in the plan. 
Because there is no guaranteed benefit, the responsibility to manage these assets and the risk of having insufficient pension benefits at retirement fall on the individual. Similar to DB plans, some DC plans offer their participants the option of converting their balance into an annuity upon retirement, but DC plan participants typically take or keep their benefits in lump-sum format. Small changes in average rates of return can affect the amount accumulated by retirement and the income generated during retirement. For example, if a boomer saved $500 each year from 1964 until retirement in 2008 and earned 8 percent each year, he or she would accumulate almost $209,000 at retirement. The same worker earning 7 percent each year over the same period would accumulate only $153,000 at retirement, a difference in total saving of 27 percent. Moreover, rates of return can have a similar effect on retirement income. With $209,000 at retirement, the retiree could spend $19,683 each year for 20 years if he or she continued to earn 8 percent each year in retirement. If the annual rate of return dropped one percentage point to 7 percent, the same amount of retirement savings would generate only about $18,412 each year for 20 years, a difference of 6.5 percent in annual retirement income. (A short calculation reproducing these figures appears below.) Retirees who depend on converting savings to income are particularly sensitive to rates of return, since they may have limited employment options. Similarly, workers nearing retirement may be more affected by fluctuations in rates of return than younger workers, who would have more working years to make up any declines or losses. Although DC plans place greater responsibility on individuals for their retirement security, statistics indicate that so far at least some have yet to fully accept it. First, many workers who are covered by a DC plan do not participate in the plan. Recent data indicate that only about 78 percent of workers covered by a DC plan actually participate in the plan. Second, even among baby boom participants, many have not saved much in these accounts. Figure 9 shows the percentage of boomers with account balances in their DC pensions and IRAs, which are personal accounts where individuals can accumulate retirement savings. Over one-half of households headed by someone born from 1946 to 1955 did not have a DC pension; for those that did have a DC pension, their median balance was $58,490, an amount that would generate just a $438 monthly annuity starting at age 65. Similarly, only 38 percent reported having an IRA, and the median IRA balance among those participating was only $37,000, an amount that would generate a monthly annuity of only $277. These statistics may not provide a complete picture for some individuals and households, since those with a small DC plan account balance also may have a DB plan and thus may not have the same need to contribute to their account. However, EBRI found that, as of 2004, median savings in 401(k) accounts, a type of DC plan, were higher for every age group up to age 64 for those with a DB plan than for those with only a 401(k). Also, the median balances for those with only 401(k) plans may not be enough to support them in retirement. For families with a head of family age 55 to 64 in 2004 and only a 401(k), EBRI estimated that their median balance was $50,000; for those age 45 to 54, the median was $40,000.
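The savings and spend-down figures above can be reproduced with the short Python sketch below. The report does not spell out the timing conventions, so the sketch assumes beginning-of-year deposits from 1964 through 2008 (45 deposits) and beginning-of-year withdrawals in retirement; these assumptions, and the function names, are illustrative.

def accumulate(payment, rate, years):
    # Future value of level beginning-of-year deposits
    return payment * ((1 + rate) ** years - 1) / rate * (1 + rate)

def annual_withdrawal(balance, rate, years):
    # Level beginning-of-year withdrawal that exhausts the balance over the given number of years
    return balance * rate / ((1 - (1 + rate) ** -years) * (1 + rate))

for r in (0.08, 0.07):
    print(f"Saving $500 per year at {r:.0%}: about ${accumulate(500, r, 45):,.0f} at retirement")
    # prints roughly $209,000 at 8 percent and $153,000 at 7 percent

balance = accumulate(500, 0.08, 45)
print(f"Spending at 8 percent: about ${annual_withdrawal(balance, 0.08, 20):,.0f} per year")  # about $19,683
print(f"Spending at 7 percent: about ${annual_withdrawal(balance, 0.07, 20):,.0f} per year")  # about $18,412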
While many in these age groups could continue to work for several years before reaching retirement age, without substantially higher savings, these households may be primarily dependent on income from Social Security during retirement. Extending our analysis to the allocation of baby boomer assets reveals that financial assets are, in general, a small portion of boomers’ asset portfolios. Among all boomers, housing is the largest asset for the majority of households, with vehicles making up the second largest portion of wealth. Figure 10 shows the allocation of baby boomer assets among housing, cash, savings, pensions, vehicles, and other assets. Excluding the top quartile by wealth, savings and pensions (the portions of wealth that are invested in stocks and bonds) make up a small share of overall wealth, constituting no more than 20 percent of total gross assets per household. Among the bottom two quartiles by wealth, on average boomers have more of their wealth invested in their personal vehicle (automobile or truck), which depreciates over time, than in either savings or pensions, assets that generally appreciate over time. Overall, the finding that most boomers do not hold a significant amount of financial assets, measured both by account balance and by percentage of total assets, mitigates this generation’s potential effect on the asset markets as boomers retire and highlights the fact that many boomers may enter retirement without adequate personal savings. Financial Stress on Social Security, Medicare, and Health Expenditures May Create Uncertainties for Some Baby Boomers and Future Generations The uncertainties surrounding the future financial status of Social Security, the program that provides the foundation of retirement income for most retirees, also present risks to baby boomers’ retirement security. These benefits are particularly valuable because they provide a regular monthly income, adjusted each year for inflation, to the recipient and his or her survivors until death. Thus, Social Security benefits provide some insurance against outliving one’s savings and against inflation eroding the purchasing power of a retiree’s income and savings. Such benefits provide a unique retirement income source for many American households. Social Security, however, faces long-term structural financing challenges that, if unaddressed, could lead to the exhaustion of its trust funds. According to the intermediate assumption projections of Social Security’s 2006 Board of Trustees’ Report, annual Social Security payouts will begin to exceed payroll taxes by 2017, and the Social Security trust fund is projected to be exhausted in 2040. Under these projections, without counterbalancing changes to benefits or taxes, tax income would be enough to pay only 74 percent of currently scheduled benefits as of 2040, with additional, smaller benefit reductions in subsequent years. These uncertainties are paralleled, if not more pronounced, in Medicare, the primary social insurance program that provides health insurance to Americans over age 65. Medicare also faces very large long-term financial deficits. According to the 2006 Trustees report, the Hospital Insurance Trust Fund is projected to be exhausted by 2018. The challenges stem from concurrent demographic trends—people are living longer, spending more years in retirement, and having fewer children—and from costs for health care rising faster than growth in the gross domestic product.
These changes increase benefits paid to retirees and reduce the number of people, relative to previous generations, available to pay to support these benefits. These financial imbalances have important implications for future retirees’ retirement security. While future changes to either program are uncertain, addressing the financial challenges facing Social Security and Medicare may require retirees to receive reduced benefits, relative to scheduled future benefits, while workers might face higher taxes to finance current benefits. In addition, some proposals to reform Social Security incorporate a system of individual accounts into the current program that would reduce scheduled benefits under the current system, perhaps with protections for retirees, older workers, and low-wage workers, and make up for those reductions to some degree with income from the individual accounts. Like DC plans generally, these accounts would give the individual not only the prospect for higher rates of return but also the risk of loss, placing additional responsibility and risk on individuals to provide for their own retirement security. Similarly, tax-preferred health savings accounts are a type of personal account that allows enrollees to pay for certain health-related expenditures. The worsening budget deficits that are expected to result if fiscal imbalances in Social Security and Medicare are not addressed could have important effects on the macroeconomy. By increasing the demand for credit, federal deficits tend to raise interest rates, an effect that is mitigated to the extent that foreign savings flow into the United States to supplement scarce domestic savings. If foreigners do not fully finance growing budget deficits, upward pressure on interest rates can reduce domestic investment in productive capacity. All else equal, these higher borrowing costs could discourage new investment and reduce the value of capital already owned by firms, which should be reflected in reduced stock prices as well. The fiscal challenges facing Medicare underscore the issue of rising retiree health costs generally. Rising health care costs have made health insurance and anticipated medical expenses increasingly important issues for older Americans. Although the long-term decline in the percentage of employers offering retiree health coverage appears to have leveled off in recent years, retirees continue to face an increasing share of costs, eligibility restrictions, and benefit changes that contribute to an overall erosion in the value and availability of coverage. A recent study estimated that the percentage of after-tax income spent on health care will almost double for older individuals by 2030 and that, after taxes and health care spending, incomes may be no higher in 2030 than in 2000 for a typical older married couple. People with lower incomes will be the most adversely affected. The study projected that by 2030, those in the bottom 20 percent of the income distribution would spend more than 50 percent of their after-tax income on insurance premiums and out-of-pocket health care expenses, an increase of 30 percentage points from 2000. The costs of health care in retirement, especially long-term and end-of-life care, are a large source of uncertainty for baby boomers in planning their retirement financing, as typical private and public insurance generally does not cover these services.
Nursing home and long-term care are generally covered not by Medicare but by Medicaid, the program that provides health insurance for low-income Americans. Medicaid eligibility varies from state to state, but generally requires that patients expend most of their financial assets before they can be deemed eligible for benefits. Most private long-term care insurance policies pay for nursing home and at-home care services, but these benefits may be limited, and few elderly people actually purchase this type of coverage, with a little over 9 million policies purchased in the United States by 2002. Thus, health care costs may cause some baby boomers without long-term care insurance to rapidly spend retirement savings. Baby Boomers and Future Generations May Increasingly Rely on Their Own Investment Decisions, Highlighting Importance of Financial Literacy With more individuals being asked to take responsibility for saving for their own retirement in a DC pension plan or IRA, financial literacy and skills are becoming increasingly important in helping to ensure that retirees can enjoy a comfortable standard of living. However, studies have found that many individuals have low financial literacy. A recent study of HRS respondents over age 50 found that only half could answer two simple questions regarding compound interest and inflation correctly, and one-third could answer these two questions and another on risk diversification correctly. Other research by AARP of consumers age 45 and older found that they often lacked knowledge of basic financial and investment terms. Similarly, a survey of high school students found that they answered questions on basic personal finance correctly only about half of the time. Baby boomers approaching retirement and fortunate enough to have savings may still face risks from failing to diversify their stock holdings. In one recent survey, participants perceived a lower level of risk for their company stock than for domestic, diversified stock funds. However, investors are more likely to lose their principal when investing in a single stock as opposed to a diversified portfolio of stocks, because below-average performance by one firm may be offset by above-average performance by the others in the portfolio. In addition, holding stock issued by one’s employer in a pension account is even more risky because poor financial performance by the company could result in both the stock losing value and the person losing his or her job. One consequence of this poor financial literacy may be investors holding a substantial part of their retirement portfolio in employer stock. EBRI reported that the average 401(k) investor age 40 to 49 had 15.4 percent of his or her portfolio in company stock in 2004; the average investor in his or her 60s still had 12.6 percent of his or her assets in company stock. Perhaps of greater concern, the Vanguard Group found that, among plans actively offering company stock, 15 percent of participants had more than 80 percent of their account balance in company stock in 2004. Concluding Observations Our findings largely suggest that baby boomers’ retirement is unlikely to have a dramatic impact on financial asset prices. However, there appear to be other significant retirement risks facing the baby boom and future generations.
The long-term financial weaknesses of Social Security and Medicare, coupled with uncertainty about future policy changes to these programs’ benefits, and the continued decline of the traditional DB pension system indicate a shift toward individual responsibility for retirement. These trends mean that rates of return will play an increasingly important role in individuals’ retirement security. For those with sufficient income streams, this new responsibility for retirement will entail a lifetime of financial management decisions—from saving enough to managing such savings to generate an adequate stream of income during retirement, the success of which will directly or indirectly be dependent on rates of return. Given the potential impact of even a modest decline in returns over the long run on savings and income, market volatility, and uncertainties about pensions, Social Security, and Medicare, the onset of the baby boom retirement poses many questions for future retirement security. The performance of financial and other asset markets provides just one source of risk that will affect the retirement income security of baby boomers and ensuing generations. For those with financial assets, the choices they make about investments play a critical role not just in having adequate savings at retirement but also in making sure their wealth lasts throughout retirement. That Americans are being asked to assume more responsibility for their retirement security highlights the importance of financial literacy, including basic financial concepts, investment knowledge, retirement age determination, and asset management in retirement. Government policy can help: policies that encourage individuals to save more and work longer (for those who are able) and that promote greater education about investing and retirement planning can help ensure higher and more stable retirement incomes in the future. Although individual choices about saving and working will continue to play a primary role in determining retirement security, the high percentage of boomers who have virtually no savings, assets, or pensions means that many will face greater difficulties in responding to the new retirement challenges. For this group, the federal government will play an especially important role in retirement security through its retirement and fiscal policies. The challenges facing Social Security and Medicare are large and will only grow as our population ages. Legislative reforms to place Social Security and Medicare on a path toward sustainable long-term solvency would not only reduce uncertainty about retiree benefits, particularly for those Americans who own few or no assets, but also help address the federal government’s long-term budget imbalances that could affect the economy and asset markets. Ultimately, retirement security depends on how much society and workers are willing to set aside for savings and retirement benefits and on the distribution of retirement risks and responsibilities among government, employers, and individuals. One of Congress’s greatest challenges will be to balance this distribution in a manner that achieves a national consensus and helps Americans keep the promise of adequate retirement security alive in the 21st century.
Agency Comments We provided a draft of this report to the Department of Labor, the Department of the Treasury, the Department of Housing and Urban Development, and the Social Security Administration, as well as several outside reviewers, including one from the Board of Governors of the Federal Reserve System. Labor, Treasury, and SSA and the outside reviewers provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Labor, the Secretary of the Treasury, the Secretary of the Housing and Urban Development Department, and the Commissioner of the Social Security Administration, appropriate congressional committees, and other interested parties. We will also make copies available to others on request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions concerning this report, please contact Barbara Bovbjerg at (202) 512-7215 or George Scott at (202) 512-5932. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made contributions are listed in appendix VI. Appendix I: Scope and Methodology To analyze whether the retirement of the baby boom generation is likely to precipitate a dramatic drop in financial asset prices, we relied primarily on information from two large survey data sets. We calculated the distribution of assets and wealth among baby boomers and existing retirees and bequest and work expectations of baby boomers from data from various waves of the Federal Reserve’s Survey of Consumer Finances (SCF). This triennial survey asks extensive questions about household income and wealth components; we used the latest available survey from 2004 and previous surveys back to 1992. The SCF is widely used by the research community, is continually vetted by the Federal Reserve, and is considered to be a reliable data source. The SCF is believed by many to be the best source of publicly available information on U.S. household finances. Some caveats about the data should be kept in mind. Because some assets are held very disproportionately by relatively wealthy families, the SCF uses a two part sample design, one of which is used to select a sample with disproportionate representation of families more likely to be relatively wealthy. The two parts of the sample are adjusted for sample nonresponse and combined using weights to provide a representation of families overall. In addition, the SCF excludes one small set of families by design. People who are listed in the October issue of Forbes as being among the 400 wealthiest in the United States are excluded. To enable the calculation of statistical hypothesis tests, the SCF uses a replication scheme. A set of replicate samples is selected by applying the key dimensions of the original sample stratification to the actual set of completed SCF cases and then applying the full weighting algorithm to each of the replicate samples. To estimate the variability of an estimate from the SCF, independent estimates are made with each replicate and with each of the multiple imputations; a simple rule is used to combine the two sources of variability into a single estimate of the standard error. 
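The “simple rule” for combining the two sources of variability can be illustrated with a short sketch in Python. The numbers and the specific combination formula below are assumptions chosen for illustration (a Rubin-style rule layered on replicate-based sampling variance), not a quotation of the SCF’s documented procedure.

import numpy as np

rng = np.random.default_rng(0)

# Hypothetical estimates: the same statistic recomputed under 999 replicate weights
# and separately on each of 5 multiply imputed data sets (implicates).
replicate_estimates = rng.normal(loc=100.0, scale=2.0, size=999)
implicate_estimates = rng.normal(loc=100.0, scale=1.0, size=5)

m = len(implicate_estimates)

sampling_var = replicate_estimates.var(ddof=1)            # variability across replicate samples
between_imputation_var = implicate_estimates.var(ddof=1)  # variability across implicates

# One common combination rule (an assumption here, not the SCF codebook's wording):
# total variance = sampling variance + (1 + 1/m) * between-imputation variance.
total_var = sampling_var + (1 + 1 / m) * between_imputation_var
standard_error = np.sqrt(total_var)
print(f"Estimated standard error: {standard_error:.2f}")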
We also analyzed recent asset sales by retirees and the work and bequest expectations of baby boomers, and gathered further financial information on baby boomers and older generations, using data from the Health and Retirement Study (HRS) for 1994 to 2004. The University of Michigan administers the HRS as a panel data set, surveying respondents every 2 years since 1992 about health, finances, family situation, and many other topics. Like the SCF, the HRS is widely used by academics and continually updated and improved by administrators. We also received expert opinions on the likely impact of the baby boom retirement on asset and housing markets from interviews with various financial management companies, public policy organizations, and government agencies, particularly those agencies dealing with housing. To assess the conclusions of academic researchers and outside experts on the financial impacts of the baby boom retirement, we read, analyzed, and summarized theoretical and empirical academic studies on the subject. Based on our selection criteria, we determined that these studies were sufficient for our purposes but not that their results were necessarily conclusive. We also interviewed financial industry representatives from mutual fund companies, pension funds, life insurance companies, broker-dealers, and financial industry trade associations. We also did our own analysis of the historical effects of demographics and other variables on stock returns by collecting demographic, financial, and macroeconomic data and running a regression analysis. We performed data reliability assessments on all data used in this analysis. To assess the role rates of return will play in providing retirement income in the future, we synthesized findings from the analysis of financial asset holdings to draw conclusions about the risk implications for different subpopulations of the baby boom and younger generations. We also used facts and findings on pensions and Social Security (from past GAO reports and the academic literature) and insights from interviews with outside experts to extend and support our conclusions. We conducted our work between August 2005 and June 2006 in accordance with generally accepted government auditing standards. Appendix II: Bibliography of Simulation-Based and Empirical Studies Abel, Andrew B. “Will Bequests Attenuate the Predicted Meltdown in Stock Prices When Baby Boomers Retire?” The Review of Economics and Statistics, vol. 83, no. 4 (2001): 589-595. Abel, Andrew B. “The Effects of a Baby Boom on Stock Prices and Capital Accumulation in the Presence of Social Security.” Econometrica, vol. 71, no. 2 (2003): 551-578. Ang, Andrew and Angela Maddaloni. “Do Demographic Changes Affect Risk Premiums? Evidence from International Data.” Journal of Business, vol. 78, no. 1 (2005): 341-379. Bakshi, Gurdip S. and Zhiwu Chen. “Baby Boom, Population Aging, and Capital Markets.” Journal of Business, vol. 67, no. 2 (1994): 165-202. Bergantino, Steven M. “Life Cycle Investment Behavior, Demographics, and Asset Prices.” Ph.D. diss., Massachusetts Institute of Technology, 1998. Börsch-Supan, Axel. “Global Aging: Issues, Answers, More Questions.” Working Paper WP 2004-084. University of Michigan Retirement Research Center (2004). Börsch-Supan, Axel, Alexander Ludwig, and Joachim Winter. “Aging, Pension Reform, and Capital Flows: A Multi-Country Simulation Model.” Working Paper No. 04-65. Mannheim Research Institute for the Economics of Aging (2004).
Brooks, Robin J. “Asset Market and Savings Effects of Demographic Transitions.” Ph.D. diss., Yale University, 1998. Brooks, Robin. “What Will Happen to Financial Markets When the Baby Boomers Retire?” IMF Working Paper WP/00/18, International Monetary Fund (2000). Brooks, Robin. “Asset-Market Effects of the Baby Boom and Social-Security Reform.” American Economic Review, vol. 92, no. 2 (2002): 402-406. Brooks, Robin. “The Equity Premium and the Baby Boom.” Working Paper, International Monetary Fund, 2003. Bütler, Monika, and Philipp Harms. “Old Folks and Spoiled Brats: Why the Baby-Boomers’ Savings Crisis Need Not Be That Bad.” Discussion Paper No. 2001-42. CentER, 2001. Davis, E. Phillip and Christine Li. “Demographics and Financial Asset Prices in the Major Industrial Economies.” Working Paper. Brunel University, West London: 2003. Erb, Claude B., Campbell R. Harvey, and Tadas E. Viskanta. “Demographics and International Investments.” Financial Analysts Journal, vol. 53, no. 4 (1997): 14-28. Geanakoplos, John, Michael Magill, and Martine Quinzii. “Demography and the Long-Run Predictability of the Stock Market.” Cowles Foundation Paper No. 1099. Cowles Foundation for Research in Economics, Yale University: 2004. Goyal, Amit. “Demographics, Stock Market Flows, and Stock Returns.” Journal of Financial and Quantitative Analysis, vol. 39, no. 1 (2004): 115-142. Helmenstein, Christian, Alexia Prskawetz, and Yuri Yegorov. “Wealth and Cohort Size: Stock Market Boom or Bust Ahead?” MPIDR Working Paper WP 2002-051. Max-Planck Institute for Demographic Research, 2002. Lim, Kyung-Mook and David N. Weil. “The Baby Boom and the Stock Market Boom.” Scandinavian Journal of Economics, vol. 105, no. 3 (2003): 359-378. Macunovich, Diane. “Discussion of Social Security: How Social and Secure Should It Be?” In Social Security Reform: Links to Saving, Investment, and Growth. Steven Sass and Robert Triest, eds., Boston: Federal Reserve Bank of Boston (1997): 64-74. Poterba, James M. “Demographic Structure and Asset Returns.” The Review of Economics and Statistics, vol. 83, no. 4 (2001): 565-584. Poterba, James M. “The Impact of Population Aging on Financial Markets.” Working Paper 10851. Cambridge, Mass.: National Bureau of Economic Research, 2004. Yoo, Peter S. “Age Distributions and Returns of Financial Assets.” Working Paper 1994-002A. St. Louis: Federal Reserve Bank of St. Louis, 1994. Yoo, Peter S. “Population Growth and Asset Prices.” Working Paper 1997-016A. St. Louis: Federal Reserve Bank of St. Louis, 1997. Young, Garry. “The Implications of an Aging Population for the UK Economy.” Bank of England Working Paper no. 159. Bank of England, London: 2002. Appendix III: Summary of the Simulation-Based and Empirical Studies Assessing the Impact of a Baby Boom on Financial Markets Key findings from these studies include the following. Demographic changes predicted future changes in the equity premium in the international data but only weakly in the U.S. data. In the United States, increases in the average age of persons older than age 20 predicted a higher risk premium. In the United States, the increase in the demand for stocks and bonds based on demographic changes increased stock and bond prices but had no effect on the equity premium. The increase in people age 40 to 64 relative to the rest of the population increased stock and bond prices, particularly in the United States. Also, the increase in people age 40 to 64 relative to people over 65 increased the equity premium.
The relative increase in people age 40 to 64 increased stock prices and decreased long-term bond yields in the United States and other countries. In the United States, the relative increase in the population age 40 to 49 increased stock returns. The results for the other countries included in the study were mixed. (Among the demographic variables these studies examined were the percentage change and level of the population age 25 to 44, age 45 to 64, and age 65 and over, as well as the average age of persons over age 25.) In the United States, the relative increase in persons age 45 to 64 increased the equity premium. In the United States, the increase in people age 45 and 66 decreased stock returns. In the United States, the relative increase in people age 40 to 64 decreased short-term government bond returns but had no effect on long-term government bond or stock returns. In the United States, the relative increase in people age 45 to 54 decreased annual returns of short- and intermediate-term government bonds but had no effect on the annual returns of stock and long-term government or corporate bonds. Appendix IV: Econometric Analysis of the Impact of Demographics on Stock Market Returns This appendix discusses our analysis of the impact of demographics and macroeconomic and financial factors on U.S. stock market returns from 1948 to 2004. In particular, we discuss (1) the development of our model used to estimate the relative importance of demographics and other factors in determining stock market returns, (2) the data sources, and (3) the specifications of our econometric model, potential limitations, and results. GAO’s Econometric Model of the Effects of Demographic, Macroeconomic, and Financial Factors on Stock Market Returns We developed an econometric model to determine the effects of changes in demographic, macroeconomic, and financial variables on stock market returns from 1948 to 2004. Our independent empirical analysis is meant to address two separate but related questions: Are the demographic effects on stock returns found in some of the empirical literature still apparent when additional control variables—macroeconomic and financial indicators known to be associated with stock returns—are present in the regression analysis? How much of the variation in stock returns is explained by those macroeconomic and financial indicators as compared to demographic variables? Answering the first question serves to address the possibility of omitted variable bias in simpler regression specifications. For example, studies by Poterba; Geanakoplos, Magill, and Quinzii (hereafter, GMQ); and Yoo use only demographic variables as their independent variables. The omission of relevant variables in regressions of this kind will result in biased estimates of the size and significance of the effects under investigation. Answering the second question serves to put the influence of demographics on stock returns in perspective: How much of stock market movements is explained by demographics as opposed to other variables? To answer these questions, we include a series of demographic variables from the literature we reviewed in a multivariable regression model. We relied primarily on information in a seminal study done by Eugene Fama to develop our model. Data and Sample Selection We analyzed the determinants of real (adjusted for inflation) total (including both price changes and dividends) returns of the Standard and Poor’s (S&P) 500 Index from 1948 to 2004.
We chose the S&P 500 Index as our dependent variable not only because it is widely regarded as the best single gauge of the U.S. equities market and covers over 80 percent of the value of U.S. equities but also because S&P 500 Index mutual funds are by far the largest and most popular type of index fund. Due to changes in the structure of financial markets over time, we chose a shorter time horizon to minimize the likelihood of a structural break in the data. For our independent variables, we selected macroeconomic and financial variables that economic studies have found to be important in explaining stock returns and that were used in Fama’s analysis to determine how much of stock market variation they explained. We selected two demographic variables, the proportion of the population age 40-64 and the ratio of the population age 40-49 to the population age 20-29 (the middle-young or “MY” ratio), that had statistically significant coefficients in several of the empirical studies that we reviewed. Table 1 presents the independent and dependent variables in our model and their data sources. For consistency, we estimate the equation four times using both levels and changes in the two demographic variables. Model Specification, Limitations, and Estimation We estimate regressions of the form r_t = α + β_1 x_1,t + β_2 x_2,t + β_3 x_3,t + β_4 x_4,t + γ y_t + ε_t, where r_t is real stock market returns during calendar year t, x_1,t through x_4,t are four control variables (the dividend yield, the term spread, shocks to the default spread, and growth of industrial production, respectively) adapted from Fama’s study, y_t is the demographic variable, and ε_t is the error at time t. The error structure is modeled assuming White’s heteroskedasticity-consistent covariance matrix. We first estimate the equation without a demographic variable to measure the proportion of variation explained by macroeconomic and financial indicators, followed by estimating the regression equation four separate times to include each of the demographic measures. For the benchmark model, we find no evidence of serial autocorrelation or deviations from normality. Despite standard diagnostics and careful regression specification, some limitations of our analysis remain. We cannot be certain that we have chosen the best variables to represent the aspects of the economy that move the stock market or the demographic variables that may influence stock returns as well. We have attempted to choose appropriate variables based on the existing empirical and theoretical literature on the economic and demographic determinants of stock returns. Nevertheless, even these variables may be measured with error. Generally, measurement errors would cause us to underestimate the importance of those variables that have been measured with error. This would be most problematic in the case of our demographic variables, though measurement error in our economic and financial control variables actually makes our estimates conservative. Nevertheless, we assessed the reliability of all data used in this analysis, and found all data series to be sufficiently reliable for our purposes. As a result, we believe that the limitations mentioned here (and related to the direction of causality in industrial production mentioned above) do not have serious consequences for the interpretation of our results. The regression results are presented in tables 2 through 6 below.
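The estimation approach just described can be sketched in Python with the statsmodels library; this is a minimal illustration under stated assumptions, not the code used to produce the report’s tables. The file name and column names are hypothetical, and the sketch shows only one of the four demographic specifications (the level of the age 40-64 population share).

import pandas as pd
import statsmodels.api as sm

# Hypothetical input file with annual observations, 1948-2004; column names are illustrative.
data = pd.read_csv("annual_returns.csv")   # columns: real_return, div_yield, term_spread,
                                           # default_shock, ind_prod_growth, share_40_64

controls = ["div_yield", "term_spread", "default_shock", "ind_prod_growth"]

# Benchmark model: macroeconomic and financial controls only
X0 = sm.add_constant(data[controls])
base = sm.OLS(data["real_return"], X0).fit(cov_type="HC0")   # White's robust covariance

# Augmented model: add one demographic variable (the age 40-64 population share)
X1 = sm.add_constant(data[controls + ["share_40_64"]])
augmented = sm.OLS(data["real_return"], X1).fit(cov_type="HC0")

print(f"R-squared, controls only:      {base.rsquared:.3f}")
print(f"R-squared, with demographics:  {augmented.rsquared:.3f}")
print(augmented.summary())

Comparing the two R-squared values is the step that underlies the variance-shares discussion in the body of the report: the increment attributable to the demographic variable can be read off directly once the macroeconomic and financial controls are already in the model.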
Our results are consistent with the literature on the determinants of stock market returns, especially Fama’s study, in that several of our macroeconomic and financial variables are statistically significant, and they account for a substantial proportion (roughly 47 percent) of the variation in stock returns. The coefficient of determination in Fama’s study could be higher due to the inclusion of more industrial production leads. The finding in Davis and Li’s study that the 40-64 population had a statistically significant impact on stock returns is not robust to alternative specifications, as demonstrated in Table 6. The proportion of the population 40-64 is no longer a statistically significant determinant of stock returns, and the inclusion of the variable improves the R-squared by less than 1.5 percent. However, changes in the 40-64 population are significant, and account for an additional 8 percent of the variation in stock returns. The MY ratio and changes in the MY ratio are statistically significant, as seen in Tables 5 and 6, and the model with changes in the MY ratio accounts for a higher proportion of the variation in stock returns than the model estimated with the level of the ratio. Appendix V: GAO Contact and Staff Acknowledgments Staff Acknowledgments In addition to the contacts above, Kay Kuhlman, Charles A. Jeszeck, Joseph A. Applebaum, Mark M. Glickman, Richard Tsuhara, Sharon Hermes, Michael Hoffman, Danielle N. Novak, Susan Bernstein, and Marc Molino made important contributions to this report.
The first wave of baby boomers (born between 1946 and 1964) will become eligible for Social Security early retirement benefits in 2008. In addition to concerns about how the boomers' retirement will strain the nation's retirement and health systems, concerns also have been raised about the possibility for boomers to sell off large amounts of financial assets in retirement, with relatively fewer younger U.S. workers available to purchase these assets. Some have suggested that such a sell-off could precipitate a market "meltdown," a sharp and sudden decline in asset prices, or reduce long-term rates of return. In view of such concerns, we have examined (1) whether the retirement of the baby boomers is likely to precipitate a dramatic drop in financial asset prices; (2) what effect researchers and financial industry participants expect the boomer retirement to have on financial markets; and (3) what role rates of return will play in providing retirement income in the future. We have prepared this report under the Comptroller General's authority to conduct evaluations on his own initiative as part of the continued effort to assist Congress in addressing these issues. Our analysis of national survey and other data suggests that retiring boomers are not likely to sell financial assets in such a way as to cause a sharp and sudden decline in financial asset prices. A large majority of boomers have few financial assets to sell. The small minority who own most assets held by this generation will likely need to sell few assets in retirement. Also, most current retirees spend down their assets slowly, with many continuing to accumulate assets. If boomers behave the same way, a rapid and large sell-off of financial assets appears unlikely. Other factors that may reduce the odds of a sharp and sudden drop in asset prices include the increase in life expectancy that will spread asset sales over a longer period and the expectation of many boomers to work past traditional retirement ages. A wide range of academic studies have predicted that the boomers' retirement will have a small negative effect, if any, on rates of return on assets. Similarly, financial industry representatives did not expect the boomers' retirement to have a big impact on the financial markets, in part because of the globalization of the markets. Our statistical analysis shows that macroeconomic and financial factors, such as dividends and industrial production, explained much more of the variation in stock returns from 1948 to 2004 than did shifts in the U.S. population's age structure, suggesting that demographics may have a small effect on stock returns relative to the broader economy. While the boomers' retirement is not likely to cause a sharp and sudden decline in asset prices, the retirement security of boomers and others will likely depend more on individual savings and returns on such savings. This is due, in part, to the decline in traditional pensions that provide guaranteed retirement income and the rise in account-based defined contribution plans. Also, fiscal uncertainties surrounding Social Security and rising health care costs will ultimately place more personal responsibility for retirement saving on individuals. Given the need for individuals to save and manage their savings, financial literacy will play an important role in helping boomers and future generations achieve a secure retirement.
Background According to the Department of Labor, in 2005, about 60 percent of U.S. women age 16 and older were in the workforce, compared to 46 percent in 1975. Some U.S. employers offer alternative work arrangements to help workers manage both work and other life responsibilities. One type of alternative work arrangement allows workers to reduce their work hours from the traditional 40 hours per week, such as with part-time work or job sharing. The Family and Medical Leave Act (FMLA) of 1993 requires most employers to provide workers 12 weeks of unpaid leave from work for a variety of reasons, such as childbirth, caring for relatives with serious health conditions, or other personal reasons, such as their own serious health condition or the adoption of a child, and employers must guarantee workers a similar job upon return. Some arrangements adopted by employers, such as flextime, allow employees to begin and end their workday outside the traditional 9-to-5 work hours. Other arrangements, such as telecommuting from home, allow employees to work in an alternative location. Child care facilities are also available at some workplaces to help workers with their care giving responsibilities. In addition to benefiting workers, these arrangements may also benefit employers by helping them recruit and retain workers. The federal government also provides child care subsidies for certain low- income families, and tax breaks for most parents, both to support their ability to work and to balance work-family responsibilities. Under programs funded by the Child Care and Development Fund, Temporary Assistance for Needy Families (TANF) and state resources, states have the flexibility to serve certain types of low-income families. The Head Start program provides comprehensive early childhood education and development services to low-income preschool children, on a part- or full- day basis. Last, the Child and Dependent Care Tax Credit allows parents to reduce their tax on their federal income tax return if they paid someone to care for a child under age 13 or a qualifying spouse or dependent so they could work or look for work. In addition, the federal government offers workforce development and training programs designed to assist low-wage/low-skilled workers in the United States. The Workforce Investment Act (WIA) of 1998 requires states and localities to bring together a number of federally funded employment and training services into a statewide network of one-stop career centers. Low-skilled workers and dislocated workers can choose the training they determine best for themselves, working in consultation with a case manager. Additionally, the federal government provides tax breaks and incentives for companies to hire low-income workers, public assistance recipients, and workers with disabilities. Most of the countries we studied are members of the European Union, which provides minimum standards or basic rights for individuals across member states. For example, the 1997 directive on equal treatment of part- time work mandates that people holding less than full-time jobs be given prorated pay and benefits without discrimination. EU directives are generally binding in terms of the results to be achieved, but an opt-out option occasionally allows member states to delay action. Additionally, in 2000, member states have agreed to increase the number of women in employment, the number of adults in lifelong learning, and the provision of child care by the end of the decade. 
The EU offers financial support to its member states to help them meet their employment goals. Other differences are relevant to consideration of the workforce attachment policies of our study countries. Although U.S. women have high levels of educational attainment, their workforce participation, in general, is lower than that of women in the countries we studied. While a higher education level is associated with greater likelihood of labor force participation, labor force participation for U.S. women is lower than that in any of our study countries except Ireland and New Zealand (see table 1). However, working women in the United States are more likely to work full-time than those in all other study countries except Sweden or Denmark. In the Netherlands, a country where 36 percent of all employment is part-time, women constitute more than three-quarters of employees working less than 30 hours per week. Differences in taxation across countries reflect economic and social priorities. The ratio of total tax revenues to gross domestic product (GDP) is a commonly used measure of state involvement in national economies. Countries with high tax-to-GDP ratios generally pay more from the public budget for services that citizens would have to pay for themselves—or do without—in lower-taxed countries. In 2004, Sweden had the highest tax revenue as a percentage of GDP among our study countries, at 50.4 percent. Denmark came next at 48.8 percent, followed by France at 43.4 percent. The United States had the lowest tax revenue as a percentage of GDP in 2004, at 25.5 percent. (See table 2.) Countries Have Various Policies and Practices That May Help Some Women and Low-Wage/Low-Skilled Workers Enter and Remain in the Labor Force Governments and employers in the countries we studied developed a variety of laws, government policies, and formal and informal practices, including periods of paid leave (such as maternity, paternity, or parental leave), flexible work schedules, child care, and training that may help women and low-wage/low-skilled workers enter and remain in the labor force. In addition to family leave for parents, countries provide other types of leave, and have established workplace flexibility arrangements for workers. All of the countries also subsidize child care for some working parents through a variety of means, such as direct benefits to parents for child care and tax credits. Last, governments and employers have a range of training and apprenticeship programs to help unemployed people find jobs and to help those already in the workforce advance in their careers. Family Leave Policies Differed in Four Major Ways, Including Eligibility Requirements and Length, Payment, and Flexibility Many countries have developed and funded parental leave policies to assist employees in combining their work and family lives, recognizing, in part, the need to promote women's participation in the labor force. A 1996 directive of the European Council requires all countries in the EU—including each of the European countries we reviewed—to introduce legislation on parental leave that would provide all working parents the right to at least 3 months of leave—preferably paid—to care for a new baby. In the United States, the FMLA allows approximately 3 months of unpaid leave. Some of the countries we studied are social welfare states, and generally fund family leave payments through tax revenues and general revenues.
For example, Canada, the UK, and the Netherlands fund paid leave policies in part through national insurance programs, which use payroll taxes paid by employers and employees. Denmark’s paid maternity, paternity, and parental leaves are financed by income tax revenues through an 8 percent tax on all earned income. Many national leave policies in our study countries require employees to work for a period of time before they can take leave, giving employers assurances that employees are committed to their jobs. For example, in Denmark, employed women with a work history of at least 120 hours in the 13 weeks prior to the leave are allowed 18 weeks of paid maternity leave. In some countries, though, all parents are entitled to take family leave. In Sweden, all parents are entitled to parental benefits whether or not they are working. In the UK, by law, all expectant employees can take up to 52 weeks of maternity leave, regardless of how long they have worked for their employer. To enhance workers’ ability to take leave, the countries we studied replace all or part of the wages they forgo while on leave. Dutch employees on maternity leave and their partners are entitled to receive 100 percent of their wages, up to a maximum. In the UK, women who meet qualifying conditions of length of service and who earn a minimum amount for the national insurance system can receive up to 90 percent of their earnings. In Ireland, women can generally be paid at 80 percent of earnings, subject to their contributions into the social insurance system. However, employers may offer more leave than legally required. Leave is often intended to help parents care not just for newborns. In the Netherlands, Sweden, Denmark, and the UK, parents have the option of using their leave flexibly by dividing it into discrete parts, sometimes with the consent of an employer. In the Netherlands, for example, parents may divide the leave into a maximum of three parts and can take the leave simultaneously or following one another. The Netherlands, Sweden, and Denmark allow parents the use of parental leave until their child turns either 8 or 9, while the UK allows the use of parental leave until a child turns 5. Further, some countries allow workers to take leave to care for other family members. In Canada, all employees are eligible to take 8 weeks of unpaid leave to provide care and support to a seriously ill family member or someone considered as a family member. In other countries, the leave is more limited. New Zealand requires that all employers provide a minimum of 5 days of paid sick leave for an eligible employee’s own illness or to care for family members. Flexible Working Arrangements Help Employees Balance Work and Private Responsibilities A few countries have also developed national policies that promote flexible work opportunities, apart from leave. Dutch law gives eligible employees the right to reduce or increase working hours for any reason. Employers can deny the request only if the change would result in a serious obstacle, such as not having enough other workers to cover the hours an employee wishes to reduce. Similarly, British law allows workers to request changes to the hours or location of their work, to accommodate the care of children and certain adults. According to government officials from the UK Departments of Trade and Industry, and Communities and Local Government, this law provides the government with a cost-effective means to help women return to work. 
Although similar to the law in the Netherlands, this law gives employers in the UK more leeway to refuse an employee’s request. Flexible working opportunities for employees are often adjusted or developed by individual employers. Many employers extended the Right to Request provisions to all employees, for example. In other cases, employers have developed new opportunities. One local government employer in the UK offers employees the ability to take a career break for up to 5 years to care for children or elders, with the right to return to the same position. Employees of the organization are also able to take time off when children are home on holidays, share the responsibilities of one position with another employee through the practice of job sharing, and vary their working hours. In Denmark, a large employer allowed an employee who was returning to work from a long-term illness to gradually increase her working hours until she reached a full-time schedule over the course of several months. Flexible working arrangements in the United States have been adopted by some employers, but are not mandated in federal law. Child Care Policies Assist Working Parents All of our study countries have made a public investment in child care, a means of allowing women to access paid employment and balance work and family, according to the European Commission. In Canada, the government provides direct financial support of $100 a month to eligible parents for each child under 6. In New Zealand, support is available through a child care tax credit of $310 per year to parents who have more than $940 in child care costs. Researchers have reported that, like leave benefits, early childhood education and care services in European countries are financed largely by the government. According to these researchers, funding is provided by national, state, or regional and local authorities, and the national share typically is dominant in services for preschool-age children. These researchers also reported that care for very young children and, to a lesser extent, for preschool children is partially funded through parental co-payments that cover an average of 15 percent to 25 percent of costs. In some countries the provision of early childhood care and education is viewed as a social right, in others as a shared responsibility. In Sweden and Denmark, parents are guaranteed a place in the state child care system for children of a certain age, according to the European Commission. More than 90 percent of Danish children are in publicly supported child care facilities, according to a Danish researcher. Other countries view the provision of child care as a responsibility shared among government, employers, and parents. In the Netherlands, overall, employers, employees, and the government are each expected to pay about one-third of child care costs, according to a report by the European Commission. Aside from public support for child care, some employers in the countries we reviewed offered additional resources for their employees’ child care needs. For example, although not mandated to do so by law until January 2007, many employers in the Netherlands had been contributing towards their employees’ cost for child care. In the Netherlands, about two-thirds of working parents received the full child care contribution from their employers, according to a recent survey. In addition, a Canadian union negotiated employer subsidies to reimburse some child care expenses for its members, according to union representatives. 
Training Programs Can Be Targeted at the Unemployed or Low-Skilled Workers Our study countries provide services in a variety of ways to help both the unemployed and low-skilled workers develop their skills. The percentage of GDP that each country spends on training programs varies. (See table 3.) To help the unemployed develop the skills necessary to obtain work, our study countries offered various services, including providing training directly and giving employers incentives to provide training or apprenticeships. In Denmark, to continue receiving unemployment benefits after 9 months, the unemployed are required to accept offers, such as education and training, to help them find work. Particular groups of the unemployed that may face difficulty in finding employment, such as women and the low-skilled, may be offered training sooner. Employers in Denmark may receive wage subsidies for providing job-related experience and training to the unemployed, or for providing apprenticeships in fields with a shortage of available labor. In the United States, training services generally are provided through WIA programs, which are funded by the government. Local governments and private entities also seek to help the unemployed obtain and upgrade skills. For example, a local government council in the UK provides unemployed women training in occupations in which they are underrepresented, such as construction and public transport. While the women are not paid wages during the typical 8-12 weeks of training, they may receive unemployment insurance benefits as well as additional support for child care and transportation. Additionally, a privately run association in the Netherlands provides entrepreneurial training to help women who have been on public assistance for at least 10 years start their own businesses, according to an organization official. Both of these initiatives were funded jointly by the local governments and the European Social Fund. Our study countries also have training initiatives focused on those already in the workforce. For example, Canada introduced an initiative to ensure that Canadians have the right skills for changing work and life demands. The program's goal is to enhance nine essential skills that provide the foundation for learning all other skills and enable people to evolve with their jobs and adapt to workplace changes, according to the government. Denmark has had a public system in place since the mid-1960s that allows low-skilled workers to receive free education, wage subsidies, and funding for transportation costs. About one-half of unskilled workers took part in training courses in the past year that were either publicly financed or provided privately by employers, according to a Danish researcher. The UK has also developed an initiative that offers employers training assistance to meet their needs. The UK's Train to Gain program, based on an earlier pilot program, provides employers free training for employees to achieve work-related qualifications. To qualify for Train to Gain, employers need to agree to at least a minimum level of paid time that employees will be allowed to use for training. Employers with fewer than 50 full-time employees are eligible for limited wage subsidies. Train to Gain also provides skills advice to employers and helps match business needs with training providers.
The UK Leitch Review recommended that the government provide the bulk of funding for basic skills training and that all adult vocational skills funding be routed through programs such as Train to Gain. As is the case with other benefits, many training programs aimed at increasing employees’ skills are initiated privately by employers and employees. For example, an employer in Saskatchewan, Canada, reported that he supports employees’ advancement by paying for necessary educational courses, such as those that prepare employees for required licenses. A large government employer in the UK, recognizing the challenges faced by women in a male-dominated field, offers flexible training to make the training more easily accessible to women—training is available online, from work or home, as well as through DVDs that can be viewed at one’s convenience. In the Netherlands, according to an employer representative, most training is developed through agreements in which employers agree to pay. In Denmark, a director in the Ministry of Education reported that some companies give employees the right to 2 weeks per year of continuing education in relevant and publicly funded education. Although Certain Workplace Policies Are Associated with Increased Participation in the Labor Force, in General, Effects Are Not Definitive Research has found that workplace policies such as child care and family leave encourage women to enter and return to the workforce, while evaluations of training policies show mixed results. Readily available child care appears to enable more women to participate in the labor market, especially when it is subsidized and meets quality standards such as having a high staff-to-child ratio and a high proportion of certified staff. Women are also more likely to enter and remain in the workforce if they have paid family leave, although the length of leave affects their employment. An extensive review of available research by the European Commission shows mixed results in whether training helps the unemployed get jobs. Some training initiatives have shown promise but have not been formally evaluated. In general, researchers and officials reported that it is difficult to determine the effects of a policy for a variety of reasons. Child Care and Paid Family Leave Are Associated With an Increase in Labor Force Participation Readily available child care, especially when it is subsidized and regulated with quality standards such as a high staff-to-child ratio and a high proportion of certified staff, appears to increase women’s participation in the labor force by helping them balance work and family responsibilities, according to research from several cross-national studies. Additionally, the European Commission reports that women prolong their time away from work when child care is not subsidized and relatively expensive. Low-wage workers, especially single parents, who are predominantly women, are particularly sensitive to the price of child care, according to a European Commission report. Research from the United States also shows that highly priced child care can deter mothers from working, according to a review of the literature. The association between child care and women’s labor force participation is found in several studies that control for a variety of factors, including individual countries’ cultural norms and experiences. However, the relationship between early childhood education—which acts as child care for some parents—and women’s labor force participation is uncertain. 
Because many unemployed mothers also place their children in subsidized preschool, any impact that the preschool has on encouraging mothers to work may appear to be diminished, according to a cross-national study. Research shows that paid family leave encourages women's employment, but is not conclusive as to the ideal length of family leave to encourage women to return to work. One extensive review of the literature on family leave found that leave increases the chance that women will return to work by the end of the year following the birth. Another study examining paid maternity leave of varying lengths of time in several Western European countries, including Denmark, France, Ireland, and Sweden, concluded that maternity leave may increase women's employment rate by about 3-4 percent. However, if leave is too short, women may quit their jobs in order to care for their children, according to a European Commission report. Another study found that if leave is too lengthy, it may actually discourage women from returning to work after having a child. One researcher stated that French mothers with at least two children returned to the workforce less frequently when they became eligible for 3 years of family leave. In contrast, some researchers found that Sweden's lengthy leave allowed more women to enter and remain in the labor force in the long run. One review of the literature concluded that leave of up to about 1 year is positively associated with women's employment, while another found that after 20 weeks, the effect of leave on employment begins to deteriorate. The Effects of Most Workplace Policies and Practices, Including Training, Are Not Definitive Evaluations of training programs, where they exist, have shown mixed results, but many national and local efforts have shown promise. Research on training program participants from Sweden and Denmark found that training programs do not appear to positively affect all participants' employment. While the Danish government's labor market policies seem to have successfully lowered the overall unemployment rate to around 4 percent by the end of 2006, according to Danish officials, the effect of specific training programs on participants' employment is difficult to discern. On the other hand, a number of evaluations of French training programs suggest that these programs help participants secure jobs. New Zealand's evaluation of two of its training programs, which provide both remedial and vocational skills to participants, found that the training had a small effect on the participants' employability. According to a European Commission report, one researcher's review of 70 training program evaluations, including those in Denmark, France, the Netherlands, Sweden, and the UK, suggested that training programs have a modest likelihood of making a positive impact on post-program employment rates. However, the European Commission reports that many studies on individual outcomes are based upon short-term data, while the effects on participants' employment may not be evident for 1 to 2 years or more. Some national and local training initiatives that we reviewed—both those for the employed and those for the unemployed—have shown promise, although some have not been subject to an evaluation.
For example, an evaluation of the precursor to the UK’s national Train to Gain program found that 8 out of 10 participants believed they had learned new skills, and employers and participants both felt that the training enabled participants to perform better at work. However, the evaluation estimated that only 10-15 percent of the training was new training, while the remaining 85-90 percent of the training would have occurred without the program. Although a planned evaluation has not yet been conducted, an individual UK employer reported that it had trained 43 women for jobs in which they are underrepresented. Fourteen of these women found employment and 29 are in further training. Even where evaluations do exist, it is difficult to determine the effects of any policy for a variety of reasons. Policies affecting female labor force participation interact with cultural factors, such as a country’s ideology concerning social rights and gender equality, according to a researcher from Ireland. In some cases, too, new policies interact with existing ones. For example, a researcher reported that the French government provides payments to mothers who may choose to stay home with their children, while also subsidizing child care that encourages mothers to work. Additionally, changes in the labor market may actually bring about the enactment of policies, rather than the other way around. For example, it is difficult to be sure whether the availability of child care causes women to enter the labor force or if it is an effect of having more women in the workforce, according to one researcher’s review of the relevant literature. Further, few evaluations of certain policies and practices have been conducted in Europe, although this is starting to change, according to the European Commission. Moreover, some policies were recently developed, and governments frequently make changes to existing policies, which may make it difficult to evaluate them. For example, a report by the Canadian government states that flexible work arrangements are relatively new and represent an area in which research is needed. In other cases, a policy simply codified into law a widely used practice. For example, a government official in the Netherlands reported that it was very common for Dutch women to choose to work part-time even before legislation passed that promoted employees’ right to reduce their working hours. While Several Factors Affect Uptake, Employees’ Use of Workplace Benefits Can Have Implications for Employers and Employees The experiences of the countries we reviewed have shown that characteristics of policies, such as the level of payment during leave, can affect whether an employee uses various workplace benefits. For example, the province of Saskatchewan in Canada provides 12 days of unpaid leave per year, but low-wage workers cannot always afford to take it. Similarly, according to a University of Bristol professor, low-income mothers in the UK disproportionately return to the workforce at the end of paid maternity leave whereas more affluent mothers tend to return at the end of unpaid leave. When parental leave can be shared between parents and the level of payment is low, women tend to take the leave, in part because their income level is often lower than their husband’s. 
A report from the European Commission also found that the ability to use leave flexibly, such as for a few hours each day or over several distinct periods rather than all at once, can increase parents' take-up rates for leave, as parents are able to care for their children and stay in the labor force at the same time. Employer views and employee perceptions can also directly affect an employee's use of workplace benefits. Researchers in Canada, for example, found that the ability to arrange a schedule in advance and interrupt it if needed is very important to employees, but that this ability depends on how willing their supervisor is to be flexible. In addition, a cross-national study from the Organisation for Economic Co-operation and Development, which included the countries we reviewed, found that many employers tend to view training for the low-skilled as a cost, rather than an investment, and devote substantially more resources to their high-skilled workers, on average. Since employers tend to target their training to higher-skilled and full-time workers, employees who opt to work part-time may have fewer opportunities for on-the-job training that could help them advance, according to university researchers in the Netherlands. An employee's perceptions of training can also affect his or her uptake of opportunities. Employee representatives from Denmark's largest trade union confederation said that low-skilled employees are more likely to have had negative experiences with education and that these experiences can affect whether they take advantage of workplace training opportunities to increase their skills. Employees' use of workplace benefits can create management challenges for their employers. For example, an employer in Saskatchewan reported that covering for the work of staff on family leave can be complicated. He said that although he was able to hire temporary help to cover an employee on maternity leave, he faced an unexpected staff shortage when the employee decided toward the end of her leave not to return to work and the temporary employee had found another job. An official affiliated with the largest employer association in the Netherlands stated that it can be hard to organize work processes around employees' work interruptions, especially during short-term and unplanned leaves. The use of family leave or part-time work schedules may also have negative implications for an employee's career. Employers have indicated that they would prefer to hire an older woman with children over a younger woman who has yet to have children, according to university researchers in Denmark. In addition, long parental leaves may lead to an actual or perceived deterioration in women's labor market skills, according to an EU report, and can have negative effects on future earnings. According to employee representatives in Canada, in the high-tech sector, where there are rapid changes in technology, the use of parental leave can be particularly damaging. In addition, some part-time jobs have no career advancement opportunities and limited access to other benefits, such as payment during leave and training. Concluding Observations Workplace policies and practices of the countries we studied generally reflect cooperation among government, employer, and employee organizations. Many developed countries have implemented policies and practices that help workers enter and remain in the workforce at different phases of their working lives.
These policies and practices, which include family leave and child care, for example, have been adopted through legislation, negotiated by employee groups, and, at times, independently initiated by private industry groups or individual employers. The U.S. government and businesses, recognizing a growing demand for workplace training and flexibility, also offer benefits and are seeking ways to address these issues to recruit and retain workers. Potentially increasing women's labor force participation by further facilitating a balance of work and family, and improving the skills of low-wage workers throughout their careers, may be important in helping the United States maintain the size and productivity of its labor force in the future, given impending retirements. While other countries have a broader range of workforce benefits, flexibility arrangements, and training initiatives, little is known about the effects of these strategies. Whether the labor force participation gains and any other positive outcomes from adopting other countries' policies would be realized in the United States is unknown. Moreover, any benefits that might come from such initiatives must be weighed against their associated costs. Nonetheless, investigating particular features of such policies and practices in some of the developed countries may provide useful information as all countries address similar issues. This concludes my statement, Madam Vice-Chairwoman. I would be happy to respond to any questions that you or other members of the committee may have. GAO Contact and Staff Acknowledgments For future contacts regarding this testimony, I can be reached at (202) 512-7215. Key contributors to this testimony were Sigurd Nilsen, Diana Pietrowiak, Gretta Goodwin, Avani Locke, Stephanie Toby, Seyda Wentworth, and Charles Willson. Related GAO Products: Women and Low-Skilled Workers: Other Countries' Policies and Practices That May Help These Workers Enter and Remain in the Labor Force. GAO-07-817. Washington, D.C.: June 14, 2007. An Assessment of Dependent Care Needs of Federal Workers Using the Office of Personnel Management's Survey. GAO-07-437R. Washington, D.C.: March 30, 2007. Highlights of a GAO Forum: Engaging and Retaining Older Workers. GAO-07-438SP. Washington, D.C.: February 2007. Workforce Investment Act: Employers Found One-Stop Centers Useful in Hiring Low-Skilled Workers; Performance Information Could Help Gauge Employer Involvement. GAO-07-167. Washington, D.C.: December 22, 2006. Employee Compensation: Employer Spending on Benefits Has Grown Faster than Wages, Due Largely to Rising Costs for Health Insurance and Retirement Benefits. GAO-06-285. Washington, D.C.: February 24, 2006. Social Security Reform: Other Countries' Experiences Provide Lessons for the United States. GAO-06-126. Washington, D.C.: October 21, 2005. Child Care: Additional Information Is Needed on Working Families Receiving Subsidies. GAO-05-667. Washington, D.C.: June 29, 2005. Workforce Investment Act: Substantial Funds Are Used for Training, but Little Is Known Nationally about Training Outcomes. GAO-05-650. Washington, D.C.: June 29, 2005. Highlights of a GAO Forum: Workforce Challenges and Opportunities for the 21st Century: Changing Labor Force Dynamics and the Role of Government Policies. GAO-04-845SP. Washington, D.C.: June 2004. Women's Earnings: Work Patterns Partially Explain Difference between Men's and Women's Earnings. GAO-04-35. Washington, D.C.: October 31, 2003.
Older Workers: Policies of Other Nations to Increase Labor Force Participation. GAO-03-307. Washington, D.C.: February 13, 2003. Older Workers: Demographic Trends Pose Challenges for Employers and Workers. GAO-02-85. Washington, D.C.: November 16, 2001. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Increasing retirements and declining fertility rates, among other factors, could affect labor force growth in many developed countries. To maintain the size and productivity of the labor force, many governments and employers have introduced strategies to keep workers who face greater challenges in maintaining jobs and incomes, such as women and low-skilled workers, in the workforce. This testimony discusses our work on (1) describing the policies and practices implemented in other developed countries that may help women and low-wage/low-skilled workers enter and remain in the labor force, (2) examining the change in the targeted groups' employment following the implementation of the policies and practices, and (3) identifying the factors that affect employees' use of workplace benefits and the resulting workplace implications. The testimony is based on a report we are issuing today (GAO-07-817). For that report, we conducted an extensive review of workforce flexibility and training strategies in a range of developed countries and made site visits to selected countries. Our reviews were limited to materials available in English. We identified relevant national policies in the U.S., but did not determine whether other countries' strategies could be implemented here. The report made no recommendations. The Department of Labor provided technical comments; the Department of State had no comments on the draft report. Governments and employers developed a variety of laws, government policies, and formal and informal practices, including periods of leave, flexible work schedules, child care, and training. Each of the countries we reviewed provides some form of family leave, such as maternity, paternity, or parental leave, that attempts to balance the needs of employers and employees and, often, to help women and low-wage/low-skilled workers enter and remain in the workforce. In Denmark, employed women with a work history of at least 120 hours in the 13 weeks prior to the leave are allowed 18 weeks of paid maternity leave. In addition to family leave for parents, countries provide other types of leave, and have established workplace flexibility arrangements for workers. U.S. federal law allows for unpaid leave under certain circumstances. All of the countries we reviewed, including the United States, also subsidize child care for some working parents through a variety of means, such as direct benefits to parents for child care or tax credits. For example, in Canada, the government provides direct financial support of $100 a month to eligible parents for each child under 6. Last, governments and employers have a range of training and apprenticeship programs to help unemployed people find jobs and to help those already in the workforce advance in their careers. Although research shows that benefits such as parental leave are associated with increased employment, research on training programs is mixed. Leave reduces the amount of time that mothers spend out of the labor force. Cross-national studies show that child care, particularly when it is subsidized and regulated with quality standards, is positively related to women's employment. Available research on training in some of the countries we reviewed shows mixed results in helping the unemployed get jobs. Some local initiatives have shown promise, but evaluations of some specific practices have not been conducted.
Some country officials said it is difficult to attribute effects to a specific policy because the policies are either new or simply codified long-standing practices. While policies do appear to affect workforce participation, many factors can affect the uptake of workplace benefits, and employees' use of these benefits can have implications for employers and employees. For example, employees' use of workplace benefits can create management challenges for their employers. Additionally, employees are more likely to take family leave if they feel that their employer is supportive. However, while a Canadian province provides 12 days of unpaid leave to deal with emergencies or sickness, low-wage workers cannot always afford to take it. Similarly, the uptake of available benefits can have larger implications for an employee's career. Some part-time jobs have no career advancement opportunities and limited access to other benefits. Since employers tend to target their training to higher-skilled and full-time workers, employees who opt to work part-time may have fewer opportunities for on-the-job training that could help them advance, according to researchers in the Netherlands.
Background Real property is generally defined as facilities, land, and anything constructed on or attached to land. The federal government leases real property (referred to in this report as leased space) for a variety of purposes, including office space, warehouses, laboratories, and housing. Agencies and Relevant Laws As the federal government's landlord, GSA designs, builds, manages, and safeguards buildings to support the needs of other federal agencies. GSA is authorized to enter into lease agreements with tenant agencies for up to 20 years when the Administrator of GSA considers them to be in the interest of the federal government and necessary to accommodate a federal agency. GSA uses its authority to lease space for many federal government agencies, and in fiscal year 2009 acquired more than 182 million square feet, the most leased space of any federal agency. In response to our 2005 recommendation and to enhance coordination with FPS, GSA established the Building Security and Policy Division within the Public Buildings Service. The division developed the Regional Security Network, which consists of several security officials for each of GSA's 11 regions, to further enhance coordination with FPS at the regional and building levels and to carry out GSA security policy in collaboration with FPS and tenant agencies. Some agencies have independent or delegated leasing authority, which allows the agency to perform all necessary functions to acquire leased space without using GSA. In fiscal year 2009, VA, USDA, and DOJ, using GSA-delegated and/or independent leasing authority, leased a total of approximately 30 million square feet to help meet their varying missions. Specifically, VA leased approximately 10 million square feet and has a large inventory of real property, including medical centers, outpatient facilities, and ambulatory care clinics. USDA leased approximately 17 million square feet. USDA uses leased space to administer programs that assist farmers and rural communities, oversee the safety of meat and poultry, provide low-income families access to nutritious food, and protect the nation's forests, among other things. DOJ leased approximately 3 million square feet. DOJ comprises about 40 component agencies with wide-ranging missions, such as the U.S. Attorneys' Offices, the Drug Enforcement Administration (DEA), and the Federal Bureau of Investigation (FBI). The Homeland Security Act of 2002 established DHS to centralize the federal government's efforts to prevent and mitigate terrorist attacks within the United States—including terrorism directed at federal facilities. Under the act, FPS was transferred from GSA to DHS. As of October 2009, FPS is organized within DHS's National Protection and Programs Directorate. FPS is the primary federal agency responsible for protecting and securing GSA facilities, visitors, and over 1 million federal employees across the country. FPS's basic security services include patrolling the building perimeter, monitoring building perimeter alarms, dispatching law enforcement officers through its control centers, conducting criminal investigations, and performing facility security assessments. FPS also provides building-specific security services, such as controlling access to building entrances and exits and checking employees and visitors. FPS is a fully reimbursable agency—that is, its services are fully funded by security fees collected from tenant agencies.
FPS charges each tenant agency a basic security fee per square foot of space occupied in a GSA building (66 cents per square foot in fiscal year 2009), among other fees. ISC, established in 1995 by Executive Order 12977 after the bombing of the Alfred P. Murrah federal building in Oklahoma City, has representation from all the major property-holding agencies and a range of governmentwide responsibilities related to protecting nonmilitary facilities. These responsibilities generally involve developing policies and standards, ensuring compliance, and encouraging the exchange of security-related information. Executive Order 12977 called for each executive agency and department to cooperate and comply with the policies and recommendations of the Committee. DHS became responsible for chairing ISC, which, as of 2007, is housed in the Office of Infrastructure Protection within DHS's National Protection and Programs Directorate. Executive Order 13286, which amended Executive Order 12977, calls for the Secretary of DHS to monitor federal agency compliance with the standards issued by ISC. The 2004 standards, in conjunction with the Facility Security Level Determinations for Federal Facilities—which ISC issued in 2008 to update standards issued by DOJ in 1995—prescribed administrative procedures and various countermeasures for perimeter, entry, and interior security, as well as blast and setback requirements, for leased spaces based upon five different facility security levels ranging from level I to level V, with level I being the lowest risk level and level V being the highest. The 2004 standards were specifically developed in response to a perceived need for security standards that could be applied in a leased space environment. The Facility Security Level Determinations for Federal Facilities and its precursors established the criteria and process for determining the security level of a facility, which serves as the basis for implementing the countermeasures prescribed within other ISC standards, including the 2004 standards. According to the 2004 standards, when an agency is seeking a new lease, a security official should determine the security level of the leased space based on an early risk assessment, which is performed prior to entering into a new lease. Requirements based on the designated facility security level, as outlined within the standards, are to be incorporated into a solicitation for offers (SFO), which is sent to potential lessors, as minimum requirements. These minimum requirements must be met, with the exception of blast and setback requirements in existing buildings. Potential lessors who are unwilling or unable to meet the requirements are deemed nonresponsive according to the standards and eliminated from the SFO process. After a lease is entered into, the Facility Security Level Determinations for Federal Facilities states that risk assessments, such as facility security assessments (FSA), are to be conducted on a periodic and timely basis, with the facility security level being determined or adjusted as part of each risk assessment. Specifically, risk assessments are to be conducted every 5 years for facilities classified as facility security level I or II, and every 3 years for facilities classified as facility security level III, IV, or V.
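To make the figures above concrete, the short Python sketch below (illustrative only, not an FPS or ISC tool; the function names and the sample square footage are hypothetical) computes a tenant's annual basic security charge from the fiscal year 2009 rate and looks up the reassessment interval implied by a facility security level.

```python
# Illustrative sketch (not an official FPS or ISC tool): computes a tenant's
# annual basic security charge and the risk-assessment interval implied by a
# facility security level, using the figures cited in this report.

BASIC_FEE_PER_SQ_FT = 0.66  # FY 2009 basic security fee, in dollars per square foot

# Facility security levels I and II: reassess every 5 years; III through V: every 3 years.
REASSESSMENT_INTERVAL_YEARS = {1: 5, 2: 5, 3: 3, 4: 3, 5: 3}


def annual_basic_security_fee(occupied_sq_ft: float) -> float:
    """Return the annual basic security fee for space occupied in a GSA building.

    Other FPS charges (e.g., building-specific services) are not modeled here.
    """
    return occupied_sq_ft * BASIC_FEE_PER_SQ_FT


def reassessment_interval(facility_security_level: int) -> int:
    """Return the number of years between facility risk assessments."""
    return REASSESSMENT_INTERVAL_YEARS[facility_security_level]


if __name__ == "__main__":
    # Example: a 9,500-square-foot lease (below the 10,000-square-foot threshold
    # discussed later in this report) at facility security level II.
    print(f"Annual basic fee: ${annual_basic_security_fee(9_500):,.2f}")  # $6,270.00
    print(f"Reassess every {reassessment_interval(2)} years")             # 5 years
```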
Key Practices in Facility Protection We have previously identified, from the collective practices of federal agencies and the private sector, a set of key facility protection practices that provide a framework for guiding agencies' physical security efforts and addressing challenges. Key facility protection practices, as shown in figure 1, include the following: Information sharing and coordination establishes a means of communicating information with other government entities and the private sector to prevent and respond to security threats. Allocating resources using risk management involves identifying potential threats, assessing vulnerabilities, identifying the assets that are most critical to protect in terms of mission and significance, and evaluating mitigation alternatives for their likely effect on risk and their cost. Aligning assets to mission can reduce vulnerabilities by reducing the number of assets that need to be protected. Strategic human capital management ensures that agencies are well equipped to recruit and retain high-performing security staff. Leveraging technology supplements other countermeasures with technology in a cost-effective manner. Performance measurement and testing evaluates efforts against broader program goals and ensures that they are met on time and within budget constraints. Limited Risk Information and Lack of Control Over Common Areas and Public Access Pose Challenges to Protecting Leased Space Leasing Officials Sometimes Lack the Information Needed to Employ a Risk Management Approach Before a lease is signed, early risk assessments can help agencies allocate resources using a risk management approach, a key practice of facility protection. Through early risk assessments, security officials are able to collect key information about potential spaces, security risks, and needed countermeasures, which helps leasing officials, in turn, identify the most appropriate space to lease and negotiate any needed countermeasures. Leasing officials primarily rely on security officials to supply information on physical security requirements for federally leased space. Some tenant agencies are able to supply leasing officials with key prelease information because they have developed the security expertise to conduct their own early risk assessments. For example, DEA has its own in-house security officials who work with leasing officials to conduct risk assessments early in the leasing process. This helps leasing officials assess risk and obtain space specific to DEA's security needs. Similarly, VA has created internal policy manuals that describe agency security requirements and help guide leasing and security officials on how to assess risk and obtain appropriate space. These manuals are circulated to VA leasing, facilities, and security officials, and GSA leasing officials are made familiar with VA's physical security requirements early in the leasing process for GSA-acquired space. Additionally, VA currently budgets $5 per net usable square foot for physical building security and sustainability requirements in all of its leases. At one site, VA officials are in the early stages of identifying space needs for the relocation of a community-based outpatient clinic. VA leasing officials and security officials, among others, are collaborating on decisions that integrate security with the function of the outpatient clinic, which will help ensure funds are available to finance the security requirements.
Despite the in-house expertise of some tenant agencies, leasing officials sometimes do not have the information they need to allocate resources using a risk management approach before a lease is signed because early risk assessments are not conducted for all leased space. Early risk assessments are absent for a significant portion of the GSA-acquired leased space portfolio because FPS does not uniformly conduct these assessments for spaces under 10,000 square feet—which constitute 69 percent of all GSA leases (see figure 2). While FPS is expected under the MOA to uniformly conduct early risk assessments for GSA-acquired space greater than or equal to 10,000 square feet, FPS and GSA officials agree that FPS is not expected to conduct early risk assessments for spaces under 10,000 square feet unless it has the resources to do so. As we have previously reported, FPS faces funding and workforce challenges, which may limit the resources available to conduct early risk assessments on spaces under 10,000 square feet. Further, FPS may lack incentive for prioritizing early risk assessments on smaller spaces, given that it receives payment on a square footage basis only after a lease has been signed. Currently, the cost of early risk assessments is distributed across all tenant agencies. We are examining FPS’s fee structure as part of our ongoing work in the federal building security area. According to FPS officials, FPS generally does not have enough time to complete early risk assessments on spaces less than 10,000 square feet, in part because GSA has requested early risk assessments too late or too close to the time when a site selection must be made. A GSA official involved with physical security stated that even when GSA gives FPS proper lead time, early risk assessments are still sometimes not conducted by FPS. For example, in October 2009, GSA requested FPS conduct an early risk assessment for a leased space under 10,000 square feet within 8 months. One week prior to the June 2010 deadline, GSA was still unsure if an FPS inspector had been assigned and if a risk assessment had been or would be conducted. Because FPS did not keep centralized records of the number of early risk assessments requested by GSA or completed by FPS in fiscal year 2009, we were unable to analyze how often early risk assessments are requested and the percentage of requested assessments that FPS completes. Leasing and security officials from our case study agencies agreed they are best able to negotiate necessary countermeasures before a lease is executed. Because of the immediate costs associated with relocating, after a tenant agency moves in, it may be forced to stay in its current leased space, having to accept unmitigated risk (if countermeasures cannot be negotiated) or expend additional time and resources to put countermeasures in place (and negotiate supplemental lease agreements) once a lease has been signed. For example, a DEA leasing official stated that relocation is often not a viable solution given costs, on average, of between $10 and $12 million to find and move an office to a new space. Furthermore, at one of our site visits, DEA officials have been working to install a costly fence—a DEA physical security requirement for this location that was originally planned as part of the built-to-suit facility, but canceled because of a lack of funds. 
According to DEA officials, now that DEA has acquired funding for the fence, they have been negotiating for more than a year with GSA and the lessor to receive supplemental lease agreements, lessor’s design approval, and resolve issues over the maintenance and operation of the fence. According to DEA officials, fence construction is expected to commence in January 2011. Tenant Agencies’ Lack of Control Over Common Areas in Leased Space Can Hamper Their Ability to Mitigate Risks Balancing public access with physical security and implementing security measures in common areas of federally leased space are major challenges. The public visits both owned and leased federal facilities for government services, as well as for other business transactions. In leased space, the number and range of people accessing these buildings can be large and diverse, and building access is generally less restricted than in owned space. Fewer access restrictions and increased public access heighten the risk of violence, theft, and other harm to federal employees and the public. In leased space, it can be more difficult to mitigate risks associated with public access because tenant agencies typically do not control common areas, which are usually the lessor’s responsibility, particularly in multitenant buildings. Common areas, as shown in figure 3, can include elevator lobbies, building corridors, restrooms, stairwells, loading docks, the building perimeter, and other areas. FSAs can identify countermeasures to address risks with public access, but FSA recommendations can be difficult to implement because tenant agencies must negotiate all changes with the lessor. Lessors may resist heightened levels of security in common areas—such as restricted public access—because of the potential adverse effect on other tenants in the building. For example, a multitenant facility security level IV building we visited, housing the United States Forest Service among other federal agencies, experienced difficulty installing X-ray machines and magnetometers in the main lobby. The lessor deemed these proposed countermeasures inconvenient and disruptive for some other tenants, including two commercial businesses located on the ground floor—a daycare center and a sundries shop—and for the public. Because the livelihood of these businesses depends on pedestrian traffic and because federal tenant agencies did not lease the lobby, per se, the lessor resisted having additional security countermeasures in place that would restrict public access. While some tenant agency officials at our site visits stated that lessors were responsive to security needs in common areas, other tenant agency officials we spoke with said that negotiating security enhancements to common areas with lessors is a problem that can lead to a lack of assurance that security risks and vulnerabilities are being mitigated. A regional GSA official involved with physical security stated that because GSA and tenant agencies do not control common areas in buildings where they lease space, it can be challenging to secure loading docks, hallways, and corridors. Another regional GSA official involved with physical security stated that tenant agencies do what they can by implementing countermeasures in their own leased space rather than in common areas, for example, by regulating access at the entrances to leased space rather than at the building entrances. 
At one site, an FBI official indicated that by relocating to a new leased space, FBI, as the sole tenant, would be able to better control common areas and public access. Overall, the negative effects of these challenges are significant because GSA, FPS, and tenant agencies can be poorly positioned to implement the practices that we have identified as key to protecting the physical security of leased spaces. Tenant agencies that are unable to identify and address vulnerabilities may choose space poorly, misallocate resources, and be limited in their ability to implement effective countermeasures. Furthermore, when tenant agencies are unable to allocate resources according to identified vulnerabilities, they may also be unable to employ the other key practices in facility protection. For example, tenant agencies may not be able to leverage technology to implement the most appropriate countermeasures if doing so requires a presence in common areas that are not under the control of the federal tenant. The 2010 Standards Show Potential for Addressing Some Challenges with Leased Space The 2010 Standards' Focus on Decision Making and Documentation Aligns with Some Key Facility Protection Practices In April 2010, ISC issued the Physical Security Criteria for Federal Facilities, also known as the 2010 standards. These standards define a decision-making process for determining the security measures required at a facility. According to the standards, it is critical that departments and agencies recognize and integrate the process as part of the real property acquisition process (i.e., leasing process) in order to be most effective. The 2010 standards provide in-depth descriptions of the roles of security officials who conduct and provide early risk assessments, the tenant agency, and the leasing agency (e.g., GSA) and also define each entity's respective responsibilities for implementing the standards' decision-making process. For example, the 2010 standards state that: Tenant agencies are the decision makers as to whether to fully mitigate or accept risk. Tenant agencies must either pay for the recommended security measures and reduce the risk, or accept the risk and live with the potential consequences. Leasing officials will determine how additional countermeasures will be implemented or consider expanding the delineated area, in conjunction with the tenant agency, during the leasing acquisition process. Security officials are responsible for identifying and analyzing threats and vulnerabilities and recommending appropriate countermeasures. Once a credible and documented risk assessment has been presented to and accepted by the tenant agency, the security official is not liable for any future decision to accept risk. The 2010 standards align with some key practices in facility protection because these standards focus on allocating resources using a risk management approach and measuring performance. As previously discussed, having information on risks and vulnerabilities allows tenant agencies to maximize the impact of limited resources and assure that the most critical risks are being prioritized and mitigated. Likewise, performance measurement, via tracking and documentation of decision making, can help agencies to determine the effectiveness of security programs and establish accountability at the individual facility level.
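One way to picture the decision-making roles and the documentation emphasis described above is the minimal Python sketch below. It is purely illustrative: the record structure, field names, and example values are assumptions made for this sketch and are not drawn from the ISC standards themselves.

```python
# Illustrative sketch only: a simplified model of the documented accept-or-mitigate
# decisions described in the 2010 standards. Class and field names are hypothetical.

from dataclasses import dataclass, field
from typing import List


@dataclass
class CountermeasureDecision:
    countermeasure: str                 # e.g., screening at an entrance
    recommended_by: str                 # security official who conducted the assessment
    implemented: bool                   # tenant agency's decision to pay and mitigate, or not
    rationale: str                      # why risk was mitigated or accepted
    alternatives_considered: List[str] = field(default_factory=list)


@dataclass
class FacilityDecisionRecord:
    """Per-facility documentation that an agency could aggregate portfolio-wide."""
    facility: str
    security_level: int
    decisions: List[CountermeasureDecision] = field(default_factory=list)

    def accepted_risks(self) -> List[CountermeasureDecision]:
        # Decisions where the tenant agency chose to accept risk rather than
        # pay for the recommended countermeasure.
        return [d for d in self.decisions if not d.implemented]


# Example: the tenant agency accepts risk on one recommendation and documents why.
record = FacilityDecisionRecord(facility="Hypothetical leased office", security_level=3)
record.decisions.append(CountermeasureDecision(
    countermeasure="Magnetometer in shared lobby",
    recommended_by="Security official",
    implemented=False,
    rationale="Lessor declined changes to the common area; tenant agency accepted the risk",
    alternatives_considered=["Screening at suite entrance"],
))
print(len(record.accepted_risks()), "risk(s) accepted and documented")
```

Aggregating such per-facility records is, in spirit, what the "final building report" discussed below would allow an agency to do across its portfolio.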
By allocating resources using a risk management approach and measuring performance, tenant agencies and the federal government will be better positioned to comprehensively and strategically mitigate risk across the entire portfolio of real property. Allocating resources using a risk management approach is a central tenet of the 2010 standards. The 2010 standards prescribe a decision-making process to determine the risk posed to a facility (level of risk), the commensurate scope of security (level of protection) needed, and the acceptance of risk when countermeasures will not be implemented at all or not implemented immediately. Like the 2004 standards, the 2010 standards outline a minimum set of physical security countermeasures for a facility based on the space's designated facility security level. The 2010 standards allow for this level of protection to be customized to address site-specific conditions in order to achieve an acceptable level of risk. The 2004 standards allowed for some countermeasures to be unmet due to facility limitations, building owner acceptance, lease conditions, and the availability of adequate funds, but required a plan for moving to security-compliant space in the future in such instances. According to the 2004 standards, these exemptions allowed agencies to obtain the best security solution available when no compliant space was available. According to the ISC Executive Director, the 2004 standards were, in effect, lower standards because of the operational considerations given to leased space. The Executive Director said that the 2010 standards correct this weakness by focusing on decision making that can lead to an acceptable level of protection and risk through a variety of means, rather than simply prescribing a fixed set of countermeasures that can then be circumvented by exemptions, as in the 2004 standards. Additionally, the 2010 standards emphasize documentation of the decision-making process—a cornerstone for performance measurement. The 2004 standards required agencies to provide written justification for exceeding the standard and documentation of the limiting conditions that necessitated going below the standard. The 2010 standards more explicitly state that "the project documentation must clearly reflect the reason why the necessary level of protection cannot be achieved. It is extremely important that the rationale for accepting risk be well documented, including alternate strategies that are considered or implemented, and opportunities in the future to implement the necessary level of protection." More specifically, the 2010 standards state that any decision to reject implementation of countermeasures outright or defer implementation due to cost (or other factors) must be documented, including the acceptance of risk in such circumstances, and that tenant agencies should retain documents pertinent to these decisions, such as risk assessments. The ISC Executive Director stated that after the standards are fully implemented, the federal government will be able to accurately describe the state of federal real property and physical security. For each facility, there will be documentation—a "final building report"—containing information on physical security decision making, including the costs of implementing countermeasures. Each agency will be able to assess its entire portfolio of real property by aggregating these final building reports to determine the overall status and cost of physical security.
These reports will be able to demonstrate the federal government's level of protection against potential threats, according to the Executive Director. We agree that if the standards succeed in moving agencies to track and document such information at a building level, then tenant agency, leasing, and security officials will be better able to determine if the most critical risks are being prioritized and mitigated across an entire real property portfolio and to determine the gaps and efficacy of agency-level security programs. ISC Standards Could Spur Agencies to Allocate the Resources Necessary for Early Risk Assessment Early risk assessments are key initial steps in the decision-making process prescribed by the 2010 standards. The standards contain a direct call for risk assessments to be conducted and used early in the leasing process. The standards prescribe the following: Prospective tenant agencies will receive information regarding whether the level of protection can be achieved in a delineated area. Security officials will conduct risk assessments and determine facility security levels early to identify required countermeasures that leasing officials should include within SFOs. Security officials will evaluate the proposed security plans of potential lessors responding to the SFOs and update the risk assessment on offers in the competitive range to identify threats and vulnerabilities for the specific properties and recommend any additional security measures to tenant agencies and leasing officials. The 2004 standards outlined more broadly that the initial facility security level should be determined by a security official based on a risk assessment and that those potential lessors who are unwilling or unable to meet the standard be considered nonresponsive to the SFO. The 2010 standards also make no distinctions based on a space's square footage or any other wholesale factor and provide no exemptions from the requirement for early risk assessments of leased space. Like the 2004 standards, the 2010 standards apply to all buildings and facilities in the United States occupied by federal employees for nonmilitary activities. Further, according to the 2010 standards, each executive agency and department shall comply with the policies and recommendations prescribed by the standards. Given this, the 2010 standards' language on early risk assessments, as previously discussed, should encourage agencies to perform and use these assessments in leased space—including spaces under 10,000 square feet. Specifically, language within the standards directing agencies to uniformly perform and use early risk assessments as part of the prescribed decision-making process is useful because it provides a baseline for agencies to consider as they develop protocols and allocate resources for protecting leased space. Since leased space for nonmilitary activities acquired by GSA is subject to ISC standards, and FPS provides security services for GSA-acquired leased space, it is up to both agencies to determine how to meet the 2010 standards in light of available resources. However, as previously discussed, FPS already faces resource and other challenges in conducting these early risk assessments. Given these current challenges, it will likely be difficult for FPS to meet the 2010 standards, which would necessitate an expansion of the services FPS is expected to perform under the current MOA.
In October 2009, we reported that FPS and GSA recognized that the MOA renegotiation can serve as an opportunity to discuss service issues and develop mutual solutions. Both FPS and GSA officials reported that the delivery of early risk assessments was being reviewed as part of the MOA. As part of the MOA renegotiations, GSA's Regional Security Network developed a flowchart to expressly show the need for FPS services, such as early risk assessments. According to FPS officials, one of the goals of the MOA is to clarify how early, and from whom within FPS, GSA officials ought to request these risk assessments.

Other agencies will also have to consider how they will meet the 2010 standards' requirement for early risk assessments. VA and USDA have efforts underway to further standardize their leasing guidance, which represent opportunities to do just that. According to VA officials, VA will review and update its leasing and security manuals to reflect the 2010 standards and is currently assessing what other additional revisions to these manuals may be warranted. VA can now incorporate the 2010 standards' baseline decision-making process for its leasing and security officials, which would help support the use of early risk assessments. USDA is also modifying a department-level leasing handbook to incorporate the 2010 standards, since leasing officials can play a significant role in physical security during the leasing process, particularly given the limited number of security officials within USDA. Additionally, USDA is considering realigning its few security officials to report to a department-level office (rather than be organized under each agency) in order to maximize available resources for performing activities such as risk assessments. According to officials from agencies within VA and USDA, department-level direction is a valuable resource that leasing officials rely on for determining what activities must be undertaken during the leasing process.

ISC Standards Lack Guidance for Working with Lessors

A shortfall within the 2010 standards is that they do not fully address the challenge of not controlling common areas and public access in leased space. Though the standards speak to tenant agencies, leasing officials, and security officials about their various roles and responsibilities in implementing the standards, they lack in-depth discussion for these entities about how to work with lessors to implement countermeasures. The 2010 standards outline specific countermeasures for addressing public access as part of protecting a facility's entrance and interior security, such as signage, guards, and physical barriers. Similar to the 2004 standards, the 2010 standards acknowledge that the ability to implement security countermeasures is dependent on lessors. Nevertheless, as in the 2004 standards, there is little discussion of ways for tenant agencies, leasing officials, and security officials to work with or otherwise leverage lessors, which in our view is a significant omission given that implementing countermeasures can depend largely on lessors' cooperation. Given the critical role that lessors play, guidance from ISC—such as best practices—could help tenant agencies, leasing officials, and security officials meet the baseline level of protection that the 2010 standards prescribe for leased space.
Best practices comprise the collective practices, processes, and systems of leading organizations, including federal agencies and the private sector. Best practices can provide agencies, however diverse and complex, with a framework for meeting similar mission goals, such as facility protection. Guidance on working with lessors could suggest such practices as the inclusion of clauses within SFOs and lease agreements that obligate lessors to a level of protection in common areas as defined in ISC standards (i.e., deemed necessary by tenant agencies, in conjunction with security officials, as the result of FSAs conducted after a lease is executed). Currently, GSA standard leasing templates contain language stipulating that lessors must provide a level of security that reasonably prevents unauthorized entry during nonduty hours and deters loitering or disruptive acts in and around leased space. Prior to the execution of the lease, leasing officials and tenant agencies could also negotiate or stipulate a cost-sharing structure with lessors in the event that future countermeasures are needed. For example, GSA standard leasing templates already reserve the right of the government to temporarily increase security in the building under lease, at its own expense and with its own personnel, during heightened security conditions due to emergency situations. A best practice could be that such existing language regarding common areas and the implementation of security countermeasures be articulated and linked to ISC standards more definitively within SFOs and lease agreements. This could provide tenant agencies, leasing officials, and security officials with the leverage necessary to compel lessors to allow or cooperatively implement security countermeasures in common areas in order to mitigate risks from public access.

As the government's central forum for exchanging information and guidance on facility protection, ISC is well positioned to develop and share best practices. ISC has the capacity to create a working group or other mechanism to address this gap in its 2010 standards. ISC has previously developed best practices on physical security issues, and one of its five standing subcommittees is focused on developing best practices related to technology. Officials from our case study agencies reported that their agencies use ISC guidance and standards in developing policies and protocols for physical security and leasing. Moreover, we have reported that previous ISC standards have been viewed as useful in communicating increased physical security needs to private owners and involving them directly in the process of security program development for their buildings.

Conclusions

Federal agencies continue to rely on leased space to meet various missions, but the limited use of early risk assessments and a lack of control over common areas present challenges to protecting this space. Though risks can never be completely predicted or eliminated, it is imperative to address these challenges because they leave GSA, FPS, and tenant agencies poorly positioned to implement key practices in facility protection, such as allocating resources using a risk management approach, leveraging technology, and measuring performance.
As the government-wide standard for protecting nonmilitary federal facilities, the 2010 standards are aligned with some of these practices, providing direction on the roles of various entities and their responsibilities in achieving minimum levels of protection and acceptable levels of risk. Specifically, the 2010 standards hold promise for positioning the federal government to begin comprehensively assessing risks with their requirement for documenting building-specific security decision making. The 2010 standards' prescription that risk assessments be used early in all new lease acquisitions is significant because it could provide the impetus for agencies to examine and allocate the resources needed for implementing early risk assessments, in particular for leases under 10,000 square feet. In contrast, the standards' lack of discussion on working with lessors is notable, given the significant role these entities have in implementing countermeasures that could mitigate risks from public access, particularly in common areas, such as lobbies and loading docks. Guidance to tenant agencies, leasing officials, and security officials on how to work with lessors, such as best practices, would give helpful direction as these entities work together to secure common areas and protect leased space.

Recommendation for Executive Action

To enhance the value of ISC standards for addressing challenges with protecting leased space, we recommend that the Secretary of Homeland Security instruct the Executive Director of the ISC, in consultation, where appropriate, with ISC member agencies, to (1) establish an ISC working group or other mechanism to determine guidance for working with lessors, which may include best practices to secure common areas and public access, and (2) subsequently incorporate these findings into a future ISC standard or other product, as appropriate.

Agency Comments and Our Evaluation

We provided a draft of this report to DHS, GSA, VA, USDA, and DOJ for review and comment. DHS concurred with our recommendation, and GSA, VA, USDA, and DOJ provided technical comments, which we incorporated as appropriate. DHS's comments are contained in appendix I.

We will send copies of this report to the Secretary of Homeland Security, the Director of FPS, the Administrator of GSA, the Secretary of VA, the Secretary of Agriculture, the Attorney General, and appropriate congressional committees. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix II.

Appendix I: Comments from the Department of Homeland Security

Appendix II: GAO Contact and Staff Acknowledgments

Staff Acknowledgments

In addition to the contact named above, David E. Sausville, Assistant Director; Delwen Jones; Susan Michal-Smith; Sara Ann Moessbauer; Meghan Squires; Kyle Stetler; and Friendly Vang-Johnson made key contributions to this report.
The federal government's reliance on leased space underscores the need to physically secure this space and help safeguard employees, visitors, and government assets. In April 2010 the Interagency Security Committee (ISC), composed of 47 federal agencies and departments and chaired by the Department of Homeland Security (DHS), issued Physical Security Criteria for Federal Facilities (the 2010 standards), which supersede previous ISC standards. In response to Congress' direction to review ISC standards for leased space, this report (1) identifies challenges that exist in protecting leased space and (2) examines how the 2010 standards address these challenges. To conduct this work, GAO analyzed agency documents and interviewed federal officials from ISC, four federal departments selected as case studies based on their large square footage of leased space, and the Federal Protective Service (FPS). GAO also consulted prior work on federal real property and physical security, including key practices in facility protection.

Limited information about risks and the inability to control common areas and public access pose challenges to protecting leased space. Leasing officials do not always have the information needed to employ a risk management approach for allocating resources--a key practice in facility protection. Early risk assessments--those conducted before a lease is executed--can provide leasing officials with valuable information; however, FPS, which is the General Services Administration's (GSA) physical security provider, generally does not perform these assessments for leased space under 10,000 square feet--which constitutes a majority of GSA's leases. Under its memorandum of agreement (MOA) with GSA, FPS is not expected to perform these assessments and does not have the resources to do so. Another challenge in protecting leased space is tenant agencies' lack of control over common areas (such as elevator lobbies, loading docks, and the building's perimeter), which hampers their ability to mitigate risk from public access to leased space. In leased space, lessors, not tenant agencies, typically control physical security in common areas. To implement measures to counter risks in common areas, tenant agencies must typically negotiate with and obtain consent from lessors, who may be unwilling to implement countermeasures because of the potential burden or undue effect on other, nonfederal tenants. For example, tenant agencies in a high-risk, multitenant leased facility we visited have been unable to negotiate changes to the common space, including the installation of X-ray machines and magnetometers, because the lessor believed that the proposed countermeasures would inconvenience other tenants and the public.

The 2010 standards show potential for addressing some challenges with leased space. These standards align with some key practices in facility protection because they prescribe a decision-making process to determine, mitigate, and accept risks using a risk management approach. Further, by requiring that decision making be tracked and documented, the standards facilitate performance measurement that could enable agency officials to determine if the most critical risks are being prioritized and mitigated. With their emphasis on the uniform use of early risk assessments, the 2010 standards provide a baseline requirement for agencies to consider as they develop protocols and allocate resources for protecting leased space.
For example, GSA and FPS must now consider this requirement, which represents an expansion of the services currently expected of FPS, as they renegotiate their MOA. In contrast, a shortfall within the 2010 standards is that they do little to address tenant agencies' lack of control over common areas and public access. While the 2010 standards outline specific countermeasures for addressing public access, they lack in-depth discussion and guidance--such as best practices--that could provide a framework for working with lessors to implement these countermeasures. Given the critical role that lessors play, such guidance is warranted. As the government's central forum for exchanging information on facility protection, ISC is well positioned to develop and share this guidance.
GAO/GGD-99-38
Background

Congress passed LDA and IRC sections 4911 and 162(e) at different times and for different purposes. LDA, which was enacted in 1995 and became effective on January 1, 1996, requires organizations that lobby certain federal officials in the legislative and executive branches to register with the Secretary of the Senate and the Clerk of the House of Representatives. It also requires lobbying organizations that register to semiannually report expenditures and certain other information related to their lobbying efforts. Congress intended LDA's registration and reporting requirements to provide greater public disclosure of attempts by paid lobbyists to influence decisions made by various federal legislative and executive branch officials. Unlike LDA, neither IRC section 162(e) nor section 4911 was intended to facilitate the public disclosure of lobbying. IRC section 4911, which was enacted in 1976, provides for a limit on the amount of lobbying by 501(c)(3) organizations and thereby helps clarify the extent to which these public charities can lobby without jeopardizing their tax-exempt status. Section 162(e), as amended in 1993, denies the federal income tax deductibility of certain lobbying expenses for businesses. It does not otherwise place restrictions on lobbying activities.

Registration

LDA requires lobbying organizations, such as lobbying firms, to register with the Secretary of the Senate and the Clerk of the House of Representatives no later than 45 days after they first make a lobbying contact on behalf of a client. Also, organizations that have employees who lobby on behalf of the organizations—the organizations on which this report focuses—must register under LDA. The lobbying registration includes such information as the registering organization's name and address; the client's name and address; the names of all individuals acting as lobbyists for the client; the general and specific issues to be addressed by lobbying; and organizations substantially affiliated with the client, including foreign organizations. An organization that has employees who lobby on the organization's behalf must identify itself as both the registering organization and the client, because the organization's own employees represent the organization.

LDA includes minimum dollar thresholds in its registration requirements. Specifically, an organization with employees who lobby on the organization's behalf does not have to register under LDA unless its total lobbying expenses exceed or are expected to exceed $20,500 during the 6-month reporting period (i.e., January through June and July through December of each year). LDA also includes minimum thresholds for determining which employees must be listed as lobbyists in the lobbying registration. Under LDA, to be listed as a lobbyist, an individual must make more than one lobbying contact and must spend at least 20 percent of his or her time engaged in lobbying activities on behalf of the client or employing organization during the 6-month reporting period. An organization must have both $20,500 in lobbying expenses and an employee who makes more than one lobbying contact and spends at least 20 percent of his or her time lobbying before it is required to register under LDA.

Reporting

All organizations that register under LDA must file lobbying reports with the Secretary of the Senate and Clerk of the House of Representatives for every 6-month reporting period.
The lobbying reports filed under LDA by organizations that lobby on their own behalf must include the following disclosures:

- total estimated expenses relating to lobbying activities (total expenses are reported either by checking a box to indicate that expenses were less than $10,000 or by including an amount, rounded to the nearest $20,000, for expenses of $10,000 or more);
- a three-digit code for each general issue area (such as AGR for Agriculture and TOB for Tobacco) addressed during lobbyists' contacts with federal government officials;
- specific issues, such as bill numbers and references to specific executive branch actions, that are addressed during lobbyists' contacts with federal government officials;
- the House of Congress and federal agencies contacted;
- the name of each individual who acted as a lobbyist; and
- the interest of the reporting organization's foreign owners or affiliates in each specific lobbying issue.

Once a lobbying organization registers, it must file reports semiannually, regardless of whether it has lobbied during the period, unless it terminates its registration.

Option of Using IRC Lobbying Definitions

Under LDA, lobbying firms that are hired to represent clients are required to use the LDA lobbying definition. However, LDA gives organizations that lobby on their own behalf and that already use an IRC lobbying definition for tax purposes the option of using the applicable IRC lobbying definition (IRC section 4911 or 162(e)), instead of the LDA lobbying definition, for determining whether the LDA registration threshold of $20,500 in semiannual lobbying expenses is met and for calculating the lobbying expenses to meet the LDA reporting requirement. For all other purposes of the act, including reporting issues addressed during contacts with federal government officials and the House of Congress and federal agencies contacted, LDA provides that organizations using an IRC definition must (1) use the IRC definition for executive branch lobbying and (2) use the LDA definition for legislative branch lobbying. By allowing certain organizations to use an IRC definition to calculate lobbying expenses, LDA helps those organizations avoid having to calculate their lobbying expenses under two different lobbying definitions—the LDA definition for reporting under LDA and the applicable IRC definition for calculating those expenses for tax purposes. An organization that chooses to use the applicable IRC definition, instead of the LDA definition, to calculate its lobbying expenses must use the IRC definition for both lobbying reports filed during a calendar year. However, from one year to the next, the organization can switch between using the LDA definition and using the applicable IRC definition.

Objectives, Scope, and Methodology

Under LDA, we are required to report to Congress on (1) the differences among the definitions of certain lobbying-related terms found in LDA and the IRC, (2) the impact that any differences among these definitions may have on filing and reporting under the act, and (3) any changes to LDA or to the appropriate sections of the IRC that the Comptroller General may recommend to harmonize the definitions.
As agreed with your offices, our objectives for this report were to

- describe the differences among the LDA and IRC section 4911 and 162(e) lobbying definitions;
- determine the impact that differences in the definitions may have on registration and reporting under LDA, including information on the number of organizations using each definition and the expenses they have reported; and
- identify and analyze options, including harmonizing the three definitions, that may better ensure that the public disclosure purposes of LDA are realized.

To identify the differences among the LDA and IRC lobbying definitions, we reviewed the relevant statutory provisions. We also reviewed related regulations and guidance, including guidance issued by the Secretary of the Senate and the Clerk of the House of Representatives. We also reviewed journal articles and an analysis of the definitions of lobbying and met with registered lobbyists, representatives of nonprofit and business organizations, and other parties who were knowledgeable about the different statutory definitions and their effect on lobbying registrations.

To determine the differences among the LDA and IRC lobbying definitions regarding the number of federal executive branch officials covered for contacts dealing with nonlegislative matters, we reviewed the LDA and IRC statutory definitions of covered executive branch officials that apply for lobbying contacts on nonlegislative matters. To determine the number of officials covered by these definitions, we counted the number of Executive Schedule Levels I through V positions listed in sections 5312 through 5316 of Title 5 of the United States Code. In several cases, these sections of Title 5 list federal boards and commissions as having Executive Schedule positions but do not specify the number of such positions. In these cases, we did not attempt to determine the number of positions and counted only one position for each such listed board or commission. Thus, our estimate of the number of Executive Schedule Levels I through V positions is understated. Further, to determine the number of officials covered, we obtained data from

- The United States Government Manual 1998/1999 on cabinet-level officials and the number of offices in the Executive Office of the President;
- the Department of Defense (DOD) on military personnel ranked O-7 and above as of September 30, 1997;
- the U.S. Coast Guard, the Public Health Service, and the National Oceanic and Atmospheric Administration (NOAA) on the number of commissioned corps officers ranked O-7 and above as of February 1999;
- the Office of Personnel Management's (OPM) Central Personnel Data File on the number of Schedule C officials as of September 30, 1997; and
- the Budget of the United States Government, Appendix, Fiscal Year 1999 on the actual full-time-equivalent employment for fiscal year 1997 in each office of the Executive Office of the President.

To determine the impact that differences in the definitions may have on registration and reporting under LDA, we first had to define how we would measure impact.
We defined impact as (1) the way differences among the definitions can affect who must register with the Secretary of the Senate and the Clerk of the House of Representatives and what lobbying expenses and related information must be included in their lobbying reports; (2) the number of organizations that reported using the LDA and IRC section 4911 and 162(e) definitions when reporting lobbying expenses and related information for July through December 1997; and (3) the lobbying expenses reported under each of the three definitions for this period.

To determine the way differences among the definitions can affect who must register and what they must report, we reviewed, analyzed, and categorized the general effects of the differences that we found among the definitions under our first objective. We also looked for possible effects during our reviews of statutes, regulations, guidance, and journal articles. Finally, we discussed the possible effects of the differences among the definitions with registered lobbyists, representatives of nonprofit and business organizations, and other knowledgeable parties.

To identify the number of organizations that reported using the definitions of lobbying in LDA or the IRC to calculate their lobbying expenses for July through December 1997 and to determine the lobbying expenses reported under LDA that were calculated using one of the three definitions, we obtained data on all lobbying reports filed with the Secretary of the Senate during this period from the new lobbying database of the Senate Office of Public Records. Only the lobbying reports for one semiannual period—July through December 1997—were available from the new database when we began our analysis in October 1998. Using the database, we identified the number of organizations that lobbied on their own behalf and filed reports for the period July through December 1997. We also analyzed the reported expenses of these organizations and determined the mean and median expenses reported under each of the three definitions. Because lobbyists did not round their lobbying expenses to the nearest $20,000 in some cases, as required by LDA, we rounded all reported expenses to the nearest $20,000 before conducting our analysis. Officials from the Senate Office of Public Records said that they had not verified the data in the database, and we did not perform a reliability assessment of the data contained in this database. However, we reviewed the lobbying reports of all organizations whose lobbying expenses were recorded in the database as being less than $10,000, which is the minimum amount required to be recorded on the lobbying form, but that had erroneous Senate Office of Public Records codes. We corrected any errors we found before conducting our analysis.

To identify and analyze options that may better ensure that the public disclosure purposes of LDA are realized, we relied on (1) information we collected from our review of the relevant literature on lobbying, including statutory provisions, regulations, and guidance; and (2) our findings for our first two objectives.

We did our work during two periods. From November 1996 through April 1997, we reviewed the differences in the LDA and IRC definitions of lobbying-related terms. As agreed by the Senate Committee on Governmental Affairs and the House Subcommittee on the Constitution, Committee on the Judiciary, we postponed completing our review until data on lobbying expenses became available.
The second period of our review was from October 1998 through January 1999, after we obtained data on lobbying expenses from the new lobbying database of the Senate Office of Public Records. We did our work in Washington, D.C., in accordance with generally accepted government auditing standards. We obtained technical comments on a draft of this report from the Internal Revenue Service and incorporated changes in the report as appropriate. The Clerk of the House of Representatives, the Secretary of the Senate, and the Department of the Treasury had no comments on the report.

Significant Differences Exist Among LDA and IRC Definitions

The contacts, activities, and expenses that are considered to be lobbying under the LDA lobbying definition differ in many ways from those covered by the IRC definitions. Most significantly, LDA covers contacts only with federal officials; the IRC definitions cover contacts with officials in other levels of government as well as attempts to influence the public through grassroots lobbying. Also, the definitions differ in their coverage of contacts with federal officials depending on whether the contact was on a legislative or nonlegislative matter. Table 1 and the following sections present some of the key differences in coverage under the different definitions. Appendix I discusses these differences in more detail, and appendix II provides a detailed table of the differences among the definitions concerning coverage of the federal, state, and local levels of government.

Lobbying State and Local Officials

LDA covers only the lobbying of federal government officials, so organizations using the LDA definition would not include any information in their lobbying reports about lobbying state and local officials. But both IRC lobbying definitions cover contacts with state government officials to influence state legislation. In addition, both IRC definitions cover contacts with local government officials to influence local government legislation, but IRC section 162(e) provides an exception for contacts with local legislative officials regarding legislation of direct interest to the organization.

Grassroots Lobbying

The LDA lobbying definition covers only lobbying of federal government officials, so organizations using the LDA definition would not include in their lobbying reports any information related to attempts to influence legislation by affecting the opinions of the public—that is, grassroots lobbying. Both IRC lobbying definitions cover grassroots lobbying, such as television commercials, newspaper advertisements, and direct mail campaigns to influence federal, state, and local legislation, including referenda and ballot initiatives.

Lobbying Federal Government Officials

To determine if a lobbyist's contact with a federal government official is covered by one of the three lobbying definitions, one must (1) have certain information about the government official, such as whether the official is in the legislative or executive branch, and (2) know whether a legislative or nonlegislative subject was addressed during the contact. The three definitions differ in many ways regarding the officials and subjects they cover. The LDA definition does not distinguish between covered legislative and executive branch officials on the basis of whether the subject of the lobbyist's contact is legislative or nonlegislative in nature.
The IRC definitions define covered officials differently, depending on whether the subject of the lobbying contact was legislative or nonlegislative in nature. When the subject of a lobbyist's contact concerns a nonlegislative matter, such as a regulation, grant, or contract, LDA covers more officials than the IRC definitions cover. When the subject of a lobbyist's contact is a legislative matter, both IRC definitions potentially cover more levels of executive branch officials than the LDA definition does.

Under LDA, lobbying organizations' contacts with all Members and employees of Congress and with approximately 4,600 executive branch officials are covered for either legislative or nonlegislative subjects. In contrast, under IRC section 4911, contacts with legislative or executive branch officials, including Members of Congress and the President, about any nonlegislative subject do not count as lobbying. Also, under IRC section 162(e), contacts with Members of Congress and other legislative branch officials do not count as lobbying if they deal with a nonlegislative subject, and very few executive branch officials are covered if contacts are about nonlegislative matters. As table 2 shows, LDA covers 10 times the number of executive branch officials that IRC section 162(e) covers for nonlegislative matters; LDA also contrasts with IRC section 4911, which does not cover federal officials for nonlegislative contacts. For contacts on legislation, LDA covers contacts with Members of Congress, employees of Congress, and the approximately 4,600 executive branch officials shown in table 2. In contrast, for contacts on legislation, the IRC definitions cover Members of Congress, employees of Congress, and any executive branch officials who may participate in the formulation of the legislation. Therefore, for contacts addressing legislation, the IRC definitions potentially cover more levels of executive branch officials than the LDA definition does.

Exceptions to the Lobbying Definitions

LDA contains 19 exceptions to the definition of lobbying; however, for the most part, these exceptions make technical clarifications in the law and do not provide special exceptions for particular groups. The IRC section 162(e) definition has one exception in the statute, which is for contacts with local government legislative branch officials on legislation of direct interest to the organization. In addition, IRC section 162(e) has seven exceptions, which are provided for by Treasury Regulations and which are technical clarifications of the statutory provisions. IRC section 4911 has five exceptions, and two of these could allow a significant amount of lobbying expenses to be excluded from IRC section 4911 coverage. The first is an exception for making available the results of nonpartisan analysis, study, or research. Due to this exception, IRC section 4911 does not cover 501(c)(3) organizations' advocacy on legislation as long as the organization provides a full and fair exposition of the pertinent facts that would enable the public or an individual to form an independent opinion or conclusion. The second significant exception under IRC section 4911 is referred to as the self-defense exception.
This exception excludes from coverage lobbying expenses related to appearances before, or communications to, any legislative body with respect to a possible decision of such body that might affect the existence of the organization, its powers and duties, tax-exempt status, or the deduction of contributions to the organization. According to IRS officials, this exception provides that a 501(c)(3) nonprofit tax-exempt organization can lobby legislative branch officials on matters that might affect its tax-exempt status or the activities it can engage in without losing its tax-exempt status, and such lobbying will not be counted under the IRC section 4911 definition. According to IRS officials, this exception does not cover lobbying on state or federal funding.

Differences in Definitions Can Significantly Affect Registration and Reporting Under LDA

For those organizations that lobby on their own behalf, the choice of using either the LDA definition or the applicable IRC definition can significantly affect whether they must register with the Secretary of the Senate and the Clerk of the House of Representatives. In addition, the lobbying definition an organization uses can materially affect the information, such as federal-level lobbying, it must disclose on its semiannual lobbying report. Allowing organizations to use an IRC definition for LDA reporting can result in organizations disclosing information that may not be comparable, that is unrelated to LDA's purpose, or that falls short of what LDA envisions. However, of the 1,824 organizations that lobbied on their own behalf and filed reports under LDA from July through December 1997, most reported using the LDA definition. Those organizations that used the IRC section 162(e) definition had the highest mean and median expenses reported.

Differences in Definitions Can Affect Who Must Register

The lobbying definition an organization uses, which governs how it calculates lobbying expenses, can affect whether the organization is required to register under LDA. If (1) the expenses of an organization lobbying on its own behalf exceed or are expected to exceed the $20,500 LDA threshold for a 6-month period, and (2) the organization has an employee who makes more than one lobbying contact and spends at least 20 percent of his or her time lobbying during the same 6-month period, then the organization must register. Lobbying activities and contacts that count toward the $20,500 and 20 percent thresholds depend on which lobbying definition—LDA, IRC section 4911, or IRC section 162(e)—an organization uses. If an activity is not covered under a particular definition, then the expenses related to that activity do not count toward the lobbying expenses of an organization using that definition. In some cases, allowing organizations to use an IRC definition instead of the LDA definition could result in the organization having covered lobbying expenses below the $20,500 threshold and no employees who spend 20 percent of their time lobbying; however, if the organization used the LDA definition, its lobbying expenses and activities could be above the LDA registration thresholds. For example, for an organization that primarily focuses its lobbying efforts on lobbying federal officials about nonlegislative matters, using an IRC definition is likely to result in lower covered lobbying expenses than using the LDA definition and, therefore, could result in an organization not meeting the $20,500 registration threshold.
This could occur because any contacts with legislative branch officials about nonlegislative matters are not covered under either IRC section 4911 or 162(e). Also, for contacts on nonlegislative matters, IRC section 4911 does not cover executive branch officials, and IRC section 162(e) covers only about one-tenth of the executive branch officials that LDA covers. Thus, an organization could spend over $20,500 lobbying federal officials who are covered by LDA for nonlegislative matters, with the possibility that none of these expenses would count toward the registration requirement if the organization used an IRC definition.

It is also possible that an organization could have over $20,500 in lobbying expenses and one or more employees spending 20 percent of their time lobbying by using an IRC definition, when using the LDA definition would put its covered expenses below $20,500 and put its lobbying employees under the 20 percent threshold. For example, the IRC definitions potentially cover contacts with more executive branch officials than LDA covers when those contacts are about legislation. So, if an organization lobbies executive branch officials not covered under LDA in order to influence legislation, those contacts would count as lobbying under the IRC definitions but not under the LDA definition. This could result in the organization's covered lobbying expenses being above the $20,500 threshold and in an employee's time spent on lobbying being above the 20 percent threshold. However, no data exist to determine the number of organizations (1) that are not registered under LDA as a result of using an IRC definition or (2) that met the thresholds under an IRC definition but not under the LDA definition.

Similarly, the individuals who must be listed as lobbyists on an organization's lobbying registration can be affected by the choice of definition. Individuals must be listed as lobbyists on the registration if they make more than one lobbying contact and spend at least 20 percent of their time engaged in lobbying activities for their employers during the 6-month reporting period. Using an IRC definition instead of the LDA definition could result in an individual not being listed as a lobbyist on his or her organization's registration or subsequent semiannual report. For example, this could occur if a lobbyist spends most of his or her time lobbying high-level officials at independent federal agencies about regulations, contracts, or other nonlegislative matters, because the IRC definitions do not consider such contacts as lobbying.

Differences in Definitions Can Affect What Must Be Reported

Just as the choice of definition affects whether an organization must register under LDA with the Secretary of the Senate and the Clerk of the House of Representatives, the choice of definition also can materially affect the information that is reported semiannually. Because an organization can switch from using the LDA definition one year to using the applicable IRC definition another year and vice versa, organizations can use the definitions that enable them to minimize what they must disclose on their lobbying reports. The three definitions were written at different times for different purposes, so what they cover differs in many ways, both subtle and substantial. These differences result in organizations that use one definition reporting expenses and related information that organizations using another definition would not report.
The reported expenses and other information may provide less disclosure and may be unrelated to what is needed to fulfill LDA's purpose of publicly disclosing the efforts of lobbyists to influence federal officials' decisionmaking. Whether an organization uses the LDA definition or the applicable IRC definition, it is required to disclose on its lobbying report its total estimated expenses for all activities covered by the definition. Thus, organizations using the LDA definition must report all expenses for lobbying covered federal government officials about subject matters covered by LDA. Similarly, organizations using an IRC definition must disclose on their lobbying reports all expenses for activities that are covered by the applicable IRC definition, including federal, state, and local government lobbying and grassroots lobbying. However, organizations report only their total expenses, so the lobbying reports do not reveal how much of the reported expenses were for individual activities and for what level of government. Thus, even if an organization using the LDA definition reported the same total lobbying expenses as an organization using an IRC definition, it would be impossible to tell from the lobbying reports how similar the two organizations' federal lobbying efforts may have been. In addition, an organization reporting under an IRC definition would be, in all likelihood, including expenses that are not related to LDA's focus on federal lobbying, because the IRC definitions go beyond lobbying at the federal level.

An organization reporting under an IRC definition could also be reporting less information on federal-level lobbying than would be provided under the LDA definition, which Congress wrote to carry out the public disclosure purpose of LDA. For example, the IRC definitions include far fewer federal officials in their definitions for lobbying on nonlegislative matters. Also, an organization using the IRC section 4911 definition could exclude considerable lobbying expenses from its lobbying report if its lobbying fell under the IRC section 4911 exception for nonpartisan analysis or the self-defense exception. For example, in 1995, a 501(c)(3) tax-exempt nonprofit organization lobbied against legislation that would have sharply curtailed certain activities of charities. On its 1995 tax return, the organization, which used the IRC section 4911 definition to calculate its lobbying expenses for tax purposes, reported about $106,000 in lobbying expenses. However, in a letter to a congressional committee, the organization stated that its 1995 lobbying expenses totaled over $700,000; it cited the self-defense exception as a reason for excluding about $594,000 in lobbying expenses from its tax return.

In contrast to reporting expenses, when reporting information other than expenses on the LDA lobbying reports, organizations are required to report only information related to federal lobbying. This information includes issues addressed during lobbying contacts with federal government officials and the House of Congress and federal agencies contacted. Therefore, if an organization uses an IRC definition and includes expenses for state lobbying and grassroots lobbying in its total lobbying expenses, it is not required to report any issues or other information related to those nonfederal expenses.
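To make concrete how total-expense figures calculated under different definitions can diverge for the same organization, the sketch below works through one hypothetical example in Python. All dollar amounts are invented, and the four spending categories are a deliberate simplification of the coverage rules discussed in this report; the sketch is illustrative only.

```python
# Hypothetical expense breakdown for a single organization (all figures invented).
expenses = {
    "federal_legislative_contacts": 64_000,    # covered by LDA and, broadly, by the IRC definitions
    "federal_nonlegislative_contacts": 28_000, # covered by LDA; largely outside the IRC definitions
    "state_and_local_lobbying": 50_000,        # covered only by the IRC definitions
    "grassroots_lobbying": 41_000,             # covered only by the IRC definitions
}

def round_to_nearest_20k(amount: int) -> int:
    """LDA reports state expenses of $10,000 or more rounded to the nearest $20,000."""
    return round(amount / 20_000) * 20_000

# Total the organization would report if it used the LDA definition.
lda_total = (expenses["federal_legislative_contacts"]
             + expenses["federal_nonlegislative_contacts"])

# Total it would report under a simplified IRC definition: federal legislative
# lobbying plus state, local, and grassroots lobbying, but not the federal
# nonlegislative contacts that only LDA reaches.
irc_total = (expenses["federal_legislative_contacts"]
             + expenses["state_and_local_lobbying"]
             + expenses["grassroots_lobbying"])

print(round_to_nearest_20k(lda_total))  # 100000
print(round_to_nearest_20k(irc_total))  # 160000
```

Neither total, on its own, reveals how much of the organization's spending went to federal lobbying, which is the comparability gap described above.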
Further, LDA provides that for reporting information other than expenses for contacts with federal executive branch officials, organizations using an IRC definition to calculate their expenses must use the IRC definition for reporting other information. But for contacts with federal legislative branch officials, organizations using an IRC definition to calculate their lobbying expenses must use the LDA definition in determining what other information, such as the issues addressed during lobbyists' contacts and the House of Congress contacted, must be disclosed on their reports. Because of this latter provision, organizations that use an IRC definition and lobby legislative branch officials about nonlegislative matters are required to disclose the issues addressed and the House of Congress contacted, even though they are not required to report the expenses related to this lobbying.

Most Organizations Reported Using the LDA Definition

For the July through December 1997 reporting period, lobbying firms that had to use the LDA definition to calculate lobbying income filed reports for 9,008 clients. In addition, for this reporting period, 1,824 organizations that lobbied on their own behalf and were able to elect which definition to use in calculating their lobbying expenses filed lobbying reports. Of the 1,824 organizations, 1,306 (71 percent) used the LDA definition to calculate their lobbying expenses. Another 157 organizations (9 percent) elected to use the IRC section 4911 definition. Finally, 361 organizations (20 percent) used the IRC section 162(e) definition to calculate their lobbying expenses. (See table 3.) Data do not exist that would enable us to estimate the number of organizations that may not be registered because they used an IRC definition but would have had to register had they used the LDA definition. Because computerized registration data were available only for one 6-month period when we did our analysis, we did not analyze changes in registrations over time. Thus, we do not know whether, or to what extent, organizations switch between definitions from year to year as allowed by LDA.

Reported Expenses Were Highest Under IRC Section 162(e) Definition

Organizations that lobbied on their own behalf and reported using the IRC section 162(e) definition had the highest mean and median expenses reported. These organizations had 87 percent higher mean lobbying expenses than organizations that reported using the LDA definition and 58 percent higher mean lobbying expenses than those using the IRC section 4911 definition. Organizations that reported using the IRC section 162(e) definition had $180,000 in median expenses; organizations that reported using the LDA definition and those that reported using the IRC section 4911 definition each had median expenses of $80,000. Organizations that lobby on their own behalf do not have to register if their lobbying expenses for the 6-month reporting period are below $20,500. However, until a registered organization terminates its registration, it must file lobbying reports, even if its lobbying expenses are below the $20,500 registration threshold. The lobbying reports show only total expenses and do not break those expenses out by individual activities. Therefore, data do not exist that would help explain the reasons for the differences. Table 4 shows the total, mean, and median expenses for organizations using each of the three lobbying definitions that reported having $10,000 or more in lobbying expenses from July to December 1997.
Table 4 includes only data on organizations reporting lobbying expenses of $10,000 or more, because organizations with less than $10,000 in expenses check a box on the LDA reporting form and do not include an amount for their expenses. Because, as shown in table 3, many more of these organizations used the LDA definition than used either of the IRC definitions, it follows that the largest total amount of all expenses reported was under the LDA definition.

Options That May Better Ensure That the Public Disclosure Purposes of LDA Are Realized

Because the differences among the three lobbying definitions can significantly affect who registers and what they report under LDA, the current statutory provisions do not always complement LDA's purpose. As discussed earlier, allowing organizations to use an IRC definition for LDA purposes can result in organizations (1) not registering under LDA, (2) disclosing information that may not be comparable, and (3) disclosing information that is unrelated to LDA's purpose or that falls short of what LDA envisions. Options for revising the statutory framework exist; LDA requires us to consider one option, harmonizing the definitions, and we identified two other options on the basis of our analysis. Those options are (1) eliminating the current authorization for businesses and tax-exempt organizations to use the IRC lobbying definitions for LDA reporting and (2) requiring organizations that use an IRC lobbying definition to include only expenses related to federal lobbying covered by that IRC definition when the organizations register and report under LDA. The options address, in varying degrees, the effects of the differences on registration and reporting, but all have countervailing effects that must be balanced in determining what, if any, change should be made.

Harmonizing the Definitions

In addition to charging us with analyzing the differences among the three lobbying definitions and the impact of those differences on organizations' registration and reporting of their lobbying efforts, LDA charges us with reporting any changes that we may recommend to harmonize those definitions. Harmonization implies the adoption of a common definition that would be used for LDA's registration and reporting purposes and for the tax reporting purposes currently served by the IRC definitions. Harmonizing the three lobbying definitions would ensure that organizations would not have the burden of keeping track of their lobbying expenses and activities under two different definitions—one for tax purposes and another for LDA registration and reporting purposes. Requiring the use of a common definition would also mean that no alternative definitions could be used to possibly avoid LDA's registration requirement and that all data reported under the common definition would be comparable. However, developing a lobbying definition that could be used for the purposes of LDA, IRC section 4911, and IRC section 162(e) would require Congress to revisit fundamental decisions it made when it enacted each definition. For example, if a common definition included the state lobbying expenses that are included under the current IRC definitions, then the current objective of LDA to shed light on efforts to influence federal decisionmaking would essentially be rewritten and expanded. On the other hand, if a common definition did not include state lobbying expenses, fundamental decisions that were made when the statutes containing the IRC definitions were written would be similarly modified.
Adopting a harmonized definition of lobbying could result in organizations disclosing less information on lobbying reports if the new definition covered less than what is covered by the current LDA definition. In addition, a new definition would be used not only by organizations lobbying on their own behalf, which currently have the option of using an IRC definition for LDA reporting, but also by lobbying firms, which currently must use the LDA definition for their clients' lobbying reports.

Eliminating the Current Authorization for Using an IRC Lobbying Definition for LDA Purposes

Eliminating the current authorization for using the IRC lobbying definitions for LDA purposes would mean that consistent registration and reporting requirements would exist for all lobbyists, and the requirements would be those developed by Congress specifically for LDA. This would result in all organizations following the LDA definition for LDA purposes; thus, only the data that Congress determined were related to LDA's purposes would be reported. However, this option could increase the reporting burden of the relatively small number of organizations currently using the IRC definitions under LDA, because it would require them to track their lobbying activities as defined by LDA while also tracking the activities covered under the applicable IRC lobbying definition.

Requiring Organizations Using IRC Definitions to Use Only Expenses for Federal Lobbying for LDA Registration and Expense Reporting

The last option we identified would require organizations that elected to use an IRC definition for LDA to use only expenses related to federal lobbying efforts as defined under the IRC definitions when they determine whether they should register and what they should report under LDA. This would improve the alignment of registrations and the comparability of lobbying information that organizations reported, because organizations that elected to use the IRC definitions would no longer be reporting to Congress on their state, local, or grassroots lobbying. The reporting of expenses under this option would be similar to the reporting of all other information required under LDA, such as issues addressed and agencies contacted, which are based on contacts with federal officials. However, this option would only partially improve the comparability of data being reported by organizations using different definitions. Differences in the reported data would remain because the LDA and IRC definitions do not define lobbying of federal officials identically. LDA requires tracking contacts with a much broader set of federal officials than do the IRC definitions when lobbying contacts are made about nonlegislative matters. In addition, because differences would remain between the LDA and IRC definitions of lobbying at the federal level under this option, organizations might still avoid registering under LDA and might still report information that would differ from that reported by organizations using the LDA definition. For example, because the IRC lobbying definitions include fewer federal executive branch officials when a contact is about a nonlegislative matter, organizations using an IRC definition might still have expenses under the $20,500 threshold for lobbying, whereas under the LDA definition they might exceed the threshold. Finally, this option could impose some additional reporting burden for the relatively small number of organizations currently using IRC definitions for LDA purposes.
Reporting only federal lobbying when they use an IRC definition could result in some increased recordkeeping burden if these organizations do not currently segregate such data in their recordkeeping systems.

Conclusions

The three lobbying definitions we reviewed were adopted at different times to achieve different purposes. What they cover differs in many subtle and substantial ways. LDA was enacted to help shed light on the identity of, and extent of effort by, lobbyists who are paid to influence decisionmaking in the federal government. IRC section 4911 was enacted to help clarify the extent to which 501(c)(3) organizations could lobby without jeopardizing their tax-exempt status, and IRC section 162(e) was enacted to deny businesses a federal income tax deduction for certain lobbying expenses. Because the IRC definitions were not enacted to enhance public disclosure concerning federal lobbying, as was the LDA definition, allowing organizations to use the IRC definitions for reporting under LDA may not be consistent with achieving the level and type of public disclosure that LDA was enacted to provide.

Allowing organizations to use an IRC definition instead of the LDA definition for calculating lobbying expenses under LDA can result in some organizations not filing lobbying registrations, because the use of the IRC definition could keep their federal lobbying below the LDA registration thresholds. On the other hand, under certain circumstances, organizations could meet the thresholds when using the IRC definition but would not do so if they used the LDA definition. We do not know how many, if any, organizations are not registered under LDA that would have met the registration thresholds under the LDA definition but not under the applicable IRC definition.

Giving organizations a choice of definitions to use each year can undermine LDA's purpose of disclosing the extent of lobbying activity that is intended to influence federal decisionmaking, because organizations may disclose very different information on lobbying reports, depending on which definition they use. When an organization can choose which definition to use each year, it can choose the definition that discloses the least lobbying activity. Further, if an organization uses an IRC definition for its lobbying report, the report can include expenses for state, local, and grassroots lobbying that are unrelated to the other information on the report, which relates only to federal lobbying. Also, if an organization uses an IRC definition, its lobbying report can exclude expenses and/or other information about lobbying that is not covered under the selected IRC definition (e.g., contacts about nonlegislative matters) but that nevertheless constitutes an effort to influence federal decisionmaking. In this situation, less information would be disclosed than LDA intended.

Because the differences among the LDA and IRC lobbying definitions can significantly affect who registers and what they report under LDA, the use of the IRC definitions can conflict with LDA's purpose of disclosing paid lobbyists' efforts to influence federal decisionmaking. Options for reducing or eliminating these conflicts exist. These options include (1) harmonizing the definitions, (2) eliminating organizations' authorization to use an IRC definition for LDA purposes, or (3) requiring those that use an IRC definition to include only expenses related to federal lobbying under the IRC definition when they register and report under LDA.
The options, to varying degrees, could improve the alignment of registrations and the comparability of reporting with Congress’ purpose of increasing public disclosure of federal lobbying efforts. However, each option includes trade-offs between better ensuring LDA’s purposes and other public policy objectives and could result in additional reporting burden in some cases. In our opinion, the trade-offs involved in the option of harmonizing the definitions are disproportionate to the problem of LDA registrations and reporting not being aligned with LDA’s purpose. Harmonizing the definitions would best align registrations and reporting with LDA’s purposes if LDA’s definition is imposed for tax purposes as well, which would significantly alter previous congressional decisions about how best to define lobbying for tax purposes. Adopting a common lobbying definition that includes activities, such as state lobbying, that are covered under the current IRC definitions would require a rewrite and expansion of LDA’s objective of shedding light on efforts to influence federal decisionmaking. Such major changes in established federal policies that would be required to harmonize the definitions appear to be unwarranted when only a small portion of those reporting under LDA use the IRC definitions. The trade-offs for the other two options are less severe. Eliminating organizations’ authorization to use a tax definition for LDA purposes would ensure that all lobbyists register and report under the definition that Congress wrote to carry out LDA’s purpose. However, eliminating the authorization likely would impose some additional burden on the relatively small number of organizations currently using IRC definitions for LDA. Requiring that only expenses related to federal-level lobbying under the IRC definitions be used for LDA purposes would not align reporting with LDA’s purposes as thoroughly as eliminating the authorization to use an IRC definition for LDA would. Under this option organizations could still avoid registering under LDA when the use of an IRC definition results in total expenses falling below the LDA registration threshold. The option also could impose some additional recordkeeping burden for the relatively small number of organizations currently using the IRC definitions. Matters for Congressional Consideration If Congress believes that the inclusion of nonfederal lobbying expenses and the underreporting of lobbying efforts at the federal level due to the optional use of the IRC lobbying definitions seriously detract from LDA’s purpose of public disclosure, then it should consider adopting one of two options. Congress could remove the authorization for organizations to use an IRC definition for reporting purposes. In this case, data reported to the Senate and House would adhere to the LDA definition, which Congress enacted specifically to achieve LDA’s public reporting purpose. Alternatively, Congress could allow organizations to continue using the IRC definitions but require that they use only the expenses related to federal-level lobbying that those definitions yield when they register and report under LDA. The data reported would be more closely aligned with LDA’s purpose of disclosing federal level lobbying efforts, but some differences would remain between the data so reported and the data that would result from applying only the LDA definition. 
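The registration effect of electing one definition over another can be illustrated with a simple calculation. The sketch below is illustrative only: the expense categories and dollar amounts are hypothetical, and the coverage rules are reduced to the broad distinctions discussed in this report (the IRC definitions add state, local, and grassroots lobbying but exclude most federal nonlegislative contacts), not the full statutory tests.

```python
# Illustrative sketch only: how the elected definition can change whether a
# hypothetical organization's semiannual lobbying expenses reach the $20,500
# LDA registration threshold. Categories and amounts are invented; the rules
# below reflect only the broad coverage differences discussed in this report.

LDA_THRESHOLD = 20_500  # semiannual lobbying expense threshold

# Hypothetical semiannual expenses, broken out along the distinctions the report draws.
expenses = {
    "federal_legislative_contacts": 9_000,     # counted under LDA and both IRC definitions
    "federal_nonlegislative_contacts": 13_000, # counted under LDA; largely excluded under the IRC definitions
    "state_local_lobbying": 2_000,             # counted under the IRC definitions only
    "grassroots_lobbying": 1_000,              # counted under the IRC definitions only
}

def lda_expenses(e: dict) -> int:
    # LDA: contacts with covered federal officials, legislative and nonlegislative.
    return e["federal_legislative_contacts"] + e["federal_nonlegislative_contacts"]

def irc_expenses(e: dict) -> int:
    # Simplified IRC view: federal legislative contacts plus state, local, and
    # grassroots lobbying; most federal nonlegislative contacts drop out.
    return (e["federal_legislative_contacts"]
            + e["state_local_lobbying"]
            + e["grassroots_lobbying"])

for label, total in [("LDA definition", lda_expenses(expenses)),
                     ("elected IRC definition (simplified)", irc_expenses(expenses))]:
    status = "meets" if total >= LDA_THRESHOLD else "falls below"
    print(f"{label}: ${total:,} {status} the ${LDA_THRESHOLD:,} registration threshold")
```

With these hypothetical figures the organization would meet the expense threshold under the LDA definition but not under the IRC election, the pattern described in the conclusions above; different figures could just as easily produce the opposite result, and expenses are only one of the registration criteria.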
If either of these options were considered, Congress would need to weigh the benefit of reporting that would be more closely aligned with LDA’s public disclosure purpose against the additional reporting burden that some organizations would likely bear. Agency Comments and Our Evaluation On February 11, 1999, we sent a draft of this report for review and comment to the Clerk of the House of Representatives, the Secretary of the Senate, the Secretary of the Treasury, and the Commissioner of the Internal Revenue Service. Representatives of the Clerk of the House of Representatives, the Secretary of the Senate, and the Secretary of the Treasury told us that no comments would be forthcoming. On February 17, 1999, we met with officials from the Internal Revenue Service, and they provided technical comments on a draft of this report. On the basis of their comments, we made changes to the report as appropriate. In a letter dated March 5, 1999, the Chief Operations Officer of the Internal Revenue Service stated that IRS had reached general consensus with us on the technical matters in the report. We are sending copies of this report to Senator Carl Levin; Senator Ted Stevens; Senator William V. Roth, Jr., Chairman, and Senator Daniel P. Moynihan, Ranking Minority Member, Senate Committee on Finance; Representative Bill Archer, Chairman, and Representative Charles B. Rangel, Ranking Minority member, House Committee on Ways and Means; the Honorable Gary Sisco, Secretary of the Senate; the Honorable Jeff Trandahl, Clerk of the House of Representatives; the Honorable Robert E. Rubin, Secretary of the Treasury; and the Honorable Charles O. Rossotti, Commissioner of Internal Revenue. Copies will also be made available to others upon request. The major contributors to this report are listed in appendix IV. Please call me on (202) 512-8676 if you have any questions. Comparisons of LDA and IRC Definitions Comparison of LDA and IRC Definitions The types of activities and contacts that are covered by the Lobbying Disclosure Act of 1995 (LDA) lobbying definition are significantly different from those covered under the Internal Revenue Code (IRC) definitions. First, LDA does not cover grassroots lobbying. The IRC lobbying definitions cover grassroots lobbying, such as television advertisements and direct mail campaigns, that are intended to influence legislation at the federal, state, or local levels. Second, LDA covers lobbying only at the federal level. However, both IRC definitions cover lobbying of federal officials, as well as state and local government officials. The IRC definitions potentially cover contacts with more levels of executive branch officials than LDA covers when those contacts are about legislation. However, when contacts are about nonlegislative subject matters, such as regulations or policies, LDA covers contacts with a broader range of federal officials than the IRC definitions. Further, LDA’s definition of lobbying includes legislative matters and an extensive list of nonlegislative matters. IRC section 4911 only covers lobbying contacts that address specific legislative proposals. IRC section 162(e) covers lobbying contacts on legislative and nonlegislative subjects, but its coverage of legislative subjects is somewhat more limited than LDA’s coverage, and its coverage of nonlegislative subjects is not clearly defined. 
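Read together, the coverage differences summarized above amount to a set of classification rules. The sketch below is a simplification for illustration only; it encodes just the broad distinctions described in this appendix (level of government, grassroots activity, and legislative versus nonlegislative subject matter) and ignores the official-by-official lists, the specific-legislation tests, and the exceptions discussed in the sections that follow.

```python
# Simplified illustration of the coverage comparison in this appendix. It is not the
# statutory test: the covered-official lists, the "specific legislation" rules, and
# the exceptions are all omitted.

def counts_as_lobbying(definition: str, level: str, subject: str,
                       grassroots: bool = False,
                       covered_executive_official: bool = True) -> bool:
    """Return True if an activity is treated as lobbying under the named definition.

    level:   'federal', 'state', or 'local'
    subject: 'legislative' or 'nonlegislative'
    covered_executive_official: for IRC section 162(e) nonlegislative contacts, a proxy
        for whether the official is on that section's short list of covered officials.
    """
    if definition == "LDA":
        # Federal contacts only; grassroots campaigns are not covered.
        return level == "federal" and not grassroots
    if definition == "IRC 4911":
        # Influencing legislation only (direct or grassroots), at any level of government.
        return subject == "legislative"
    if definition == "IRC 162(e)":
        if subject == "legislative":
            # Legislation at any level (the local-council exception is ignored here).
            return True
        # Nonlegislative matters: only direct contacts with a narrow set of
        # high-ranking federal executive branch officials.
        return level == "federal" and not grassroots and covered_executive_official
    raise ValueError(f"unknown definition: {definition}")

# Example: a contact with the head of an independent agency about a proposed
# regulation (a nonlegislative matter; official not on the 162(e) list).
for d in ("LDA", "IRC 4911", "IRC 162(e)"):
    result = counts_as_lobbying(d, level="federal", subject="nonlegislative",
                                covered_executive_official=False)
    print(f"{d}: {'covered' if result else 'not covered'}")
```

Under these simplified rules, the example contact counts as lobbying only under LDA, which is consistent with the agency-head example discussed later in this appendix.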
Grassroots Lobbying Grassroots lobbying—efforts to influence legislation by influencing the public’s view of that legislation—is covered under the IRC definitions but not under the LDA definition. Grassroots lobbying campaigns can use such means as direct mailings and television, radio, and newspaper advertisements and can be very expensive. Both IRC section 4911 and IRC section 162(e) cover grassroots lobbying at the federal, state, and local levels. However, IRC section 4911 has a narrower definition of grassroots lobbying than IRC section 162(e) does. Under IRC section 4911, grassroots lobbying is defined as any attempt to influence legislation through an attempt to affect the opinions of the general public or any segment thereof. To be considered grassroots lobbying under IRC section 4911, a communication with the public must refer to a specific legislative proposal, reflect a view on such legislative proposal, and encourage the recipient of the communication to take action with respect to such legislative proposal. IRC section 162(e) does not have the same stringent tests that IRC section 4911 has for determining if a communication with the public is grassroots lobbying. Under IRC section 162(e), communications with the public that attempt to develop a grassroots point of view by influencing the general public to propose, support, or oppose legislation are considered to be grassroots lobbying. To be considered as grassroots lobbying under IRC section 162(e), a communication with the public does not have to encourage the public to take action with respect to a specific legislative proposal. Therefore, the IRC section 162(e) grassroots lobbying provision is likely to encompass more lobbying campaigns than IRC section 4911 does. Lobbying State and Local Officials The LDA lobbying definition covers only contacts with federal government officials and does not require lobbyists to report any expenses for contacts with state and local government officials. This is consistent with LDA’s overall purpose of increasing public disclosure of the efforts of lobbyists paid to influence federal decisionmaking. The IRC lobbying definitions also cover contacts with federal government officials. However, in contrast to LDA, the IRC lobbying definitions require that expenses for contacts with state officials to influence state legislation be included in lobbying expenses. Further, both IRC lobbying definitions cover contacts with local government officials to influence local government legislation; but coverage of local government contacts is limited under IRC section 162(e), because that section has an exception for contacts with local councils on legislation of direct interest to the organization. (Contacts with state and local government officials to influence something other than legislation, such as a state or local policy or regulation, are not covered by either of the IRC definitions.) The amounts spent lobbying state governments can be significant. For example, in 1997, under state lobbying disclosure laws, reported spending on lobbying state government officials was $144 million in California, $23 million in Washington, and $23 million in Wisconsin. Differences Based on the Federal Officials Contacted Whether a lobbyist’s contact with a federal government official counts as lobbying under any of the three lobbying definitions depends, in part, on whether the contact is with a covered official. 
Covered officials are defined by several factors, such as their branch of government, the office they work in, and their rank. All three definitions include as lobbying lobbyists' contacts with legislative branch officials—Members and employees of Congress—to influence legislation. However, for contacts with executive branch officials to influence legislation and contacts with either legislative branch or executive branch officials on nonlegislative matters, such as regulations and contracts, the definitions of what is counted as lobbying differ significantly. Under LDA, contacts with any covered government officials about any legislative or nonlegislative matters covered by LDA are considered lobbying contacts, and their associated expenses must be reported. However, under the IRC definitions, whether the contact is on legislative or nonlegislative matters determines which officials are covered. For contacts to influence legislation, any executive branch officials who may participate in the formulation of legislation are covered under both IRC definitions. But, for nonlegislative matters, IRC section 4911 covers no executive branch officials, and IRC section 162(e) covers very few executive branch officials. Covered Executive Branch Officials for Contacts on Nonlegislative Matters Many of the executive branch officials covered by LDA for contacts on any lobbying subject are not covered by IRC section 162(e) when contacts are intended to influence nonlegislative matters. Also, none of the executive branch officials covered by LDA are covered by IRC section 4911 for contacts on nonlegislative matters, because IRC section 4911 covers only contacts to influence legislation. For contacts to influence the official actions or positions of an executive branch official on nonlegislative matters, IRC section 162(e) provides a list of covered executive branch officials. LDA's list of covered executive branch officials includes all the officials on the IRC section 162(e) list, plus several more categories of officials. LDA's list applies to contacts on any matter covered by LDA—legislative or nonlegislative. Table I.1 shows that LDA covers about 10 times the number of officials that IRC section 162(e) covers for nonlegislative matters. As shown in table I.1, LDA and IRC section 162(e) include contacts with the President and Vice President and Cabinet Members and similar high-ranking officials and their immediate deputies. In the Executive Office of the President, LDA includes contacts with all offices; IRC section 162(e) includes only officials in the White House Office and the two most senior level officers in each of the other agencies of the Executive Office of the President. Further, LDA includes contacts with officials in levels II through V of the Executive Schedule, which includes agency heads and deputy and assistant secretaries; IRC section 162(e) does not. Also, LDA includes contacts with officials at levels O-7 and above, such as Generals and Admirals, in the uniformed services. Finally, LDA includes contacts with all Schedule C appointees, who are political appointees (graded GS/GM-15 and below) in positions that involve determining policy or require a close, confidential relationship with the agency head or other key officials of the agency.
The narrow scope of IRC section 162(e)’s list of covered executive branch officials can result in organizations not including on their lobbying reports expenses or other information, such as issues addressed, relating to contacts with very high-ranking officials. For example, if an organization made contacts to influence an official action or position with the top official at most independent agencies, including the National Aeronautics and Space Administration, the General Services Administration, the Export-Import Bank, and the Federal Communications Commission, these contacts would not be considered as contacts with covered executive branch officials and therefore would not be covered by the IRC section 162(e) definition. Similarly, contacts on nonlegislative matters with the heads of agencies within cabinet departments, such as the heads of the Internal Revenue Service, the Occupational Safety and Health Administration, the Bureau of Export Administration, and the Food and Drug Administration, would not be considered as contacts with officials at a high enough level for the list of covered executive branch officials under the IRC section 162(e) definition. However, contacts with all of these officials would be covered under the LDA definition of lobbying. Covered Executive Branch Officials for Contacts on Legislation The two IRC definitions generally provide the same coverage of contacts with executive branch officials for influencing legislation. The two definitions provide that a contact with “any government official or employee who may participate in the formulation of legislation” made to influence legislation must be counted as a lobbying expense. Thus, these definitions potentially cover many more levels of executive branch officials than are included on LDA’s list of covered executive branch officials. LDA’s list of covered officials is shown in table I.1 and applies to both legislative and nonlegislative matters. Therefore, contacts with officials in the Senior Executive Service or in grades GS/GM-15 or below who are not Schedule C appointees would generally count as lobbying contacts under the IRC definitions if such contacts were for the purpose of influencing legislation and those officials participated in the formulation of legislation. But such contacts would not count as lobbying contacts under the LDA definition, because LDA does not include these officials as covered executive branch officials. Covered Legislative Branch Officials for Contacts on Nonlegislative Matters Neither IRC section 162(e) nor IRC section 4911 covers contacts with legislative branch officials on nonlegislative matters. The two IRC definitions cover only legislative branch officials in regard to contacts to influence legislation. However, LDA counts as lobbying any contacts with Members of Congress and congressional employees on any subject matter covered by LDA. Therefore, a lobbyist who contacts Members of Congress to influence a proposed federal regulation would be required to count these contacts in lobbying expenses calculated under the LDA definition and to disclose the issues addressed and the House of Congress contacted. Covered Legislative Branch Officials for Contacts on Legislation LDA and the two IRC definitions cover the same federal legislative branch officials for contacts made to influence legislation. LDA covers contacts with any Member or employee of Congress for contacts on any legislative or nonlegislative subject matter covered by the act. 
Both IRC definitions cover contacts with any Member or employee of Congress for contacts made to influence legislation. Differences Based on Subjects Addressed During Lobbying Contacts The subject matters for which contacts with officials count as lobbying are different under the three lobbying definitions. LDA provides a comprehensive list of subjects about which contacts with a covered official are considered to be lobbying. For example, for nonlegislative matters, the list includes, in part, “the formulation, modification, or adoption of a federal rule, regulation, Executive order, or any other program, policy, or position of the United States Government.” Under IRC section 4911, the only subject covered by lobbying contacts is “influencing legislation.” Under IRC section 162(e), the subjects covered are “influencing legislation” and “influencing official actions or positions” of executive branch officials. The phrase “official actions or positions” applies to contacts on nonlegislative matters. Further, more specific information about what was covered in a lobbyist’s contact is needed under IRC sections 4911 and 162(e) than is needed under LDA to determine if the contact should count as lobbying. Coverage of Legislative Matters For legislative matters, LDA covers “the formulation, modification, or adoption of Federal legislation (including legislative proposals).” In contrast, for legislative matters, the IRC lobbying definitions list only “influencing legislation,” which, according to the Treasury Regulations, refers to contacts that address either specific legislation that has been introduced or a specific legislative proposal that the organization supports or opposes. Under both IRC definitions, a contact to influence legislation is a contact that refers to specific legislation and reflects a view on that legislation. Therefore, a lobbyist’s contact with a legislative branch official in which the lobbyist provides information or a general suggestion for improving a situation but in which the lobbyist does not reflect a view on specific legislation would not be considered to be a lobbying contact under the IRC definitions. For example, the Treasury regulations for IRC section 162(e) provide an example of a lobbying contact in which a lobbyist tells a legislator to take action to improve the availability of new capital. In this example, the lobbyist is not referring to a specific legislative proposal, so the contact does not count as lobbying. However, according to the Treasury Regulations, a lobbyist’s contact with a Member of Congress in which the lobbyist urges a reduction in the capital gains tax rate to increase the availability of new capital does count as lobbying, because the contact refers to a specific legislative proposal. In contrast, because LDA covers legislation from its formulation to adoption, the fact that a specific legislative proposal was not addressed during a lobbyist’s contact with a government official does not prevent the contact from being counted as a lobbying contact. Coverage of Nonlegislative Matters LDA’s list of nonlegislative matters under its definition of “lobbying contact” seems to include most activities of the federal government. 
The list includes the formulation, modification, or adoption of a federal rule, regulation, executive order, or any other program, policy, or position of the United States Government; the administration or execution of a federal program or policy (including the negotiation, award, or administration of a federal contract, grant, loan, or permit, or license); and the nomination or confirmation of a person for a position subject to confirmation by the Senate. IRC section 4911 does not include any nonlegislative matters in its lobbying definition. The only nonlegislative matter included under the IRC section 162(e) lobbying definition is “any direct communication with a covered executive branch official in an attempt to influence the official actions or positions of such official.” However, neither IRC section 162(e) nor its regulations define what is meant by “official actions or positions,” thus leaving the interpretation of what activities to count up to the lobbyist. Some lobbyists might consider an official action to be almost anything a federal official does while at work, while others might consider that official actions must be more formal actions, such as those requiring the signing of official documents. Exceptions to the Lobbying Definitions LDA contains 19 exceptions to the definition of lobbying and IRC sections 4911 and 162(e) contain 5 and 7 exceptions, respectively. These exceptions are listed in appendix III. Although LDA includes an extensive list of exceptions, for the most part these exceptions make technical clarifications in the law and do not provide special exceptions for particular groups. Many of the LDA exceptions are for contacts made during the participation in routine government business, and some of these are for contacts that would be part of the public record. For example, these include (1) contacts made in response to a notice in the Federal Register soliciting communications from the public and (2) a petition for agency action made in writing and required to be a matter of public record pursuant to established agency procedures. Other exceptions are for contacts dealing with confidential information, such as contacts “not possible to report without disclosing information, the unauthorized disclosure of which is prohibited by law.” LDA includes four exceptions for particular groups, including an exception for contacts made by public officials acting in an official capacity; an exception for representatives of the media making contacts for news purposes; an exception for any contacts made by certain tax-exempt religious organizations; and an exception for contacts made with an individual’s elected Member of Congress or the Member’s staff regarding the individual’s benefits, employment, or other personal matters. Of the five exceptions to the IRC section 4911 lobbying definition, two could allow a significant amount of lobbying expenses to be excluded from IRC section 4911 coverage. The first is an exception for making available the results of nonpartisan analysis, study, or research. Due to this exception, IRC section 4911 does not cover 501(c)(3) organizations’ advocacy on legislation as long as the organization provides a full and fair exposition of the pertinent facts that would enable the public or an individual to form an independent opinion or conclusion. The second significant exception under IRC section 4911 is referred to as the self-defense exception. 
This exception excludes from coverage lobbying expenses related to appearances before, or communications to, any legislative body with respect to a possible decision of such body that might affect the existence of the organization, its powers and duties, tax-exempt status, or the deduction of contributions to the organization. According to IRS officials, this exception provides that a 501(c)(3) nonprofit tax-exempt organization can lobby legislative branch officials on matters that might affect its tax-exempt status or the activities it can engage in without losing its tax-exempt status, and such lobbying will not be counted under the IRC section 4911 definition. According to IRS officials, this exception does not cover lobbying on state or federal funding. The IRC section 162(e) definition has one exception in the statute, which is for contacts with local government legislative branch officials on legislation of direct interest to the organization. In addition, IRC section 162(e) has seven exceptions, which are provided for by Treasury Regulations. These seven exceptions provide technical clarifications to the statutory provisions and do not appear to exclude a significant amount of expenses that would be counted as lobbying expenses under the other lobbying definitions. For example, the IRC section 162(e) exceptions include (1) any communication compelled by subpoena, or otherwise compelled by federal or state law; and (2) performing an activity for purposes of complying with the requirements of any law. Coverage of Different Contacts, Activities, and Expenses Under the Three Definitions of Lobbying This appendix contains detailed information about which contacts, activities, and expenses are covered under the definitions of lobbying for LDA, IRC section 4911, and IRC section 162(e). Table II.1 shows the coverage of federal lobbying. Table II.2 shows the coverage of state lobbying, and table II.3 shows the coverage of local lobbying.

[Tables II.1 through II.3 indicate, for each category of official contacted (for example, Members and employees of Congress; the President, Vice President, Executive Schedule level I officials, and their immediate deputies; and Executive Schedule levels II through V) and for legislative and nonlegislative subject matters, whether the contact, activity, or expense is covered ("Yes," "No," or "Maybe") under LDA, IRC section 4911, and IRC section 162(e), with supporting citations to 2 U.S.C. § 1602, 26 U.S.C. §§ 4911 and 162(e), and Treas. Reg. § 56.4911-3(a).]

Exceptions to the LDA and IRC Lobbying Definitions Exceptions to the LDA Lobbying Definition Title 2 of the United States Code contains 19 exceptions to LDA's lobbying definition.
Under Title 2, the term "lobbying contact" does not include a communication that is:

1. made by a public official acting in the public official's official capacity;
2. made by a representative of a media organization if the purpose of the communication is gathering and disseminating news and information to the public;
3. made in a speech, article, publication, or other material that is distributed and made available to the public, or through radio, television, cable television, or other medium of mass communication;
4. made on behalf of a government of a foreign country or a foreign political party and disclosed under the Foreign Agents Registration Act of 1938;
5. a request for a meeting, a request for the status of an action, or any other similar administrative request, if the request does not include an attempt to influence a covered executive branch official or a covered legislative branch official;
6. made in the course of participation in an advisory committee subject to the Federal Advisory Committee Act;
7. testimony given before a committee, subcommittee, or task force of Congress, or submitted for inclusion in the public record of a hearing conducted by such committee, subcommittee, or task force;
8. information provided in writing in response to an oral or written request by a covered executive branch official or a covered legislative branch official for specific information;
9. required by subpoena, civil investigative demand, or otherwise compelled by statute, regulation, or other action of Congress or an agency, including any communication compelled by a federal contract, grant, loan, permit, or license;
10. made in response to a notice in the Federal Register, Commerce Business Daily, or other similar publication soliciting communications from the public and directed to the agency official specifically designated in the notice to receive such communications;
11. not possible to report without disclosing information, the unauthorized disclosure of which is prohibited by law;
12. made to an official in an agency with regard to (1) a judicial proceeding or a criminal or civil law enforcement inquiry, investigation, or proceeding; or (2) a filing or proceeding that the government is specifically required by statute or regulation to maintain or conduct on a confidential basis, if that agency is charged with responsibility for such proceeding, inquiry, investigation, or filing;
13. made in compliance with written agency procedures regarding an adjudication conducted by the agency under section 554 of Title 5 or substantially similar provisions;
14. a written comment filed in the course of a public proceeding or any other communication that is made on the record in a public proceeding;
15. a petition for agency action made in writing and required to be a matter of public record pursuant to established agency procedures;
16. made on behalf of an individual with regard to that individual's benefits, employment, or other personal matters involving only that individual, except that this clause does not apply to any communication with (1) a covered executive branch official, or (2) a covered legislative branch official (other than the individual's elected Members of Congress or employees who work under such Member's direct supervision) with respect to the formulation, modification, or adoption of private legislation for the relief of that individual;
17. a disclosure by an individual that is protected under the amendments made by the Whistleblower Protection Act of 1989, under the Inspector General Act of 1978, or under another provision of law;
18. made by (1) a church, its integrated auxiliary, or a convention or association of churches that is exempt from filing a federal income tax return under paragraph (2)(A)(i) of such section 6033(a) of Title 26, or (2) a religious order that is exempt from filing a federal income tax return under paragraph (2)(A)(iii) of such section 6033(a); and
19. between (1) officials of a self-regulatory organization (as defined in section 3(a)(26) of the Securities Exchange Act) that is registered with or established by the Securities and Exchange Commission as required by that act or a similar organization that is designated by or registered with the Commodity Futures Trading Commission as provided under the Commodity Exchange Act; and (2) the Securities and Exchange Commission or the Commodity Futures Trading Commission, respectively, relating to the regulatory responsibilities of such organization under the act.

Exceptions to the IRC Section 4911 Lobbying Definition Title 26 of the United States Code contains five exceptions to the lobbying definition in IRC section 4911. Under IRC section 4911, the term "influencing legislation", with respect to an organization, does not include:

1. making available the results of nonpartisan analysis, study, or research;
2. providing technical advice or assistance (where such advice would otherwise constitute influencing of legislation) to a governmental body or to a committee or other subdivision thereof in response to a written request by such body or subdivision, as the case may be;
3. appearances before, or communications to, any legislative body with respect to a possible decision of such body that might affect the existence of the organization, its powers and duties, tax-exempt status, or the deduction of contributions to the organization;
4. communications between the organization and its bona fide members with respect to legislation or proposed legislation of direct interest to the organization and such members, other than communications that directly encourage the members to take action to influence legislation;
5. any communication with a government official or employee, other than (1) a communication with a member or employee of a legislative body (where such communication would otherwise constitute the influencing of legislation), or (2) a communication the principal purpose of which is to influence legislation.

Exceptions to the IRC Section 162(e) Lobbying Definition Title 26 of the United States Code contains a single exception to the lobbying definition in IRC section 162(e):

1. appearances before, submission of statements to, or sending communications to the committees, or individual members, of local councils or similar governing bodies with respect to legislation or proposed legislation of direct interest to the taxpayer.

In addition, the Treasury Regulations contain seven exceptions:

2. any communication compelled by subpoena, or otherwise compelled by federal or state law;
3. expenditures for institutional or "good will" advertising which keeps the taxpayer's name before the public or which presents views on economic, financial, social, or other subjects of a general nature but which do not attempt to influence the public with respect to legislative matters;
4. before evidencing a purpose to influence any specific legislation, determining the existence or procedural status of specific legislation, or the time, place, and subject of any hearing to be held by a legislative body with respect to specific legislation;
5. before evidencing a purpose to influence any specific legislation, preparing routine, brief summaries of the provisions of specific legislation;
6. performing an activity for purposes of complying with the requirements of any law;
7. reading any publications available to the general public or viewing or listening to other mass media communications; and
8. merely attending a widely attended speech.

Major Contributors to This Report

General Government Division, Washington, D.C.

Office of the General Counsel: Alan N. Belkin, Assistant General Counsel; Rachel DeMarcus, Assistant General Counsel; Jessica A. Botsford, Senior Attorney
Pursuant to a legislative requirement, GAO reviewed the reporting of lobbying activities by organizations that have employees who lobby on the organizations' behalf and have the option to report their lobbying expenses under the Lobbying Disclosure Act (LDA) of 1995 or applicable Internal Revenue Code (IRC) provisions that they use for tax purposes, focusing on: (1) the differences between the LDA and IRC section 4911 and 162(e) definitions of lobbying; (2) the impact that differences in the definitions may have on registration and reporting under LDA, including information on the number of organizations using each definition and the expenses they have reported; and (3) identifying and analyzing options, including harmonizing the three definitions, that may better ensure that the public disclosure purposes of LDA are realized. GAO noted that: (1) the LDA definition covers only contacts with federal officials; (2) the IRC definitions cover contacts with federal, state, and local officials as well as attempts to influence the public through grassroots lobbying; (3) the definitions differ in their coverage of contacts with federal officials, depending on whether the contact concerns a legislative or nonlegislative matter; (4) the differences in the lobbying definitions can affect whether organizations register under LDA; (5) an organization that engages or expects to engage in certain lobbying activities during a 6-month period, including incurring at least $20,500 in lobbying expenses, is required to register under LDA; (6) the definition an organization uses in calculating its lobbying expenses determines the expenses it counts toward the $20,500 threshold; (7) when using the LDA definition would result in expenses of more than $20,500, an organization may be able to use the applicable IRC definition to keep its lobbying expenses below $20,500 or vice versa; (8) the lobbying definition an organization uses affects the information it must disclose on its semiannual lobbying report; (9) when using an IRC definition, an organization must report its total lobbying expenses for all activities covered by that definition; (10) however, all of these expenses are reported in one total amount, so the lobbying reports do not indicate the amount related to different levels of government and types of lobbying activities; (11) when organizations report information other than expenses, they are required to report only information related to federal government lobbying, regardless of whether they use the LDA definition or one of the IRC definitions to calculate expenses; (12) because of the differences in definitions, information disclosed on lobbying reports filed by organizations using the IRC definitions is not comparable to information on reports filed by organizations using the LDA definition; (13) under the IRC definitions, organizations can disclose less information than under the LDA definition; (14) of the organizations that lobbied on their own behalf and had the option of using an IRC definition for reporting expenses under LDA, most used the LDA definition; (15) the organizations that reported using the IRC section 162(e) definition had the highest mean and median expenses; and (16) because the differences among the three lobbying definitions can significantly affect who registers and what they report under LDA, the use of the IRC definitions can conflict with LDA's public disclosure purpose.
Background The Office of Personnel Management, as the federal government's human capital manager, provides leadership and guidance on establishing and operating efficient federal training and development programs throughout the government. It provides advice and assistance to agencies on training and development programs so that those programs support strategic human capital investment. (See GAO, Federal Training Investments: Office of Personnel Management and Agencies Can Do More to Ensure Cost-Effective Decisions, GAO-12-878 (Washington, D.C.: Sept. 17, 2012).) OFPP has specific responsibilities related to acquisition workforce training to help ensure that the skills needed to handle the complexities of acquisition programs and contracts are maintained. The Office of Federal Procurement Policy Act requires executive agencies and the OFPP Administrator to establish management policies and procedures for the effective management of the acquisition workforce—including education, training, and career development. 41 U.S.C. § 1703. In 2005, OFPP defined the civilian acquisition workforce to include, at a minimum, professionals serving in the contracting series (GS-1102), Contracting Officers, the Purchasing series (GS-1105), Program/Project Managers (P/PM), and Contracting Officer's Representatives (COR) or the equivalent, as well as additional positions identified by the agency. 41 U.S.C. § 1703(c)(2). Participation in the acquisition workforce may be on a full-time, part-time, or occasional basis. The Federal Supply Schedules program consists of contracts awarded by the General Services Administration or the Department of Veterans Affairs for similar or comparable goods or services, established with more than one supplier, at varying prices. Federal Acquisition Regulation (FAR) § 8.401 and § 8.402. The Schedules offer a large group of commercial products and services ranging from office supplies to information technology services. Governmentwide Acquisition Contracts (GWAC) are considered multi-agency contracts but, unlike other multi-agency contracts, are not subject to the same requirements and limitations, such as documentation that the contract is in the best interest of the government as set forth under the Economy Act. GWACs are contracts for information technology established by one agency for government-wide use that are operated (1) by an executive agent designated by the Office of Management and Budget pursuant to 40 U.S.C. § 11302(e) or (2) under a delegation of procurement authority issued by the General Services Administration (GSA). FAR § 2.101. The Clinger-Cohen Act of 1996 authorized GWACs to be used to buy information technology goods and services (codified at 40 U.S.C. § 11314(a)(2)). All agencies can order goods and services directly through the Schedules contracts and GWACs. FAI, which is overseen by a Board of Directors, works closely with OFPP as well as the DAU to collaborate on many issues involved with training and developing the federal acquisition workforce. The Defense Acquisition Workforce Improvement Act (Pub. L. No. 101-510, § 1202) required the Secretary of Defense to establish and maintain DAU in order to provide training for the DOD acquisition workforce (10 U.S.C. § 1746). FAI differs notably from DAU, however, in that it shares much of the responsibility for training the civilian acquisition workforce with the agencies and has fewer budgetary and staffing resources. For example, FAI currently has 12 staff, while DAU has over 700 faculty and staff.
The Administrator of OFPP is also required to ensure that agencies collect and maintain standardized information on the acquisition workforce. Agencies, in consultation with OFPP, can structure their management and oversight of training for their acquisition workforce in such a manner as best supports the agency. Key agency positions involved in managing the acquisition workforce within the agencies are described in table 1. The acquisition training-related responsibilities of these positions include developing and maintaining an agency acquisition career management program to ensure an adequate professional acquisition workforce; participating in the strategic planning and performance evaluation process to develop strategies and specific plans for training to rectify deficiencies in the skills and abilities of the acquisition workforce; reporting, in consultation with the Acquisition Career Manager, acquisition workforce data to the Human Capital Office for the agency's human capital plan; providing management direction of the agency's procurement system (in some agencies, CAOs oversee or assume this role to help ensure implementation of agency acquisition workforce policy); identifying training and development needs of the acquisition workforce; and leading the agency's acquisition career management program to ensure that the acquisition workforce meets the skills and capabilities required in OMB, OFPP, and agency policies. Agencies can obtain training for their acquisition workforces from three sources: (1) agency-sponsored training provided by the agency, which may be taught using in-house staff or contractor employees; (2) government entity training provided by another federal agency, FAI, or the DAU; and (3) vendor training provided by a commercial sector company, which is generally available to multiple agencies at the same time. For the purposes of this report we refer to government entity and vendor training as external sources. OFPP and FAI Provide Policy, Guidance, and Assistance to Help Improve Government-wide Training Efforts OFPP sets standards and policies for the training and development of the federal acquisition workforce. Its efforts include strengthening workforce planning requirements and setting standards for core acquisition training by establishing certification requirements. FAI conducts activities that support and assist civilian agencies in training and development of their acquisition workforces. FAI efforts include: improving the collection and management of training information, including cost data and course evaluations; streamlining the communication of acquisition training guidance; and coordinating efforts to maximize acquisition workforce training investments government-wide. OFPP Promotes Government-wide Practices for Acquisition Workforce Training and Development The Federal Acquisition Certification requirements were issued in 2005 for contracting professionals and in 2007 for Contracting Officer's Representatives and Program/Project Managers. OFPP uses the information provided in the agencies' plans to develop a government-wide snapshot of the acquisition workforce and to identify practices for sharing throughout the government. Table 2 provides an overview of OFPP's legislative responsibilities related to acquisition workforce training. These responsibilities include supporting training for the acquisition workforce (41 U.S.C. § 1201(a)(1)). FAI reported that while its spending plan for fiscal year 2012 could provide $3 million for courses, training requests submitted by the agencies totaled in excess of $18 million.
FAI is working with a number of vendors who provide acquisition workforce training to establish government contracts that would provide uniform costs for standardized training courses needed for the FAC programs. FAI plans to have contracts in place in 2013 that would be available for agencies to use to provide fiscal year 2014 training. Assisting federal agencies with their acquisition human capital planning. FAI offers guidance and direction on human capital planning through its instructions for agencies' annual AHCP submissions. Beginning in 2010, OFPP has required agencies to submit an annual AHCP each March that provides the agency's strategies and goals for increasing both the capacity and capability of the acquisition workforce. To assist the agencies in preparing each submission, FAI provides them with a report template that includes specific topics to be included. We observed that the fiscal year 2012 AHCP submissions provided information on the agencies' goals for strengthening the acquisition workforce, including certification goals, and planned training initiatives for the current and future fiscal years. (See 41 U.S.C. § 1201(a)(11) and (a)(3).) FAI recently upgraded the government-wide acquisition career information management system it maintains for all agencies. In 2011, FAI replaced the Acquisition Career Management Information System (ACMIS) with the FAI Training Application System (FAITAS), which allows registration for courses and tracking of individuals' training records and provides other information management tools. For example, agencies can now manage the certification approval process, track an individual's continuous learning progress, and search for courses offered by FAI and other agencies that use FAITAS to enroll participants in their courses. FAI reports that five agencies currently use the FAITAS registration function to alert other agencies to unfilled seats within a specific agency's training course. Additional features of the system include a course scheduling module; a business intelligence tool for agencies to identify training locations and course availability; and a communication tool for broader outreach to the acquisition workforce. For example, an OFPP official explained that FAITAS can be used to send e-mail updates on OFPP policy, guidance, or other OFPP or FAI initiatives to all individuals registered in the system. FAI conducts periodic surveys of agencies to collect data about the acquisition workforce for OFPP. For example, FAI conducts the bi-annual Acquisition Workforce Competency Survey to collect information on agency acquisition workforces' skills and abilities and to identify competency gaps within and across agencies. FAI officials explained that the survey results provide a government-wide view of which courses are in demand, which informs their efforts to achieve economies of scale when providing training. FAI reports an annual count of the acquisition workforce as it has done for more than the past decade; however, its count has been limited to professionals serving in four job series—General Business and Industry (GS-1101), Contracting (GS-1102), Purchasing (GS-1105), and Procurement Clerical and Assistance (GS-1106). FAI began gathering data on CORs and P/PMs as part of the acquisition workforce count in fiscal year 2007.
In its recent report on the acquisition workforce count, FAI reported that 33,271 personnel were employed in the acquisition workforce's four main job series in fiscal year 2010, and an additional 47,959 and 4,186 personnel were identified as CORs and P/PMs, respectively, by the agencies in their AHCPs—which represents about 85,000 individuals in the civilian acquisition workforce for fiscal year 2010. FAI does not directly collect data for the workforce count and has acknowledged that agencies have some problems identifying CORs and P/PMs but that the data have been improving. Facilitating interagency intern and training programs. According to officials, FAI also conducts outreach and leverages training courses government-wide. In 2012, FAI met with officials from each agency to discuss training efforts and locate training spaces that could be used by FAI and others when supplying courses through vendors. FAI officials reported that these agency visits provided insights regarding how the agencies' training programs varied in terms of their organization and resources and identified duplicative training efforts. In addition, FAI provides opportunities to share information across the acquisition workforce community. FAI began providing webinars on current acquisition issues in fiscal year 2012, and launched a newsletter for the acquisition workforce community in December 2012. According to officials, the newsletter will provide more in-depth information on policy changes, human capital initiatives, and tools and technology enhancements. FAI hosts, coordinates, and participates in roundtables at the Interagency Acquisition Career Management Committee meetings to foster discussion among agencies and share information on their training or educational challenges and needs. According to FAI officials, these meetings provide opportunities to share leading practices, identify challenges, and discuss potential initiatives. Recent discussion topics included plans to hold a competency-based certification workshop, rollout of additional FAITAS workforce management tools, and results of FAI's bi-annual Acquisition Workforce Competency Survey. In July 2012, FAI established the Federal Acquisition Council on Training to help share information on agencies' training efforts and standardize acquisition training throughout the government. According to FAI officials, the council's goal is to help ensure no acquisition workforce training seat, whether offered by FAI or another agency, goes unfilled. Agencies are asked to open up unfilled training seats to other agencies whenever available, and to use FAITAS as the official course registration system to communicate available training seats and enroll participants for courses to reduce overhead and administrative costs for agencies. Periodically analyzing acquisition career fields and evaluating the effectiveness of training and career development programs for acquisition personnel (41 U.S.C. § 1201(a)(4) and (a)(7)). According to officials, FAI has efforts underway to standardize the end-of-course participant evaluations administered by the vendors who provide FAI-sponsored courses. According to FAI officials, course participants will not receive their course completion certifications until they complete the course evaluation. Currently FAI does not analyze course evaluations administered by the vendors; however, officials commented that the use of standardized evaluations could enable them to work with vendors to improve the consistency of the information provided in training courses.
A pilot is underway in which standard evaluations are being used for vendor-taught FAI courses provided to one agency, and FAI plans to extend the pilot to other agencies early in 2013. Agencies Generally Use External Sources for Acquisition Training and Face Similar Workforce Training Challenges Most agencies approach acquisition workforce training through classroom courses taught by external sources—vendors, FAI, DAU, or other agencies. While all agencies have meeting spaces for training, three operate permanent centers with dedicated resources that train the agency's acquisition workforce. The agencies' current training focus is to provide courses through which their acquisition workforces may attain or maintain their FAC certifications. Agencies reported facing several challenges in providing acquisition-related training. The areas reported as being the most challenging are related to staffing and budgetary resources. Some agencies also reported challenges with the identification of their acquisition workforce, which is a fundamental step needed for managing the workforce and its training. Agencies also reported that additional assistance from OFPP and FAI would help their acquisition workforce training efforts. In addition, the agencies reported that their acquisition workforces are challenged in finding time in their workload to attend training. For more details on the approaches, budgetary and staffing resources, and other challenges faced by the agencies, see appendix I for a summary of the 23 agencies' responses to the questionnaire we administered. Agencies Generally Rely on External Training Sources Agencies provide acquisition workforce training predominantly through external sources—government entities or vendors. In fact, 17 agencies reported that the majority of their acquisition workforce training comes from having their workforce attend training provided by external sources rather than from their agency. The other government entities may provide training to an agency without seeking additional reimbursement, as is the case with FAI, DAU, and some federal agencies; however, others, such as the VA, charge a fee to attend their training to recoup their costs. (Although FAI does not seek reimbursement for individual classes, civilian agencies "pay" for the courses since they provide the funding for FAI's operations through the mandatory fee paid equal to five percent of the dollar amount of acquisitions the agency makes through the Federal Supply Schedule and the Governmentwide Acquisition Contracts.) Of the remaining 6 agencies, 5 reported that they hold agency-sponsored courses to provide the majority, if not all, of their acquisition workforce training, and one agency did not report. Some agencies use contractor personnel to instruct a portion of their agency-sponsored training. Figure 1 illustrates the sources of training that agencies reported using to provide training to their acquisition workforce in fiscal year 2011. (The Department of Transportation provided data that does not total 100 percent due to its averaging of the responses from its bureaus. The Department of Justice data is an average of the responses submitted by 4 procurement offices—headquarters and 3 bureaus.)
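As the notes above indicate, some departments arrived at their figure 1 percentages by averaging the responses of their component bureaus, which is why a department's figures may not total exactly 100 percent. The sketch below is a hypothetical illustration of that kind of roll-up; the bureau names and percentages are invented.

```python
# Hypothetical illustration of how a department might average bureau-level questionnaire
# responses on training sources, as some departments did for figure 1. Bureau names and
# percentages are invented; rounded averages may not total exactly 100 percent.

bureau_responses = {
    "Bureau A": {"agency-sponsored": 20, "government entity": 45, "vendor": 35},
    "Bureau B": {"agency-sponsored": 24, "government entity": 40, "vendor": 36},
    "Bureau C": {"agency-sponsored": 20, "government entity": 45, "vendor": 35},
}

sources = ("agency-sponsored", "government entity", "vendor")
department_average = {
    source: round(sum(resp[source] for resp in bureau_responses.values()) / len(bureau_responses))
    for source in sources
}

print("Department-level average (percent):", department_average)
print("Total:", sum(department_average.values()))  # 99 here, not 100, because of rounding
```

Reporting unrounded averages, or rescaling the rounded values so they sum to 100 percent, would avoid the discrepancy.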
A leading training investment practice involves agencies taking steps to identify the appropriate level of investment to provide for training and development efforts and to prioritize funding so that the most important training needs are addressed first (GAO-12-878). The four agencies we selected illustrate that agencies use different approaches to manage and provide acquisition workforce training, due to such factors as the size of their workforce, the need for certification training, and the resources dedicated to training. Three agencies—DHS, Treasury, and VA—chose to have permanent training facilities dedicated to providing courses to the acquisition workforce. These three agencies differed, however, in the percentage of courses they provided to meet their acquisition workforce needs. DHS manages all of its acquisition workforce training at the department level and recently expanded the number of courses it provides. However, part of DHS’s strategy is to make use of DAU and FAI courses that address its needs, which can be obtained at no additional charge. VA also manages all of its training at the department level and in 2008 established a training academy, the VA Acquisition Academy, to provide all acquisition-related courses to agency personnel who need them. In contrast, Treasury built its training program on an existing training institute within the Internal Revenue Service, which employs the majority of Treasury’s acquisition workforce, to provide agency courses to all of its bureaus and to other agencies. Of the three agencies with dedicated training facilities, VA is the only one that provides agency-sponsored training for 100 percent of its acquisition training courses. VA, DHS, and Treasury all reported using contractors to instruct the agency-provided training at their respective facilities. Education does not have a dedicated facility and reported that it provides only about 10 percent of its acquisition workforce training via agency-sponsored courses, obtaining about 90 percent of needed training through external sources. Agencies reported that a variety of factors, including the effectiveness of the training, influence how they select sources to meet their training needs. In their responses to our questionnaire, however, the agencies did not explain how effectiveness is determined. As we discuss later in this report, based on a subsequent data call, we found that the agencies have limited insight into the effectiveness of their acquisition workforce training courses. Figure 2 illustrates the factors and number of agencies that reported them as influencing their training source selections. Agencies’ current focus is to provide courses that allow their acquisition workforces to attain or maintain their FAC certification. In response to our questionnaire, most agencies—17 of 23—reported that they are able to find sufficient courses to do this. In their AHCP submissions for fiscal year 2011, most agencies (15) reported certification rates of more than 75 percent for their contracting staff, with 3 agencies reporting that over 90 percent of their contracting staff were certified. The majority of the 23 agencies reported that more than 90 percent of their CORs and P/PM staff had obtained their basic FAC certifications—with many reaching 100 percent. Agency officials explained that certification rates fluctuate due to changes in the workforce—including experienced staff leaving the agency and less experienced staff being in the process of obtaining certifications. Table 3 shows the certification rates reported by agencies for fiscal year 2011.
Officials from three of the four selected agencies—DHS, Treasury, and VA—explained that they prioritize their training funding based on the need for acquisition staff to be certified. Education, where the majority of the acquisition workforce has already attained certification, reported that it is focusing on courses that help staff maintain certifications or develop expertise. Agencies Face Challenges with Having Adequate Training Resources, Identifying the Acquisition Workforce, and Staff Having Time to Attend Training The top challenges reported by agencies in obtaining training for their acquisition workforces were having sufficient resources—both staffing to manage the training program and budgetary resources—to provide training. Specifically, 20 agencies reported that obtaining adequate funding is a challenge, and 19 reported that obtaining sufficient staff to manage acquisition workforce training is a challenge. As for determining the appropriate level of investment, we have previously reported that when assessing opportunities for training, the agency should consider the competing demands confronting the agency, the limited resources available, and how these demands can best be met with available resources. Figure 3 provides a summary of the number of agencies that reported challenges involved in obtaining sufficient acquisition workforce training. Although almost all agencies reported that obtaining an adequate level of funding is challenging, fewer than half—10 of 23—reported that their current acquisition training budgets are insufficient to meet their training needs. The acquisition workforce training budgets reported by 22 of the agencies for fiscal year 2012 ranged from $0 to $39,598,000, with the largest budget reported by an agency that operates its own acquisition workforce training facility. These budgets correspond to acquisition workforces that ranged in size from 186 to 11,867 employees in fiscal year 2010 and that handled from about 900 to 1.4 million contracting actions, with obligations ranging from $132 million to $25 billion, in fiscal year 2011. Almost all the agencies—20 of 23—reported that they would like additional FAI assistance, such as more course offerings or additional topics, to help their acquisition workforce members attain federal certification. In addition, to accommodate travel restrictions and budget constraints, some agencies would like FAI to provide more virtual courses for personnel who are geographically dispersed. FAI has some efforts underway that may partially address these issues, but because those efforts are just beginning, their results are still uncertain. FAI officials noted that while agencies want more offerings, FAI still has under-attended classes that have to be canceled. FAI officials also commented that developing online courses can be very expensive and that the cost of keeping course content current is often overlooked. Almost half of the agencies also reported that identifying members of their acquisition workforce is a challenge, in part because acquisition personnel are dispersed throughout many organizations, come from a variety of career fields, and are often involved in acquisitions as a secondary rather than a primary duty (GAO-11-892). The agencies’ Acquisition Career Managers told us that they face similar challenges in identifying CORs and other personnel with acquisition-related responsibilities. Three of the four agencies selected for further insight—DHS, Education, and VA—acknowledged that they continue to be challenged by identifying some members of their acquisition workforces.
A majority of the agencies—15 of 23—reported that they would like additional assistance from OFPP to improve their acquisition workforce training in either of two ways. Thirteen agencies supported the idea of OFPP creating separate job series for additional acquisition workforce categories, such as P/PMs. Twelve agencies supported the idea of OFPP maintaining a government-wide database to identify and track members of the acquisition workforce. FAI and OFPP officials said that they are sympathetic to agency concerns about identifying the total acquisition workforce. As for creating separate job series to more easily identify members of the acquisition workforce, OFPP notes that any changes would need to be made by the Office of Personnel Management and that some acquisition positions, such as CORs, may not lend themselves to becoming a unique job series because the work is performed as a collateral duty. As for maintaining a government-wide database to identify and track acquisition workforce members, FAI officials noted that agencies are encouraged to use the FAITAS registration system to track their workforce’s individual training records and certifications. However, registration in the system is currently voluntary. Agencies also reported that their acquisition workforces are challenged in finding time in their workload to attend training. Most agencies—14 of 23—considered finding time to be extremely or very challenging. Figure 4 provides a summary of the number of agencies that reported challenges to their staff accessing and attending acquisition-related training. Lack of Comparable Cost Data and Limited Insights on Benefits of Training Hinder Efforts to Maximize Resources Government-wide Agencies collect some training cost data and limited information about the benefits of their acquisition workforce training. Based on responses to our questionnaire, a supplemental data request, and discussions with agency and FAI officials, we found that many agencies do not collect data on the costs of training provided to their acquisition workforce that can be used to inform agency and government-wide training resource investment decisions. In addition, some agencies do not have metrics to assess the effectiveness of their training. Government-wide Training Cost Data Reported Are Not Comparable Although we requested training cost data from the agencies through our questionnaire and a supplemental data request, the data eventually provided by the agencies included different cost components, which did not lend themselves to comparative analysis. For example, some agencies provided data on the cost per seat of specific courses, while others provided the total costs for delivery of each course. Some agencies provided the costs associated with the development of an in-house course, while others included development costs with delivery costs as a total cost of obtaining the course from a vendor. Although the data do not allow direct comparisons among agencies, the data show a range of costs for similar courses. For example, five agencies provided a per-seat cost for a COR Refresher course, which ranged from $97 to $363 per seat. In addition, five agencies provided a per-seat cost for a Cost Analysis and Negotiation Techniques course, which ranged from $282 to $925 per seat. However, a number of factors can produce variation in the agency-reported costs, and these factors, along with differences in the agencies’ data collection methods, limit the ability to make government-wide decisions on acquisition workforce training investments.
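The comparability problem can be illustrated with a brief sketch. The dollar figures, fill rates, and cost components below are hypothetical assumptions chosen for illustration only, not data reported by any agency; the sketch simply shows how two agencies paying an identical vendor fee can report very different per-seat costs depending on which components they bundle in and how many seats they fill.

```python
# Illustrative only: hypothetical numbers showing why per-seat training costs
# reported with different cost components are not directly comparable.

def per_seat_cost(delivery_cost, seats_filled, development_cost=0.0,
                  travel_cost=0.0, facilities_cost=0.0):
    """Return a per-seat cost for one course offering.

    An agency that bundles development, travel, or facilities costs into the
    figure will report a higher number than an agency reporting delivery
    costs alone, even for an otherwise identical course.
    """
    total = delivery_cost + development_cost + travel_cost + facilities_cost
    return total / seats_filled


# Agency A reports only the vendor delivery fee, spread across a full class.
agency_a = per_seat_cost(delivery_cost=4_850, seats_filled=50)

# Agency B reports the same notional vendor fee but folds in one-time
# development and facilities costs and fills only half as many seats.
agency_b = per_seat_cost(delivery_cost=4_850, seats_filled=25,
                         development_cost=3_000, facilities_cost=1_200)

print(f"Agency A per-seat cost: ${agency_a:,.0f}")  # about $97
print(f"Agency B per-seat cost: ${agency_b:,.0f}")  # about $362
```

Without common definitions of which components to include, even identical courses can appear to differ several-fold in reported cost.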
According to agency officials, the costs of individual acquisition workforce training courses can vary greatly among agencies due to a variety of factors, including the number of times per year the course is provided, actual attendance numbers, the location of the training, and whether courses are tailored with agency-specific information. FAI officials reported that they also received limited responses to their request for training cost data in the agencies’ latest AHCPs, which made it impossible for them to complete their planned comparative analyses. The instructions for the FAI data request did not include definitions of the cost components to be reported, which are important to help ensure consistent reporting. Due to the limited response on cost data, FAI initiated a subsequent data call. As of November 2012, FAI had received responses from 20 of the 23 agencies, but again was unable to perform a comparative analysis of the cost data due to various limitations in the data it received. FAI officials noted that collecting cost data from agencies is an evolving process and that having comparable training cost data is important to help FAI in its efforts to maximize the use of acquisition workforce training dollars government-wide. FAI plans to request cost data annually with the AHCPs; however, the guidance for completing the next submission does not include definitions of the cost data to be reported, which may again yield data that cannot be compared across agencies. Improved reporting of cost data could assist FAI as it moves forward with its plans to award government-wide contracts for acquisition workforce training in 2014 by providing insight into the agencies’ training costs when they obtain training from a variety of external sources. Agencies Gather Limited Information on Benefits of Acquisition Workforce Training Investment About half of the agencies—12 of 23—did not have insight into whether their acquisition workforce training investment is improving individual skills or agency performance. In particular, in response to our questionnaire and subsequent data request, 7 of 23 agencies reported having no metrics to monitor or assess the effectiveness of their acquisition workforce training efforts, a measure of whether the training investment is improving individual skills or agency performance. Another 5 agencies, in response to our supplemental data request, did not provide information to support their use of metrics to measure the benefits of acquisition training, whether improvements in individual skills or in agency performance. We have previously reported that training programs can be assessed by measuring (1) participant reaction to the training program, (2) changes in employee behavior or performance, and (3) the impact of the training on program or organizational results, which may include a return on investment assessment that compares training costs to derived benefits. Of the 11 agencies that provided information to support their use of metrics, 3 reported using end-of-course evaluations to measure participants’ reaction, 1 reported using end-of-course tests to measure changes in employees’ knowledge, and 1 reported using post-course surveys of supervisors or participants to measure whether what was learned affected participants’ behavior. The other 6 agencies reported measures aimed at determining the impact of training on the agency’s mission.
Furthermore, DHS officials said that they plan to begin using post-course surveys of participants and supervisors in fiscal year 2013, and they are pursuing the development of additional measures to evaluate the impact of training on the agency. VA is also pursuing the development of additional metrics, such as proxy measures for return on investment, to evaluate the impact of training on the agency. Participant evaluations offer limited insight into improvements in individual and agency performance; however, we recognize that higher levels of evaluation (such as evaluating the impact on organizational performance or comparing training costs to derived benefits) can be challenging to conduct because of the difficulty and costs associated with data collection and the complexity of directly linking training and development programs to improved individual and organizational performance. Officials at some of the selected agencies told us that when a course is provided by an external source, such as FAI or DAU, they rely on the external source to provide end-of-course evaluations to the participants. As we noted earlier, 17 of 23 agencies obtain the majority of their acquisition training from external sources. In particular, some agencies noted that they do not collect or assess the end-of-course evaluations, relying solely on the external source to use the evaluations as it believes appropriate. FAI officials noted that if the agencies do not review the evaluations themselves, they are missing the opportunity to ensure that the courses are effective in training their acquisition personnel. We have previously reported that it is increasingly important for agencies to be able to evaluate their training programs and demonstrate how these efforts help develop employees and improve the agencies’ performance because doing so can aid decision makers in managing scarce resources and provide credible information on how training has affected individual and organizational performance (GAO-12-878 and GAO-04-546G). Although FAI has an initiative underway to standardize the evaluations provided for its courses, its impact may be limited if agencies do not obtain the evaluation results and use them to evaluate the effectiveness of the training. Conclusions Although OFPP and FAI have a number of efforts underway, such as working to ensure that every training seat is filled, to assist agencies in meeting OFPP requirements and leveraging federal government training resources, these efforts face significant obstacles. Agencies, FAI, and OFPP lack data on the acquisition workforce itself and the benefits of training, as well as cost data that can be used to make comparisons. These limitations hinder government-wide efforts to share information during these times of constrained budgets. OFPP, FAI, and the agencies need basic information on how much agencies are spending to train the acquisition workforce. Providing definitions and guidance about the elements of costs agencies should include in their annual acquisition human capital plans will help OFPP and FAI in their efforts to collect comparable data across agencies. FAI can then analyze and share information to help agencies make choices regarding how best to dedicate resources to effectively train the acquisition workforce. Additionally, comparable cost data will be helpful to inform FAI efforts to award government-wide contracts for standard acquisition workforce certification training.
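To make concrete what such definitions and guidance might enable, the sketch below outlines one possible standardized cost-reporting structure. It is purely illustrative: the element names are drawn from cost components mentioned in this report (development, delivery, facilities, travel), the figures are hypothetical, and the structure is not an actual OFPP or FAI data definition.

```python
# Illustrative only: one possible way agencies could itemize the same cost
# elements for each course so the results can be aggregated and compared.
# Element names follow components discussed in this report; they are not an
# official OFPP or FAI definition.
from dataclasses import dataclass, asdict


@dataclass
class CourseCostReport:
    agency: str
    course_title: str
    offerings_per_year: int
    seats_filled: int
    development_cost: float   # one-time course development, if agency-built
    delivery_cost: float      # vendor or instructor fees for delivery
    facilities_cost: float    # dedicated training space, if any
    travel_cost: float        # participant travel charged to the program

    def total_cost(self) -> float:
        return (self.development_cost + self.delivery_cost
                + self.facilities_cost + self.travel_cost)

    def cost_per_seat(self) -> float:
        return self.total_cost() / self.seats_filled


# Example submission using hypothetical figures.
report = CourseCostReport(
    agency="Agency X", course_title="COR Refresher",
    offerings_per_year=4, seats_filled=120,
    development_cost=0.0, delivery_cost=18_000.0,
    facilities_cost=2_400.0, travel_cost=3_600.0,
)
print({**asdict(report), "cost_per_seat": round(report.cost_per_seat(), 2)})
```

If every agency itemized the same elements, FAI could compute and compare per-seat costs directly rather than reconciling figures assembled on different bases.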
Currently, the main focus of monitoring and tracking acquisition workforce training efforts in agencies is on completion of training to attain and maintain the FAC certifications. Although the higher levels of evaluation (such as evaluating the impact on organizational performance or comparing training costs to derived benefits) can be challenging to conduct due to costs and complexity, agencies should, at a minimum, evaluate participant reaction to the training program through end-of-course evaluations. However, some agencies lack even this basic measure because the evaluations currently being administered go to the provider of the class rather than to the agency paying for the training. Therefore, these agencies have little insight into how the workforce perceives the training they received. We note that FAI has initiatives underway to standardize the course evaluations for the courses it provides and to make completion of the course evaluations mandatory to receive course credit. At present, this effort applies only to FAI-sponsored courses, does not cover courses that agencies obtain from other providers, and does not guarantee that agencies will analyze the results; it is important for all agencies to collect and analyze this basic information. Recommendations for Executive Action To help ensure that agencies collect and report comparable cost data and perform a minimal assessment of the benefits of their acquisition training investments to aid in the coordination and evaluation of the use of resources government-wide, we recommend that the Director of the Office of Management and Budget direct the Administrator of the Office of Federal Procurement Policy, in consultation with the Director of the Federal Acquisition Institute, to take the following two actions: provide further guidance, including definitions, on the types of costs that agencies should include in their Acquisition Human Capital Plan submission to help determine total training investment; and require all agencies, at a minimum, to collect and analyze participant evaluations of all acquisition workforce training as a first step to help assess the effectiveness of their training investment. Agency Comments and Our Evaluation We provided a draft of this report to OFPP, FAI, and the selected agencies—DHS, Education, Treasury, and VA. OFPP and FAI concurred with our recommendations in oral comments and e-mail responses. Education and Treasury responded via e-mail with no comments. DHS and VA provided technical comments, which we incorporated as appropriate. In oral comments, OFPP and FAI agreed with our recommendations and noted that they have begun drafting guidance to federal agencies. Consistent with our report, OFPP and FAI emphasized the importance of acquisition workforce management tools that improve the government’s ability to leverage acquisition resources, especially during this time of budgetary constraints. These officials highlighted FAITAS, which can serve as a workforce management tool and training application system. Currently, use of this system is voluntary, and some agencies use the system to enroll participants in FAI courses, communicate available training seats within a specific agency’s training course, and track their workforce’s individual training records and certifications. Recognizing the potential benefits of this system in helping to coordinate and evaluate the use of training resources government-wide, OFPP stated that it is considering making FAITAS reporting mandatory for civilian agencies.
As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Administrator of OFPP, the Director of FAI, and the Secretaries of DHS, Education, Treasury, and VA. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix III. Appendix I: Summary of Agencies’ Response to GAO Questionnaire on Civilian Acquisition Workforce Training In July 2012, GAO administered a government-wide survey of the 23 civilian CFO Act agencies (31 U.S.C. § 901) designed to gather information regarding the training approaches and methods, staffing and budgetary resources, and costs associated with providing acquisition training—including the operation of facilities dedicated to training the acquisition workforce. We received responses from 100 percent of the agencies, although not every agency responded to each question. A summary of the number of agencies that responded to each question is provided below. We removed the open-ended narrative responses but included the ranges of open-ended data. Details of our survey methodology are provided in appendix II. The objectives of this review were to evaluate the training approaches of federal agencies, other than the Department of Defense, for the acquisition workforce. Specifically, this report addresses (1) the role of the Office of Federal Procurement Policy (OFPP) and the Federal Acquisition Institute (FAI) in promoting Federal Acquisition Certification (FAC) standards and assisting agencies in meeting acquisition workforce training requirements; (2) the approaches agencies use to provide training to their acquisition workforces; and (3) the extent to which agencies track information on the costs and benefits of their acquisition training. To identify the roles of OFPP and FAI in promoting FAC standards, we analyzed legislation on their authority, the OFPP acquisition workforce strategic plan, and relevant guidance provided by them to agencies. We interviewed officials at OFPP and FAI about efforts to (1) provide oversight and assist agencies in meeting training requirements; (2) leverage government-wide resources; and (3) share leading practices. Since FAI’s initiatives are in various stages of implementation, we did not evaluate their effectiveness; however, in several instances, we note our initial observations of the efforts. To identify the approaches agencies use to provide training to their acquisition workforces, we administered the questionnaire described above to the 23 civilian CFO Act agencies and used a standard set of questions to obtain data about their training approaches, including the sources for course offerings (in particular, whether courses were provided by the agency, a commercial sector vendor, FAI, the Defense Acquisition University, or another agency), facilities, budgetary and staffing resources, methods of training, and challenges faced in providing acquisition training. Because this was not a sample survey, it has no sampling errors. However, the practical difficulties of conducting any survey may introduce errors, commonly referred to as nonsampling errors.
For example, difficulties in interpreting a particular question, differences in the sources of information available to respondents, or errors in entering data into a database or analyzing them can introduce unwanted variability into the survey results. We took steps in developing the questionnaire, collecting the data, and analyzing them to minimize such nonsampling error. After we drafted the questionnaire, we asked for comments from federal officials knowledgeable about acquisition workforce and training issues and from independent GAO survey professionals. We conducted pretests to check that (1) the questions were clear and unambiguous, (2) terminology was used correctly, (3) the questionnaire did not place an undue burden on agency officials, (4) the information could feasibly be obtained, and (5) the survey was comprehensive and unbiased. We chose the two pretest sites to include perspectives from agencies with robust acquisition workforce training programs and training facilities. We made changes to the content and format of the questionnaire after both reviews and after each of the pretests, based on the feedback we received. On July 5, 2012, we sent the questionnaire by e-mail as an attached Microsoft Word form that respondents could return electronically after marking checkboxes or entering responses into open answer boxes. We sent the questionnaire jointly to the Chief Acquisition Officer, Acquisition Career Manager, and Chief Learning Officer of each agency, asking for a consolidated agency response. After two weeks, we sent a reminder to everyone who had not responded. All questionnaires were returned by August 10, 2012. We received questionnaire responses from 100 percent of the agencies, although not all agencies answered every question. The Department of Justice provided four responses—one from its headquarters procurement office and three from bureau procurement offices. We consolidated the information from the four surveys to ensure the tabulations eliminated duplicative counting of the agency and to provide averaged data, as appropriate. For questions about the agency’s opinions on the challenges faced, we deferred to the response of the headquarters procurement office. To confirm our understanding of the variety of issues addressed and obtain additional insights on the agencies’ responses to our questionnaire, we discussed our preliminary results with representatives from the 23 agencies at meetings of the Chief Acquisition Officers Council, the Interagency Acquisition Career Management Committee, and the Chief Learning Officers Council. Table 4 provides a summary of the functions of these councils as related to government-wide efforts for training the acquisition workforce. We also reviewed the agencies’ annual Acquisition Human Capital Plans (AHCP), due March 31, 2012, to collect information on certification rates and to corroborate information agencies provided in response to the questionnaire on acquisition workforce size and training approaches. To determine the information agencies track on the costs and benefits of the training provided, we supplemented the questionnaire responses by soliciting additional cost and metrics data from the agencies using a data collection instrument. We developed our data collection instrument based on cost information provided by agencies in response to our questionnaire.
On October 15, 2012, we sent agency-tailored data collection instruments by e-mail as attached Microsoft Excel spreadsheets that respondents could return electronically after updating or providing new data. We pre-populated each agency’s spreadsheet with any data previously provided by the agency, and sent the data collection instrument to the Acquisition Career Manager and the official who provided the response to the questionnaire. After two weeks, we sent a reminder to everyone who had not responded. The data collection instruments were returned by November 28, 2012. We received data collection instruments from 91 percent of the agencies. To gather illustrative examples and more detailed explanations regarding training approaches and data tracked related to costs and benefits, we selected four agencies—the Departments of Education (Education), Homeland Security (DHS), the Treasury (Treasury), and Veterans Affairs (VA)—for further review. DHS, Treasury, and VA are agencies that operate dedicated acquisition workforce training facilities—permanent centers with dedicated resources that provide training specifically for the agency’s acquisition workforce. Education, which is one of the smallest agencies, has no dedicated facility and has reported issues with access to acquisition workforce training courses. We interviewed agency officials to determine the extent to which they were aware of and using the leading training investment practices that the Office of Personnel Management and experts agreed should be implemented by agencies to support effective training investment decisions. Table 5 provides a summary of these leading practices. Of the eight leading practices GAO identified, we focused on the four practices dealing primarily with determining the costs and effectiveness of training: (practice 1) identifying the appropriate level of investment to provide for training and development efforts and prioritizing funding so that the most important training needs are addressed first; (practice 4) having criteria for determining whether to design training and development programs in-house or obtain these services from a contractor or other external source; (practice 6) tracking the cost and delivery of its training and development programs agency-wide; and (practice 7) evaluating the benefits achieved through training and development programs, including improvements in individual and agency performance. We conducted this performance audit from February 2012 to March 2013, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix III: GAO Contact and Staff Acknowledgements GAO Contact Staff Acknowledgements In addition to the contact named above, Penny Berrier (Assistant Director); R. Eli DeVan; Mya Dinh; Jean McSween; Aku Pappoe; Kenneth Patton; Carol D. Petersen; Sylvia Schatz; and Suzanne Sterling made key contributions to this report.
The acquisition workforce manages and oversees billions of dollars in acquisition programs and contracts to help federal agencies get what they need, at the right time, and at a reasonable price; therefore, it is important that agencies provide adequate training to this workforce. In this review, GAO identified (1) the role of OFPP and FAI in assisting agencies in meeting certification requirements; (2) agencies' approaches to providing training; and (3) the extent to which agencies collect information on the costs and benefits of their acquisition training. To determine OFPP's and FAI's roles, GAO analyzed relevant legislation. GAO obtained information from 23 federal agencies on their training approaches through a questionnaire, and selected 4 agencies--the Departments of Education and the Treasury, DHS, and VA--to provide illustrative examples. GAO used its questionnaire and a subsequent data call to obtain information on how agencies collect information on the costs and benefits of their training. The Office of Federal Procurement Policy (OFPP) sets standards and policies for the federal acquisition workforce, and has established certification requirements, including minimum training, for the three main acquisition roles--contracting staff, Contracting Officer's Representatives, and Program/Project Managers--to promote the development of government-wide core acquisition competencies and facilitate mobility across agencies. DOD follows separate certification standards. The Federal Acquisition Institute (FAI), which is responsible for fostering and promoting the training and development of the acquisition workforce, works closely with OFPP and has initiatives underway to improve the collection and management of training information, including cost data and course evaluations; streamline communication of acquisition training guidance; and coordinate efforts to leverage acquisition workforce training resources throughout the government. To support efforts for the acquisition workforce to attain and maintain federal certification requirements, most agencies (17 of 23) provide the majority of their acquisition training using external sources--vendors, FAI, the Defense Acquisition University, or other agencies. The Departments of Homeland Security (DHS), the Treasury, and Veterans Affairs (VA) operate their own permanent centers with dedicated resources that train the agency's acquisition workforce. Education reported using a different approach to providing acquisition training--it gets most of its training from other government entities. Federal agencies face similar challenges in providing training to their acquisition workforce. The top challenges reported by agencies in obtaining training for their acquisition workforce involved having sufficient resources. Twenty of 23 agencies identified obtaining adequate funding, and 19 of 23 identified obtaining sufficient staff to manage training, as challenging. In addition, almost half of the agencies reported that the fundamental step of identifying the acquisition workforce is a challenge, especially when members of the workforce are involved in acquisitions as a secondary rather than a primary duty. The training cost data that agencies collect are not comparable, and agencies have limited information on the benefits of their acquisition workforce training.
Although almost all agencies provided some cost data in response to GAO's questionnaire and subsequent data call, the agencies' cost data did not allow for government-wide assessment of their training investment. To date, FAI's efforts to collect training cost data have also met with limited success. Cost data collected in 2012 by GAO and FAI included different cost components--such as facilities, travel, and instructors--which do not allow for government-wide analysis. Having comparable training cost data is important to inform FAI efforts to establish government-wide contracts for training. As for determining the benefits of training, 7 of 23 federal agencies reported having no metrics, not even basic end-of-course evaluations. Without basic data, agencies do not have insight into the benefits of their acquisition workforce training efforts.
Background At Jomtien, Thailand, in March 1990, representatives of the global education community held the “World Conference on Education for All” and adopted a declaration on universal access to education as a fundamental right of all people. In April 2000, the “World Education Forum” met in Dakar, Senegal. Delegates from 181 nations adopted a framework for action committing their governments to achieve quality basic education for all—including ensuring that by 2015, all children—especially girls, children in difficult circumstances, and those from ethnic minorities—have access to completely free primary education of good quality. Also in early 2000, the U.S. ambassador to the U.N. Food Agencies in Rome proposed that the United States, within the U.N. framework, take the lead in organizing a worldwide school lunch program. The purpose would be to provide a meal every day for every needy child in the world. Doing so, the ambassador said, would attract children to school and keep them there under conditions in which they are able to learn and grow. The United States would pay 25 percent of the cost, and other donor nations would pay the rest. The United States would benefit, since Americans produce more food than they can eat or profitably sell and since most of the U.S. contribution would be in the form of agricultural commodities and thus would strengthen the market for cereal grain, dairy products, and livestock. According to the ambassador, other farm surplus countries such as France, Canada, and Australia would benefit as well. In late May 2000, President Clinton met with the ambassador to discuss the idea and asked senior advisers to prepare an analysis of how the United States might participate. In early July 2000, the advisers reported that all relevant agencies recommended that the president announce a U.S. pilot program to support the international community’s goal of achieving universal access to basic education by 2015 and the U.N.’s 10-year “Girls’ Education Initiative” to help poor countries eliminate gender disparities in educational access. The advisers recommended spending approximately $300 million in the first year on the pilot program, with levels in subsequent years dependent upon factors such as the extent of international participation and the continued availability of CCC funding. At the Okinawa Summit on July 23, 2000, the president announced the Global Food for Education Initiative and the pilot program. According to the White House press release, which was issued the day the program was announced, the purpose of the pilot program is to improve student enrollment, attendance, and performance in poor countries. These objectives were reaffirmed in USDA’s September 2000 request for proposals from cooperating sponsors and, more recently, in a December 2001 paper describing the goals, scope, and framework for action for monitoring and evaluating the pilot program. For the pilot, USDA sought proposals from various potential implementing partners, and approved 53 projects in 38 countries covering an estimated 8.3 million children. Partners include WFP and various cooperating sponsors. Among the latter are 13 PVOs and 1 foreign government (Dominican Republic). As of mid-December 2001, USDA had finalized agreements for 21 of 25 PVO projects, 26 of 27 WFP projects, and 1 project with the Dominican Republic.
The recent total estimated cost for all of the pilot projects was $227.7 million, allocated as follows: WFP projects, $92.5 million; PVO projects, $121.1 million; and the government of the Dominican Republic, $14.1 million. The total is $72.3 million less than the $300 million originally planned for the initiative. According to USDA officials, the balance will be used in fiscal year 2002 to expand existing projects that show the most potential, based on performance. Appendix II provides more detailed program and cost information. Lessons Learned from School Feeding Programs Define Conditions for Likely Success Research and expert views on school feeding programs indicate that these programs are more likely to have positive results when they are carefully targeted and integrated with other educational, health, and nutritional interventions. There is considerable evidence that school feeding programs can increase enrollment and attendance if the programs are targeted at the right communities or populations. Evidence of the effectiveness of school feeding programs in improving learning is somewhat more mixed, possibly because of difficulties isolating factors associated with increased learning, the quality of studies assessing such relationships, or the quality and settings of such programs. Programs are more likely to have a positive effect on enrollment, attendance, and learning when they are integrated with a facilitative learning environment and appropriate health and nutritional interventions. Community participation and parental involvement also promote these objectives. Taking steps to ensure that programs will be sustainable when donor assistance is no longer available is important for ensuring long-term effectiveness. At the same time, school feeding programs are costly and may not be cost effective relative to other possible interventions. (Apps. IV and V provide results from selected studies on these issues.) Targeting the Right Population Can Improve Enrollment and Attendance Evidence indicates that school feeding programs can improve school enrollment and attendance if they target the right population. In general, studies and experts point to the importance of targeting programs at low-income communities that lack a secure supply of food and have relatively low rates of school enrollment and attendance. When school feeding programs do improve enrollment and attendance, their contribution is primarily through a transfer of income (the food) to families. School feeding programs may not have much of an impact if children are staying away because the distance to school is too far to walk, parents perceive the quality of the education to be low, or children are disabled. Providing national coverage to all children is usually not cost effective. Targeting high-risk communities is preferable to targeting individual children within schools, which could lead to competition among students and parents, dilution of nutritional impact through food sharing, and insufficient community support. (See app. IV for results from selected studies on the use of targeting to improve the effectiveness of school feeding programs.) According to several experts and practitioners, school feeding programs can also help reduce the educational gender gap—where the proportion of school-age boys attending school significantly exceeds that for school-age girls. Many studies have shown that the inability of households to cover direct and indirect costs of education results in fewer girls attending school.
This inequity exists partly because parents perceive less value in educating girls, there is greater demand for girls’ labor at home, and girls are more affected by issues of school location and security. Yet some of the highest returns to education and other development investments derive from girls’ education. For example, according to studies cited by WFP: illiterate girls have an average of six children each, while girls who go to school average 2.9 children; infants born to mothers with no formal education are twice as likely to die before their first birthday as babies born to mothers with a post-primary school education; between 1970 and 1995, 44 percent of the decrease in child malnutrition was attributable to improvements in female education; and educated mothers are more likely to send their own daughters to school. To increase educational opportunities for girls, a “package” of strategies is often tailored to meet a country's special needs. These packages typically contain some combination of interventions to (1) reduce the opportunity costs of sending girls to school; (2) improve the quality and relevance of education; (3) increase access to close, safe schools equipped with basic infrastructure; (4) educate parents and communities about the benefits of girls' education; and (5) establish supportive national policies. Facilitative Environment Needed for Effective Learning A group of experts and practitioners who convened at USAID headquarters in October 2000 concluded that little learning is likely to occur without a facilitative learning environment, where teachers engage children in stimulating learning tasks, provide frequent feedback and encouragement, and are equipped with motivational textbooks and other learning materials. A facilitative learning environment also requires a suitable physical environment and minimal school supplies. Unfortunately, most schooling in the developing world is far from this kind of environment. Teaching is frequently of poor quality and is poorly supported, and the curriculum often has little relevance to rural life, making formal schooling unconnected with the needs of rural communities. Thus, most developing countries require investments in teacher training; basic supplies (books, blackboards, desks, and chairs); a suitable physical environment; and other learning materials. Furthermore, many school systems in developing countries are dysfunctional, characterized by dispersed or displaced populations (as a result of conflict or natural calamities), limited basic infrastructure, and endemic child malnutrition. Many experts and practitioners also conclude that food for education programs must take place within the context of broad, national education reform programs that focus on essential inputs to education and learning, such as teacher development, curriculum reform, and student assessment. (See app. IV for results from selected studies on the impacts that school feeding programs have on learning.) Nutritional and Health Measures Are Needed for Effective Programs According to various studies, poor nutrition and health among schoolchildren contribute to diminished cognitive abilities that lead to reduced school performance. According to experts, school feeding programs can be effective in reducing short-term hunger—which in turn can improve learning capacity—by providing an inexpensive breakfast or small snack shortly after students arrive at school.
Meanwhile, using enriched foods or complementing commodities in school feeding programs with locally available vitamin- and mineral-rich foods is an effective route to alleviating the complex micronutrient deficiencies that schoolchildren in developing countries often suffer. At the same time, school feeding programs designed to capture both nutritional and educational gains need to invest in adequate water and sanitation at schools, since poor water and sanitation give rise to infectious diseases, including parasites, which adversely affect schoolchildren’s enrollment, attendance, and learning. These programs also benefit from inclusion of deworming treatments and health and nutrition education. (See app. IV for results from selected studies on nutrition and health measures that can be used in combination with school feeding programs to improve school performance.) Community and Parental Involvement Also Can Contribute to Enrollment, Attendance, and Learning Community and parental involvement are also important to successful school feeding programs. Community involvement in implementing school feeding programs can increase contact, and hence communication, between parents and teachers, officials, and others; provide parents an opportunity to become more aware of what goes on at schools; help raise the value of education and the school for parents and the whole community; and motivate parents to enroll their children in school and ensure regular attendance. Parent-teacher associations (PTAs) or other outreach efforts can be used to educate parents and other community groups on issues such as the negative effects of temporary hunger on learning or the social and health benefits of educating girls. Strong Government Commitment Boosts Effectiveness of School Feeding According to WFP, another important ingredient in successful school feeding programs is national government commitment to the goal of “education for all.” This commitment should be put into practice through policies, programs, and financial commitments within a country’s means that support basic education. Governments also need to commit to school feeding programs within the context of broad, national school reform programs, according to practitioners and experts who met at USAID in October 2000. These reforms should target essential inputs to education and learning, including teacher development, curriculum reform, and student assessment. Cost of School Feeding Programs Affects Sustainability While the benefits of school feeding programs are recognized, the programs are expensive both financially and in terms of the human resources required to operate them. In addition to the price of the food, costs associated with food logistics, management, and control can represent a significant financial burden for recipient country governments. These costs may be difficult for national governments to absorb and thus adversely affect long-term program sustainability. Estimates of the average cost of school feeding programs vary considerably (see app. V). According to WFP, the average cost per student of its development school feeding projects in 2000 was 19 cents per day, or $34 for a 180-day school year (see app. V). Programs costing $34 per pupil per school year are substantial when compared with what many developing countries spend on education. For example, in 1997 public expenditures of 19 least-developed countries for both pre-primary and primary education averaged only $20 per pupil, according to UNESCO.
Average public expenditures of five southern Asian countries were reported at $40 per pupil. According to many experts, national ministries of education in developing countries should not be encouraged to take on school feeding at the expense of other educational inputs. Few national governments are committed to school feeding programs over the long term, they said. In addition, many governments and education ministries already are struggling to maintain barely functioning education systems; may not be equipped, financially or technically, to assume the additional burden of food distribution; and do not have the financial resources to sustain feeding programs after donor support is withdrawn. These experts say that getting local communities involved from the beginning and giving them ownership of school feeding programs greatly increase the chances for long-term program sustainability. According to WFP, its guidelines for school feeding programs require both national governments and local communities to provide a significant amount of resources and infrastructure. There are potential detrimental impacts if school feeding programs are not effectively implemented. For example, where adequate infrastructure is not available, increased attendance may lead to overcrowding and actually reduce educational achievement for existing students, while providing minimal benefit to new students. In some developing countries, the school day lasts only a few hours. In such cases, time taken to prepare a meal may further limit an already inadequate period of instruction. In addition, if volunteers are not available to provide labor, teachers may be required to undertake this task at the expense of instructional time. Since school feeding is a highly visible income transfer, it may also be used for political purposes by actors in the recipient country. If school feeding programs are relatively ineffective, they may result in resources being taken away from better-performing programs. According to several experts, in particular situations, school feeding programs may not be as cost effective in promoting learning as other possible approaches, such as establishing maternal and child health and early childhood development programs or providing alternative nutritional and educational interventions (see app. V). Pilot Program Did Not Adequately Incorporate Lessons Learned The pilot program has not provided reasonable assurance that lessons from previous school feeding and food for education programs have been integrated into approved pilot projects. Under pressure to get the pilot up and running quickly, USDA gave interested applicants little time to prepare proposals, and it did not require them to provide basic information on and analysis of various factors important to successful food for education programs. Written criteria for evaluating proposals similarly did not focus on many of these factors. Many of the proposals approved did not address key elements of successful school feeding programs. Moreover, USDA provided little funding for important nonmeal components of the food for education projects, and only a few of the approved PVO proposals indicated they had obtained other donors’ support for nonmeal components.
USDA Did Not Have Sufficient Information for Its Evaluation According to USDA officials with whom we spoke, the agency was under pressure to start a new and complex food for education program quickly and with far less funding—$300 million—than is needed to fully address the educational components of school feeding. As a result, USDA did not solicit basic information on various factors linked to effective school feeding and food for education programs. Table 1 lists a set of questions, based on lessons learned, that USDA could have used to guide the type of information and analysis requested from implementing partners (i.e., cooperating sponsors and WFP) and, subsequently, to evaluate proposal quality. As shown in table 1, many important factors that experts cited were not addressed specifically by USDA in its formal request for proposals, and other items were only partly addressed in its request. The request was made to cooperating sponsors but not to WFP. (Less information was sought from WFP because, as a USDA official told us, many projects in the WFP proposals had previously been reviewed and approved by the U.S. government as part of the process by which the WFP Executive Board approves its projects.) We derived the questions from our review of lessons described in various studies and other documents on school feeding and food for education programs (see app. IV, especially tables 4 to 10; also see app. VI for a more complete discussion of the interagency process used to evaluate and approve proposals). As table 1 indicates, USDA sought some information on how the projects would be targeted. For example, USDA indicated that it wanted to target poor countries and that it favored programs that would significantly improve enrollment and attendance. However, USDA did not require that proposals address how programs would be designed to improve educational performance, nor did it seek any information on factors that are key to whether learning could occur, such as adequate numbers of well-trained teachers and reasonable supplies of good learning materials. Similarly, USDA asked requesters how their programs would affect health and nutrition but did not specifically ask whether the schools had safe water and adequate sanitation facilities and whether intestinal parasitic infections in the student population were likely to be a problem. A USDA official told us there were limits on how much information the agency could require, given the short amount of time sponsors had to prepare proposals and the 1-year duration of the pilot. Further, the agency did not want to make the information requirements so costly for sponsors that it would get few or no proposals, the official said. Regarding the criteria used to evaluate the programs, table 1 also shows that U.S. agencies’ written criteria did not specifically address most of the key factors we derived, based on our review of lessons from previous school feeding and food for education programs. Of the 20 questions in table 1 on key factors in effective school feeding and food for education programs, 1 question was addressed specifically in the agencies’ written criteria and 8 were partly addressed. None of the agencies’ criteria specifically addressed the four learning environment questions shown in table 1. See appendix VI for a discussion of the written criteria used by agencies in evaluating the proposals.
Some PVO and WFP Proposals Included Additional Information on Key Factors We also reviewed the approved PVO and WFP proposals and found that many included information related to the key factors we identified as important to successful food for education programs, although fewer than a third of the approved PVO and WFP proposals discussed most of the items. In general, the response rate was highest for those factors for which USDA had solicited information. Table 2 shows the number of approved PVO and WFP proposals that provided information related to the key factors, irrespective of whether USDA requested this information. For example, a considerable number of the PVO and WFP proposals included information on certain health and nutrition issues that were not specifically requested by USDA. To a lesser extent, proposals also included information on factors associated with the learning environment. Overall, the highest response rates were mostly for factors for which USDA had sought information (i.e., school enrollment and attendance levels, literacy rates, target area based on low economic status, and programs that involve the community and parents). (See app. VI for additional discussion about the information that was included in WFP proposals.) USDA Provided Little Funding for Components Identified as Important to Successful Programs USDA provided little funding for nonmeal components—such as basic classroom materials, nutritional education, and treatment of parasitic infections—that are essential elements of an integrated approach to food for education programs. Altogether, USDA approved 60 proposals, including 34 for WFP, 25 for PVOs, and 1 for the government of the Dominican Republic. For WFP projects, USDA largely funded only school meals and related costs, including storage, transportation, and handling of the food. For the PVO projects, USDA was willing to consider proposals that included nonfood components to be funded by monetizing some of the surplus commodities or by the PVOs themselves. We found that 17 of the 25 approved PVO proposals included nonmeal components, but only 10 of the 17 included in their proposed budgets a dollar value for resources that would be allocated to some or all of these activities. (See app. VII, table 14, for additional information on the extent to which PVO proposals included nonmeal components and budgeting for these activities.) Weaknesses in Structure, Planning, and Management Reduce Chances for Pilot Program Success While the U.S. pilot program expects to provide food to more than 8 million schoolchildren in developing countries, its structure, planning, and management to date do not reasonably ensure a program that will produce substantial gains in enrollment, attendance, and especially learning. The administration’s decision to fund the program through surplus commodities may be appropriate for a 1-year pilot but is not sustainable for a longer-term program. USDA, which was selected to manage the pilot, lacked the expertise and resources of USAID—the agency traditionally responsible for foreign development aid such as food for education programs. The pressure on USDA to get the pilot program up and running quickly did not allow time to adequately plan the program and hire additional staff to manage it.
USDA launched the pilot before fully developing a strategy for monitoring and evaluating performance; and, because of the pilot’s short time frame, USDA officials told us they would not be able to evaluate improvements in learning—one of the program’s three objectives. This weakness, as well as others related to ensuring financial accountability for some parts of the projects, could make determining the pilot’s effectiveness difficult. Surplus Commodities Not Reliable Funding Mechanism The administration’s decision to use surplus agricultural commodities to fund the pilot was an expedient way to get the program quickly under way. However, surplus commodities are not a good vehicle for funding a medium- or long-term development program, since surpluses cannot be ensured on a regular basis. (For example, between fiscal years 1996 and 1998, there was no section 416(b) program.) Although the pilot was expected to run for just over 1 year, the administration contemplated a multiyear food for education program, possibly lasting as long as a decade. Under this scenario, when surpluses were not available, the administration would have to end the program or sustain it through the foreign aid budget, which is expected to have many competing priorities in the foreseeable future. USDA Lacked Expertise on School Feeding Programs USAID—traditionally the U.S. agency for providing foreign development assistance, including school feeding and food for education programs— would normally have been the logical choice to establish and run the pilot. However, in light of constraints on foreign aid funding generally and other high priority development needs, the administration wanted CCC to manage the pilot, and to do so using available surplus agricultural commodity funding authority (i.e., section 416(b) of the Agricultural Act of 1949). The administration’s decision to assign management responsibility for the pilot to USDA rather than USAID follows a recent trend of giving USDA a larger role in U.S. food aid programs, primarily because of increased section 416(b) program activity. However, USDA lacked USAID’s resources (such as USAID’s overseas development missions) and USAID’s school feeding/food for education development expertise. The principal mission of USDA’s Foreign Agricultural Service (FAS) is to help ensure open markets for U.S. agricultural exports; it generally has had little experience in managing school feeding development assistance programs. USDA has previously used section 416(b) authority to provide some commodities for international school feeding programs, but we were told the amounts were relatively smalland not for integrated food for education programs. In contrast, USAID has been engaged in school feeding programs since the 1950s and administers economic and humanitarian assistance programs in more than 80 countries. Beginning in the mid-1990s, USAID began reducing its support for traditional school feeding programs that provided only meals, citing mounting evidence that school feeding, in and of itself, contributed little to improving child learning ability or child nutrition on a sustainable basis. According to USAID officials, its school feeding assistance has evolved into programs designed to improve education (i.e., enrollment, attendance, and graduation rates, especially for girls) by focusing on national education policy reform, curriculum development, and teacher training programs. 
In 2000, USAID spent $33 million on PVO- operated food for education programs in eight countries that benefited 1.3 million children. Pilot Program Was Launched Quickly and Has Been Understaffed President Clinton announced GFEI on July 23, 2000. USDA began to implement the pilot program almost immediately, leaving little time for planning and relying on existing staff from within the Foreign Agricultural Service to work on the assignment. USDA issued its request for proposals on September 6, 2000, with a closing date for all submissions at the end of September. (See app. IX for key events from the time the concept of an international school lunch program was suggested until approval of the GFEI pilot program proposals.) According to USDA officials, USDA was understaffed when the GFEI pilot was launched and a year later still lacked sufficient staff for handling food aid matters. For example, in a July 2000 meeting with PVOs to discuss the pilot program, the Secretary of Agriculture said the lack of staffing in U.S. agencies for running food aid programs was acute. At the same time, he said the president wanted to see some benefits from the pilot program before leaving office. In November 2000, a USDA official told us that USDA was generally understaffed for monitoring food aid programs. At a July 2001 meeting with PVOs, other USDA officials apologized to PVO representatives for having too few staff available to negotiate agreements and address other food aid program issues in a timely manner.44, 45 According to OMB, in March 2001, the administration authorized USDA to use $2.5 million of the $300 million in CCC funds for administrative salaries and expenses. According to a USDA official, the funds are being used to facilitate monitoring and evaluation of the pilot program’s impact. As of September 2001, a year after the pilot was launched, USDA was still in the planning stage regarding hiring regional coordinators and local national staff in PVO recipient countries to help monitor pilot program projects. USDA’s Foreign Agricultural Service has managed the pilot with existing Program Development Division staff resources, which were already stretched thin because of a recent section 416(b) program expansion, personnel turnover, and slow hiring of replacements. During our review, a significant portion (ranging from between 25 percent to 33 percent) of the division’s permanent staff positions were vacant. WFP and IPHD noted that many of the recipient countries were well into their academic year before USDA commodities were procured, shipped, and available for distribution. USDA Policy Change on Funding PVO Projects Delayed Implementation USDA’s September 2000 Federal Register notice indicated that CCC funds might be available to cover some of the cooperating sponsors’ expenses related to implementing the school feeding projects. As a result, many PVOs submitted proposals based on the assumption that they would receive CCC funds to cover part of their expenses. However, in January 2001 USDA reversed its position, announcing that funding would not be available. This meant that PVOs’ expenses in recipient countries would have to be covered by selling (monetizing) commodities in the recipient countries and using the resulting local currency proceeds to cover in- country costs. The policy change further meant that PVO direct administrative headquarters’ costs could not be covered, since the section 416(b) program does not allow monetization of commodities for that purpose. 
USDA’s policy shift resulted in several of the proposals having to be restructured, causing discontent within the PVO community and leading to delays in concluding a number of agreements. In fact, about one-third of the approved PVO agreements were not signed by the end of September 2001. In addition, the change presented problems for some PVOs because it required them to monetize increased quantities of commodities within recipient countries to recover some of their costs, and there were limits on the commodity tonnage that could be monetized effectively. Some PVOs were also upset because all of WFP’s operating expenses, including headquarters’ costs, were funded by CCC cash payments. Legislative relief in the form of limited CCC funding was provided to PVOs in late July 2001; at that time, only 4 PVO agreements had been signed. (App. IX discusses the funding sources used for pilot program sponsors in more detail.) Weaknesses in Program Management and Short Duration of Pilot Will Affect Monitoring and Evaluation To know whether programs are effective, program objectives should clearly describe the intended end results and accompanying indicators so that changes and progress toward achieving the objectives can be tracked over time. However, USDA initiated its requests for proposals in September 2000 without having a comprehensive plan for how it would monitor and evaluate project performance and has spent much of the time since then establishing such a plan. USDA and WFP will collect baseline data on school enrollment and attendance for the time before the projects began and monitor and assess change in these variables over the course of the projects. However, USDA has not set specific targets or desired performance levels for enrollment and attendance in its agreements with most of its implementing partners. In addition, although improved learning is one of the three principal objectives of the pilot program, USDA said it will not monitor and evaluate performance on this variable, unless improved learning is an element within an agreement, because of the program’s short duration. Officials from USDA’s Foreign Agricultural Service told us USDA is responsible for evaluating the performance of WFP, PVOs, and the Government of the Dominican Republic in implementing GFEI projects. According to these officials, FAS’ mandate is to monitor and review the 25 PVO and 1 country government projects in 20 countries from October 2001 through March 2003, and at appropriate intervals report to the Congress on the projects’ status. They added that FAS headquarters staff is also responsible for evaluating WFP’s GFEI project implementation. They stated that the agency intends to complete an interim status report on the pilot for the Congress by July 2002 that will address several performance- related issues. In its September 6, 2000, Federal Register notice, USDA said that cooperating sponsors would be required to report periodically the number of meals served, enrollment levels, and attendance levels, including female attendance levels. In addition, USDA said that reports should include information on infrastructure relevant to sustaining the feeding program, such as establishment of PTAs and community groups. However, the notice did not indicate whether sponsors would be required to collect baseline data on these variables, which would permit comparisons of conditions before a project got under way and when it was completed. 
It did not indicate whether or how success would be measured—for example, what percent improvement in attendance would represent achievement of the program’s objectives. In addition, the notice did not discuss whether sponsors would be required to report on educational performance, one of the program’s three principal objectives. In February 2001, USDA began negotiating final agreements with cooperating sponsors and WFP for approved proposals. As of December 2001, USDA had completed agreements for 21 of 26 approved cooperating sponsor project proposals. All 21 proposals contained provisions that required reporting on the number of meals served, enrollment and attendance levels (including female attendance), and establishment of infrastructure relevant to sustaining the feeding program, such as PTAs and community groups. However, less than half of these agreements indicated a requirement for baseline data; and a majority of the agreements did not specify performance targets for enrollment, attendance, and female attendance. None of the agreements included reporting requirements for educational performance. (According to USDA officials, PVOs opposed such reporting, arguing that the pilot was too short in duration to permit a meaningful analysis of impacts on learning.) By September 2001, 33 of 34 agreements for WFP projects were concluded, with 1 deferred until fiscal year 2002. None of these agreements specified requirements for measuring project performance; in fact, they did not even say that WFP would report the types of data USDA had required from cooperating sponsors, such as enrollment and attendance data. Nonetheless, WFP developed a detailed survey instrument for collecting baseline information on its GFEI-funded projects. The survey was pilot- tested in August 2001, approximately 1 year after USDA received proposals from WFP and cooperating sponsors. According to USDA and WFP officials, WFP conducted the surveys in a sample of schools for all of its projects before the end of 2001 and before the food aid was distributed. In addition to collecting basic information on the feeding program, the survey sought detailed baseline and subsequent performance data on school enrollment and attendance (broken down by boys and girls and grade level); the number of certified and uncertified teachers in the school; the number of classrooms; certain baseline information on community and parental involvementand health and nutrition issues; and whether the school had other ongoing programs related to effective school feeding programs and if so, the name of the donor providing the program. The survey also called for the use of focus groups to collect views on the likely reasons why eligible children did not enroll and enrolled boys and girls did not attend school during a year. The survey instrument indicates WFP’s interest in upgrading monitoring and evaluation of its feeding programs, since previous efforts revealed some weaknesses. However, the survey included only two questions focused on the possible impact of the programs on improved learning.WFP is sharing its survey results with USDA. (See app. III for additional information on WFP activities to improve monitoring and evaluation of school feeding programs.) During the summer of 2001, USDA was still debating how to monitor and evaluate performance for the cooperating sponsors’ projects. 
In August 2001, it convened a working group of USDA officials and USAID consultants with expertise in monitoring and evaluation methodologies to discuss the issue. The group recommended use of local school or government records for collecting data on enrollment and attendance, but it was against collecting quantitative data on indicators for measuring educational progress (such as reduced dropout rates, retention and/or completion, and promotion to the next grade) and level of community participation and infrastructure development. For the latter variables, it recommended information be collected through a combination of focus groups and structured interviews with school staff and parent and community groups. In fall 2001, USDA decided to use the WFP survey instrument for the cooperating sponsors’ projects and, like WFP, apply the survey in a sample of the schools in each project. According to USDA officials, doing so would allow collection of comparable data, provided USDA’s sampling strategy was properly designed. USDA also decided to contract with about 20 local national monitors (approximately 1 per country) to collect the data and 5 regional coordinators to manage the monitors. In late December 2001, USDA officials told us they planned to add a few more questions to the survey to address concerns about whether some of the projects were well targeted. They also said the surveys would be conducted in early 2002. USDA officials told us that they ultimately decided not to measure change in school learning. They said that from the beginning of the pilot, USDA, WFP, and PVOs were concerned about the ability to effectively evaluate and judge an increase in student performance under a 1-year pilot program. Research that tries to demonstrate improvements in academic achievement is lengthy and requires a long-term approach, they said. USAID officials with whom we spoke were also critical of the short time allowed for running the pilot program. They said USAID pilot programs usually take 4 to 5 years, with an evaluation done in the third year to see if the program is on track, and an assessment of the impact conducted in the fourth year. Processes to Prevent Disincentive Effects of Food Aid Raise Some Concerns An effective global food for education program needs to ensure that food aid does not interfere with commercial markets and inhibit food production in developing countries. USDA uses an international consultative process—the Consultative Sub-Committee on Surplus Disposal (CSSD)—to keep the pilot program’s food aid from interfering with commercial exports. The process involves notification of various categories of food aid donations, prior consultation with other exporters, and establishment of Usual Marketing Requirements (UMR) to ensure that food aid recipients maintain a normal intake of commercial imports in addition to the food aid they receive. According to the CSSD, in recent years several factors reduced the effectiveness of the UMR approach, including (1) lack of uniformity in the compliance period (fiscal year, crop year, and calendar year); (2) fewer food aid operations covered by the UMR because many transactions are exempt; (3) a rise in UMR waivers for countries facing difficult economic situations; and (4) delays in collecting trade data, which make establishment of 5-year average commercial imports as a benchmark for current import levels unrealistic. 
USDA officials acknowledged that some countries have expressed concerns that GFEI might adversely affect commercial exports but said they have not received any specific complaints about the U.S. pilot’s food exports. To address disincentive effects of food aid on local production, the United States requires all proposed food aid projects to submit an analysis showing the recipient has adequate storage facilities and that food aid will not disrupt domestic production and marketing. (Technically the analysis is known as a Bellmon determination.) We reviewed the analyses by cooperating sponsors whose projects were approved for the pilot and found the analyses were not adequate for determining disincentives to production of local commodities. All cooperating sponsors concluded that the amount of food for their projects was so small it was unlikely to significantly affect local production. But their analysis of data on local market conditions was generally based on production of identical commodities. For example, if wheat was not grown in the recipient country, sponsors concluded there was no disincentive to importing and monetizing wheat—without considering whether the amount of imported wheat would affect price or demand for locally produced substitute commodities. Cooperating sponsors did not adequately verify that the commodities were in demand and would not compete with local markets, other commercial export programs, and other donor imports. USDA officials told us that cooperating sponsors are responsible for analyzing the potential disincentive effects of their projects. They said USAID no longer has agricultural officers stationed overseas and now USDA has to rely on PVOs—which have on-the-ground, in-country staff—to determine whether the food aid will adversely affect recipient country markets. (USAID advised us that while the number of agricultural officers overseas has been reduced in recent years, it still has such officers in a number of posts.) Although USDA and/or USAID attaches may review such analyses, USDA does not independently verify the results. USDA officials also noted that the lack of good data could affect sponsors’ ability to prepare more robust analyses. USDA does not require WFP to conduct or submit similar analyses of WFP projects that are partly funded by the U.S. pilot program. However, WFP told us a review is required of all WFP proposed projects for their potential impact on production and markets, and food aid donors (including the United States) participate. Key Weaknesses in Financial Accounting Could Have Negative Impact on Pilot Program We identified several weaknesses in how USDA has maintained financial accountability over WFP and PVO projects that could adversely affect the pilot program. Although USDA advances funds (in the case of WFP) or food (in the case of cooperating sponsors) on the basis of their estimated needs and requires them to provide regular though different forms of financial and project status reporting, WFP in particular has not adequately accounted for past Section 416(b) program donations. The PVOs provide more detailed financial reporting, in part, because a large portion of the commodities they receive are to be monetized in country to cover foodand other expenses. 
USDA requires that PVOs monetize commodities at market prices, but it has not systematically tracked whether the PVOs received prices for the monetized commodities that were commensurate with their cost or whether the funds were spent in accordance with approved program plans. WFP Reporting Has Been Inadequate Under a section 416(b) umbrella agreement, WFP is required to account for the costs it incurs and charges USDA on food aid donations. WFP is supposed to submit annual standardized project reports that provide implementation and actual expenditure data for ongoing activities similar to what is required of PVOs. We found that WFP had not met its obligation to provide USDA with an accounting for past Section 416(b) program donations by providing detailed actual cost data. As a result, USDA is not in position to know whether its advances to WFP, on the basis of initial cost estimates, are consistent with actual project costs and to what extent the project objectives are being achieved within the approved budget estimates. A similar situation exists with USAID-funded donations to WFP. According to a USAID official, WFP has not provided actual cost data for direct and indirect project costs at the level of project activities and by donors. Such data is needed, the official said, to know whether the United States is meeting and not exceeding its fair share of a project’s total cost, as well as the costs of specific project activities. In April 2001, U.S. officials reiterated to WFP officials the need for disaggregated actual cost data. During the meeting, WFP officials noted that WFP was in transition, using a new financial information system for new business while still using the earlier system for old business. According to a USAID review conducted in June 2001, WFP’s new system appeared to have the capacity to accurately monitor and report on full cost recovery in the aggregate. However, the system was not yet fully operational and thus the adequacy of the complete system could not yet be determined. In September 2001, WFP told USDA it would not be able to provide finalized reports for fiscal year 1999 obligations that were due by the end of that month. According to USAID, pursuant to bilateral consultations between an interagency U.S. government delegation and WFP management, the United States agreed to a 6-month extension for WFP to report actual cost data for all U.S. government contributions to WFP. Oversight of PVO Monetized Commodities Is Limited As previously indicated, a substantial portion of the commodities provided to PVOs are to be monetized, with the proceeds used to pay for other foods and/or other expenses, such as administrative expenses and inland transportation, storage, and handling costs. For the first 17 completed PVO agreements, more than 80 percent of the commodities are to be monetized. At issue is whether USDA is sufficiently tracking the proceeds that PVOs receive from the commodities they monetize. Also, if a PVO sells a commodity for less than the market value, the commodity could undercut other commercial sales, including imports or domestically produced commodities, and fewer proceeds would be available for financing the school meals or related activities. USDA regulations require that PVO commodity sales meet local market conditions and that PVO and government sponsors provide a report showing deposits into and disbursements out of special accounts established for commodity sales proceeds. 
In past Section 416(b) programs, USDA did not determine to what extent proceeds compared with what sponsors expected to receive as stipulated in the project agreements, nor whether the commodities were sold at real market prices. However, in September 2001, USDA officials told us they plan to conduct such an analysis for the pilot program projects. Most Other Donors Currently Uncommitted or Opposed to Major Support of GFEI The success of a comprehensive, long-term GFEI strongly depends on other donor support, but most other donors are either opposed or not committed to supporting GFEI at this time. A few donors have indicated support for the food for education initiative but have offered little in terms of specific additional contributions. While WFP officials are confident of eventual support, most donor countries seem unlikely to provide substantial support unless the United States adopts a permanent program that is not dependent on surplus commodities and/or unless the pilot program demonstrates strong, positive results. Some donors are opposed to GFEI on the grounds that developmental food aid assistance is ineffective in promoting sustainable development. Others are noncommittal for a variety of reasons, including possible adverse impacts on commercial agricultural exports to and domestic agricultural production in recipient countries. Long-Term Program Will Need Substantial Support from Other Donors The U.S.-proposed GFEI challenged other donor countries and organizations to join the United States in helping achieve the goal of education for all children in developing countries by 2015. Indeed, the United States said that its willingness to extend the pilot program beyond its first year would depend in part on other donors’ response. Since the initiative was first proposed, U.S. officials have indicated they would like to see other donors contribute, in aggregate, anywhere from two-thirds to three-quarters of the total cost of a global food for education program. The Clinton administration estimated that at least 300 million children in developing countries need school meals. Assuming an annual average cost of $34 per student for a 180-day school year, the annual meal cost alone for 300 million children would be approximately $10.2 billion.To put this estimate in perspective, in 1999, $10.2 billion represented about 96 percent of the Organization for Economic Cooperation/Development Assistance Committee countries’ official development assistanceto least developed countries, or about 18 percent of development assistance directed to all developing countries. In addition, net official development assistance has declined during the past decade, from $56.7 billion in 1991 to $53.7 billion in 2000. We estimate the food tonnage required to provide a school meal for 300 million children (for a 180-day school year) to be in excess of 16 million metric tons, which would exceed average annual global food aid deliveries between 1990 and 2000 by about 40 percent. (Global food aid deliveries averaged approximately 12 million metric tons per year from 1990 through 2000.) Moreover, food aid for development programs, only a part of which is for school feeding, averaged about 3 million metric tons per year. Thus GFEI would represent more than a fivefold increase for these types of programs. Donors Have Been Generally Noncommittal to GFEI According to a State Department cable, when the United States proposed GFEI at the July 2000 G-8 Summit, the proposal received a cool reception. 
Subsequently, in November 2000, the State Department headquarters asked U.S. diplomats in 23 countries to explain the U.S. pilot program to foreign governments and encourage their support. In addition, the previous U.S. Ambassador to the U.N. Food Agencies in Rome sought other countries’ support for GFEI through his participation in the WFP Executive Board and in official visits to food aid donor countries, such as Denmark and Finland. These efforts notwithstanding, most donor countries have yet to respond in a strongly positive or substantial way. Of the top 13 food aid donating countries for the period 1995 through 1999, the United States supplied more than half of all deliveries, with the other donors providing slightly more than 41 percent (see app. X). Table 3 summarizes general views of all but one of these other donor countries as well as Finland and their plans or actions to contribute to GFEI or the WFP’s school feeding initiative. As table 3 shows, representatives of 4 of the 12 donors (Japan, France, Italy, and Finland) indicated general support for the food for education initiative. The European Commission, the second largest provider of food aid in the world, has said it is against a “one-program-fits-all” approach, citing a preference for strategic planning that identifies all of a country’s development needs and then analyzes alternative ways to achieve them. According to the Commission, education forms an integral part of the European Union’s development policy, and it is crucial that all shortcomings in providing education are tackled at the same time. If analysis indicated that a food for education program would have a positive impact, the Commission would still want to assess the relative cost effectiveness and efficiency of the alternatives. Representatives of Germany, the United Kingdom, the Netherlands, and Sweden also expressed reservations about GFEI not being an integrated approach to development assistance and/or about the ability of recipient countries to sustain the programs over the long run. Representatives of Australia, Canada, Sweden, and the United Kingdom indicated they would like to see whether the U.S. pilot program or WFP program demonstrates successful results. Representatives of the European Commission, Canada, Germany, the Netherlands, and Sweden expressed concerns about or said they thought the U.S. program was being used to dispose of surplus commodities. In addition, some donors indicated they favor using food aid for emergency (rather than development) purposes, expressed reservations about providing assistance for school feeding programs in the form of food or surplus commodities, or indicated they lack convincing information on the effectiveness of WFP school feeding activities. (See app. VIII for additional information on donor views on food aid.) Regarding actual support for GFEI, Italy has contributed nearly $1 million to the WFP initiative in three African countries. A French representative said France might provide some support, either on its own or through WFP, but added that France wanted to maintain its current level of WFP development activities, which would limit France’s ability to greatly increase funding for WFP’s school feeding initiative. Representatives of Japan and Finland, the two other supporters, indicated their countries would not increase their current level of donations to support the initiatives. 
Meanwhile, representatives of Canada, Australia, the United Kingdom, and Sweden all indicated that they would track the progress of the food for education initiatives for the results. The German representatives said their country’s budget situation does not permit providing additional support. In mid-April 2001, the U.S. Ambassador to the U.N. Food Agencies in Rome acknowledged that there had been very little movement by other donor countries toward supporting GFEI but said that they were coming around to the idea. They want to see an American commitment, which will begin with the pilot program’s implementation, he said. The Ambassador said he thought Denmark, Finland, Norway, and Sweden would be on board within the next few months and that France and Germany would soon join in. At the same time, WFP officials told us that most or all governments, donors and recipients alike, support a global school feeding effort and that they were optimistic that additional contributions would be forthcoming by the end of 2001. At the beginning of August 2001, WFP officials told us the Swiss government was contributing 194 metric tons of food, and France intended to contribute a total of 5,280 metric tons of rice, beans, oil, and corn/soy blend to a Honduran program. In addition, they said, Cargill, Inc., had provided a $50,000 donation to assist WFP’s school feeding operation in Honduras (to be matched by the local Cargill affiliate in Honduras). Apart from food donations, the Canadian government approved the use of a $250,000 grant facility for WFP for a deworming effort in conjunction with WFP school feeding efforts in Africa, WFP officials said. In addition, an international fund offered to consider providing upwards of $300,000 to fund nonmeal items (such as construction of schools, teacher training, training materials, school books, and cooking utensils) in least-developed countries. And, the officials said, WFP was negotiating new partnerships for school feeding, including the health, sanitation, and educational aspects of primary schools, with a variety of U.S. government and international agencies. At the end of December, 2001, the U.S. Mission to the U.N. Food Agencies in Rome told us that Italy, France, and Switzerland were still the only countries that had agreed to supplement the U.S. government contribution to the WFP school feeding program. Conclusions In our review of the current GFEI pilot, we found a number of weaknesses that make it difficult to evaluate the program’s effectiveness. For example, our research of past school feeding programs indicated that the programs are more likely to improve enrollment, attendance, and learning if they are carefully integrated with other educational, health, and nutritional interventions—such as ensuring adequate numbers of well-trained teachers and providing treatments for parasitic infections and micronutrient deficiencies. However, USDA began the GFEI pilot quickly and did not require potential implementing partners to provide important information on the linkages to these other interventions. Since most of the pilot’s funding is targeted for the school meals, it is unclear whether these other important factors that contribute to effective programs are adequately addressed. In addition, USDA has not effectively managed the pilot in part because of its lack of expertise and resources for food for education development programs. 
It has not set specific targets or desired performance levels for enrollment and attendance in its agreements with most of its implementing partners. WFP has recently collected baseline data on enrollment and attendance, and USDA is in the process of doing so. USDA will not try to measure the projects’ impacts on learning, as it believes the 1-year time frame is too short for such an assessment.Because of these weaknesses, we do not believe the pilot program will yield adequate information on whether its projects have succeeded or failed in improving enrollment, attendance, and learning—and why. Furthermore, a number of other donor countries will not contribute to GFEI until they see if the pilot is successful. These are important concerns as the Congress considers what actions to take regarding legislation on GFEI. Matters for Congressional Consideration As the Congress decides whether to further fund GFEI, it may wish to consider: extending the pilot program to permit an assessment of its effects on learning, as well as a more meaningful review of its impact’s on enrollment and attendance; deciding whether additional funding for pilot project related activities, such as teacher training and textbooks, may be needed for effective projects; assuring that the administering agency has sufficient expertise and staff resources to effectively manage the program; and requiring the administering agency to establish measurable performance indicators to monitor progress and evaluate project results. Agency Comments and Our Evaluation We received written comments on a draft of this report from USDA, USAID, and the Office of Management and Budget (OMB) that are reprinted in appendixes XII, XIII, and XIV. These agencies also provided technical comments, which we incorporated in this report as appropriate. The Department of State’s liaison for GAO told us that State believes the report findings are essentially factual and correct and opted not to comment further. We also obtained technical comments on parts of the report from the World Bank, WFP, and six PVOs and have incorporated them as appropriate. In its comments, USDA reiterated a number of key points and findings that were in the draft report and provided some additional information about certain aspects of the pilot program. Beyond that, USDA said it believes we have taken an overly critical view of how it has administered the pilot program, given time and resource constraints. Our draft report cited time and resource limitations as key factors affecting the management and possible effectiveness of the program. USDA also said it believes the report fails to recognize that the president directed a school feeding program, not an entire educational program. We disagree with this statement. We clearly said— as the White House did on the day the program was announced and as USDA itself did in its comments—that the pilot is a school feeding program with the three purposes of improving student enrollment, attendance, and learning. USAID said our draft report accurately and fairly depicted the complex and formidable challenges confronting the GFEI, fully endorsed our matters for congressional consideration, and said the findings and matters should be of great use to the Congress as it debates the structure of U.S. food assistance. 
USAID observed that the pilot placed priority on getting the program up and running, with program designers believing that improvements could then be made that would address issues of cost, sustainability, and the need for complementary programs. OMB commented that the draft report was balanced and generally accurate and would serve the Congress and the public in future deliberations about school feeding programs. OMB also said that the principal criticisms of the pilot program problems may be attributable to the urgency with which the program was generated. In addition, OMB said, greater emphasis was placed on the nutritional goals of the pilot rather than education objectives. One could expect that some of these problems could be addressed by a more deliberate approach to performance and evaluation, it said. We are sending copies of this report to interested congressional committees and the secretary of state; secretary of agriculture; and the administrator, USAID. Copies will also be made available to others upon request. If you or your staff have any questions about this report, please contact me on (202) 512-4347. Other GAO contacts and staff acknowledgments are listed in appendix XII. Scope and Methodology We obtained information on the Global Food for Education Initiative (GFEI) and pilot program from U.S. government officials at the Departments of Agriculture (USDA) and State, as well as officials from the Agency for International Development (USAID), the Office of Management and Budget (OMB), and the White House. We also obtained information from officials of the World Food Program (WFP), foreign donor governments, and representatives of private voluntary organizations. In addition, we met with representatives of the European Commission and the World Bank, and experts from private research institutions. We conducted our review in Washington, D.C.; Rome, Italy; and Brussels, Belgium. Our review addressed lessons learned from past international school feeding programs, the application of lessons learned to the pilot program, an assessment of the design and implementation phase of the pilot project, the impact of the GFEI on recipient country agricultural markets, and the commitment of other donor countries to the initiative. Our review did not address the in-country phase of the pilot program because projects were not operational during most of the time of our review. Our contact with PVOs was limited because most of their agreements were not finalized until we had completed most of our field work. To examine the lessons learned about the effectiveness and cost of school feeding programs in promoting increased school enrollment, attendance, and performance, we reviewed studies completed by the U.S. government, international organizations, private voluntary organizations, and private research institutions. We also met with selected experts in international school feeding. We reviewed the studies in terms of past programs’ impact on enrollment, attendance, and learning. In reviewing studies and meeting with experts, we also identified key factors common to effective school feeding programs. Through our analysis of information from World Bank and WFP, we also compared estimated costs of various school feeding programs. To examine the extent to which the U.S. 
pilot program has been built upon the lessons learned from previous school feeding programs, we met with senior officials of the USDA and State, USAID, the White House, and OMB, as well as representatives of private voluntary organizations, research institutions, and international organizations. We also reviewed program decisionmaking documents. We compared information obtained from these sources to key conclusions of past international school feeding studies and the views of various experts. To determine whether the U.S. pilot program was designed and implemented to reasonably ensure that the food aid and monetized proceeds were used effectively and efficiently, we gathered information and met with officials from the USDA, USAID, the White House, and OMB. We also obtained information from private voluntary organizations and WFP. We reviewed pilot program guidance, proposals, and relevant laws and regulations governing the development and administration of the pilot project. We also gathered and analyzed a variety of key pilot project information to provide estimates of tonnage, project costs, and number of beneficiaries by cooperating sponsor. We assessed selected information in proposals for approved pilot projects and nonmeal program components of these projects, including the amount budgeted and number of project beneficiaries. We applied our governmentwide internal control standards in evaluating the pilot project’s management and financial controls. To determine the views of other major food aid donors regarding support for a comprehensive, long-term global food for education initiative, we gathered information and met with officials from donor countries including Australia, Canada, Denmark, Finland, France, Germany, Italy, Japan, the Netherlands, Sweden, and the European Commission. We developed an analytical framework to summarize their individual and collective views on how food aid should be provided in terms of emergencies, development, cash, or food-in-kind. We conducted our review from November 2000 through December 2001 in accordance with generally accepted government auditing standards. Pilot Program Projects’ Implementing Partners, Countries, Agreement Status, Tonnage, Cost, and Beneficiaries International Orthodox Christian Georgia Charities (IOCC) Does not include a late fiscal year 2002 shipment of 2,350 metric tons. Some projects involve multiple commitments. The United States approved 34 WFP proposals covering 27 WFP projects in 23 countries. Of the 34 proposals, 8 were for expansions of already existing school feeding projects. The United States approved two different projects each for Guinea, Kenya, Nicaragua, and Uganda. As of February 21, 2002, USDA and WFP were still negotiating the terms of the second project for Guinea, and no figures for this project are shown in the table. The World Food Program’s Role in School Feeding and Food for Education The World Food Program (WFP), set up in 1963, is a major U.N. agency in the fight against global hunger. In 2000, WFP fed 83 million people in 83 countries, including most of the world’s refugees and internally displaced people. It shipped 3.5 million tons of food; received $1.75 billion in donations; and had operational expenditures of $1.49 billion (provisional figures). 
WFP provides three basic kinds of food aid: (1) emergency assistance to cope with the adverse food effects of natural disasters, civil conflict, and war; (2) protracted relief or rehabilitation aid to help people rebuild their lives and communities once the causes of emergencies recede; and (3) development assistance that aims to make communities food secure so they can devote time, attention, and work to escaping the poverty trap. When WFP was founded, its food assistance primarily focused on development, and for years development projects accounted for more than two-thirds of its expenditures. However, during the past 15 years, WFP has become increasingly involved in responding to humanitarian emergencies. According to WFP officials, WFP devoted 28 percent of its resources to development in 1997, 18 percent in 1999, and only 13 percent in 2000. WFP relies entirely on voluntary contributions to finance its projects. Governments are the principal source of funding, but corporations, foundations, and individuals also contribute. Donations are made either as cash, food (such as grains, beans, oil, salt, and sugar), or the basic items necessary to grow, cook, and store food—kitchen utensils, agricultural tools, and warehouses. Since it has no independent source of funds, WFP’s Executive Board has mandated that all food donations, whether in cash or in-kind, must be accompanied by the cash needed to move, manage, and monitor the food aid. WFP has been running school feeding programs for nearly 40 years. In 1999, it operated 76 school feeding projects in 48 developing countries. These included 33 emergency or protracted relief projects that had 5.28 million beneficiaries and 43 development projects that had 5.85 million beneficiaries. Thus, total beneficiaries were 11.13 million. In 2000, WFP operated 68 projects in 54 countries, with a total of 12.27 million beneficiaries. According to WFP, the total expenditure for its school feeding operations in 2000 was approximately $421 million. About $239 million was for development projects focused on school feeding, and the remainder was for school feeding components of emergency or protracted relief and recovery operations. WFP welcomed President Clinton’s July 23, 2000, announcement of the $300 million pilot program to launch a universal school feeding program, noted that it had been working closely with the U.S. ambassador to the U.N. Food Agencies in Rome to assist in the creation of such a program, and expressed the hope that the initiative would become a permanent feature of the global community of nations. A few days later, WFP’s executive director, in testimony before a U.S. Senate committee, said a global program needs to be managed by a global organization and WFP, as the food aid arm of the U.N., was uniquely qualified to manage the initiative. Regarding its role in implementing a global program, WFP has said that much could be done to strengthen the education system in many developing countries.According to WFP, this a highly complex task, one for which food aid is not the most effective resource. WFP’s approach will be to use food aid where the food is needed. WFP does not propose to monetize food commodities to fund related educational support activities. WFP will monetize only to effect an exchange between donated commodities and locally grown foods when this is cost effective and does not have an adverse effect on local markets. 
At the same time, WFP recognizes that while school feeding can bring children to school and help them learn while they are there, school feeding does not ensure qualified teachers, books and supplies, or a suitable curriculum. According to WFP, this is the role of national governments, often supported by international organizations or Private Voluntary Organizations (PVO); and the relationship between improvements in an education system and a national system of school feeding is one that must be managed by governments. However, within the broad framework of government cooperation, WFP said, it is eager to work with other operational partners and experienced in doing so. Underfunding of Projects WFP told us that many of its school feeding projects have shortfalls.Funding for all components of approved projects, including current school feeding programs, depends on the level of contributions received. When and where possible, WFP will allocate unearmarked donations to underfunded projects, taking into consideration the urgency of the need and a need to comply with the executive board’s approved allocation formula. According to WFP, it usually is not technically feasible to identify how many children were not fed due to under-resourcing. An unstable resourcing situation often compels project managers to temporarily adjust the on-site ration size or the number of food distribution days, rather than reducing the number of beneficiaries, it said. When under-resourcing is of a more permanent nature, the project plan is revised and a formal change in the beneficiaries occurs. WFP’S Approach to Certain Key Factors Associated with Effective School Feeding Programs WFP has developed several documents that describe its policies for establishing school feeding programs and which guide the project development and approval process for all WFP school feeding activities.The following is a brief summary of some of the points presented in these documents, or provided directly to us by WFP in response to questions that we provided to the agency, regarding certain key factors associated with their school feeding programs. Targeting—The focus of WFP’s world school feeding initiative is on feeding preschool and primary school age children. On an exceptional basis, food aid activities designed to encourage girls to continue their education beyond primary school will be considered. Some fundamental issues to be examined in determining the problems to be addressed are (1) enrollment and dropout rates in primary education broken down by gender, region and sociocultural groups, to the extent possible, and factors explaining these rates; (2) extent of, and factors contributing to, short-term hunger; (3) average distances walked by the students, who will be covered in the school feeding activity, between their homes and their school; and (4) cultural practices affecting enrollment/attendance, especially of girls. As a general rule, targeting within school feeding projects will be conducted at the level of geographic areas, with no selection of individual pupils within schools. The only exception for this may be when the effectiveness of an incentive for a particular category (e.g., girls) can be demonstrated. According to WFP, it requires at least 50 percent of its resources in education to be targeted for girls, and WFP has been very successful in achieving this requirement. WFP has a vulnerability analysis and mapping unit (VAM) to identify people most vulnerable to hunger and to target their needs. 
According to WFP, VAM uses state of the art satellite imagery of rainfall and crop conditions, as well as monitoring of food prices in local markets. WFP has VAM sub-units in more than 50 developing countries. According to WFP, this system is also used in targeting its school feeding programs. Facilitative learning environment—WFP told us that it does not require a facilitative learning environment to be in place or provided as part of its programs, but such an environment is highly desired and encouraged. According to WFP, the presence of school feeding in schools helps bring attention to other school conditions (e.g., classrooms, materials, sanitary facilities, teachers, curricula, and health conditions) and, in turn, helps WFP and its partners to bring attention to problems and attract other needed resources. Safe water and sanitation—WFP guidelines say basic water supply and sanitation standards must be met if food is to be safely stored and prepared for school feeding, and safe water supply should be available on the school premises at all times. WFP provides detailed information on optimal and minimal standards for a safe water supply and sanitation at schools. However, WFP told us it does not require safe water and sanitation facilities to be in place in order to implement school feeding in a given school and, as a rule, does not provide water and sanitation facilities. However, WFP said, it does work with the national and local governments and with other U.N. agencies, donors, and nongovernmental organizations who have the appropriate skills and resources to “trigger” action where the lack of such facilities is a problem. Deworming treatments—According to WFP guidelines, WFP will generally support deworming in a school feeding program when more than 50 percent of the children have intestinal parasites. Treatment is with a single dose of the proper medicine, up to three times a year, and should be combined with improved sanitation and safe water supply, as well as health education on prevention. In April 2001, WFP told us that it did not yet have complete information regarding which of its school feeding programs had already initiated deworming activities (due to decentralized decision-making and no prior requirements for reporting such information). However, WFP said it did know that most or all of its school feeding operations in Latin America and the Caribbean and two or more in Asia had at least implemented limited deworming activities. WFP estimated that by the end of 2001, it would have initiated deworming in its school feeding programs in 15 or more countries, in partnership with WHO and the World Bank, and assisted, in part, by a Canadian grant. WFP said that it hopes to achieve deworming activities in most or all GFEI, as well other WFP school feeding operations. WFP also noted that national, regional, or local governments may require deworming to be in place. Micronutrient supplementation—WFP guidelines note that school feeding can be a vehicle for micronutrients in countries where school children are affected by and/or at high risk of developing micronutrient deficiencies. WFP provides information on micronutrient deficiencies that have been shown to affect school attendance and performance, recommended levels of intake of these micronutrients for 3- to 12-year old children, and guidance on how to use them in school meals. WFP told us that micronutrient supplementation is most often handled as an additive to the commodities that are distributed. 
In cases where the commodities that arrive are not fortified, WFP most often works locally to fortify the food or seeks other remedies. WFP collaborates with groups that have expertise and resources to bring to bear, especially UNICEF, WHO, a Canadian micronutrient initiative, and certain NGOs. WFP noted that national, regional, or local governments may require micronutrient supplementation to be in place. Health and nutrition education—WFP told us that this is not strictly required in all WFP school feeding operations. However, such activities are highly encouraged, are frequently planned and implemented, and will be further strengthened through collaboration with appropriate partners and coworkers on the ground. WFP noted that national, regional, or local governments may require health and nutrition education to be in place. Community and parental participation—WFP told us that community and parental participation are not strictly required in all WFP school feeding operations. However, WFP said, such activities are highly encouraged,are frequently planned and implemented, and are and will be further strengthened through collaboration with appropriate partners and coworkers on the ground. WFP noted that its data indicates that as girls’ enrollment and attendance increases, so does parental participation. WFP also noted that national, regional, or local governments may require parental involvement to be in place. Education for All—WFP expects recipient governments to have demonstrated a commitment to Education for All. Sustainability—WFP requires that plans be in place for eventual take- over of a feeding program by recipient countries. WFP generally insists that programs be supported by national governments and local communities and that resources and infrastructure be provided as counterpart contributions. However, WFP will consider providing school feeding activities in some emergency and protracted relief situations where full government support is not possible. In addition, for low income countries, it is probably necessary to provide most or all of the food commodities, technical assistance, and equipment. According to a WFP official, sustainability depends on the economic status of the recipient country. There are countries where the national government has been able to take over a program. However, in the poorest, least developed countries, he said, sustainability is only possible where there is substantial community involvement. In many least developed countries, government expenditure on the education sector often represents up to 30 percent of the national budget; it is difficult enough for such countries to maintain the physical infrastructure and teachers. For least developed countries, sustainability is a long-term process. A realistic estimate is 10 to 15 years, he said. Monitoring and Evaluation WFP officials told us that there had been some problems in the past, but WFP is working hard to overcome them for both the U.S. pilot program and its other school feeding activities. As an example of problems, collection of baseline date had varied, depending on the country, the specific goals of the school feeding activity, and the resources available. Principal performance indicators that WFP tended to use were increased enrollment and attendance, reduced dropout rates, and improved performance (such as number of students who had completed primary school the previous year and gone on to higher education). 
WFP had looked at these indicators, especially as they relate to girls’ education, and had been able to report some notable successes. However, WFP had only done so in isolated cases and countries. Therefore, WFP intends under GFEI to standardize the indicators and upgrade its monitoring and evaluation systems so as to be able to regularly collect and report comparable and up-to-date data for its school feeding operations. WFP also said that data collection and analysis in developing countries is challenging and requires additional resources and capacity building of national counterpart staff.
WFP’s guidelines for its new World School Feeding Initiative require a baseline monitoring study to establish the situation prior to the onset of the initiative, followed by periodic updates as a program is implemented. To this end, WFP developed a detailed survey instrument for collecting baseline information on its GFEI-funded projects. The survey was pilot-tested in August 2001, and WFP conducted the surveys in a sample of schools for all of the U.S. pilot program projects before the end of 2001 (details of the survey instrument are discussed in the letter).
In addition, according to WFP, during 2001, it developed and successfully pilot-tested a new system of collecting key monitoring data on a timely basis directly from the schools involved in its feeding programs. The system involves school staff entering key data directly into devices, installed at the schools, that transmit the data via satellite to a data collection center in France, using the ARGOS satellite system (which is jointly managed by the governments of France and the United States). Country data is then reported from the data collection center to the country’s relevant ministry of education and to WFP. WFP is seeking donors to fund installation of the system in its schools.
WFP also conducted a major, global survey of national school feeding programs (not specific projects) between May and December 2001. The survey collected information on countries’ school feeding programs and related information on their demography; education system; nongovernmental program assistance; health-related education services at school; and evaluations, studies, and surveys about school feeding and related topics. According to WFP, the survey provides a focal point for school feeding information, which WFP will use to promote dialogue with governments and nongovernmental organizations concerning the use of food aid for education and related issues. WFP will also use the data to produce special reports, identify country-specific needs, and coordinate partnerships between countries with experience in school feeding and those in need. WFP is posting country-specific results on its Web site.
Regarding evaluations, WFP’s central evaluation office generally does not conduct separate evaluations of the school feeding projects that WFP assists. (Occasionally separate evaluations of school feeding projects are undertaken if specifically requested by the executive board.) WFP mandates that evaluations of its country programs be conducted about every 4 years, on average. The evaluations are submitted to WFP’s Executive Board for review. If a country has a school feeding project, the project’s role, relevance, and performance as an activity are to be included in the review.
Results from Review of Experts’ Findings and Views on School Feeding Programs
This appendix provides additional information on our review of experts’ findings and views concerning (1) the effect of school feeding programs on enrollment and attendance, (2) the effect of school feeding programs on educational performance or learning, and (3) key factors contributing to effective school feeding programs (see tables 4 and 5). It also provides further information on key factors associated with effective school feeding programs (see tables 6 through 10). (See also app. V, which discusses the costs and cost effectiveness of school feeding programs.) Our review relied considerably on the views of two experts who have reviewed the results of many school feeding program studies; WFP, which has conducted school feeding programs for 4 decades and also reviewed the results of other studies; and the summary views of a meeting of experts and practitioners held at USAID in October 2000. We also conducted literature searches, reviewed the results of individual studies on school feeding programs, and spoke with experts and practitioners.
Table 4 summarizes the results of studies and expert views on the relationship between school feeding and school enrollment and attendance. Table 5 summarizes the results of several studies and expert views on the relationship between school feeding and school performance. Table 6 provides results and views on how targeting factors can affect school feeding program effectiveness. Ways to target programs include focusing on areas and communities that (1) are low-income and food insecure, (2) have relatively low levels of school enrollment and attendance, and (3) have girls’ enrollment and attendance considerably lower than boys’. Table 7 provides results and views on how learning environment factors can affect school feeding program effectiveness, including ensuring adequate numbers of teachers, teacher training, supplies of textbooks and other learning materials, and school infrastructure. Table 8 provides results and views on how health and nutrition factors can affect school feeding program effectiveness, including through treating intestinal parasitic infections, ensuring clean water and adequate sanitation facilities, addressing micronutrient deficiencies, and ensuring health and nutrition education. Table 9 provides results and views on how community and parental involvement can affect the effectiveness of school feeding programs. Table 10 provides results and views on the effect of government commitment and sustainability on the effectiveness of school feeding programs. Among the factors addressed are national government commitment to broad, national school reform programs, resource commitments by national governments and local communities, and plans for program sustainability.
References
Agarwal, D.K.; Upadhyay, S.K.; Tripathi, A.M.; and Agarwal, K.N. Nutritional Status, Physical Work Capacity and Mental Function in School Children. Nutrition Foundation of India, Scientific Report 6 (1987). As cited in Del Rosso, 1999.
Ahmed, A.U. and Billah, K. Food for Education Program in Bangladesh: An Early Assessment. International Food Policy Research Institute, Bangladesh Food Policy Project. Dhaka, Bangladesh: 1994.
Berg, A. “School Daze,” New & Noteworthy in Nutrition 34 (1999).
Berkley, S. & Jamison, D. A Conference on the Health of School Age Children.
Sponsored by the United Nations Development Programme and the Rockefeller Foundation, held in Bellagio, Italy, August 12-16, 1991. As cited in Whitman et al, 2000.
Briefel, R.; Murphy, J.; Kung, S.; & Devaney, B. Universal-Free School Breakfast Program Evaluation Design Project: Review of Literature on Breakfast and Reporting. Mathematica Policy Research, Inc. Princeton, New Jersey (December 22, 1999).
Bundy, D.A.P., & Guyatt, H.L. Global Distribution of Parasitic Worm Infections. Paris: UNESCO (1989). As cited in Whitman et al, 2000.
Chambers, C.M. “An Evaluation of the World Food Program (WFP)/Jamaica 2727 School Feeding Program.” Cajanus 24(2) (1991), pp. 91-102. As cited in Del Rosso, 1999.
Del Rosso, J.M. & Marek, T. Class Action: Improving School Performance in the Developing World through Better Health and Nutrition. Washington, D.C.: The World Bank (1996).
Del Rosso, J.M. School Feeding Programs: Improving Effectiveness and Increasing the Benefit to Education: A Guide for Program Managers. The World Bank (August 1999).
Devadas, R.P. The Honorable Chief Minister’s Nutritious Meal Program for Children of Tamil Nadu. Coimbatore, India: 1983. As cited in Del Rosso, 1996.
Gopaldas, T., Gujral, S. “The Pre-Post Impact Evaluation of the Improved Mid-Day Meal Program, Gujarat (1994-continuing).” Tara Consultancy Services, Baroda, India (1996). As cited in Del Rosso, 1999.
Hubley, J. “School Health Promotion in Developing Countries: A Literature Review.” Leeds, U.K.: Self-published (1998). As cited in Whitman et al, 2000.
IFPRI. Feeding Minds While Fighting Poverty. Washington, D.C.: IFPRI (2001).
Janke, C. “SFPs and Education: Establishing the Context.” Catholic Relief Services (CRS) School Feeding/Education Companion Guidebook. 1996.
Jarousse, J.P., & Mingat, A. “Assistance a la formulation d’une politique nationale et d’un programme d’investissement dans le secteur de l’education au Benin,” Project UNESCO/PNUD Benin/89/001. Paris: UNESCO (1991). As cited in Whitman et al, 2000.
Khan, A. “The sanitation gap: Development’s deadly menace,” The Progress of Nations 1997. New York: UNICEF (1997).
King, J. Evaluation of School Feeding in the Dominican Republic. Santo Domingo, Dominican Republic: CARE (1990). As cited in Whitman et al, 2000.
Levinger, B. School Feeding Programs in Developing Countries: An Analysis of Actual and Potential Impact. AID Evaluation Special Study No. 30. USAID (January 1986).
Levinger, B. Statement of Beryl Levinger before the Committee on Agriculture, Nutrition, and Forestry. U.S. Senate, July 27, 2000.
Levinger, B. GAO interview with Beryl Levinger, March 9, 2001.
Lopez, I.; de Andraca, I.; Perales, C.G.; Heresi, M.; Castillo, M.; and Colombo, M. “Breakfast Omission and Cognitive Performance of Normal, Wasted and Stunted Schoolchildren.” European Journal of Clinical Nutrition 47 (1993).
Meme, M.M.; Kogi-Makau, W.; Muroki, N.M.; and Mwadime, R.K. “Energy and Protein Intake and Nutritional Status of Primary School Children 5 to 10 Years of Age in Schools with and without Feeding Programs in Nyambene District, Kenya,” Food & Nutrition Bulletin, Vol. 19, No. 4, 1998.
Moore, E. & Kunze, L. Evaluation of Burkina Faso School Feeding Program. Catholic Relief Services, consultant report (February 1994).
Nazaire, J. CRS Select Targeting and Design Guidelines for School Feeding and Other Food-Assisted Education Programs. Catholic Relief Services (2000).
Nokes, C.; Grantham-McGregor, S.M.; Sawyer, A.W.; Cooper, E.S.; Robinson, B.A.; & Bundy, D.A.
“Moderate to High Infections of Trichuris trichiura and Cognitive Function in Jamaican School Children,” Parasitology, Vol. 104, June 1992.
Pillai, N. “Food Aid for Development? A Review of the Evidence.” In Food Aid and Human Security, Clay, E., Stokke, O., eds. London, England: Frank Cass Publishers (2000).
Pollitt, E. “Does Breakfast Make a Difference in School?” Journal of the American Dietetic Association, Vol. 95, October 1995.
Pollitt, E. “Malnutrition and Infection in the Classroom: Summary and Conclusions,” Food and Nutrition Bulletin, Vol. 12, No. 3, 1990.
Ponza, M.; Briefel, R.; Corson, W.; Devaney, B.; Glazerman, S.; Gleason, P.; Heaviside, S.; Kung, S.; Meckstroth, A.; Murphy, J.; & Ohls, J. Universal-Free School Breakfast Program Evaluation Design Project: Final Evaluation Design. Mathematica Policy Research, Inc. Princeton, New Jersey (December 20, 1999).
Rajan, S.I.; Jayakumar, A. “Impact of Noon Meal Program on Primary Education: An Exploratory Study in Tamil Nadu.” Economic and Political Weekly (1992). As cited in Del Rosso, 1999.
Select Committee on Hunger, United States House of Representatives. Alleviating World Hunger: Literacy and School Feeding Programs. U.S. Government Printing Office (1987). As cited in Del Rosso, 1999.
Seshadri, S. & Gopaldas, T. “Impact of Iron Supplementation on Cognitive Functions in Pre-School and School-aged Children: The Indian Experience.” The American Journal of Clinical Nutrition, Vol. 50 (1989).
Shresta, R.M. “Effects of Iodine and Iron Supplementation on Physical, Psychomotor, and Mental Development in Primary School Children in Malawi.” Ph.D. thesis, University of Malawi, Wageningen (1994). As cited in Whitman et al, 2000.
Simeon, D.T., & Grantham-McGregor, S. “Effects of Missing Breakfast on the Cognitive Functions of School Children of Differing Nutritional Status.” American Journal of Clinical Nutrition 49 (1989).
Stakeholders. “School Feeding/Food for Education Stakeholders’ Meeting.” Summary proceedings of a meeting at USAID of 50 practitioners and experts from USAID, USDA, the World Bank, UNICEF, the World Food Program, and other organizations that either administer or implement school feeding programs. October 3, 2000 (unpublished).
UNDP. Partnership for Child Development: An International Program to Improve the Health and Education of Children through School-Based Services. Project document, interregional project. New York (1992). As cited in Whitman et al, 2000.
UNESCO. Basic Learning Materials Initiative. www.unesco.org (downloaded Nov. 2001).
UNICEF. Focusing Resources on Effective School Health: A FRESH Start to Enhancing the Quality and Equity of Education (2000).
UNICEF. “Basic Education Fund Raising Kit.” www.unicef.org (downloaded March 12, 2001).
Whitman, C.V.; Aldinger, C.; Levinger, B.; and Birdthistle, I. Thematic Study on School Health and Nutrition. Education Development Center, Inc. (March 6, 2000).
World Bank. GAO interviews with World Bank officials, May 15 and August 9, 2001.
World Food Program (a). Implementation of Operational Guidelines for WFP Assistance to Education (1995).
World Food Program (b). “Project Pakistan 4185: Promotion of Primary Education for Girls in Baluchistan and NWFP” (1995). As cited in Del Rosso, 1999.
World Food Program (c). Thematic Evaluation of Long-Term School Canteen Projects in West Africa. WFP Office of Evaluation (1995).
World Food Program. “Report on Pilot School Feeding Programme,” Evaluation Report, WFP/MALAWI (1996) (unpublished). As cited in Del Rosso, 1999.
World Food Program, UNESCO, and World Health Organization. School Feeding Handbook. Rome, Italy (1999).
World Food Program. “School Feeding/Food for Education.” World Food Program comments in response to the October 3, 2000, Stakeholders’ Meeting (2000) (unpublished).
Young, M.E. “Integrated Early Child Development: Challenges and Opportunities.” World Bank, 1995.
Costs of School Feeding Programs
This appendix discusses actual costs of school feeding programs as determined by two World Bank studies, as well as World Food Program (WFP) cost estimates of its programs and our own estimates of school feeding programs based on WFP guidelines, cost factors, and other data. It also provides information on situations where school feeding programs may not be as cost-effective in promoting learning as certain other approaches.
Table 11 provides figures on the actual costs of more than 30 school feeding programs in 21 countries that were reported in two World Bank studies. Table 11 shows the annual cost of providing 1,000 calories per student on a daily basis for a 180-day school year; dollar values have been expressed in 2000 dollars. As the table shows, costs vary significantly, ranging from a low of $4.29 for one program to a high of $180.31 for another. All but four of the programs cost more than $23 per pupil, and the average cost for all programs was $58.66 per student. Cost differences can be due to a variety of factors, such as differing program objectives, the type of food served, and the costs of transporting the food to the country and, once there, to its final destination.
In April 2001, WFP officials told us they estimated that the current average cost of WFP school feeding programs ranged between about $22 and $27 per student for a 180-day school year. They said WFP did not have precise figures available on the average costs of its school feeding programs because it has not required data to be reported in the specific category of school feeding. Many large projects have a school feeding component, they noted, but are not entirely devoted to school feeding. Subsequently, in July 2001, WFP issued a paper that reported the average cost of its school feeding development projects in 2000 at 19 cents a day (or $34.20 for a 180-day program).
We prepared a separate estimate of the cost of school feeding programs using some WFP guidelines and cost factors and other data. According to WFP, the recommended daily school feeding ration for full-time primary school students can range from 600 to 2,000 calories, depending on whether schools are half day, full day, or boarding. For day school, the recommended acceptable range is between 1,200 and 1,500 calories (i.e., 60 to 75 percent of the daily energy requirements of school-age children). The guidelines also indicate that a minimum of 10 percent of calories should be obtained from consumption of edible fats. In addition, the guidelines for day schools recommend that school feeding programs provide 28 to 36 grams of protein; 13 to 17 grams of fat; and no more than 300 grams of cereals, 30 grams of pulses, and 15 grams of vegetable oil. We analyzed the nutritional value of typical food aid commodities and determined that the least costly mix of commodities--consisting of corn and vegetable oil--that met the above requirements for primary day schools would cost 3.72 cents per child per day (based on USDA valuations of the commodities for 2001). If this diet were supplied for 180 days, the food alone would cost approximately $6.69 per child.
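The arithmetic behind this estimate, together with the administrative, storage, and transportation add-on described in the next paragraph, can be reproduced from the rounded figures reported in this appendix. The short Python sketch below is illustrative only; small differences from the published totals reflect rounding of the per-day commodity cost.

# Reproduces, approximately, the per-child cost estimates described in this
# appendix. All inputs are the rounded figures reported in the text, so the
# computed totals can differ from the published numbers by a cent or two.

SCHOOL_DAYS = 180

# Least-cost corn-and-vegetable-oil ration meeting the day-school guidelines
food_cost_per_day = 0.0372                              # dollars per child per day
food_cost_per_year = food_cost_per_day * SCHOOL_DAYS    # ~ $6.70 (text: $6.69)

# Administrative, storage, and transportation add-on, estimated from overall
# WFP program costs in 1998 to 1999 (see the next paragraph)
overhead_per_year = 7.70
basic_total = food_cost_per_year + overhead_per_year    # ~ $14.40 (text: $14.39)

# Nutritionally more complete recipe including vitamins, micronutrients,
# and minerals
complete_total = 13.80 + 15.87                          # $29.67 per child

print(f"Basic corn-and-oil ration: about ${basic_total:.2f} per child per school year")
print(f"Nutritionally complete recipe: about ${complete_total:.2f} per child per school year")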
On the basis of overall WFP costs for its various food aid programs in 1998 to 1999, we estimated that administrative, storage, and transportation costs would result in an additional cost per child (for a 180-day school meal program) of $7.70. The total average cost of this diet would be $14.39 per student. When factoring in the nutritional requirements of school-age children to include other essential elements, such as vitamins, micronutrients, and minerals, we found the lowest-cost, most nutritionally complete recipe would cost $29.67 per child ($13.80 for the food and $15.87 for administrative and transportation costs).
Situations Where Other Approaches May Be More Cost Effective
According to a number of experts, school feeding programs may be less cost effective than other possible approaches, such as establishing maternal child health and early childhood development programs and providing alternative nutritional or educational interventions. According to a USAID official, if nutrition is the problem, maternal child health and preschool feeding programs are more cost effective than school feeding programs. If education is a major weakness, investments in educational reform, teacher training, and learning facilities are more cost effective.
In 2001, a USAID-contracted evaluation of its school feeding program in Haiti, covering the period 1996 to 2000, was completed. (The program was primarily a school-feeding-only operation; however, some resources were devoted to food for education activities.) The report concluded there is no causal connection between school feeding and improved educational performance. Other factors, such as school quality and parental variables, have a more direct influence on educational outcomes, it said. The report found the food for education approach to be very promising, provided that food is used as leverage to improve school quality. The report recommended that USAID consider devoting all of the school feeding resources to food for education activities. However, USAID decided to phase out school feeding activities over a 3-year period. According to a USAID official, Haiti was losing too many children before they ever got to school. As a result, USAID concluded it would be more cost effective to employ the resources in a maternal and child health program, which would increase the likelihood that children will be healthy when they reach school age.
Table 12 provides an estimate of the cost effectiveness of nutrition-related interventions for a typical developing country, in terms of the return on each program dollar spent, as reported by the World Bank. (Impact is estimated in terms of wages rather than learning per se.) As shown in table 12, school feeding has one of the lowest returns ($2.80) of the 11 interventions. Interventions with the highest returns on each program dollar spent are iron fortification of flour ($84.10), vitamin A supplementation for all children under age 5 ($50), nutrition education ($32.30), and iodized salt ($28).
In a study of the cost effectiveness of 40 educational interventions in Latin America, the authors surveyed a panel of 10 world experts on educational research and practical attempts at educational reform in the region, as well as 30 Latin American planner/practitioners working primarily in education ministries. Of the 40 interventions, 4 were variations on school feeding programs.
None of the school feeding options were identified as being among the top 10 interventions for increasing learning, taking account of the estimated likelihood of adequate implementation (see table 13). The school feeding options were ranked between 23 and 34 in terms of increasing learning and between 34 and 40 when cost effectiveness was also considered.
According to Beryl Levinger, an expert on school feeding and food for education programs, there are children in developing countries who can effectively and efficiently benefit from school feeding programs. Short-term hunger is a genuine problem, and school feeding is one way to get and keep children enrolled in school, she said. At the same time, success in improving school enrollment, attendance, and learning is context driven, and many external factors can affect and interfere with these outcomes, she said. Therefore, according to Levinger, one needs to assess the total picture and identify the most important needs and best solutions for addressing them. For example, if the quality of education in a particular community is low and resources are limited, it is possible that resources could be better spent on improving education than on addressing short-term hunger. As learning tasks become more interesting, she noted, learning goes up. Levinger estimated that providing motivational textbooks and other learning materials and training teachers in active learning methods would cost roughly $5 per pupil per year. For an additional $2, she said, one could also provide some micronutrient supplementation and deworming treatments.
Multiple studies of treatments for intestinal parasite infections, through iron supplementation and regular deworming, have shown benefits of lower absenteeism and higher scores on tests of cognition or school achievement at a cost of about $1 per child per year. This is considerably less costly than school feeding programs, which average $34 per child per year. However, we are not aware of any studies that assess and compare the relative impacts of programs that only treat for parasite infections to programs that provide a school meal.
In April 2000, the World Health Organization, the U.N. Educational, Scientific, and Cultural Organization, the U.N. Children’s Fund, and the World Bank proposed a strategy for Focusing Resources on Effective School Health (FRESH) to give a fresh start to improving the quality and equity of education and promoting the Education for All goal. They noted that poor health and malnutrition are important underlying factors for low school enrollment, absenteeism, poor classroom performance, and early school dropout. The agencies identified a core group of activities that they said captured the best practices from their programming experiences, were highly cost-effective, and provided a starting point to which other interventions might be added as appropriate. The agencies recommended that the following basic components of a school health program be made available together, in all schools: (1) health-related school policies; (2) provision of safe water and sanitation; (3) skills-based health, hygiene, and nutrition education; and (4) school-based health and nutrition services. Regarding the latter component, the agencies said schools can effectively deliver some health and nutritional services provided that the services are simple, safe, and familiar and address problems that are prevalent and recognized as important within the community.
For example, they said, micronutrient deficiencies and worm infections may be effectively dealt with by infrequent (6-monthly or annual) oral treatment. As another example, they said changing the timing of meals, or providing a snack to address short-term hunger during school—an important constraint on learning—can contribute to school performance. In commenting on a draft of portions of this report, WFP officials said there has been no more cost-effective approach identified than school feeding for the combined objectives of increasing enrollment, attendance, and performance in developing countries--especially in areas of food insecurity. Further, when the key resource available is food, the case for school feeding to accomplish these objectives is indisputable, they said.
Process to Solicit, Evaluate, and Approve Proposals for the Pilot Program
USDA used considerably different processes to solicit, evaluate, and approve program proposals from interested cooperating sponsors and from WFP. Cooperating sponsors, including Private Voluntary Organizations (PVO) and the government of the Dominican Republic, underwent an expedited two-stage qualification and proposal review process that either did not apply to WFP or generally was different from that applied to WFP. Proposal formats and the criteria applied to them by reviewers varied considerably. An interagency Food Assistance Policy Council (FAPC) made the final selection of project awards.
Proposal Process and Information Required
On September 6, 2000, USDA published a notice in the Federal Register requesting proposals from interested cooperating sponsors to carry out activities under GFEI. (See app. XI for key events under GFEI.) USDA said it would use section 416(b) of the Agricultural Act of 1949 to provide surplus agricultural commodities in support of an international school feeding program to improve student enrollment, attendance, and performance in poor countries. Proposals would be reviewed on an expedited basis. Given time constraints and the considerable effort and time involved in preparing and evaluating proposals, USDA invited interested sponsors to present an initial submission that contained only information intended to demonstrate, based on experience, the organizations’ administrative capabilities for implementing and managing school feeding or monetization of commodities for school feeding. USDA identified nine types of information that should or could be provided. The deadline for initial submissions was September 15, 2000. USDA said that sponsors found to be most capable of successfully implementing school feeding activities under step one would then be invited to provide a supplemental submission addressing their specific proposed activities. The deadline for the step-two submission was September 29, 2000.
USDA said the submissions should provide information that supported the goal of establishing a preschool or school feeding program to draw children into the school environment and improve access to basic education, especially for females. Priority consideration would be given to countries that had a commitment to universal free education but needed assistance in the short run; places where preschool or school feeding programs would promote significant improvements in nutrition, school enrollment, and attendance levels; projects involving existing food for education programs; and projects where the likelihood of support from other donors was high.
USDA requested that sponsors provide, to the extent possible, information on (1) literacy rates for the target population; (2) the percentage of children attending schools, with special emphasis on school-age girls; (3) public expenditure on primary education; (4) whether the country currently operated a school feeding initiative (either through USAID, with assistance from the World Bank, or through internal resources); (5) program impact on areas such as teacher training, community infrastructure (e.g., PTAs and community groups), health, and nutrition; and (6) other potential donors. USDA also referred interested parties to the Code of Federal Regulations, which describes the requirements for the standard 416(b) program. These regulations provide additional guidance on factors to address in preparing a proposal.
Twenty-nine PVOs submitted part one of the proposal application within the required time frame. On September 22, 2000, USDA announced that 20 PVOs had qualified for further consideration and invited them to submit the second part of the application on the specific projects they were proposing. In addition, USDA announced that the government of the Dominican Republic had submitted an application, which had been approved for further consideration, and that WFP was eligible to participate in the pilot program.
The September 6, 2000, Federal Register notice stated that the pilot program was also open to WFP. USDA did not require WFP to provide either the initial or the supplemental submission. WFP had already submitted a set of proposals to USDA in August 2000, following consultations with USDA officials. These proposals (1) were abbreviated; (2) concerned already existing or approved WFP school feeding projects that had not been fully funded, as well as planned expansions of these or other projects; and (3), in general, did not address many points that USDA had asked cooperating sponsors to address in the second-stage submission. The proposals typically contained a brief half-page description of the project, accompanied by a summary budget for the commodities requested. Some, but not all, U.S. agency officials charged with reviewing the proposals were told they could obtain additional information describing the projects on WFP’s Web site. However, some projects had been approved by WFP’s Executive Board in prior years, and information posted on the Web site was sometimes incomplete and/or out of date.
USDA officials noted that the United States is a member of the WFP Executive Board and as such has a vote on which WFP proposed projects should be approved. They also noted that a vote by a donor country to approve a project does not mean that the country intends to donate to that project. In addition, they noted that approved WFP projects submitted to the pilot program in August 2000 would have been approved by the executive board prior to the U.S. announcement of the pilot program and GFEI.
According to WFP officials, WFP is strongly committed to addressing the key factors associated with effective food for education programs discussed in this report. The U.S. government is well aware of this commitment, and as a result WFP did not deem it necessary to make repeated reference to this commitment in the country-specific information included in its proposals.
WFP officials noted that proposals submitted to USDA for projects that had already been approved by WFP’s Executive Board had gone through a long vetting process, adding that approval of a WFP project requires unanimous consensus from all executive board members, including the United States. The officials also noted that written documentation on its projects had been provided to U.S. government representatives during previous WFP Executive Board sessions when the projects had been reviewed and approved, as well as in sessions to review projects that had been operational. As a result, WFP officials said, the U.S. government had plenty of documentation for evaluating WFP-proposed projects apart from the documentation available at WFP’s Web site.
However, USAID told us that when the United States concurs in an executive board decision to approve a project, the United States frequently states its concerns or reservations about the feasibility or sustainability of program activities and has done so in the case of school feeding programs. Therefore, the fact that a particular project had been approved by WFP’s Executive Board did not necessarily mean the project was a good candidate for the U.S. food for education pilot program. In addition, according to a USAID official, though in principle U.S. government personnel responsible for evaluating WFP proposals could have gone to WFP’s Web site to look up additional documentation, there was little time to do this because of the push to get the pilot program up and running so quickly. He added that he knew of no one who used the Web for this purpose. He also said the evaluation task force members did not receive hard copies of documentation beyond the abbreviated set of proposals provided by WFP to USDA.
Proposal Evaluation Process and Agencies’ Criteria
USDA/Foreign Agricultural Service (FAS) staff evaluated the initial PVO submissions on the basis of criteria in USDA’s September 6, 2000, Federal Register notice. USDA/FAS assigned different weights to the criteria. PVOs that scored above a certain level were invited to submit the second part of the requested proposals. Of the 20 PVOs invited to make a second submission, 19 responded and 1 declined, citing a lack of adequate time to prepare the type of careful proposal the organization wanted to submit. The 19 PVOs submitted a total of 62 project proposals. The government of the Dominican Republic also responded with a proposal.
For the second part of the proposal process, which covered the actual programs sponsors proposed to implement in various developing countries, USDA/FAS employed a more elaborate review procedure. The Food Assistance Policy Council (FAPC) was designated to make the final project selections. An FAPC working group was established to evaluate the PVO, government of the Dominican Republic, and WFP proposals and make recommendations on which ones to approve. The working group consisted of staff from FAS, USDA’s Food and Nutrition Service (FNS), the Department of State, USAID, OMB, and the White House. USDA/FAS provided the other members of the working group with copies of all of the second-stage proposals as well as the WFP set of proposals. USDA/FNS assigned a nutritionist to review all of the proposals from a nutrition perspective. The Department of State assigned two staff to review the proposals.
Four offices within USAID were involved in evaluating the proposals: a country backstop officer, the appropriate regional bureau, a nutritionist analyst from the Bureau of Humanitarian Response, and an education specialist from USAID’s Global Bureau, Field Support and Research. USAID’s Food for Peace Office within the Bureau of Humanitarian Response coordinated the process within USAID. The Food for Peace Office is responsible for USAID’s food aid programs, including any programs that have funded school feeding or food for education programs.
Each member of the working group conducted an evaluation of the proposals separately during October 2000 and met in early November to discuss their results and reach consensus on which proposals to submit to the FAPC for final approval. USDA/FAS did not score but recommended approval of WFP proposals for all 27 countries in which WFP had established, but unmet, food aid requirements. However, USDA scored and divided the non-WFP proposals into three distinct categories (i.e., strongly recommended, recommend approval, or not recommended).
In conducting its second-stage evaluation of the non-WFP proposals, USDA/FAS employed a considerable number of written criteria, nearly all of which were taken from its standard approach to evaluating 416(b) programs. The standard criteria do not focus on school feeding or food for education programs. Apart from the standard criteria, USDA’s evaluation included some criteria that related to school feeding/food for education. (All of USDA’s second-stage criteria were weighted.) USDA considered whether:
Objectives supporting the goal of establishing preschool or school feeding programs to draw children into the school environment and improve basic education for females were clearly stated.
The proposal targeted a country with existing food for education programs in the host country’s development plan.
The method for choosing beneficiaries of preschool or school feeding activities was clear and justifiable, with an emphasis on females.
The cooperating sponsor provided indicators to measure program impact, including baselines and expected outcomes. Potential indicators might include literacy rates for target populations, percentage of school-age children attending school (emphasis on females), and public expenditure on primary education.
The cooperating sponsor included specific performance targets as part of its proposal, such as magnitude of change in number of meals served; enrollment levels, specifically female enrollment; attendance levels; capacity building in areas necessary to sustain the feeding program, such as development of PTAs and other community groups; or infrastructure development for delivery of service.
Agriculture officials told us they did not have the time and adequate staff to study lessons learned from past school feeding/food for education programs, given the short lead time they had to get the program up and running. Instead, they said, USDA relied considerably upon USAID for this aspect of the evaluation, since USAID had extensive experience with school feeding programs.
Most of USAID’s written criteria did not focus specifically on food for education. Evaluators in the Regional Bureaus were asked to review how the proposals fit with the bureau priorities for the country and how a proposed project might affect (positively and/or negatively) USAID programs in the country.
The bureaus were also responsible for providing each country proposal to the respective cognizant field mission and for incorporating mission responses and concerns into their review. Field missions were also responsible for providing input regarding the Bellmon analysis. Country backstop officers were asked to review each country proposal regarding commodities, monetization, and logistics and how these issues might affect (positively and/or negatively) USAID’s Title II food aid programs in country. The USAID nutritionist was asked to review the nutritional components of the proposal and their adequacy. USAID’s Global Bureau was asked to review the educational components of the proposals and their adequacy, as well as host country policies and commitment to basic education. All of the USAID evaluators were instructed to indicate briefly whether they approved or disapproved of a proposal and, if they approved, to indicate the priority they thought the proposed program should have (low, medium, high, very high).
In USAID’s weighting scheme, the Global Bureau’s assessment of the educational component could have accounted for about 25 percent of a proposal’s total score. However, for several reasons, its analysis did not contribute to USAID’s evaluation of which proposals were the best. The USAID staff person assigned to rate this dimension of the proposals told us that although he had expertise in the education area, he was not an expert on school feeding programs. In addition, he said that nearly all of the proposals did not provide adequate information to judge the quality of the educational component. He told us it might have been possible to obtain this information if discussions could have been held with the sponsors. However, the evaluation process did not provide for such interaction. As a result, he assigned the same score to all but one of the proposals. Since virtually all proposals were scored exactly the same, education was not a discriminating factor in the Global Bureau’s overall ranking of the proposals.
No formal record was kept of the interagency working group’s deliberations, but a summary of its consensus recommendations was forwarded to the FAPC for action. This summary contained a brief description of the proposed food aid to be delivered to each country, its cost and rationale, economic assessments, and prior aid. In the end, the FAPC approved 34 WFP proposals covering 23 countries. Of the 34, 26 were for approved WFP projects with unmet food aid needs and 8 were for expansion projects. FAPC approved 25 PVO projects and the only proposal submitted by a government entity (the Dominican Republic). FAPC allocated almost equal program value to WFP (about $138 million) and the other sponsors (about $150 million), with instructions that the remainder be first offered in support of additional WFP proposals. However, cost estimates that FAPC used in its award determinations were too high and have since been reduced by USDA in implementing agreements. The total cost of WFP agreements was recently estimated by USDA at about $92.5 million; cooperating sponsors’ agreements were estimated at about $135 million.
Selected Information Contained in Proposals for Approved School Feeding Programs
This appendix discusses selected information in school feeding program proposals approved by USDA, including proposed nonmeal components of the program, proposed funding of nonmeal components, and comments on other donor assistance.
In its request for proposals, USDA indicated that PVOs could monetize some of the food to cover certain other elements important to food for education programs. Table 14 provides information on the PVOs that proposed funding for nonmeal components, including the specific components and the overall proposed funding amount for these components. As the table shows, for 17 of the 25 approved proposals, PVOs proposed to include a variety of nonmeal components. Examples include repairs to school buildings, investments in teacher training and school supplies, treatments for parasite infections, and health and nutrition education. Ten of the 17 proposals included a budget amount for some or all of these components.
According to information from USDA, it provided little funding for nonmeal components of WFP projects. WFP requested funding for the underfunded school meals of already existing projects or for meals for expansion of existing projects or start-up of new projects. These requests included funding for the commodities and related costs, including ocean freight and overland transportation costs to the recipient countries; internal transportation, storage, and handling costs for the commodities within the recipient countries; direct support costs; and administrative costs. According to WFP, its projects often include funding for nonmeal components, which can be obtained through donor countries, partnership arrangements with other international donors, or from recipient country governments. WFP officials told us they are working to develop more partnerships with other donor agencies to address nonmeal aspects of their food for education projects. Table 15 provides information on planned funding of nonmeal components for the WFP projects approved under the pilot program, based on WFP documentation that was available at WFP’s Web site. Nonfood components typically involve training, construction or rehabilitation of school facilities, or health-related activities (such as deworming).
Although USDA said that priority would be given to proposals where the likelihood of other donor support was high, neither USDA nor USAID included this factor in written criteria for evaluating the proposals. We reviewed the PVO proposals to assess whether sponsors in fact provided such information in their proposals. As table 16 shows, only five of the approved proposals indicated that other donors might support the project. Of the five, two proposals said other donors would support the project and identified the expected amount of support.
Donor Views on Uses of Food Aid and How It Is Provided
This appendix discusses the views of food aid donating countries other than the United States regarding the use of food aid and how it is provided. Table 17 lists donor countries’ views on whether food aid should be used for emergencies, development, or both and whether food aid should be provided as cash or food-in-kind.
Sources USDA Uses to Finance Its Implementing Partners’ GFEI Project Costs
USDA uses three funding sources to pay for implementing partners’ (PVO/government cooperating sponsors and WFP) operating costs under the GFEI pilot program. These costs cover the distribution of surplus commodities acquired under Commodity Credit Corporation (CCC) Charter Act authority and donated under Section 416(b) authority to friendly and developing countries.
The funding sources are (1) local currency proceeds derived from monetization (sale) of the commodities, (2) direct cash payments made by CCC under commodity surplus removal (CCC Charter Act 5(d)) authority, and (3) direct cash payments made by CCC pursuant to specific limited appropriations authority granted to sponsors in July 2001. Section 416(b) of the Agricultural Act of 1949, as amended, is the authority that CCC uses to pay for most of the cost of removing and disposing of donated surplus commodities in connection with the GFEI pilot program. This authority allows CCC to directly pay freight forwarders selected by implementing partners for the cost of ocean transportation and reasonably related expenses of moving the commodities to a designated discharge port or point within the country’s border where the food aid is to be distributed. This cost is the largest except for the commodities themselves and is estimated to be roughly one-third of the overall pilot program. In the case of urgent and extraordinary relief requirements, CCC may also pay the partners for internal transportation, storage, and handling (ITSH) expenses but not for nonemergency development assistance, which is the principal type of aid provided by the pilot. In addition, under section 416(b) authority, CCC funds cannot be used to pay partners’ direct administrative headquarters costs of running the program. In lieu of getting CCC funding to recover their ITSH expenses for nonemergency programs and administrative costs, partners are permitted to monetize (i.e., sell) all or a portion of the commodities in the country or region. Local currency proceeds generated from the sale of section 416(b) commodities can be used to finance most of the sponsors’ operating costs—as long as they are specifically approved by USDA in program agreements. Monetization is generally how the PVOs and government sponsors recover their operating costs. Furthermore, these sponsors’ budgets and provisions for financial statement and monetization reporting as well as limitations on budget adjustments without prior USDA approval are incorporated into the program agreements. USDA’s treatment of WFP on these matters differs from that of PVOs and a government sponsor. USDA pays cash to WFP for all of these costs, including headquarters’ administrative expenses. In doing so, it relies on section 5(d) of the CCC Act. This section provides authority for CCC to expend funds in connection with disposal of surplus commodities if such expenditure is required to aid in removing the surplus. WFP’s general policy, as approved by its executive board, is not to monetize commodities. Thus WFP requires cash to cover its expenses. In addition, WFP operates under a “full cost recovery” policy, which requires that the country making a donation cover its full cost. According to USDA’s Office of General Counsel, if USDA wants to dispose of surplus commodities through WFP, it may pay associated costs using section 5(d) authority. Specifically, USDA costs incurred in connection with providing commodities to WFP under the GFEI program are governed by an agreement between CCC and WFP that covers matters related to donation of commodities furnished under section 416(b) during calendar years 2001 and 2002. Under this agreement, CCC agreed to pay WFP not only ocean transportation but other authorized expenses incurred by WFP in connection with distribution of commodities donated to it. 
Collectively, these other authorized expenses include internal transportation, storage and handling, direct support costs, other direct operational costs, and indirect support costs, up to the maximum amount approved by CCC. For the GFEI program, these costs amounted to about $35 million.
When USDA requested sponsor proposals for the GFEI pilot program in September 2000, it said CCC cash funds might also be available to cover expenses related to implementing activities supported with commodities acquired under section 5(d) of the CCC Charter Act. USDA delivered the same message in a meeting with PVOs to discuss the planned pilot program. As a result, most PVOs submitted proposals that were based on receiving cash to cover some of their expenses. However, in January 2001, USDA informed PVOs with approved proposals that cash would not be available to them. Although USDA said it was prepared to adjust approved sponsors’ proposals to permit greater monetization of commodities to cover costs, the USDA reversal posed a few problems. First, monetized commodities cannot be used to cover the sponsors’ direct U.S. headquarters’ administrative expenses. Second, depending on the situation in a recipient country, additional monetization of commodities might risk disrupting commercial sales. Representatives of one PVO told us the organization had submitted proposals for two countries where it was not possible to monetize commodities; therefore, without cash to cover its expenses, the PVO could not go forward. Several PVOs were also upset because they felt that USDA was providing preferential treatment to WFP.
USDA noted that its long-standing policy for section 416(b) projects was not to provide cash to PVOs unless the country situation is deemed urgent and extraordinary. It further said that PVOs and WFP were treated differently because they were fundamentally different in nature and in how they acquired their funding. USDA said that whereas PVOs are operated privately and have access to other funding sources, WFP is governed and funded only by its donor nations and thus is not subject to or constrained by the limitations of the section 416(b) regulations. These reasons notwithstanding, USDA did not explain why it had earlier indicated an intention to provide cash to the sponsors.
USDA’s policy reversal led to delays in USDA’s negotiating agreements for implementing approved proposals for a number of PVO projects. Some PVOs were not satisfied with the policy change and made their views known to members of Congress. Subsequently, in July 2001, the Congress approved legislation (P.L. 107-20) that included a provision authorizing USDA to approve the use of CCC funds up to about $22.9 million for financial assistance to sponsors participating in the pilot program. Funds could be used for internal transportation, storage, and handling of commodities, as well as administrative expenses deemed appropriate by the secretary of agriculture. As a result of the congressional action, USDA agreed to consider renegotiating agreements that it had already concluded with some of the PVOs if they so desired.
Top Food Aid Donating Countries
This appendix provides details on the top food aid donating countries in recent years. Table 18 lists the top 20 food aid donors based on shipments for the period 1995 through 1999. Apart from the United States, which supplied more than half of all deliveries, the other 19 donors provided about 43 percent of the food assistance during this period.
Key GFEI Events from Announcement of Concept to Notification of Project Approvals
This appendix outlines key events related to the GFEI pilot from the time the program was announced until early January 2001, when USDA notified proposal winners. As table 19 shows, USDA’s expedited schedule allowed interested cooperating sponsors at most 8 business days to prepare and submit the first part of the proposal. Sponsors who began preparing for the second part of the proposal at the earliest possible time (i.e., without waiting to learn whether they qualified to do so) had a maximum of 18 business days to complete and submit it to USDA.
Comments from the U.S. Department of Agriculture
GAO Comments
1. USDA noted that GFEI has three purposes – to improve student enrollment, attendance, and performance – but indicated it is not possible to improve learning in a 1-year pilot program. According to USDA, GAO evaluated USDA against an unrealistic standard—performance—rather than the objectives of enrollment and attendance. In addition, USDA said, a much longer time frame would be required to address all of the factors mentioned in the report (examples cited include teacher training, infrastructure, learning materials, health and nutrition programs, and community involvement). We disagree with USDA’s statements for two reasons. First, our conclusion is that school feeding programs are more likely to improve enrollment and attendance, as well as learning, if they are carefully integrated with other key factors and interventions. Second, we conclude that the pilot program could have been improved by determining in advance which proposals were for communities where key factors were already in place or would be addressed during the projects themselves.
2. USDA disagreed with our statement that USDA lacked expertise in managing development and humanitarian assistance such as food aid. We have revised that statement to specify expertise in food for education development programs. At the same time, we note that a recent USDA study of its food aid monetization programs cited difficulty evaluating the programs’ impacts because of limited personnel resources, high staff turnover, and increasing demands to implement large food aid programs. In addition, the limited presence of overseas agricultural attaches has adversely affected USDA’s ability to oversee some of its sponsors’ monetization projects, the study said. USDA’s Inspector General has also expressed concern about this matter.
3. USDA said it believes that GAO’s comparisons between the proposals and the recommended program elements understate the quality of the GFEI programs, since the proposal is only the beginning text of a negotiated contractual process. We focused on the proposal process to determine to what extent USDA secured information for judging and selecting proposals that offered greater promise of improving school enrollment, attendance, and learning.
4. Regarding differences in the treatment of PVOs and WFP, USDA reiterated (as discussed in our draft report) that the United States sits on the WFP Executive Board, which approves all projects. However, executive board approval does not mean that the United States may not have concerns about a particular project.
As USAID advised, even when the United States concurs with an executive board decision to approve a project, the United States frequently states its concerns or reservations about the feasibility or sustainability of program activities and, according to USAID, has done so in the case of school feeding projects. USDA also said it is confident that the information submitted by WFP contains the required information listed in the Federal Register notice or the regulations governing USDA food assistance programs. However, WFP did not have to address requirements of the Federal Register notice; the notice did not require as much information as we believe would have been useful for evaluating proposals; and USDA’s 416(b) regulations did not include specific information requirements for assessing food for education programs.
5. USDA indicated agreement with our finding that analysis of the disincentive effects of food aid projects should include the impact of commodity donations on alternative food commodities. USDA said doing so could improve analyses and be a goal for future projects. At the same time, USDA said it stands by the pilot project assessments that significant market disruptions will not occur—even though such analysis was not conducted. Our report notes that cooperating sponsors are responsible for analyzing the potential disincentive effects of their projects and that USDA does not independently verify the results of such analyses. In addition, we noted that USDA officials acknowledged that because PVOs want to provide the food aid, these organizations may not be completely unbiased in preparing analyses of disincentive effects. In its letter, USDA said the latter statement is correct but in the opposite direction suggested by GAO. According to USDA, PVOs are going to more rigorously analyze the food needs of an area, because program success depends upon community support, which is not going to occur if markets are disrupted. We agree that the latter is one possible interpretation of the statement and therefore removed the statement from the letter.
Comments from the U.S. Agency for International Development
Comments from the Office of Management and Budget
GAO Contacts and Staff Acknowledgments
GAO Contacts
Acknowledgments
In addition to those named above, Gezahegne Bekele, Janey Cohen, Stacy Edwards, Mary Moutsos, and Rolf Nilsson made key contributions to this report.
GAO’s Mission
The General Accounting Office, the investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability.
Obtaining Copies of GAO Reports and Testimony
The fastest and easiest way to obtain copies of GAO documents is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full-text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence.
At the Group of Eight industrialized countries' summit in July 2000, President Clinton proposed a Global Food for Education Initiative (GFEI) whereby developed countries would provide school breakfasts or lunches to needy children in poor countries. The aim of the initiative is to use school meals to attract children to school, keep them attending once they enroll, and improve learning. The president also announced a one-year, $300 million pilot program to be run by the U.S. Department of Agriculture (USDA) to jump-start the proposed global effort.
Research and expert views on the effectiveness of school feeding programs indicate that the programs are more likely to be successful when they are carefully targeted and integrated with other educational, health, and nutritional interventions. In establishing the pilot program, USDA did not build on some important lessons from previous school feeding programs. Although USDA expects more than eight million children to benefit from the pilot program, the program's structure, planning, and management fall short of ensuring that its objectives will be attained.
Representatives of most other donor countries GAO interviewed said their governments were either noncommittal about, or unwilling to provide, substantial support for a comprehensive, long-term food for education program. This lack of support is a problem because the United States envisioned a multilateral program with other donors funding about three-quarters of the program's total cost. GFEI seems unlikely to attract much support from other donors unless the United States adopts a permanent program that does not depend on surplus agricultural commodities or the pilot program produces strong, positive results.
Background
The E-rate program provides eligible schools, school districts, libraries, and consortia with discounts on telecommunications services, Internet access, and data transmission wiring and components used for educational purposes. The program is funded through statutorily mandated payments into the Universal Service Fund by companies that provide interstate and international telecommunications services. Many of these companies, in turn, pass on their contribution costs to the subscribers through a line item on subscribers' telephone bills. FCC capped funding for E-rate at $2.25 billion per year, and program funds are used to cover the program's administrative costs, including the administrative services performed by USAC and Solix. Eligible schools and libraries may apply annually for program support and will qualify for a discount of 20 to 90 percent on the cost of eligible services, based on indicators of need. Based on the broad direction in the Telecommunications Act of 1996, FCC defined two general types of services that are eligible for E-rate discounts:
Priority 1 services include telecommunications services, such as local, long-distance, and wireless (e.g., cellular) telephone services, as well as data links (e.g., T-1 lines) and Internet access services, such as Web hosting and e-mail services—all of which receive first priority for the available funds under FCC's rules.
Priority 2 services include the cabling, components, routers, switches, and network servers that are necessary to transport information to individual classrooms, public rooms in a library, or eligible administrative areas, as well as basic maintenance of internal connections, such as the repair and upkeep of eligible hardware and basic technical support.
USAC annually updates a list of specific, eligible products and services and the conditions under which they are eligible. The list is finalized by FCC after a public comment period and posted on USAC's Web site. Items ineligible for E-rate discounts include, among other things, end-user products and services, such as Internet content; Web-site content maintenance fees; end-user personal computers; and end-user software. FCC delegated to USAC the day-to-day administration of the E-rate program, subject to FCC rules and under FCC oversight. USAC has, in turn, subcontracted certain key aspects of E-rate program operations to Solix. The primary responsibilities of Solix staff include reviewing applications and processing invoices for reimbursement. About 20,000 schools and libraries applied for E-rate support in 2009, although Solix processes about 40,000 applications per funding year because schools and libraries can submit multiple applications in a single funding year (e.g., an applicant can submit separate applications for Priority 1 and Priority 2 services). The process for participating in the E-rate program, which is lengthy and complicated, is summarized in the following four main steps:
1. The applicant submits to USAC a description of the services for which the applicant is requesting a discount so that service providers (i.e., telecommunications companies or equipment providers) can bid through open competition. The applicant must also confirm that it has developed an approved technology plan that provides details on how it intends to integrate technology into its educational goals and curricula, as well as how it will pay for the costs of acquiring and maintaining the technology.
2. Once the service description has been available to potential bidders for 28 days, the applicant selects the most cost-effective service provider from the bids received and submits a Form 471 (Description of Services Ordered and Certification) application for the discounted service, which is processed by Solix. The applicant then calculates its discount level, certifies that it is an eligible entity, and certifies that it will abide by applicable laws and regulations.
3. Solix reviews the application and issues a funding commitment decision letter to the applicant and selected service provider. The decision letter indicates whether the application has been approved or denied.
4. If approved, the applicant—now a beneficiary—must confirm that services have started or have been delivered. After the service provider has submitted a bill, either the beneficiary or the service provider submits a reimbursement request form for Solix to process. The beneficiary or service provider can then be compensated from the Universal Service Fund for the discounted portion of the services.
See appendix II for an overview of the E-rate program application, invoice, and reimbursement processes.
A memorandum of understanding (MOU) between FCC and USAC assigns USAC the responsibility for implementing effective internal controls over the operation of the E-rate program. Through the MOU, FCC directed USAC to implement an internal control structure for the E-rate program that is consistent with the standards and guidance contained in the Office of Management and Budget's (OMB) Circular No. A-123, Management's Responsibility for Internal Control, including a methodology for assessing, documenting, and reporting on internal controls. In February 2008, USAC engaged an independent public accounting firm to assist in establishing a formal internal control review program. USAC placed responsibility for implementing this program under the direction of a senior manager of internal controls, a position it created in late 2008. USAC also created a Senior Management Council to support the implementation of the program. Under the MOU, USAC is also responsible for periodically reporting on its internal control activities to FCC's Office of Managing Director and Office of Inspector General.
FCC also directed USAC to implement a comprehensive audit program to (1) ensure that Universal Service Fund monies are used for their intended purposes; (2) ensure that all Universal Service Fund contributors make the appropriate contributions in accordance with FCC rules; and (3) detect and deter potential waste, fraud, and abuse. To ensure compliance with FCC rules, USAC has periodically selected beneficiaries to audit. USAC also has conducted audits that were used to develop statistical estimates of error rates under the Improper Payments Information Act of 2002 (IPIA). From 2001 through 2006, USAC and other auditors conducted approximately 350 audits of E-rate program beneficiaries as part of the oversight of the E-rate program. Since 2006, USAC has conducted approximately 760 audits for both oversight and IPIA purposes. USAC is responsible for responding to the results of findings from audits of program beneficiaries, including recommendations to recover funds that may have been improperly disbursed to beneficiaries.
We have produced a number of E-rate reports since the program was implemented in 1998, some of which addressed internal controls.
In 2000, we reported that the application and invoice review procedures needed strengthening and made recommendations to improve internal control processes. In response to our recommendations and the findings of other parties that have reviewed USAC's processes, such as the FCC Inspector General, USAC has implemented a number of internal controls. In 2005, we reported that FCC had been slow to address problems raised by audit findings and had not made full use of the audit findings as a means to understand and resolve problems within the program. During the course of our work, in 2004, FCC concluded that a standardized, uniform process for resolving audit findings was necessary and directed USAC to submit to FCC a proposal for resolving all audit findings and recommendations. FCC also instructed USAC to specify deadlines in its proposals "to ensure audit findings are resolved in a timely manner." USAC submitted its Proposed Audit Resolution Plan to FCC in October 2004. Although FCC has not formally approved the plan, since 2004 it has periodically issued directives and guidance to USAC to clarify aspects of the plan's design and implementation.
A number of our reports have also found that the E-rate program lacks performance goals and measures, and we have recommended that FCC define annual, outcome-oriented performance goals for the program that are linked to its overarching goal of providing services to schools and libraries. While FCC has undertaken various efforts to address this recommendation, it has not yet established meaningful goals and performance measures for the E-rate program. In our 2009 E-rate report, we found that some nonparticipating schools and libraries elected not to apply to the program because they considered the process to be too burdensome (e.g., too complex, time-consuming, or resource-intensive). We also found that a substantial amount of funding was denied because applicants did not correctly carry out application procedures.
In March 2010, an FCC task force released a National Broadband Plan that acknowledges the complexity inherent in the E-rate program and recommends, among other things, that FCC streamline the application process. For example, the National Broadband Plan notes that E-rate's procedural complexities can sometimes result in applicant mistakes and unnecessary administrative costs as well as deter eligible entities from applying. In the National Broadband Plan, the task force suggests that FCC can ease the burden on applicants for Priority 1 services that enter into multiyear contracts, and that applications for small amounts could be streamlined with a simplified application similar to the "1040EZ" form the Internal Revenue Service makes available to qualifying taxpayers.
In May 2010, as part of its efforts to begin implementing the vision of the National Broadband Plan, FCC released a Notice of Proposed Rulemaking (NPRM) to solicit comments about potential changes to the E-rate program. FCC stated in the NPRM that it is time to reexamine what is working well in the current program and what can be improved. On September 23, 2010, FCC adopted an order (the order had not yet been released at the time our report was issued) in response to the May 2010 NPRM.
According to FCC's press release, the order improves the ability of schools and libraries to connect to the Internet in the most cost-effective way, allows schools to provide Internet access to the local community after school hours, indexes the E-rate funding cap to inflation, and streamlines the E-rate application process.
FCC and USAC Have Put Many Internal Controls in Place for the E-rate Program
According to GAO's standards for internal control, "control activities" are an integral part of an entity's planning, implementing, reviewing, and accountability for stewardship of government resources and achieving effective results. Control activities are the policies, procedures, techniques, and mechanisms that enforce management's directives and help ensure that actions are taken to reasonably address program risks. For our review of the design of E-rate's internal control structure, we classified the control activities into three broad areas: (1) processing applications for discounted service and making funding commitment decisions, (2) processing invoices requesting reimbursement, and (3) monitoring the effectiveness of internal controls through audits of schools and libraries. We found that FCC and USAC have established a number of internal controls in each of these three areas.
Processing Applications and Making Funding Commitment Decisions
E-rate's internal control structure centers on USAC's complex, multilayered Program Integrity Assurance (PIA) application review process. This process entails the specific internal controls that are applied to applications as they undergo the initial review for eligibility, as well as a layered review process to ensure that the initial review was conducted appropriately and that the correct funding decision was reached. As applicants submit their Form 471 applications for discounted service, Solix assigns each applicant to a PIA reviewer who examines the form. USAC's funding year 2009 PIA Form 471 Review Procedures manual contains approximately 700 pages of detailed instructions and flowcharts for Solix's PIA reviewers to follow in addressing the various parts of the Form 471. The procedures are meant to ensure that the applicant, service provider, and requested services are eligible under the program, and that the applicant is in compliance with all of the E-rate rules. For example:
To verify that an applicant is eligible for the program, the manual directs the PIA reviewer through a potential 39-step process that involves confirming information about the applicant either through USAC-approved, third-party sources or by contacting the applicant directly for documentation to support eligibility.
To verify that the service provider is eligible to provide telecommunications services for the program, the reviewer is to determine that FCC has registered the service provider as an approved telecommunications provider.
As a part of verifying that an applicant's requested discount rate is accurate, the automated application system will trigger an "exception" if the discount rate on the application meets certain conditions. The manual provides instructions to the reviewer on what procedures to follow to verify that the discount rate is appropriate.
To verify that the requested services are eligible for E-rate funding, the reviewer is to determine whether products and services requested in an application for discount qualify for support.
This determination can be based on the categorization and information in FCC's annual Eligible Services List, a more detailed list of specific equipment that USAC maintains, or consultation with a team of Solix technical experts.
In addition to the specific internal control procedures that are part of the initial PIA review, USAC maintains a multilevel application review process as part of its internal control structure. Figure 1 illustrates the E-rate application review process. Over time, USAC has expanded its application review process by adding more types of reviews—such as "cost-effectiveness" and "special compliance" reviews—to address specific risks. The PIA initial and final reviews, selective reviews, and quality assurance reviews were components of the original application review process and are still part of the current internal control structure. Solix staff perform these reviews. USAC staff then follow up with an independent quality assurance review process for each of the other types of reviews.
Regular PIA Review: As part of the multilevel process, all applications undergo an initial review and a separate final review. The PIA process is partially automated but involves a significant amount of manual review as well. Issues that are identified as potential errors or violations of program rules, either in the automated system or by manual review, trigger exceptions that are addressed by the Solix initial reviewer. The PIA process can trigger dozens of different types of exceptions, each representing a potential type of error or issue within an application that must be resolved before reaching a funding decision. After the initial review is completed, a final review is conducted by a more experienced reviewer. If the final reviewer finds an error by the initial reviewer, the application is returned to the initial reviewer for further work. As part of the regular PIA review process, after final reviews, a portion of the applications that are ready for commitment is then sampled by the Solix Quality Assurance Team. If the Solix quality assurance reviewer finds an error or issue during the review, the reviewer returns the application to the initial reviewer to address the issue. Finally, USAC conducts independent quality assurance reviews. For these reviews, USAC staff select a sample of applications for review, including some that were selected for Solix's quality assurance review, to determine the accuracy of the application review process. Like the Solix quality assurance reviewer, USAC staff return the application to the appropriate reviewer for further review if they discover an issue or error.
Selective Review: High-risk applications, identified through either automated aspects of the PIA system or by a PIA reviewer, undergo an additional, more detailed review from Solix's selective review team. The selective review team obtains additional information from the applicant and uses that information to help determine eligibility for E-rate funding. Applications meeting certain criteria may also go through other reviews by the selective review team. For example, the selective review team reviews applications from consortia of schools and libraries to determine whether members of the consortia are aware of their financial obligations to participate in the program, or examines applications from private schools to ensure that they do not have endowments exceeding $50 million, which would make them ineligible for E-rate funding under the statute.
Applications that undergo selective reviews are also subject to final reviews and may be selected for Solix and USAC quality assurance reviews.
Since the PIA review process was implemented, USAC has expanded the process in response to internal control concerns. In addition to selective reviews, USAC has implemented "special compliance" and "cost-effectiveness" reviews. Special compliance reviews, established in 1999, are tailored to address specific issues and allegations, many of which originate outside of the PIA application review process, such as from the Whistleblower Hotline, FCC Office of Inspector General audits, law enforcement investigations, and press reports. These reviews are performed by a separate team, similar to the selective review team, and constitute the additional heightened scrutiny review process that supplements the regular PIA review process. USAC created the cost-effectiveness review team in 2005 as a separate team within the PIA review team in response to an FCC order directing USAC to reduce fraud, waste, and abuse. In the order, FCC also sought comments on the benefits of establishing benchmarks to determine whether a service requested under the E-rate program is cost-effective, as defined by program requirements. In response, USAC developed cost benchmarks for eligible products and services. The cost-effectiveness team reviews applications that have been flagged by the PIA review process as exceeding these cost benchmarks. Some applications are reviewed by more than one of these teams. For example, the special compliance team may determine that a review by the cost-effectiveness team or the selective review team will best address specific issues of concern in an application.
Processing Invoices Requesting Reimbursement
Much of the E-rate invoice review process is automated and incorporates steps to help ensure accuracy. The invoice forms that beneficiaries and service providers file contain general information about the funding request, such as the application and funding request number for which they are seeking reimbursement, the billing frequency and billing date, the date of service delivery, and the discounted amount billed to USAC. Both the beneficiary and the service provider must certify on their forms that the information they are providing is accurate. When an invoice is filed, Solix runs nightly systemic checks of the individual lines on the invoice using an automated validation process that compares the information in the invoice line with the information in the system for the associated funding request. The automated process triggered an average of 166 edit checks from calendar years 2006 through 2009 that served to approve an invoice line for full or partial payment, reject the invoice line, or send the invoice line for a manual review. Similar to the PIA application review process, the manual review process for an invoice line includes an initial and a final review by Solix staff, and an invoice line can be selected for a Solix quality assurance review and a USAC quality check before a final payment decision. A completed invoice line—an invoice line for which Solix has either approved or denied payment—is forwarded to USAC for final approval. Once the line item is approved, USAC generates a payment to the service provider. Figure 2 illustrates the E-rate invoicing process. See appendix IV for more information about the USAC and Solix staffing resources dedicated to E-rate application and invoice reviews.
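To illustrate the kind of comparison the automated validation process performs, the following is a minimal, hypothetical sketch of an edit check that matches an invoice line against its associated funding commitment. The field names, rules, and dispositions shown here are our own assumptions for illustration; they do not represent Solix's actual validation logic or edit checks.

```python
# Hypothetical sketch of an automated invoice edit check.
# Field names, rules, and dispositions are illustrative assumptions,
# not Solix's actual validation logic.
from dataclasses import dataclass

@dataclass
class FundingCommitment:
    funding_request_number: str
    committed_amount: float      # total discounted dollars approved
    disbursed_to_date: float     # discounted dollars already paid
    service_start: str           # ISO date the approved service may begin

@dataclass
class InvoiceLine:
    funding_request_number: str
    billed_discounted_amount: float
    service_date: str

def validate_line(line: InvoiceLine, commitments: dict) -> str:
    """Return a disposition: 'approve', 'reject', or 'manual_review'."""
    commitment = commitments.get(line.funding_request_number)
    if commitment is None:
        return "reject"  # no matching funding request on file
    if line.service_date < commitment.service_start:
        return "reject"  # billed before the approved service start date
    remaining = commitment.committed_amount - commitment.disbursed_to_date
    if line.billed_discounted_amount > remaining:
        return "manual_review"  # would exceed the funding commitment
    return "approve"

if __name__ == "__main__":
    commitments = {
        "FRN-001": FundingCommitment("FRN-001", 10_000.0, 9_500.0, "2009-07-01"),
    }
    line = InvoiceLine("FRN-001", 800.0, "2009-09-15")
    print(validate_line(line, commitments))  # manual_review (exceeds remaining commitment)
```

In the actual process described above, a line routed to manual review would then receive the initial and final Solix reviews and could be sampled for the Solix and USAC quality assurance reviews before a final payment decision.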
Monitoring the Effectiveness of Internal Controls through Audits of Schools and Libraries
USAC contracted with independent public accountants from 2006 to 2009 to perform audits used to estimate, under IPIA, the amount of improper payments that are made to program beneficiaries. These audits also were used to test compliance with program eligibility requirements and program rules. The beneficiary audit process has four phases—audit performance, audit resolution, audit response, and audit follow-up (see fig. 3). During the audit response and audit follow-up phases, USAC provides periodic reports to its Board of Directors, FCC, and the FCC Inspector General on the status of audit findings and corrective and recovery actions. For example, USAC prepares a monthly report on the status of all monetary and nonmonetary audit findings and a semiannual report on the status of all audit recoveries. According to USAC officials, USAC created a Performance Assessment and Reporting unit in January 2009 that has, among other things, developed an audit process that uses the results of beneficiary audits to evaluate and report on whether schools and libraries have complied with E-rate program requirements and to estimate the amount of improper payments.
Design of E-rate's Internal Control Structure May Not Appropriately Consider Program Risks
The overall design of the E-rate program is complex, and FCC's changes to the program over time through orders and guidance have made it more so. This increasing complexity, in turn, has led USAC to expand the E-rate program's internal control structure over time, both to keep pace with the program's complexity and to address risks to the program as they became apparent. Although USAC has performed financial reporting and fraud risk assessments, USAC has not conducted a robust risk assessment of the E-rate program and, consequently, may not be efficiently using its resources to reasonably target program risks.
E-rate Program Lacks Meaningful Goals and a Robust Risk Assessment
In July 1998, we testified before the Senate Committee on Commerce, Science, and Transportation about the implementation of the E-rate program and recommended that FCC develop goals, measures, and performance targets for E-rate. We have continued to note FCC's lack of goals and adequate performance measures for E-rate for more than a decade. Most recently, we recommended in our March 2009 report that FCC review the purpose and structure of the E-rate program and prepare a report to the appropriate congressional committees identifying FCC's strategic vision for the program. As we have previously mentioned, FCC released an NPRM in May 2010 seeking comment on several proposed reforms of the E-rate program but has not addressed our recommendations regarding goals and performance measures or identifying a strategic vision for the program. FCC's lack of goals and performance measures affects the internal control structure of the program because, as set forth in GAO's Standards for Internal Control in the Federal Government, a precondition to risk assessment is the establishment of clear, consistent agency objectives. When clear program objectives are established up front, the internal control structure can then be designed around the fundamental risk that program objectives will not be met.
When we testified before the committee in 1998, we stated that USAC had not finalized all of the necessary procedures and related internal controls for E-rate, even though USAC was close to issuing the first funding commitment letters. FCC had worked to quickly establish the E-rate program so that schools and libraries could begin benefiting from the program. However, this effort resulted in FCC establishing the program without clear objectives and quickly designing an internal control structure to help prevent and detect fraud, waste, and abuse. This internal control structure, however, was not designed on the basis of a robust risk assessment of the E-rate program. To date, FCC has not conducted a robust risk assessment of the E-rate program that is based on the program's core processes and business practices.
Although USAC has undertaken several efforts to assess risk, these efforts have been in relation to assessing risk for other purposes, such as Universal Service Fund financial reporting, and not to assess risk specifically in the E-rate program. Most recently, in February 2008, USAC hired an independent public accounting firm to conduct an assessment of USAC's internal controls under OMB Circular No. A-123. However, the 2008 internal control review focused primarily on the Universal Service Fund and on USAC's internal controls regarding financial reporting, not programmatic activities. The accounting firm that performed the review made recommendations to USAC that included overall changes to USAC's administration of the Universal Service Fund and the other universal service programs. Some of the accounting firm's recommendations specifically addressed the E-rate program. For example, the review discussed the challenge that USAC encounters in overseeing Solix from a remote location and made recommendations to enhance USAC's oversight of Solix's operations. Although USAC took actions to address the recommendations in the 2008 review, the review had not focused on the overall internal control structure of the E-rate program. USAC officials told us that USAC's own internal controls team again assessed USAC's controls beginning in the fourth quarter of 2009. However, USAC officials noted that the scope of the testing was similar to that conducted by the public accounting firm in 2008. Consequently, USAC's assessment, like that of the public accounting firm, was performed in relation to Universal Service Fund financial reporting—not to the overall internal control structure of the E-rate program.
In addition to these activities, in 2009, USAC completed a fraud risk assessment for FCC. The purpose of this assessment was to help USAC managers and staff assess the adequacy of existing controls and determine whether additional fraud countermeasures were required. As with the 2008 internal control review, the fraud risk assessment focused on the Universal Service Fund as a whole, not on the E-rate program specifically, although part of the review did examine E-rate program administration. The review examined 24 control measures that were in place for the program. The review determined that 4 of those control measures addressed risks that were "moderate," while 12 addressed risks that were "low" and 8 addressed risks that were "very low." In addition, USAC's Internal Audit Division produced a risk register for the E-rate program that identified risks; applied a "gross risk analysis"; noted the mitigating controls; and then calculated the "residual risk," given the mitigating controls.
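The following is a minimal sketch of how an entry in a risk register of the kind just described might be structured. The ordinal rating scale and the rule for deriving residual risk from gross risk and mitigating controls are assumptions made for illustration; the Internal Audit Division's actual methodology is not described in this report.

```python
# Illustrative risk register entry. The ordinal scale and the residual-risk
# rule are assumptions; they are not USAC's actual methodology.
from dataclasses import dataclass, field
from typing import List

LEVELS = ["very low", "low", "moderate", "high"]  # assumed ordinal scale

@dataclass
class RiskEntry:
    risk: str                                   # description of the identified risk
    gross_risk: str                             # risk level before considering controls
    mitigating_controls: List[str] = field(default_factory=list)

    def residual_risk(self) -> str:
        """Assumed rule: each documented mitigating control steps the gross
        risk down one level on the ordinal scale, to a floor of 'very low'."""
        index = LEVELS.index(self.gross_risk)
        return LEVELS[max(0, index - len(self.mitigating_controls))]

entry = RiskEntry(
    risk="Ineligible services are approved for discounts",
    gross_risk="high",
    mitigating_controls=["PIA eligibility review", "final review"],
)
print(entry.residual_risk())  # "low" under the assumed one-level-per-control rule
```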
According to documentation, the risk register was based on Internal Audit Division interviews with USAC staff in 2008.
These various efforts to assess risk—that is, the 2008 review of internal controls, USAC's update of that review, and the fraud risk assessment—illustrate that FCC and USAC management are conscientious about having an internal control structure in place that safeguards program funding and resources. However, these prior efforts have not risen to the level of the risk assessment that is intended under the GAO standards for internal control. Ideally, under those standards, FCC would first establish clear objectives for the E-rate program, and management would then comprehensively identify risks to meeting those objectives. The assessments undertaken to date, while important to proper stewardship of government funds, have focused primarily on financial reporting requirements and the specific internal controls that were already in place, which have developed and evolved over time around the rules that govern the program. To date, FCC has not directed USAC to undertake a robust risk assessment that would involve a critical examination of the entire E-rate program to determine whether modifications to business practices and internal controls are necessary to cost-effectively address programmatic risks.
Internal Controls Have Grown Over Time, and Multiple Layers of Application Reviews May Not Effectively Target Risk
Lacking a robust risk assessment, USAC has responded to risks largely by expanding the PIA process. The processes within these review levels have grown increasingly complex, and it is unclear whether these reviews appropriately target risk. For example, subjecting every application to multiple layers of review may not be the most efficient or effective method to address programmatic risks. As we have previously described, all applications are subject to at least two reviews, the initial and final PIA reviews. USAC implemented the final reviews, as well as the two levels of quality assurance reviews, to find potential errors in the initial reviews and assess the integrity of the PIA review process. However, we found large discrepancies in the number of returns triggered by the final reviewer's evaluation of the initial reviewer's actions in response to each exception. One type of exception triggered final reviewers to return 4,722 applications to the initial reviewer during funding years 2006 through 2009. This exception, which related to determining the eligibility of telecommunications service, accounted for 62 percent of all final review returns during the period. At the same time, errors related to 13 other exceptions were the source of either zero or one return by a final reviewer. These data suggest that the design of the internal controls could be inefficiently using resources. It may be possible to target the internal controls toward applications that trigger exceptions that are more likely to be returned by final reviewers and those that are more likely to trigger an adjustment to an application's eligibility or funding commitment.
The PIA review process has also become more complex in response to USAC's efforts to ensure that the applicant has complied with FCC rules as they have changed and evolved. For example, each year, USAC or Solix may propose to eliminate exceptions targeting issues that are no longer of concern or add exceptions to the PIA review process to address new areas of concern.
However, from funding years 2006 through 2009, the total number of exceptions in the PIA process grew from 67 to 84 (about a 25 percent increase).
The increasing complexity of the review process is also illustrated by the procedures involved in determining service and equipment eligibility. USAC maintains a list of approved services and equipment that are eligible for an E-rate discount. This list is based on broader guidance that USAC posts annually for applicants. It has grown from approximately 6,000 to 8,000 eligible items—about a 32 percent increase from funding years 2006 through 2009. In addition, USAC has developed a complex process to determine whether the services and equipment requested in applications are eligible, conditionally eligible, or partially eligible. For example, if a school with a 75 percent discount rate applies for a piece of equipment that will only be used for eligible purposes 60 percent of the time, then, under FCC rules, only 60 percent of the cost of the equipment is eligible for a 75 percent discount. In determining service and equipment eligibility, PIA reviewers rely on a detailed list that includes guidance on the specific makes and models of thousands of products. Solix has hired a small number of staff with technical backgrounds to further assist PIA reviewers in resolving technical questions about the eligibility of services or equipment.
This approach of adding controls to address risks as they become apparent, or to address rule changes coming from FCC, leads to an accretion of internal controls that affects the overall internal control structure over time. While it is appropriate to respond to findings of risk and add internal controls as the program progresses, FCC and USAC have not done enough to proactively address internal controls or to step back and examine how the internal control structure has evolved. Without assessing risk and the internal control structure, USAC cannot be sure whether it is appropriately allocating resources to reasonably target risks.
Automated Invoice Review Process May Not Appropriately Target Risk
The internal control structure around the E-rate invoicing process is more limited than the structure around the application review process, but it is again not clear that the controls in place appropriately target risks. For example, there is no further review of the 91 percent of invoice lines—and almost 60 percent of dollars requested—that pass through the automated review process without further manual review (see table 1). According to GAO's internal control standards, control activities should be regularly monitored to ensure that they are working as intended. However, there is no process or procedure for confirming that Solix's automated validation process accurately reimburses providers and beneficiaries because USAC does not have a process for conducting random accuracy checks of completed invoice lines that have not been manually reviewed. These payments are not compared with an actual bill of service, unless such a comparison is done as part of a beneficiary audit. USAC officials indicated that on occasion, they pull some automated final payment determinations to verify their accuracy. However, USAC has no official procedure or process in place requiring it to verify these data or to track the results. Also, USAC officials could not determine how often or how many invoices they pull for verification.
Neither USAC nor Solix regularly conducts random quality assurance checks of sample invoice lines that the automated validation process has approved or rejected to help verify the accuracy of the automated process. Therefore, there is no verification that the items or services for which service providers or beneficiaries are seeking reimbursement were actually included in the list of the items or services Solix approved and committed to fund. The invoice review process provides another opportunity, in addition to the application review process, to identify whether the beneficiary has requested reimbursement for eligible equipment and services. However, the invoice review process closely examines only a limited number of invoices to determine what services are being funded. Specifically, about 9 percent of invoice lines undergo a manual review, although, to USAC's credit, the manual reviews do appear to target risk, representing about 42 percent of dollars requested (see table 1). Invoice lines are generally chosen for a special manual review because they are considered to be "high risk." A reviewer may determine that a manually reviewed invoice line be fully paid, partially paid, denied, or placed "on hold." An invoice is put on hold during a manual review either as a result of the procedures for a specific type of review or as a result of instructions by the USAC or Solix group requesting special review. Most of the edits that trigger an invoice for manual review require that the reviewer obtain a copy of the actual bill of service. All invoice lines that receive a manual review also receive a secondary, final review. In addition, Solix and USAC sample manually reviewed invoice lines for a quality assurance review prior to payment. In response to our work, USAC stated in its comments on our draft report that it has begun to develop a process to randomly sample invoices that are only reviewed through the automated process. USAC stated that it expects this process to supplement its new Payment Quality Assurance program that was put in place in August 2010, which will randomly test the accuracy of E-rate disbursements for the purpose of estimating rates of improper payments.
We also found that USAC does not have a single document or procedures manual that documents the invoice review process. Policies and procedures are forms of controls that help to ensure that management's directives to mitigate risks are carried out. Control activities are essential for proper stewardship and accountability for government resources and for achieving effective and efficient program results. We requested an invoice review procedures manual from USAC. USAC officials provided a collection of stand-alone documents, each covering a different part of the process and procedures. The numerous individual documents that USAC officials provided in response to our request included descriptions of the procedures a reviewer would follow to manually review an invoice line as well as the procedures for a second or final review. We also obtained descriptions of the automatic validation process and the Solix and USAC quality assurance review procedures. USAC officials noted that the various documents are housed electronically in a central location.
However, this collection of separately maintained documents differs from the lengthy and detailed PIA procedures manual, which provides, in a single document, an overview of the application review process as well as detailed descriptions of the activities a reviewer must follow to address a specific exception. That manual also explains the multiple layers of the application review process. In response to our work, USAC has stated that it plans to create a single manual that documents the entire invoice review process.
Audit Findings Are Not Effectively Considered in Assessing Internal Controls of the E-rate Program
Although FCC and USAC use the results of beneficiary audits to identify and report beneficiary noncompliance, they have not effectively used the information gained from audits to assess and modify the E-rate program's internal controls. A systematic approach to considering the results of beneficiary audits could help identify opportunities for improving internal controls. Lessons learned from an analysis of audit results could, for example, lead to modifications of the application and invoice approval processes as well as modifications to the nature, extent, or scope of the beneficiary audits. Furthermore, the audit process that USAC currently uses is not governed by a set of documented and approved policies and procedures. The process used is a combination of the procedures contained in an audit resolution plan drafted in 2004 and other procedures developed and implemented since 2004.
GAO's standards for internal control provide that when identifying and assessing risks, management should consider the findings from audits, the history of improper payments, and the complexity of the program. These standards also state that management should consider audit findings when assessing the effectiveness of internal controls, including determining the extent to which internal controls are being monitored, assessing whether appropriate policies and procedures exist, and assessing whether they are properly maintained and periodically updated.
We obtained information from USAC management on audits that had been completed to identify how and to what extent the results of beneficiary audits were considered in assessing internal controls for the E-rate program. USAC officials provided us with management reports on the results of E-rate beneficiary audits completed in 2006, 2007, and 2008. These reports identified the nature and extent of beneficiary noncompliance with E-rate requirements. However, the information did not demonstrate whether USAC had identified and assessed the specific E-rate program risks and core causes of beneficiary noncompliance. USAC officials also provided us with a list of suggested actions that could be taken to prevent and reduce improper payments across all of the Universal Service Fund programs, along with estimates of the resources that would be required to implement these actions. The list, which USAC initially provided to FCC in response to its request, included a suggested action to perform assessments of USAC's internal controls in accordance with applicable OMB guidance. However, the information provided to us did not explain how the suggested actions would address specific program risks. Moreover, assessment of internal controls with identification of risks and vulnerabilities should occur before specific, targeted actions can be identified.
USAC officials told us that they performed assessments of internal controls for 2008 and 2009. However, as we describe in this report, these assessments primarily focused on USAC's controls over financial reporting and were not designed to identify and address specific E-rate program risks and vulnerabilities.
We found that, although FCC and USAC have taken actions to address audit findings, the same rule violations, such as reimbursements for ineligible services or for services at higher rates than authorized, were repeated in each funding year for which beneficiary audits were completed. Furthermore, we found that FCC and USAC have not analyzed the findings from beneficiary audits to determine whether corrective actions implemented by beneficiaries in response to previous audits were effective. We analyzed the audit findings from 3 years' worth of audits to identify the extent of repeat findings. Of the 655 beneficiaries that were audited from 2006 through February 2010, 64 were audited more than once. Of those 64 beneficiaries, 36 had repeat audit findings of the same program rule violation, such as those that we previously mentioned, in each of the audited years. Instances of repeat audit findings and the likelihood that they would be identified in successive audits are examples of the risks and vulnerabilities that, once identified and assessed, could inform the E-rate program's internal controls, including providing data about where modifications to the nature, extent, or scope of beneficiary audits are most needed. Moreover, goals and metrics for reducing the rate of program rule violations by beneficiaries and service providers are important: they provide incentives, focus attention on properly identifying and assessing the E-rate program's internal controls, and allow monitoring of the effect that implemented control strategies have on beneficiary compliance. However, FCC officials told us that they have not developed specific goals and do not have metrics to measure progress.
Timely resolution of audit findings and approval of beneficiary audit reports are important components of a systematic process for assessing and continuously modifying internal controls for the E-rate program. We found that the beneficiary audit process did not result in the timely resolution of audit findings and approval of audit reports. For example, the average time between when USAC received a draft audit report and when the USAC board's Schools & Libraries Committee approved the final audit report was approximately 224 days. As of April 2010, nearly 20 percent of these audits had not been approved by the committee. According to USAC officials, internal reviews of all audit findings, as well as quality assurance reviews and other internal processes, can take several months. However, this means that the results of 1 year's audits are not available to be used in assessing internal controls until after the following year. According to USAC officials, the increase in the number of IPIA beneficiary audits and changes in their timing have adversely affected their ability to effectively complete audit follow-up work in a timely manner. To begin to address this issue, FCC and USAC officials met with OMB staff to discuss the approach used to develop estimates of improper payments and modifications to the methodology used that would also address workload issues.
FCC and USAC officials stated that beginning in fiscal year 2011, the improper payments estimate for the program will be based on tests of a sample of monthly disbursements using the USAC-designed Payment Quality Assurance program. These officials also stated that, beginning in that fiscal year, beneficiary audits will be performed using a USAC-designed compliance audit program.
We also found that FCC and USAC do not have documented and approved policies and procedures for the beneficiary audit process. Without documented and approved policies and procedures, management may lack assurance that control activities are appropriate, actually applied, and applied properly. Policies and procedures could also contribute positively to a systematic process for considering audit results when assessing the program's internal controls and for identifying opportunities to modify existing controls. We determined that the audit process FCC and USAC currently use for the E-rate program is essentially a combination of procedures contained in the 2004 draft audit resolution plan, periodic directives from FCC to USAC, and procedures that USAC management have implemented (either formally or informally) over the last 6 years. As of August 2010, FCC had not approved the draft audit resolution plan. According to USAC officials, USAC has implemented most aspects of the plan and refined and revised it over time. However, our work showed that the procedures set forth in the various documents are not consistent with one another or with USAC's current practices for addressing audit findings. For example, the draft audit resolution plan states that a response to audit findings will be developed within 60 days of receipt of a final audit report, yet the deadline is 30 days according to the Schools & Libraries Division's audit response procedures. Also, the audit resolution plan states that USAC's Audit Committee will review and approve the final beneficiary audit reports and USAC's proposed response. However, in April 2006 USAC's Board of Directors approved modifications to the Audit Committee's charter to remove this responsibility. Furthermore, two USAC divisions have overlapping responsibilities for maintaining the audit results database. It is unclear from these various procedures who, for example, is responsible for maintaining information on the status of audit findings (e.g., open or closed) and the recovery of improper payments. Other inconsistencies may exist between the processes used and the processes that management believes are in use to address audit findings.
Program officials have acknowledged the importance of documented and approved policies and procedures for the beneficiary audit process and are taking action to address this need. USAC officials stated that in September 2009, they began an initiative to update and streamline existing policies and procedures, including those related to the beneficiary audit process. According to these officials, procedures specific to the Schools & Libraries Division's responsibilities for audit response and follow-up were updated and approved in July 2010. USAC officials stated that all other existing audit process policies and procedures are scheduled to be completed and submitted to FCC for review and approval by December 2010.
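As noted above, beginning in fiscal year 2011 the improper payments estimate for the program is to be based on tests of a sample of monthly disbursements. The sketch below shows, in simplified form, how an improper payment rate could be estimated from such a sample; the data, the simple random sampling design, and the dollar-weighted estimator are illustrative assumptions and do not represent the Payment Quality Assurance program's actual methodology.

```python
# Simplified, dollar-weighted improper payment rate from a sample of
# disbursements. Simple random sampling is assumed for illustration;
# the actual Payment Quality Assurance design is not described here.
import random

def estimate_improper_payment_rate(disbursements, sample_size, seed=1):
    """disbursements: list of (amount_paid, amount_supported) tuples, where
    amount_supported is what the underlying bill of service actually supports.
    Returns the estimated improper share of dollars in the sampled payments."""
    sample = random.Random(seed).sample(disbursements, sample_size)
    sampled_dollars = sum(paid for paid, _ in sample)
    improper_dollars = sum(max(0.0, paid - supported) for paid, supported in sample)
    return improper_dollars / sampled_dollars

if __name__ == "__main__":
    # Toy population: most disbursements fully supported, a few overpaid.
    population = [(1000.0, 1000.0)] * 95 + [(1000.0, 700.0)] * 5
    rate = estimate_improper_payment_rate(population, sample_size=30)
    print(f"Estimated improper payment rate: {rate:.1%}")
```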
Conclusions
Since the establishment of the E-rate program, FCC and USAC have taken steps to revise the program's internal controls to address problems they have identified as well as concerns raised by external auditors, such as GAO, the FCC Inspector General, and others. However, FCC and USAC have generally been reactive, rather than proactive, regarding internal controls, and they have not conducted a robust risk assessment of the program's design and core activities and functions. The continuing lack of performance goals and measures in the E-rate program limits FCC's ability to efficiently identify and address problems with the program, indicates a lack of strategic vision for the program, and affects the program's internal control structure. The E-rate program's internal control structure is a product of accretion and is not clearly targeted to reasonably and effectively address programmatic risks. Because the administrative costs for the program (i.e., the costs to fund USAC and Solix operations) come out of the Universal Service Fund, an internal control structure that has not been well-designed could be using more resources than necessary and, thus, could be reducing the amount of program dollars available to beneficiaries. Without an overall assessment, FCC and USAC might not know how to appropriately balance their resources to better target risks and best ensure that the program fulfills its overall goal of providing technology funding to schools and libraries. Following that, periodic examinations of the design of the E-rate program's internal control structure can help ensure that it is well-designed and -operated, is appropriately updated to meet changing conditions, and provides reasonable assurance that the internal controls appropriately address risk across the entire E-rate program.
In addition, although FCC uses beneficiary audits as an oversight tool to assist in assessing schools' and libraries' compliance with E-rate program requirements, these audit results could also help inform systematic assessments of the program's internal control structure. Using this information as part of a continuous improvement effort could help strengthen internal controls by better targeting the nature, extent, or scope of the beneficiary audits. Maximizing the use of beneficiary audits as a core safeguard of Universal Service Fund monies would require a sustained FCC and USAC effort. Our work has shown that sustained efforts can best be supported by documented policies and procedures that address the timely and appropriate resolution of audit findings and consideration of the results of audits when assessing a program's internal controls.
Finally, it is important to note that the overall design of E-rate's internal control structure is complex because the E-rate program itself is complex. The National Broadband Plan's recommendation to streamline aspects of the program opens the door for both an examination of the program as a whole and of its internal control structure. A broad evaluation of the E-rate program's procedures and internal controls would present opportunities for FCC to improve the design of the program to ease the administrative burden on schools and libraries and to better address the risks of fraud, waste, and abuse.
Recommendations for Executive Action
To improve internal controls over the E-rate program, we recommend that the Federal Communications Commission take the following four actions:
conduct a robust risk assessment of the E-rate program;
based on the findings of the risk assessment, conduct a thorough examination of the overall design of E-rate's internal control structure to ensure that the procedures and administrative resources related to internal controls are aligned to provide reasonable assurance that program risks are appropriately targeted and addressed;
implement a systematic approach to assess internal controls that appropriately considers the results of beneficiary audits and that is supported by a documented and approved set of policies and procedures; and
develop policies and procedures to periodically monitor the internal control structure of the E-rate program, including evaluating the costs and benefits of internal controls, to provide continued reasonable assurance that program risks are targeted and addressed.
Agency Comments and Our Evaluation
We provided a draft of this report to the Federal Communications Commission and the Universal Service Administrative Company for their review and comment. In its written comments, FCC agreed with our recommendations. FCC stated that it intends to work closely with USAC and provide the appropriate directives concerning the implementation of a risk assessment. FCC's full comments are reprinted in appendix V.
In its written comments, USAC noted that it was pleased that we had recognized that FCC and USAC have implemented many internal controls for processing E-rate applications and making E-rate funding commitments. However, USAC stated that it does not believe that the facts, viewed in their full context, support some of our conclusions. USAC does not agree with our conclusion that the E-rate program has not been subjected to a robust risk assessment. USAC believes that we too narrowly construed the review performed by an independent public accounting firm in 2008 when we determined that the review focused on the risks associated with financial reporting. USAC states that the 2008 review did assess and test specific internal controls for the E-rate program. We agree that some E-rate internal controls were in fact assessed and tested; nonetheless, the focus of the public accounting firm's work was neither the E-rate program nor its programmatic aspects. No risk assessment that USAC has undertaken to date has been the type of risk assessment that we envision under the first recommendation we make in this report. Such an assessment would consider the existing design of the E-rate program as a whole, including the roles of FCC, USAC, beneficiaries, and service providers; whether the design and mix of preventive and detective controls already in place for the E-rate program are appropriate; and whether the program lacks internal controls that are needed. USAC also does not agree with our findings regarding its analysis of audit findings, including repeat audit findings; the timeliness of its beneficiary audit process; and the division of responsibility within USAC for maintaining the audit results database. We continue to believe that USAC could analyze audit findings on a timely basis and use the information to address risks and reduce instances of repeat audit findings. USAC's comments are reprinted in appendix VI, followed by our full response to USAC.
We made no changes to our recommendations based on USAC’s comments, although we did add material to the report to acknowledge some of the internal control changes that USAC discusses in its letter, including its new Payment Quality Assurance program and USAC’s plans to implement new internal controls in its invoicing process in response to our work. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the appropriate congressional committees, the Chairman of the Federal Communications Commission, the Acting Chief Executive Officer of the Universal Service Administrative Company, and other interested parties. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are listed in appendix VII. Appendix I: Objectives, Scope, and Methodology Our report addresses the following questions: (1) What actions have the Federal Communications Commission (FCC) and the Universal Service Administrative Company (USAC) taken to establish internal controls in the E-rate program? (2) Does the design of the E-rate program’s internal control structure appropriately consider program risks? This appendix describes the various procedures that we undertook to answer these questions. We conducted the following background research that helped inform each of our reporting objectives: We reviewed prior GAO reports on the E-rate program; provisions of the Telecommunications Act of 1996; FCC regulations, orders, and other documents related to the administration of the E-rate program; the memorandum of understanding (MOU) between FCC and USAC; risk assessments conducted on the E-rate program; internal and external audits and reports concerning USAC and the E-rate program; and documents from FCC and USAC regarding the structure and operation of the program. We interviewed officials from FCC’s Office of Managing Director, Wireline Competition Bureau, and Office of Inspector General to learn what efforts have been made to address internal controls and concerns about fraud, waste, and abuse within the program. We also interviewed officials from USAC’s Schools & Libraries Division to understand their roles and responsibilities in relation to FCC and USAC’s subcontractor, Solix, Inc., as well as the overall structure of the E-rate program. We interviewed staff at the independent public accounting firm that conducted the 2008 internal controls review for USAC to learn more about the 2008 review and the firm’s conclusions related to the E-rate program. We also spoke with a former USAC official to understand more fully the history of implementing internal controls for the program. Analysis of the E-rate Application and Invoice Processes To understand the internal control structure within the application and invoice review processes and understand the risks addressed by the internal control components, we reviewed internal USAC and Solix procedures and guidance, and interviewed USAC and Solix staff. We reviewed documentation of the program’s key internal controls and risk assessments, and related policies and procedures.
Specifically, we reviewed the design of the program’s key internal controls for (1) processing applications and making funding commitments, (2) processing invoices requesting reimbursement, and (3) monitoring the effectiveness of internal controls through audits of schools and libraries. We assessed the design of these internal controls against GAO’s Standards for Internal Control in the Federal Government. We spoke with FCC, USAC, and Solix officials about program risks, the design and functioning of internal controls, and how internal controls are monitored and assessed. To determine the results of various reviews within the overall application and invoice review process, we requested and reviewed the following data for funding years 2006 through 2009. Record-level data of all applications, including the name and identification number of the applicant, the original requested amount, the amount committed, and the exceptions that were triggered by USAC’s Program Integrity Assurance (PIA) review process. These data were from the Streamlined Tracking and Application Review System (STARS), which is used to process applications for funding and to track information collected during the application review process. Record-level and summary data from the Invoice Streamlined Tracking and Application Review System (ISTARS), including a summary of data for each type of edit that can be flagged during the automated validation process. These data included a description of the edit; the number of occurrences; the total dollar amount without the E-rate discount; the total dollar amounts requested, approved, and modified; the total percentage of the modification as part of the undiscounted amount; the total number of invoice lines with edits; and the total number of invoice lines without edits. Summary data for the results of various reviews that make up the application review process, including the final review, quality assurance reviews, and the heightened scrutiny reviews. For the final and quality assurance reviews, we reviewed data on the number of returns that were triggered by the reviews and the exception(s) associated with each return. The heightened scrutiny review data included the total number of applications or applicants reviewed, categorized by funding determinations (i.e., modifications, withdrawals, denials, and full approvals), and the total dollar amount associated with each type of determination. To provide these data, Solix performed queries on the system and provided the resulting reports to us between December 2009 and May 2010. Data from the STARS and ISTARS systems can change on a daily basis as USAC processes applications for funding and reimbursement, applicants request adjustments to requested or committed amounts, and other actions are taken. As a result, the data we obtained and reported on in this report reflect the amounts at the time that Solix produced the data and could be somewhat different if we were to perform the same analyses with data produced at a later date. To assess the reliability of the data, we contacted experts at USAC and Solix to determine whether major changes in how data are processed have been made since GAO determined that the STARS system was reliable in 2007. We also clarified that ISTARS and STARS share the same platform and security, and that data can be accessed across both systems. For the summary data for the heightened scrutiny reviews, we also reviewed descriptions from USAC on how each team processes its data. 
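To illustrate the kind of summary described above (the number of occurrences of each invoice edit and the associated requested, approved, and modified dollar amounts), the following is a minimal sketch in Python. It assumes a hypothetical flat-file extract with illustrative file and column names; it is not drawn from USAC’s or Solix’s actual STARS or ISTARS systems.

    # A minimal sketch; file and column names are illustrative assumptions,
    # not USAC's or Solix's actual schema.
    import pandas as pd

    lines = pd.read_csv("istars_invoice_lines.csv")  # hypothetical extract

    # Keep only invoice lines on which an edit was flagged, then summarize by edit type.
    flagged = lines[lines["edit_type"].notna()]
    summary = flagged.groupby("edit_type").agg(
        occurrences=("invoice_line_id", "count"),
        requested=("undiscounted_requested", "sum"),
        approved=("undiscounted_approved", "sum"),
    )
    summary["modified"] = summary["requested"] - summary["approved"]
    summary["pct_modified"] = 100 * summary["modified"] / summary["requested"]
    print(summary.sort_values("pct_modified", ascending=False))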
We did not include an analysis of some of the data that we requested. For example, we did not include an analysis of data from the application review process in this report, because limitations with the data process did not allow us to produce a relevant analysis within our available time frame. Similarly, we did not include an analysis of the invoice data that summarized the invoice edits by type of edit because limitations with the data process did not allow us to produce a relevant analysis within our available time frame. With these exceptions, we determined the data were sufficiently reliable for the purposes of our review. Analysis of the E-rate Audit Process We interviewed USAC and FCC officials and reviewed USAC’s policies and procedures governing its audit process, including the process for reporting audit results and the status of audit follow-up to USAC’s Board of Directors and FCC. We also reviewed applicable regulations, FCC orders and directives, as well as provisions of the MOU between FCC and USAC. Furthermore, we analyzed data in USAC’s audit tracking systems on beneficiary audits performed between 2006 and 2010. To do so, we obtained an understanding of how beneficiary audit data are processed and maintained for each phase of the audit process in USAC’s Improper Payment Audit Tracking System (IPATS) and Consolidated Post Audit Tracking System (CPATS). We interviewed USAC officials about the quality of the data maintained in these database systems. CPATS was implemented in 2009; thus, numerous CPATS data elements for prior years’ Improper Payments Information Act of 2002 (IPIA) beneficiary audits were blank. Therefore, we appropriately modified our analysis of the audit data and determined that these data were sufficiently reliable for our purposes. Specifically, we (1) calculated the average number of days between draft audit report and board approval of the final audit report, using data from audits performed in 2009 and 2010; (2) evaluated the frequency of reported audit findings from audits performed in 2006 through 2010; and (3) evaluated the frequency with which schools and libraries were audited in 2006 through 2010 to determine whether there were repeat audit findings in successive audits. We conducted this performance audit from August 2009 to September 2010 in accordance with generally accepted government auditing standards. These standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: E-rate Program Application, Funding, and Reimbursement Processes Appendix III: Analysis of Selected E-rate Program Beneficiaries We did not conduct any transaction testing related to the E-rate application process; however, we did conduct follow-up to audit work performed for our March 2009 E-rate report. For that report, we determined the percentage of eligible entities participating in the E-rate program by performing a matching analysis using funding year 2005 data from USAC and school year 2005 or 2005-2006 data from the Department of Education’s National Center for Education Statistics (NCES). At the end of our matching analysis, we found that we could not match a number of schools and libraries from USAC’s database to schools and libraries from the NCES data. 
We found various reasons that could explain some of the nonmatching schools and libraries. For example, schools often submit multiple applications, some of which may only cover specific school buildings that would then not show up as a match to a particular school, even though the building is a subpart of an eligible school. We determined that the issue of the nonmatched schools was an issue of internal controls, which was not the subject of our 2009 E-rate report, and would best be handled during our internal controls work. Therefore, for this report, we selected the 1,208 private schools from our list of nonmatching schools for further examination because we determined that private schools present a greater risk of fraud for the E-rate program. We subsequently determined that we would focus our follow-up analysis on the 408 private schools from our funding year 2005 list that had also applied for E-rate support in funding year 2008. We sent USAC a list of 274 private schools that received funding year 2008 funding commitments and asked USAC to reverify the eligibility of each entity to participate in the E-rate program using USAC’s PIA Form 471 Review Procedures manual. USAC was able to verify the eligibility of 265 of these private schools through its Form 471 review procedures manual, which includes matching the schools to the NCES database, other acceptable third-party documentation, or documentation provided by the school itself. We were able to determine the eligibility of an additional 5 private schools from the NCES database. We did not assess USAC’s procedures or make our own assessment of the adequacy of the documentation provided by the schools. USAC determined that 4 private schools within the scope of our request could not be validated as eligible for the program, including 3 schools that were confirmed as being closed. USAC determined these schools to be ineligible for funding as a result of the revalidation process initiated by our request. USAC officials noted that they verify that a school has been closed by receiving confirmation from the applicant or a valid third party, such as a state E-rate coordinator. According to USAC officials, once USAC staff confirm that the school is closed, the staff will provide USAC’s invoicing team with the entity number and closed date. The invoicing team will place the entity on watch to prevent any invoices from being paid. Based on the closing date provided by the applicant or third party, USAC may also send a commitment adjustment (COMAD) referral to the COMAD team, which will adjust any previous commitments or recover funds in the cases where payments were made after the closed date. Appendix IV: E-rate Program Full-Time-Equivalent Positions During calendar years 2006 through 2009, Solix and USAC dedicated between 21.5 and 22.5 full-time-equivalent (FTE) positions annually for the invoice review process. In the same period, they dedicated 131.5 to 149.5 FTEs annually for the application review process (see table 2). Appendix V: Comments from the Federal Communications Commission Appendix VI: Comments from the Universal Service Administrative Company Following are GAO’s comments on the Universal Service Administrative Company’s letter dated September 20, 2010. GAO Comments 1. As stated in our report, USAC had not analyzed the findings from beneficiary audits to determine whether corrective actions implemented by beneficiaries in response to previous audits were effective.
We also stated that, consistent with our standards for internal control in the federal government, repeat audit findings (information that would be available to USAC from analysis of the audits) are examples of the risks and vulnerabilities that, once identified and assessed, could provide information about where modifications to the nature, extent, or scope of beneficiary audits are most needed. This is consistent with the objectives of internal controls in the federal government and FCC’s and USAC’s responsibilities to establish and maintain internal controls that appropriately safeguard program funding and resources. We recognize that USAC cannot be held responsible for the conduct of beneficiaries; however, USAC is responsible for recognizing the risks that beneficiaries will not comply with program rules and for implementing controls that appropriately target those risks. Therefore, beneficiary conduct that affects such things as the commitment of funds and payments to beneficiaries must be part of USAC’s assessment of program controls and is not, as stated by USAC, unrelated to USAC’s internal controls for the E-rate program. 2. Our work included consideration of the 2008 internal control review consistent with our standards and audit objectives. As stated in our report, the 2008 internal control work the independent public accounting firm performed was a review of USAC’s controls for all four Universal Service Fund programs and was not specific to any single program, including E-rate. Further, the review did not address program risks associated with beneficiary self-certification of key information, nor did it consider the nature, extent, and scope of beneficiary audits or the results from those audits. A comprehensive assessment focused on the E-rate program would consider the existing design of the E-rate program as a whole, including the roles of FCC, USAC, beneficiaries, and service providers; whether the design and mix of preventive and detective controls already in place for the E-rate program are appropriate; and whether the program lacks internal controls that are needed. With respect to the 2009 internal control assessment performed by USAC’s own staff, as stated in our report, this assessment was also not designed to identify and address specific E-rate program risks and vulnerabilities. 3. We do not agree with USAC’s statements concerning the timeliness of its beneficiary audit process. The measurements USAC provides—239 days and 64 days—exclude weekends and holidays and therefore do not portray the entire processing time. In our report, we stated that the beneficiary audit process did not result in the timely resolution of audit findings and approval of audit reports and, to illustrate, we analyzed USAC data for a 3-year period and found that the average time between when USAC received draft audit reports and when final audit reports were approved was approximately 224 days. We focused on the amount of time after USAC receives draft audit reports because this is the period that is used to review the audit reports, have quality control procedures performed by others, and approve the reports. Also, as discussed in our report, we found that USAC was not effectively analyzing the audit findings although the findings could have been used to provide information about where modifications to the nature, extent, or scope of beneficiary audits were most needed.
Therefore, this time period covers the time taken for the process steps that are relevant to identification of an issue that requires management attention to ensure that the results of audits are considered in a timely manner when assessing and modifying the program’s internal controls. 4. Our work did take into account the structure of the Universal Service Fund audit approach. Instances of repeat audit findings and the likelihood that they would be identified in successive audits are examples of the risks and vulnerabilities that, once identified and assessed, could inform the E-rate program’s internal controls. We recognize that the timing of some of the audits may have made it difficult for some audited beneficiaries to address and rectify noncompliant findings discovered in the first audit before a second audit was completed. However, we found beneficiaries with repeat audit findings from audits conducted in the first and third years of the 3-year period, which should have been sufficient time to avoid repeated findings. In any case, it is incumbent on USAC to analyze the results of beneficiary audits to identify instances of repeat audit findings and assess whether corrective actions were effective. 5. As we stated in our report, it is unclear from USAC’s procedures who is responsible for maintaining information on the status of audit findings. We also reported that we found other inconsistencies between activities in practice and written procedures regarding the audit process. It will be important that these inconsistencies are addressed by the updated audit process policies and procedures that USAC told us it expects to complete by December 2010. Appendix VII: GAO Contact and Staff Acknowledgments Staff Acknowledgments In addition to the contact named above, Faye Morrison and Robert Owens (Assistant Directors), Frederick Evans, John Finedore, Natasha Guerra, Christopher Howard, Bonnie Pignatiello Leer, Scott McNulty, Sara Ann Moessbauer, Joshua Ormond, Steven Putansu, Amy Rosewarne, Matt Shaffer, Betty Ward-Zukerman, and Mindi Weisenbloom made key contributions to this report.
Since 1998, the Federal Communications Commission's (FCC) Schools and Libraries Universal Service Support Mechanism--commonly known as the "E-rate" program--has been a significant federal source of technology funding for schools and libraries. FCC designated the Universal Service Administrative Company (USAC) to administer the program. As requested, GAO examined the system of internal controls in place to safeguard E-rate program resources. This report discusses (1) the internal controls FCC and USAC have established and (2) whether the design of E-rate's internal control structure appropriately considers program risks. GAO reviewed the program's key internal controls, risk assessments, and policies and procedures; assessed the design of the internal control structure against federal standards for internal control; and interviewed FCC and USAC officials. FCC and USAC have established many internal controls for the E-rate program's core processes: (1) processing applications and making funding commitment decisions, (2) processing invoices requesting reimbursement, and (3) monitoring the effectiveness of internal controls through audits of schools and libraries that receive E-rate funding (beneficiaries). E-rate's internal control structure centers on USAC's complex, multilayered application review process. USAC has expanded the program's internal control structure over time to address the program's complexity and to address risks as they became apparent. In addition, USAC has contracted with independent public accountants to audit beneficiaries to identify and report beneficiary noncompliance with program rules. The design of E-rate's internal control structure may not appropriately consider program risks. GAO found, for example, that USAC's application review process incorporates a number of different types and levels of reviews, but that it was not clear whether this design was effectively and efficiently targeting resources to risks. Similarly, GAO found no controls in place to periodically check the accuracy of USAC's automated invoice review process, again making it unclear whether resources are appropriately aligned with risks. While USAC has expanded and adjusted its internal control procedures, it has never conducted a robust risk assessment of the E-rate program's core processes, although it has conducted risk assessments for other purposes, such as financial reporting. A risk assessment involving a critical examination of the entire E-rate program could help determine whether modifications to business practices and the internal control structure are needed to appropriately address the risks identified and better align program resources to risks. The internal control structure--once assessed and possibly adjusted on the basis of the results of a robust risk assessment--should then be periodically monitored to ensure that the control structure does not evolve in a way that fails to appropriately align resources to risks. The results of beneficiary audits are used to identify and report on E-rate compliance issues, but GAO found that the information gathered from the audits has not been effectively used to assess and modify the E-rate program's internal controls. As a result, the same rule violations have been repeated each year for which beneficiary audits have been completed. For example, of 64 beneficiaries that had been audited more than once over a 3-year period, GAO found that 36 had repeat audit findings of the same rule violation.
GAO found that the current beneficiary audit process lacks documented and approved policies and procedures. Without such policies and procedures, management may not have the assurance that control activities are appropriate and properly applied. Documented and approved policies and procedures could contribute positively to a systematic process for considering beneficiary audit findings when assessing the E-rate program's internal controls and identifying opportunities to modify existing controls.
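The repeat-finding analysis summarized above can be illustrated with a short sketch. The example below is a minimal illustration in Python that assumes a hypothetical flat file of audit findings with illustrative column names; it does not reflect the actual structure of USAC's IPATS or CPATS systems.

    # A minimal sketch; file and column names are illustrative assumptions,
    # not USAC's actual audit-tracking schema.
    import pandas as pd

    findings = pd.read_csv("beneficiary_audit_findings.csv")
    # assumed columns: beneficiary_id, audit_year, finding_type

    # Beneficiaries audited in more than one year of the period examined
    audit_years = findings.groupby("beneficiary_id")["audit_year"].nunique()
    reaudited = audit_years[audit_years > 1].index

    # Of those, count beneficiaries with the same finding type in more than one audit
    repeats = (
        findings[findings["beneficiary_id"].isin(reaudited)]
        .groupby(["beneficiary_id", "finding_type"])["audit_year"]
        .nunique()
    )
    with_repeats = repeats[repeats > 1].index.get_level_values("beneficiary_id").nunique()
    print(f"{with_repeats} of {len(reaudited)} re-audited beneficiaries had a repeat finding")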
Background A passport is an official government document that certifies an individual’s identity and citizenship and permits a citizen to travel abroad. According to State, many people who have no overseas travel plans have applied for a passport because it is viewed as the premier citizenship and identity document, which allows the bearer to board an airplane, prove citizenship for employment purposes, apply for federal benefits, and fulfill other needs not related to international travel. Under U.S. law, the Secretary of State has the authority to issue passports, which may be valid for up to 10 years. Only U.S. nationals may obtain a U.S. passport, and evidence of nationality is required with every passport application. State Passport Operations The Deputy Assistant Secretary for Passport Services oversees the Passport Services Office, within State’s Bureau of Consular Affairs. Passport Services, the largest component of Consular Affairs, consists of five headquarters offices: Field Operations, Technical Operations, Passport Integrity and Internal Controls Program, Planning and Program Support, and Legal Affairs and Law Enforcement Liaison. In addition to these headquarters offices, State operates 17 passport issuing agencies in Aurora, Colorado; Boston; Charleston, South Carolina; Chicago; Honolulu; Houston; Los Angeles; Miami; New Orleans; New York; Norwalk, Connecticut; Philadelphia; Portsmouth, New Hampshire; San Francisco; Seattle; and two offices in Washington, D.C.—a regional passport agency and a special issuance agency that handles official U.S. government and diplomatic passports. State also opened new passport production facilities for the personalization of passport books in Hot Springs, Arkansas, in March 2007 and in Tucson, Arizona, in May 2008. As of May 2008, State employed more than 3,300 government and contract staff to receive, process, and adjudicate passport applications and print and mail out passport books. This number of staff has risen dramatically in recent years to handle the increased number of passport applications. Between October 2006 and May 2008, the number of passport specialists—staff responsible for approving and issuing most U.S. passports—more than doubled, to 1,353. In addition, State’s passport agencies employ roughly 1,500 staff as contractors, who perform nonadjudicative support functions such as data entry, printing, and mailing out passports. Separately, as of May 2008, State also employed about 600 full- and part-time staff at the National Passport Information Center (NPIC), which handles customer service inquiries from the public. Passport Application Process Figure 1 summarizes the passport application process, from the submission of an application at an acceptance facility or by mail, through payment processing and basic data entry at lockbox facilities operated by the financial agent, to adjudication and printing at passport agencies around the country. State is authorized to designate acceptance facilities—in addition to its own passport agencies—to provide passport execution services to the American public. The majority of passport applications are submitted by mail or in person at passport application acceptance facilities nationwide. Passport acceptance facilities are located at certain U.S. post offices, courthouses, and other institutions and do not employ State personnel.
The passport acceptance agents at these facilities are responsible for, among other things, verifying whether an applicant’s identification document (such as a driver’s license) actually matches that applicant. These agents collect the application package, which includes the passport application, supporting documents, and payment, and send it to State’s centralized lockbox facility. According to State, the number of active acceptance facilities changes frequently as new facilities are added and others are dropped. In recent years, State has expanded its network of acceptance facilities to accommodate increasing passport demand. As of June 2008, there were over 9,400 such facilities nationwide, an increase from fewer than 7,000 facilities in March 2005. Lockbox Passport acceptance agents send application packages to a lockbox facility operated by a Treasury financial agent. The lockbox is responsible for opening and sorting passport application packages, verifying the completeness of the packages, processing payments, and batching the applications. In addition, lockbox staff scan the first page of the passport application, along with the payment check or money order, and apply a processing date to the application. Once data on the application are captured by software using character recognition and confirmed manually by data entry staff, the information is transferred to a server, which passport agencies can access to download into their passport issuance system. The physical passport application, along with supporting documents such as a birth certificate, is also sent via courier to a passport agency. The lockbox generally performs all application processing functions within 24 hours of receipt of the application from an acceptance facility. Adjudication, Personalization, and Delivery Once a passport application has been received by one of the passport agencies, it is examined by a passport specialist who determines, through a process called adjudication, whether the applicant should be issued a passport. Adjudication requires the specialist to scrutinize identification and citizenship documents presented by applicants to verify their identity and U.S. citizenship. It also includes the examination of an application to detect potential indicators of passport fraud and the comparison of the applicant’s information against databases that help identify individuals who may not qualify for a U.S. passport. When passport applications are submitted by mail or through acceptance facilities, specialists adjudicate the applications at their desks. A relatively small number of passport applications are submitted directly by applicants to one of the passport agencies. Applicants are required to demonstrate imminent travel plans to set an appointment for such services at one of the issuing agency’s public counters. “Counter” adjudication allows specialists to question applicants directly or request further information on matters related to the application, while “desk” adjudication requires contacting the applicants by telephone or mail in such cases. Once an applicant has been determined eligible for a passport by a passport specialist, the passport is personalized with the applicant’s information at the passport agency or one of the centralized printing facilities and then delivered to the applicant. Customer Service The National Passport Information Center, located in Dover, New Hampshire, and Lansing, Michigan, is State’s centralized customer service center. 
NPIC is a contractor-operated center that provides information and responds to public inquiries on matters related to passport services. Linked electronically to all passport agencies, NPIC provides an automated telephone appointment service that customers can access nationwide 24 hours a day and an online service for customers to check the status of their applications. A separate telephone number and e-mail address are dedicated for congressional staff inquiries. Historical Passport Trends State has experienced a tremendous increase in the number of passports it processes in recent years. Between 2004 and 2007, the number of passports issued more than doubled to nearly 18.5 million passports (see fig. 2). This rate of increase far surpasses historical trends—a 2005 study on passport operations noted that the number of passports issued in the 30 years between 1974 and 2004 increased just 72 percent. Demand for passports is seasonal in nature, with applications usually peaking between January and April, as the public prepares for spring and summer vacations, and then falling off from September through December (see fig. 3). In estimating future demand for passports, State factors in this seasonality. According to State data, about 28 percent of the U.S. population has a passport, with 85.5 million U.S. passports in circulation as of February 2008. Of these, more than 24 million will expire in the next 5 years. State noted that the number of people applying for passport renewal varies depending on the laws and regulations in effect, the economy, and other factors. In addition, people may apply for a passport renewal before their book expires or up to 5 years after it expires. In response to this rapid increase in demand for passports, State’s requested budget for passport activities has increased tenfold since 2002 (see fig. 4). This request is part of State’s Border Security Program, which includes funding for passport operations, systems, and facilities. The Border Security Program is funded through a combination of Machine Readable Visa fees, the Western Hemisphere Travel surcharge, Enhanced Border Security Program fees, and Fraud Prevention fees, as well as through appropriated funds. The majority of this funding comes from the Machine Readable Visa fees, which amounted to nearly $800 million of the Border Security Program budget in fiscal year 2007. State also collects fees for expedited passports and the Passport Security surcharge. New Regulations for Travel Documents Contribute to Increase in Passport Demand The increased demand for passports is primarily the result of WHTI, DHS’s and State’s effort to specify acceptable documents and implement document requirements at 326 air, land, and sea ports of entry. When fully implemented, WHTI will require all citizens of the United States and nonimmigrant citizens of Canada, Mexico, and Bermuda to have a passport or other accepted travel document that establishes the bearer’s identity and citizenship to enter or re-enter the United States at all ports of entry when traveling from within the Western Hemisphere. Prior to this legislation, U.S. citizens did not need a passport to enter the United States if they were traveling from within the Western Hemisphere, except from Cuba. DHS is implementing WHTI in two phases: first, for air ports of entry, and second, for land and sea ports of entry (see fig. 5). On January 23, 2007, DHS implemented WHTI document requirements at air ports of entry. 
On January 31, 2008, DHS began implementing the second phase of WHTI at land and sea ports of entry by ending the routine practice of accepting credible oral declarations as proof of citizenship at such ports. DHS is required by law to implement WHTI document requirements at the land and sea ports of entry on the later of two dates: June 1, 2009, or 3 months after DHS and State certify that certain implementation requirements have been met. During the 2007 surge in passport demand, due to the passport application backlog, certain WHTI requirements were suspended. Specifically, on June 8, 2007, State and DHS announced that U.S. citizens traveling to Canada, Mexico, the Caribbean, and Bermuda who have applied for but not yet received passports could temporarily enter and depart from the United States by air with a government-issued photo identification and Department of State official proof of application for a passport through September 30, 2007. Passport Card In October 2006, to meet the documentation requirements of WHTI and to facilitate the frequent travel of persons living in border communities, State announced plans to produce a passport card as an alternative travel document for re-entry into the United States by U.S. citizens at land and sea ports of entry. The passport card is being developed as a lower-cost means of establishing identity and nationality for American citizens and will be about the size of a credit card. Individuals may apply for either a traditional passport book or a passport card, or both. Applications for the passport card will undergo the same scrutiny and security checks as applications for the traditional passport book, and the card will incorporate security features similar to those found in the passport book. State began accepting applications for the passport card in February 2008 and began producing the card in July 2008. State and other officials have suggested that the availability of the passport card may generate additional demand, as individuals may apply for a card for identification for nontravel purposes, such as voting. State Was Unprepared for 2007 Surge in Passport Demand, Leading to Lengthy Wait Times for Applicants State was unprepared for the record number of passport applications it received in 2007 because it underestimated overall demand for passports and did not anticipate the timing of this demand. Consequently, State struggled to process this record number of passports, and wait times rose to record levels. State’s efforts to respond to the demand for passports were complicated by communications challenges, which led to large numbers of applicants being unable to determine the status of their applications. State Was Unprepared for Record Number of Passport Applications in 2007 State’s initial estimate for passport demand in fiscal year 2007, 15 million applications, was significantly below its actual receipt of about 18.6 million passport applications, a record high. Because of its inability to accurately determine the increase in applications, State was unable to provide revisions in its estimates to the lockbox financial agent in enough time for the lockbox to prepare for the increased workload, leading to significant backlogs of passport applications. State Underestimated Demand for Passports State was largely unprepared for the unprecedented number of passport applications in 2007 because it did not accurately estimate the magnitude or the timing of passport demand. 
In January 2005, State estimated that it would receive 15 million passport applications in fiscal year 2007—about 44 percent more than it received in fiscal year 2005. However, actual receipts totaled about 18.6 million applications in fiscal year 2007, about 23 percent more than State had originally estimated. According to State officials, planning efforts to respond to increased demand are predicated on demand estimates, highlighting the need for accurate estimates. Limitations in the survey methodology used by State’s contractor responsible for collecting survey data on passport demand contributed to State’s underestimate. State based its estimate partly on a survey of an unrepresentative sample of land border crossers. This survey initially estimated an increase over the baseline demand for passports of more than 4 million applications in fiscal year 2007 due to implementation of the first phase of WHTI. However, our analysis of the survey methodology found several limitations. First, the survey was conducted in July 2005, over a year before the beginning of fiscal year 2007 and roughly 2 years before the peak of the surge in demand. According to contractor officials, many respondents have a limited ability to estimate their likely travel plans that far in advance. Moreover, State officials noted that travel document requirements were changed several times by Congress and by regulation between 2005 and 2007, likely affecting passport demand. Second, the 2005 survey did not estimate total passport demand because it did not collect new data on air and sea travelers. Third, the survey was unable to provide estimates on when the increased demand would occur. To refine its estimate, State adjusted the figures provided by the survey by using monthly application trends from previous years. According to these trends, State expected to receive 4.7 million passport applications in the first 3 months of 2007. However, demand for passports in 2007 did not follow previous seasonal trends, and State ultimately received about 5.5 million applications during those first 3 months. According to the then-Assistant Secretary for Consular Affairs, this unprecedented level of demand in a compressed period contributed to State’s inability to respond to demand. State’s efforts to estimate demand for passports were also complicated by several external factors, including preparations for the introduction of the passport card for land border crossers and changes in implementation timelines for WHTI. For example, in its fiscal year 2007 budget request and Bureau Performance Plan for Consular Affairs, submitted to the Office of Management and Budget in January 2005, State anticipated the receipt of 15 million passport applications in 2007 and requested $185 million for passport operations, facilities, and systems to meet this demand. However, due to these changing circumstances, State revised the 2007 estimates in subsequent planning and budget documents, estimating 16.2 million receipts in April 2006 and 17.7 million receipts in March 2007. State Did Not Communicate Effectively with the Lockbox Facility State’s fluctuating demand estimates also complicated efforts to prepare for the surge in demand at the lockbox operated by the financial agent, which provides passport application data entry and payment processing services. According to lockbox agent documents, between May 2006 and February 2007, State provided lockbox officials with at least five sets of estimates of passport applications for fiscal year 2007.
Although the lockbox agent began preparing for an increased workload at the end of 2006, lockbox officials told us that they had difficulty adjusting to these changing estimates, because it takes roughly 60 to 90 days to prepare for increased demand, such as by hiring additional staff and ordering additional scanners. Further, these officials told us they did not expect the volume of applications they eventually did receive. According to State officials, the lockbox agent planned to process 325,000 applications per week, but actual workload peaked at 500,000 applications per week, an increase of over 50 percent. As a result, large numbers of passport applications accumulated at the lockbox facility, and applications took far longer to be processed than the typical 24 hours. In April 2007, according to lockbox data, many applications took as long as 3 weeks to process before being sent to passport agencies for adjudication. The primary issues contributing to this backlog, according to lockbox officials, were incorrect demand estimates from State and insufficient lead time. State Struggled to Process a Record Number of Passport Applications in 2007 State issued a record number of passports in fiscal year 2007, but deficiencies in its efforts to prepare for this increased demand contributed to lengthy backlogs and wait times for passport applicants. Reported wait times for routine passport applications peaked at 10 to 12 weeks in the summer of 2007—with hundreds of thousands of applications taking significantly longer—compared to 4 weeks in 2006.
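As a rough check on the lockbox workload figures above, the planned and peak weekly volumes imply an increase of roughly 54 percent, consistent with the more-than-50-percent increase noted. A short illustrative calculation in Python, using the approximate weekly volumes reported to us, follows.

    # Rough check of the reported lockbox workload figures (approximate values).
    planned_per_week = 325_000   # applications the lockbox agent planned to process
    peak_per_week = 500_000      # applications received at the peak of the surge

    increase = (peak_per_week - planned_per_week) / planned_per_week
    print(f"Peak weekly workload exceeded the plan by about {increase:.0%}")  # roughly 54 percent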
Further, data provided by State show that 373,000 applications—or about 12 percent of all routine applications—took over 12 weeks to process during the peak of the surge in July and August 2007. By contrast, average processing times peaked at just over 4 weeks in 2006 and just over 3 weeks in 2005 (see fig. 6). Furthermore, expedited passport applications, which State guaranteed would be processed within 3 business days of receipt, took an average of over 6 days to process in July 2007, leading to reported wait times of 2 to 3 weeks for expedited applications. In addition, there were wide variations in routine application processing times between the different passport agencies during the surge. According to State’s data, average processing times for individual passport agencies ranged between 13 and 58 days during the peak of the surge in July 2007. Communications Issues Contributed to Customer Frustration State does not have consistent service standards or goals for timeliness of passport processing. During the 2007 surge, many applicants found it difficult to get timely, accurate information from State regarding wait times for passports; as a result, State experienced a record number of customer service inquiries from the public and Congress during the surge, drawing resources away from adjudicating passports and increasing wait times. In addition, State does not systematically measure applicants’ wait times—measuring instead processing time, which does not include the applicant’s total wait time—further contributing to the confusion and frustration of many applicants. State Does Not Have a Consistent Customer Service Standard for Passport Processing Times State does not provide passport applicants with a committed date of issuance for passports; rather, it publishes current processing times on the department Web site. Over the past year, State has changed the information provided on its Web site from estimated wait time to expected processing time. Because these processing times fluctuate as passport demand changes, applicants do not know for certain when they will receive their passports. For example, at the beginning of the surge, reported wait times were 6 to 8 weeks. By the summer of 2007, however, reported wait times had risen to 10 to 12 weeks before falling to 6 to 8 weeks in September and 4 to 6 weeks in October 2007. According to passport agency staff, however, the times on State’s Web site were not updated frequently enough during the surge, which led to inaccurate information being provided to the public. Further, State has not had consistent internal performance goals for passport timeliness (see table 1). While State generally met its goals for passport processing times—which decreased from 25 to 19 days—between 2002 and 2005, the department changed its timeliness goal in 2007 from processing 90 percent of routine applications within 19 days to maintaining an average processing time of 35 days for routine applications. According to State officials, the department relaxed its goals for 2007 and future years due to the large increase in workload and the expectation of future surges in passport demand. However, even with the unprecedented demand for passports in 2007 and State’s lack of preparedness, the department managed to maintain a reported average processing time of 25 days over the course of the year, raising questions about whether State’s 35-day goal is too conservative. 
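The processing-time figures above can be reproduced from record-level data in a straightforward way. The following is a minimal sketch in Python, assuming a hypothetical per-application extract with the date a passport agency received each routine application and the date the passport was mailed, which mirrors the aging-based measure State uses; the file and column names are illustrative assumptions rather than State's actual systems.

    # A minimal sketch; file and column names are illustrative assumptions.
    import pandas as pd

    apps = pd.read_csv(
        "routine_applications.csv",
        parse_dates=["agency_received", "passport_mailed"],
    )
    days = (apps["passport_mailed"] - apps["agency_received"]).dt.days

    print(f"Average processing time: {days.mean():.1f} days")
    print(f"Share taking over 12 weeks (84 days): {(days > 84).mean():.1%}")

    # Per-agency averages show the kind of variation reported during the surge
    print(days.groupby(apps["agency"]).mean().round(1).sort_values())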
Inaccurate Information on Processing Times Contributed to the Strain on Passport Operations During the 2007 surge in passport demand, applicants found it difficult to get information about the status of their applications, leading many to contact several entities for information or to reapply for their passports. Many of the applicants who did not receive their passports within their expected time frame called NPIC—State’s customer service center—overwhelming the center’s capacity and making it difficult for applicants to get through to a customer service representative. Other applicants contacted passport agencies or acceptance facilities directly. However, passport agency staff told us that there was little or no contact between their customer service representatives and the acceptance facilities, leading to applicants receiving inconsistent or inaccurate information regarding wait times. Passport agency staff said that officials in Washington provided processing time estimates to postal facilities that were far below actual processing times. In addition to contacting State and State’s partners, thousands of applicants contacted their Members of Congress for assistance in getting their passports on time, according to State data. One Senator noted that he increased the number of staff in his office responding to passport inquiries from one to seven during the height of the surge in passport demand. According to State officials, many applicants made inquiries about the status of their passports through multiple channels—through NPIC, passport agencies, State headquarters, or congressional offices—leading to several cases in which multiple staff at State were tasked with searching for the same application. This duplication of effort drew resources away from passport adjudication and further contributed to delays in processing. According to State officials, many applicants who were unable to receive timely, accurate information on the status of their passport applications appeared in person at passport agencies to resubmit their applications—some having driven hundreds of miles and others having taken flights to the nearest passport agency. For example, according to officials at the New York passport agency, whose workload consists primarily of counter applications, the number of in-person applicants nearly doubled at the height of the surge. These officials told us they generally issue 450 to 550 passports on any given day, but during the surge they experienced an extra 400 to 600 daily applicants without appointments, most of whom were resubmitting their applications. Officials at another passport agency added that customers appearing in person at the passport agency stated that they would have made alternative arrangements had they known how long the wait time was going to be. This high number of resubmissions further slowed State’s efforts to reduce passport backlogs during the surge. The inundation of in-person applicants led to long lines and large crowds at many passport agencies during the summer of 2007. For example, officials in New York said that customers waited in line outside the building for up to 6 hours before appearing at an appointment window—and then waited even longer to see a passport specialist. According to these officials, this line snaked around the building, and the agency had to work with local law enforcement to control the crowds.
Officials in Houston also said that crowd control during the surge was a significant challenge for their agency due to the large numbers of applicants appearing without appointments. State’s Estimated Processing Times Do Not Measure Applicant’s Total Expected Wait Time The passport processing times that State publishes on its Web site do not measure the total length of time between the applicant’s submission of an application and receipt of a passport. According to State officials, processing times are calculated based on passport aging statistics—that is, roughly the period beginning when the passport agency receives a passport application from the lockbox facility and ending when the passport is mailed to the applicant. Consequently, State’s measure of processing times does not include the time it takes an application to be sent from an acceptance facility to the lockbox, be processed at the lockbox, or be transferred from the lockbox to a passport agency. While this time may be as short as 1 to 2 days during nonpeak periods, during the surge, when hundreds of thousands of passport applications were held at the lockbox facility for as long as 3 weeks, this time was significantly longer. Passport agency officials told us that during the surge, applicants were confused about the times published on State’s Web site, as they were not aware that State did not start measuring processing times until a passport agency received the application from the lockbox facility. Finally, customers wishing to track the status of their applications are unable to do so until 5 to 7 days after they have submitted their passport application, because applications do not appear in State’s tracking system until the department receives them from the lockbox facility. State Took Emergency Measures and Accelerated Some Planned Efforts to Increase Passport Production Capacity State increased the capacity of its staffing, facilities, customer service, and lockbox functions during the surge. Passport agencies also developed their own efforts to increase the efficiency and effectiveness of passport operations. State’s actions, combined with seasonal declines in passport applications, decreased wait times to normal levels by October 2007. State estimated the cost of the emergency measures to respond to the surge to be more than $40 million. State Increased the Capacity of its Staffing, Facilities, Customer Service, and Lockbox Functions In reaction to the 2007 surge in passport demand, State took a variety of actions related to staffing to increase its production capacity. State instituted mandatory overtime for all government and contract staff and suspended all noncritical training and travel for passport staff during the surge. State hired additional contract staff for its passport agencies to perform nonadjudication functions. State also issued a directive that contractor staff be used as acceptance agents to free up passport specialist staff to adjudicate passport applications, and called upon department employees—including Foreign Service officers, Presidential Management Fellows, retirees, and others—to supplement the department’s corps of passport specialists by adjudicating passports in Washington and at passport agencies around the United States. State also obtained an exemption from the Office of Personnel Management to the hiring cap for civil service annuitants, so that it could rehire experienced and well-trained retired adjudicators while it continued to recruit and train new passport specialists. 
In addition, the department dispatched teams of passport specialists to high-volume passport agencies to assist with walk-in applicants and process pending passport applications. These teams also provided customer support, including locating and expediting applications of customers with urgent travel needs. Finally, consular officers at nine overseas posts also remotely adjudicated passports, using electronic files. In addition, State took steps to increase the capacity of its facilities to handle the increased workload. State expanded the hours of operations at all of its passport agencies by remaining open in the evenings and on weekends. Several agencies also added a second shift, and State’s two passport processing centers operated 24 hours a day, in three shifts. Public counters at passport agencies were also opened on Saturdays for emergency appointments, which were scheduled through State’s centralized customer service call center. In addition to increasing work hours, State realigned workspace to make more room for adjudication purposes. For example, passport agencies used training and conference rooms to accommodate additional passport specialists. One passport agency borrowed space from another government agency housed in the same building to prescreen applicants. Some passport agencies that had more than one shift instituted desk sharing among staff. In some instances, because of the lack of workstations, adjudication staff also manually adjudicated applications with a pen and paper and entered the application’s approval into State’s information system at a later time. In addition, one passport agency renovated its facility by expanding the fraud office to add desks for more staff. To further increase the capacity of its customer service function, State extended NPIC’s operating hours and, according to State officials, increased the number of its customer service representatives from 172 full-time and 48 part-time staff in January 2007 to 799 full-time and 94 part-time staff in September 2007. In response to heavy call volume at NPIC during the surge, State installed 18 additional high-capacity lines, each of which carries 24 separate telephone lines, for a total of 432 new lines—25 percent of which were dedicated to congressional inquiries, according to State officials. State also established an e-mail address for congressional inquiries. To supplement NPIC, State also established a temporary phone task force in Washington composed of department employees volunteering to provide information and respond to urgent requests, augmented an existing consular center with about 100 operators working two shifts, and temporarily expanded its presence at a federal information center with 165 operators available to assist callers 7 days a week. State also took emergency measures in coordination with Treasury to bolster the lockbox function in reaction to the surge. First, Treasury coordinated with State to amend the terms of its memorandum of understanding with its financial agent responsible for passport application data entry and payment processing, to increase the agent’s lockbox capacity. Specifically, under the revised memorandum, the financial agent committed to processing up to 3 million applications per month at the lockbox.
According to Treasury officials, to increase its processing capacity, the financial agent increased the number of its staff at the lockbox facility from 833 in January 2007 to 994 in September 2007; offered a pay incentive to increase the number of its employees working overtime; and opened an additional lockbox facility—operating 24 hours a day, 7 days a week in three shifts. In addition, the financial agent implemented some process improvements at the lockbox during the surge, including automating data entry, presorting mail by travel date, and implementing a new batching process to increase the number of applications processed. The financial agent also increased the number of scanners, the capacity of its application server and data storage, and the bandwidth of its network to accommodate the heavy volume of passport applications. In addition to the measures described above, Treasury and State held weekly conference calls with the financial agent to discuss concerns and determine various courses of action to clear the passport application backlog. Treasury and State officials also visited lockbox facilities to review operations and received daily status reports from the financial agent indicating the processing volumes and holdover inventory. In addition to the emergency steps that State took, it also accelerated some planned efforts such as hiring more permanent staff and opening a new passport book printing facility. While State’s hiring of additional permanent staff was already in CA’s long-term planning efforts to handle an increase in passport demand, the time frame to do so was moved up to respond to the passport demand surge, according to State officials. Consequently, State hired an additional 273 staff in the last quarter of fiscal year 2007; however, according to State officials, not all of these staff were on board at the end of the fiscal year because of delays in processing security clearances for new hires. Additionally, State opened a new passport book printing center in March 2007, ahead of its schedule to open in June 2007, to centralize its book printing function and free up space at passport agencies for adjudication. Passport Agencies Took Actions to Improve Their Operations in Reaction to the Surge Passport agencies took various actions to meet their specific needs in reaction to the surge. During our site visits, State officials told us that their passport agencies had undertaken such actions as developing a software program to better track suspense cases; creating a batch tracking system whereby each shelf was numbered and all batches boxed on this shelf were marked with the same number; developing a “locator card” for customers, which was color-coded to indicate different situations—such as customers submitting new applications, inquiring about pending applications, and resubmitting applications—to enable the agency to locate the application file before the customer came into the agency; providing customers with a ticket that provided expedited service if they had to return on another day; and using students for nonadjudication tasks for the summer. In addition, according to State officials, some passport agencies used security guards to prescreen applicants at the entrance to control crowds and improve the efficiency of operations. Finally, other agencies organized teams to handle inquiries from congressional staff and State headquarters staff. 
In an effort to document and disseminate such initiatives, State compiled a best practices document for passport operations during the surge. These best practices were submitted by passport agencies on a variety of issues, including work flow improvements, counter management, and communication, among others. To provide a forum for feedback for passport agencies and improve passport operations, State also conducted a lessons learned exercise following the surge. State gathered information from passport staff at all levels and compiled a lessons learned document, which was made available on CA’s internal Web site. According to this review, the primary lesson learned from the surge was that the United States passport is increasingly viewed by the American public not only as a travel document, but as an identity document. Accordingly, the lessons learned document outlined lessons learned in five main categories—process, communications, technology, human resources, and contracts—to help meet future demand for passports. However, State officials told us that this document was a draft and State has not formally embraced it. State’s Actions and Seasonal Decline in Applications Helped Reduce Processing Times by October 2007 The extraordinary measures that State implemented to respond to the surge in passport demand, combined with the normal seasonal decline in passport applications between September and January, helped State reduce wait times by October 2007. According to data provided by State, the department returned to normal passport processing times of 4 to 6 weeks by October 2007. These data show that State has maintained these processing times through July 2008, according to State’s Web site. Emergency Response to Passport Surge Cost State over $40 Million State estimated the cost of its emergency measures to respond to the 2007 surge in passport demand to be $42.8 million. This amount included $28.5 million for contract-related costs, $7.5 million for overtime pay for staff from CA and other bureaus within State, and $3.1 million spent on travel to passport agencies for temporary duty staff. In addition, State spent $3.2 million on costs associated with buying equipment and furniture. State also spent an additional $466,000 on costs related to telephone services for its call centers, for rentals, and for Office of Personnel Management position announcements for hiring additional passport staff during the surge. These estimates do not include the costs of other measures such as hiring additional staff, which State had already planned but accelerated in order to respond to the 2007 surge. To cover costs incurred due to the surge, State notified Congress in June 2007 of its plans to devote an additional $36.9 million to the Border Security Program. According to State officials, this amount included $27.8 million for passport operations, such as $15 million for a passport processing center, additional costs for Foreign Service Institute training for new passport specialists, and salaries for 400 new staff to be hired in fiscal year 2007. In September 2007, State notified Congress of its intent to obligate an additional $96.6 million for its Border Security Program, including $54 million for additional passport books, according to State officials. In December 2007, State sent a revised spending plan for fiscal year 2008 to Congress to increase its resources to enable it to handle processing of 23 million passports. 
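The itemized emergency-measure costs reported above lend themselves to a quick arithmetic check. The minimal sketch below simply tallies the component amounts as cited in this report; the roughly $0.03 million gap between the sum and the reported $42.8 million total is consistent with the individual components having been rounded. This is only an illustration of the reported figures, not an independent cost analysis.

```python
# Tally of the emergency-measure costs State reported for the 2007 surge
# (amounts in millions of dollars, as cited in this report).
reported_components = {
    "contract-related costs": 28.5,
    "overtime pay (CA and other bureaus)": 7.5,
    "temporary duty travel to passport agencies": 3.1,
    "equipment and furniture": 3.2,
    "telephone services, rentals, and OPM announcements": 0.466,
}

total = sum(reported_components.values())
print(f"Sum of reported components: ${total:.3f} million")  # about $42.77 million
print("Reported total:             $42.8 million")
# The ~$0.03 million difference is attributable to rounding of the components.
```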
The December 2007 revised spending plan included an additional 700 personnel to meet anticipated passport demand and a new passport adjudication center. The plan also provided for three passport gateway agencies to be established in fiscal year 2008. Despite Improvements in Its Ability to Handle Near-Term Increases in Demand, State Lacks a Comprehensive Long-Term Strategy to Improve Passport Operations State has enhanced its capacity for responding to surges in passport demand in the near term, such as by improving its efforts to estimate passport demand. However, State lacks a comprehensive, long-term strategy for improving passport operations. State commissioned a review of its passport operations, completed in 2005, that identified several deficiencies and proposed a number of potential measures to guide modernization efforts; however, State does not have a plan to prioritize and synchronize these efforts. We have reported that an enterprise approach could help agencies develop more efficient processes, and this type of approach could help State improve passport operations and better prepare for future changes in passport demand. State Has Improved Its Ability to Respond to Near-Term Increases in Passport Demand State has taken several steps to increase its passport production capacity and improve its ability to respond to near-term increases in passport demand. As we have noted, State hired more staff and improved individual components of passport operations, such as centralizing the printing of passport books and upgrading information technology, during and following the 2007 surge in passport demand. Additionally, State developed two shorter-term plans to address a future increase in demand, including an adjudicative capacity plan, which establishes a set of triggers for determining when to add capacity. According to State officials, the department has completed some preparations for future surges in demand, such as opening a second book printing facility in May 2008 and creating a reserve adjudication force. State also expects to open new passport agencies in Dallas and Detroit by the end of 2008 and in Minneapolis by March 2009, according to these officials. However, State faces challenges in completing other planned preparations. For example, State hired only 84 of the 400 additional staff called for in the first quarter of fiscal year 2008. Similarly, according to officials, State has not yet established an additional mega processing center and is behind schedule in renovating and expanding some of its existing facilities. According to these officials, these plans were developed to expand State's capacity to issue passports. State has also taken several steps to improve future estimates of passport demand since it underestimated demand in fiscal year 2007. In particular, State's contractor designed a new passport demand survey to overcome limitations in its 2005 survey, which was not representative of all border crossers and did not include air and sea travelers. The contractor's 2007 estimate of total demand for passports in 2008 was derived from (1) a land border crosser survey to collect data on the impact of WHTI on passport demand in 2008, and (2) a nationally representative panel survey of 41,000 U.S. citizens, which included data on overall passport demand, including for sea and air travel and for nontravel identification purposes.
State then applied average monthly application rates from previous years to the contractor’s data to estimate the number of passport applications for each month and to identify peak demand. We found these methodologies sound in terms of survey design, sample selection, contact procedures, follow-up, and analysis of nonrespondents. However, estimating passport demand faces several limitations. State and contractor officials outlined some of these limitations, which include the following. It can be difficult for respondents to anticipate travel many months—or years—into the future. For example, the most recent surveys were conducted in May and September of 2007 and were used to estimate travel throughout 2008. Survey respondents tend to overstate their prospective travel and thus their likelihood of applying for a passport. While the contractor adjusted for this phenomenon in its 2007 survey, it did so based on assumptions rather than data. Some survey respondents did not understand certain regulations and options for passports and border crossings, such as WHTI requirements, suggesting that some of these individuals are unaware of the future need to apply for passports for travel to Canada or Mexico. Changes in personal or professional circumstances, or in the economy, can lead to changes in individuals’ international travel plans. Changes in regulations can affect passport demand. For example, New York State signed a memorandum of understanding with DHS to issue enhanced driver’s licenses that could be used for land border crossings and could reduce the demand for passports or passport cards obtained solely to meet WHTI requirements. State Lacks a Comprehensive Strategy to Improve Long-Term Passport Operations A 2005 study of passport operations commissioned by State identified several limitations in State’s passport operation, many of which were exposed during the department’s response to the 2007 surge in demand. This study and other plans, as described above, have also proposed numerous improvements to passport operations—many of which were generated by State officials themselves—and the department has begun to implement some of them. However, State does not have a long-term strategy to prioritize and synchronize these improvements to its operation. As we have reported previously, using a business enterprise approach that examines a business operation in its entirety and develops a plan to transition from the current state to a well-defined, long-term goal could help State improve its passport operations in the long term. Review of Passport Operations Identified Several Deficiencies and Proposed Modernization Efforts In 2004, State contracted with an independent consulting firm to study its passport operations, which had not been formally examined for over 25 years. The study, issued in 2005, outlined the current state of passport operations and identified several issues that limited the efficiency and effectiveness of passport operations. Several of these limitations were exposed by State’s response to the 2007 surge in passport demand, and many of them remain unresolved. For example, the study found that State’s practice of manually routing the original paper passport application through the issuance process—including mailing, storage, management, and retrieval of physical batch boxes containing paper applications— slowed the process, extended processing time, and made upgrade requests difficult to handle. 
Due to the overwhelming number of applications during the surge, a few passport agencies told us that there was no extra space available at their facilities; according to agency officials, this situation led to duplicative efforts. In addition, the study found that limited information was available to management and that reporting tools, such as Consular Affairs’ Management Information System, could not produce customized reports. Further, the study found that this system could not provide information on the performance of its business partners, such as acceptance facilities or the lockbox, resulting in data being available only for applications that had been received at a passport agency. As a result, during the surge, State was not immediately aware of the growing workload at the lockbox. The study also found limitations in State’s communications, including challenges to communicating among passport agencies, providing feedback to headquarters in Washington, and conducting public outreach. For example, during the surge, State did not effectively disseminate management decisions and communicate changes in internal processes and resources available to field staff, according to State’s lessons learned document. In addition to identifying limitations, the study proposed a guide for State’s modernization efforts, including a framework to put in place for passport services by the year 2020. As part of this guide, the study identified key factors that affected State’s methods for conducting business and the performance of passport operations. For example, the study identified increased demand as one such factor, due to normal trends in passport demand, the impact of WHTI implementation, and the passport’s increasing role as an identification document for everyday transactions. To address these issues, the study suggests that State will have to take steps such as redistributing workload through centralization to meet increasing volumes—State has begun to implement this suggestion by establishing passport printing facilities in Arkansas and Arizona. Additionally, the study notes that passport adjudication practices will become an even more important part of combating terrorism and other security concerns in the future, which will require State to utilize technology and external data to improve its risk assessment and fraud detection methods. Finally, the study also suggests that State will face changing customer expectations in the future, requiring more frequent and effective communications and, possibly, changes to service standards. As we previously noted, this issue continues to be a challenge for State. Although the study proposed several initiatives to improve passport operations, State officials told us that the department has not developed a formal plan to implement the initiatives, nor does it have a strategic plan outlining how it intends to improve its entire passport operations. State officials told us that because the department has been largely focused on carrying out its day-to-day operations—especially as it responded to the 2007 surge in passport demand—it has not had time to document its strategic plan. While State has taken a few steps to implement some of the proposed initiatives of the study—such as developing and implementing the e-Passport, opening two passport adjudication centers, and issuing passports remotely—State does not have a systematic strategy to prioritize and synchronize these potential improvements to its passport operations. 
Some of these proposed initiatives that State has not implemented could be useful to State's current operations, including the following: leveraging electronic work flow management—enabling State to develop flexible, streamlined work streams that improve its ability to monitor and manage passport operations while reducing manual processes for the physical movement and storage of paper applications and supporting documentation—to ensure a more efficient work flow that supports the issuance of increasing numbers of passports every year; providing management visibility over the end-to-end passport issuance process extending across State and partner organizations, to effectively manage the process and enforce performance standards; applying validations and identity checks automatically upon receipt or modification of an application by consistently applying a comprehensive set of business rules, to strengthen an adjudication process that supports the integrity of the passport as a primary identity document; offering an online point of service with expanded functionality as a means for self-service by the public to facilitate a simplified, flexible, and well-communicated application process to enhance service to the passport customer; and conducting a comprehensive workforce analysis to define a sustainable workforce structure and plans through 2020 and enhancing communications within State and its business partners to improve efficiency and promote knowledge sharing. The recent increases in passport demand have made the need for a plan to prioritize the study's proposed initiatives that State intends to implement more urgent. While the study assumed that State would issue a minimum of 25 million passports by 2020, this time frame has already become outdated, as actual issuances were 18.6 million in fiscal year 2007 and, in July 2007, were estimated by State to reach 30 million as early as 2010. An Enterprise Approach Could Help State Improve Passport Operations We have reported that using an enterprise approach to examine and improve the entirety of a business process can help agencies develop more efficient processes. An enterprise approach defines day-to-day operations in order to meet agency needs and processes that result in streamlined operations, rather than simply automating old ways of doing business, and effectively implements the disciplined processes necessary to manage the project. A key element of this approach is the concept of operations, which assesses the agency's current state, describes its envisioned end state, and provides a transition plan to guide the agency from one state to the other. An effective concept of operations would also describe, at a high level, how all of the various elements of an organization's business systems relate to each other and how information flows among these systems. Further, a concept of operations would serve as a useful tool to explain how all the entities involved in a business system can operate cohesively, rather than in a stovepiped manner—in the case of passport issuance, this tool would include acceptance agents, the lockbox facility, and the various components of passport operations within State. Finally, it would provide a road map that can be used to (1) measure progress and (2) focus future efforts. Using an enterprise approach could provide State with management visibility over the passport issuance process extending across its entire passport operations, thereby improving these operations in the long term.
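To make the idea of end-to-end visibility more concrete, the following minimal sketch models the passport issuance pipeline as a sequence of stages drawn from the entities named in this report (acceptance facility, lockbox, passport agency, print and mail) and timestamps an application as it moves through them. The stage names, data structure, dates, and identifier are illustrative assumptions only; they are not a description of State's actual systems.

```python
from dataclasses import dataclass, field
from datetime import datetime
from typing import Dict, Optional

# Illustrative pipeline stages, based on the entities named in this report.
STAGES = ["executed_at_acceptance_facility", "received_at_lockbox",
          "received_at_passport_agency", "adjudicated", "printed", "mailed"]

@dataclass
class Application:
    application_id: str
    timestamps: Dict[str, datetime] = field(default_factory=dict)

    def record(self, stage: str, when: datetime) -> None:
        """Record when the application reached a given stage."""
        if stage not in STAGES:
            raise ValueError(f"unknown stage: {stage}")
        self.timestamps[stage] = when

    def elapsed_days(self, start: str, end: str) -> Optional[int]:
        """Days between two recorded stages, if both are present."""
        if start in self.timestamps and end in self.timestamps:
            return (self.timestamps[end] - self.timestamps[start]).days
        return None

# Usage: the wait as the applicant experiences it (execution to mailing)
# versus the narrower agency-only measure (agency receipt to mailing).
app = Application("XYZ123")  # hypothetical identifier
app.record("executed_at_acceptance_facility", datetime(2007, 5, 1))
app.record("received_at_lockbox", datetime(2007, 5, 4))
app.record("received_at_passport_agency", datetime(2007, 5, 25))
app.record("mailed", datetime(2007, 7, 10))
print("Applicant-experienced wait (days):",
      app.elapsed_days("executed_at_acceptance_facility", "mailed"))
print("Agency-only processing time (days):",
      app.elapsed_days("received_at_passport_agency", "mailed"))
```

Recording a timestamp at each hand-off is what would let a concept of operations define and enforce performance standards for every partner in the process, and would let State report the wait applicants actually experience rather than only the portion that begins when a passport agency receives the application.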
While State has made several improvements to its passport operations, it has yet to develop and implement a comprehensive strategy for passport operations. The 2005 study, which included a proposed concept of operations, recognized the need for a comprehensive approach and was designed to analyze the entire passport issuance process—including the applicant, passport agency, acceptance facility, lockbox facility, passport processing center, and passport book printing center. However, according to State officials, State has not adopted the framework for improving passport operations proposed by this study, nor has it developed an alternative strategy for prioritizing and synchronizing its varied efforts to improve these operations. Conclusion The 2007 surge in passport demand exposed serious deficiencies in State's passport issuance process. Passport wait times reached record highs, leading to inconvenience and frustration for many thousands of Americans. Once it recognized the magnitude of the problem it was facing, State took extraordinary measures to reduce wait times to normal levels by October 2007. However, these actions were not part of a long-term, comprehensive strategy to improve passport operations. State estimates that demand for passports will continue to grow significantly, making such a strategy an urgent priority. Indeed, a study State commissioned to identify potential improvements to its passport operations was premised upon demand estimates for 2020 that are likely to be surpassed as early as this year. State needs to rethink its entire end-to-end passport issuance process, including each of the entities involved in issuing a passport, and develop a formal strategy for prioritizing and implementing improvements to this process. Doing so would improve State's ability to respond to customer inquiries and provide accurate information regarding expected wait times by increasing its visibility over a passport application from acceptance to issuance. It would also encourage greater accountability by providing transparency of State's passport operations to the American public. Recommendations for Executive Action In order to improve the effectiveness and efficiency of passport operations, we recommend that the Secretary of State take the following two actions: Develop a comprehensive, long-term strategy for passport operations using a business enterprise approach to prioritize and synchronize the department's planned improvements. Specifically, State should fully implement a concept of operations document that describes its desired end state for passport operations and addresses how it intends to transition from the current state to this end state. Begin tracking individual passport applications from the time the customer submits an application at an acceptance facility, in order to maintain better visibility over the passport process and provide better customer service to passport applicants. Agency Comments and Our Evaluation State provided written comments on a draft of our report, which we have reprinted in appendix II. State concurred with our recommendations; however, it expressed disappointment with our finding that the department lacks a comprehensive strategy to improve its passport operations. Although the department has developed short-term and contingency plans for increasing passport production capacity and responding to future surges in demand, we do not believe these efforts constitute a comprehensive strategic plan.
However, we believe the establishment and staffing of the Passport Services Directorate’s Strategic Planning Division is a step in the right direction, and we encourage this office to focus on the modernization efforts discussed in this report. State also disputed our characterization of the 2005 study it commissioned to review existing processes and propose recommendations for improving these processes. We did not intend to suggest that the department fully adopt all of the recommendations in that study and have clarified that point in our findings. State and Treasury also provided technical comments and updated information, which we have included throughout this report as appropriate. We are sending copies of this report to the Secretaries of State and the Treasury and will make copies available to others upon request. We will also make copies available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-4128 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology In this report, we review (1) the extent to which the Department of State (State) was prepared for the surge in passport demand in 2007 and how State’s readiness affected passport operations, (2) how State increased its passport production capacity in response to the 2007 surge, and (3) State’s readiness for near-term surges in demand and whether State has a comprehensive strategy in place to improve long-term passport operations. To determine the extent to which State was prepared for the surge in passport demand in 2007, how State’s readiness affected passport operations, and how State increased its passport production in response to the surge, we observed passport operations and interviewed U.S. government officials at six passport agencies—Hot Springs, Arkansas; Charleston, South Carolina; Houston; New Orleans; New York; and Washington. We selected these sites based on their workload volume and geographic locations. We visited State’s lockbox facility in New Castle, Delaware, and interviewed officials from the financial agent responsible for providing lockbox functions. We reviewed State’s passport demand estimates for fiscal year 2007 and analyzed the survey methodology supporting these estimates. We also collected and analyzed data on passport receipts and issuances, and staffing. In addition, we interviewed officials from State’s Bureau of Consular Affairs, the Department of Homeland Security, the Department of the Treasury’s Financial Management Service, and State contractors responsible for collecting survey data on passport demand. To determine State’s passport processing times during the 2007 surge in demand, we interviewed cognizant officials, analyzed data provided by State, and reviewed public statements by State officials and information on State’s Web site. We determined that these data were sufficiently reliable to illustrate the sharp rise in processing times that occurred in the summer of 2007, and place that rise in the context of yearly and monthly trends from 2005 to 2007. However, we found that the rise in application processing time in the summer of 2007 was likely understated to some degree. 
This understatement likely occurred because the turnaround time for entering applications into State’s data system increased greatly at some points during 2007, due to the abnormally large volume of applications. To determine the reliability of data on passport issuances from 1997 through 2007, we interviewed cognizant officials and analyzed data provided by State. We determined that the data were sufficiently reliable to illustrate a relatively stable level of demand for passports between 1997 and 2003, followed by a significant increase in passport issuances since 2003. To determine whether State is prepared to more accurately estimate future passport demand and has a comprehensive strategy in place to address such demand, we assessed State’s passport demand study for fiscal year 2008 and beyond, a draft report on lessons learned from the 2007 surge in passport demand, and State’s long-term road map for the future of passport operations. We also reviewed prior GAO reports on enterprise architecture and business systems management. In addition, we interviewed Bureau of Consular Affairs officials in Washington and at the regional passport agencies. We conducted this performance audit from August 2007 through July 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of State GAO Comments 1. State said that the 2005 study it commissioned to review existing passport processes and propose recommendations for improving these processes did not identify or mention deficiencies. We disagree. The version of the study provided to us notes that it includes an “assessment of the deficiencies of the current state” (p. 3) and identifies issues that “limit the efficiency and effectiveness of passport operations” (p. 4). These deficiencies included the reliance on a manual, paper-based work flow, ineffective communications, and inflexible passport systems. 2. We did not intend to suggest that the department should have adopted all of the 2005 study’s recommendations and have made slight modifications to our finding to clarify this point. Our intent was to note that the department has developed a variety of recommendations to improve its passport operations—many of which were developed by staff in Consular Affairs—but still needs a comprehensive strategy to prioritize and synchronize the improvements it intends to undertake. 3. While our report recognizes that State has developed several plans designed to increase passport production capacity, improving the department’s ability to respond to near-term increases in demand, these plans are not the same as a comprehensive strategy for improving passport operations. Our recommendation addresses State’s need for such a strategy to guide its modernization efforts, by using a business enterprise approach, and not just to increase capacity. 4. State notes that it has improved its efforts to track the 72 percent of passport applications it receives from U.S. Postal Service acceptance facilities for accountability purposes. 
From a customer service standpoint, we believe that the department should track all applications from the time of execution in order to give the customer an accurate estimate of when to expect his or her passport. Doing so would help eliminate customer confusion, which contributed to the strain on State’s customer service operation experienced during the 2007 surge. Appendix III: GAO Contacts and Staff Acknowledgments GAO Contact Acknowledgments In addition to the person named above, Michael Courts (Assistant Director), Robert Ball, Melissa Pickworth, and Neetha Rao made key contributions to this report. Technical assistance was provided by Carl Barden, Joe Carney, Martin de Alteriis, Chris Martin, and Mary Moutsos.
In 2007, following the implementation of new document requirements for travelers entering the United States from within the Western Hemisphere, the Department of State (State) received a record number of passport applications. In June 2009 further document requirements are scheduled to go into effect and will likely lead to another surge in passport demand. GAO examined (1) the extent to which State was prepared for the surge in passport demand and how its readiness affected passport operations, (2) State's actions to increase passport production capacity in response to the surge, and (3) State's readiness for near-term surges in demand and its strategy to improve passport operations. GAO interviewed officials from State and the Departments of the Treasury and Homeland Security, conducted site visits, and reviewed data on passport processing times and reports on passport operations. State was unprepared for the record number of passport applications it received in 2007, leading to significant delays in passport processing. State underestimated the increase in demand and consequently was not able to provide enough notice to the financial agent it uses for passport application payment processing for the agent to prepare for the increased workload, further adding to delays. As a result, reported wait times reached 10 to 12 weeks in the summer of 2007--more than double the normal wait--with hundreds of thousands of passports taking significantly longer. State had difficulty tracking individual applications and failed to effectively measure or communicate to applicants the total expected wait times, prompting many to re-apply and further straining State's processing capacity. State took a number of emergency measures and accelerated other planned efforts to increase its passport production capacity in 2007. For example, to help adjudicate passports, State established four adjudication task forces and deployed passport specialists to U.S. passport agencies severely affected by the surge. In addition, State accelerated hiring and expansion efforts. As a result of these efforts and the normal seasonal decline in passport applications, wait times returned to normal by October 2007. According to State estimates, these emergency measures cost $42.8 million. Although State has taken steps to improve its ability to respond to near-term surges in passport demand, it lacks a comprehensive strategy to improve long-term passport operations. State previously identified several deficiencies limiting the efficiency and effectiveness of passport operations, such as reliance on a paper-based work flow and ineffective communications, and these deficiencies were exposed by State's response to the surge. While State also identified a framework to guide its modernization efforts, it does not have a comprehensive plan to prioritize and synchronize improvements to its passport operations. A comprehensive strategy for making these improvements--for example, using a business enterprise approach--would better equip State to handle a significantly higher workload in the future.
GAO_GAO-10-703T
Background The mission of the FBI section that operates the National Instant Criminal Background Check System (NICS Section) is to ensure national security and public safety by providing the accurate and timely determination of a person’s eligibility to possess firearms and explosives in accordance with federal law. Under the Brady Handgun Violence Prevention Act and implementing regulations, the FBI and designated state and local criminal justice agencies use NICS to conduct checks on individuals before federal firearms licensees (gun dealers) may transfer any firearm to an unlicensed individual. Also, pursuant to the Safe Explosives Act, in general, any person seeking to (1) engage in the business of importing, manufacturing, or dealing in explosive materials or (2) transport, ship, cause to be transported, or receive explosive materials must obtain a federal license or permit, respectively, issued by the Bureau of Alcohol, Tobacco, Firearms and Explosives (ATF). To assist ATF, in February 2003, the FBI began conducting NICS background checks on individuals seeking to obtain a federal explosives license or permit. Persons prohibited by federal law from possessing firearms or explosives include convicted felons, fugitives, unlawful controlled-substance users and persons addicted to a controlled substance, and aliens (any individual not a citizen or national of the United States) who are illegally or unlawfully in the United States, among others. One of the databases that NICS searches is the FBI’s National Crime Information Center (NCIC) database, which contains criminal justice information (e.g., names of persons who have outstanding warrants) and also includes applicable records from the Terrorist Screening Center’s (TSC) consolidated terrorist screening database. In general, individuals who are reasonably suspected of having possible links to terrorism—in addition to individuals with known links—are to be nominated for inclusion on the consolidated terrorist watchlist by the FBI and other members of the intelligence community. One of the stated policy objectives of the government’s consolidated watchlist is the coordinated collection of information for use in investigations and threat analyses. Terrorist watchlist records in the NCIC database are maintained in the Known or Suspected Terrorist File (formerly the Violent Gang and Terrorist Organization File), which was designed to provide law enforcement personnel with the means to exchange information on known or suspected terrorists. About 90 Percent of NICS Transactions Involving Individuals on the Terrorist Watchlist Have Been Allowed to Proceed Because There Was No Legal Basis Identified to Deny the Transactions In May 2009, we reported that from February 2004 through February 2009, a total of 963 NICS background checks resulted in valid matches with individuals on the terrorist watchlist. Of these transactions, approximately 90 percent (865 of 963) were allowed to proceed because the checks revealed no prohibiting information, such as felony convictions, illegal immigrant status, or other disqualifying factors. Two of the 865 transactions that were allowed to proceed involved explosives background checks. The FBI does not know how often a firearm was actually transferred or if a firearm or explosives license or permit was granted, because gun dealers and explosives dealers are required to maintain but not report this information to the NICS Section. 
About 10 percent (98 of 963) of the transactions were denied based on the existence of prohibiting information. No transactions involving explosives background checks were denied. For today’s hearing, we obtained updated statistics from the FBI through February 2010. Specifically, from March 2009 through February 2010, FBI data show that 272 NICS background checks resulted in valid matches with individuals on the terrorist watchlist. One of the 272 transactions involved an explosives background check, which was allowed to proceed because the check revealed no disqualifying factors under the Safe Explosives Act. According to FBI officials, several of the 272 background checks resulted in matches to watchlist records that—in addition to being in the FBI’s Known or Suspected Terrorist File—were on the Transportation Security Administration’s “No Fly” list. In general, persons on the No Fly list are deemed to be a threat to civil aviation or national security and therefore should be precluded from boarding an aircraft. According to FBI officials, all of these transactions were allowed to proceed because the background checks revealed no prohibiting information under current law. In total, individuals on the terrorist watchlist have been involved in firearm and explosives background checks 1,228 times since NICS started conducting these checks in February 2004, of which 1,119 (about 91 percent) of the transactions were allowed to proceed while 109 were denied, as shown in table 1. According to the FBI, the 1,228 NICS transactions with valid matches against the terrorist watchlist involved about 650 unique individuals, of which about 450 were involved in multiple transactions and 6 were involved in 10 or more transactions. Based on our previous work, the NICS Section started to catalog the reasons why NICS transactions involving individuals on the terrorist watchlist were denied. According to the NICS Section, from April 2009 through February 2010, the reasons for denials included felony conviction, illegal alien status, under indictment, fugitive from justice, and mental defective. In October 2007, we reported that screening agencies generally do not check against all records in TSC’s consolidated terrorist watchlist because screening against certain records (1) may not be needed to support the respective agency’s mission, (2) may not be possible due to the requirements of computer programs used to check individuals against watchlist records, or (3) may not be operationally feasible. Rather, each day, TSC exports applicable records from the consolidated watchlist to federal government databases that agencies use to screen individuals for mission-related concerns. We raised questions about the extent to which not screening against TSC’s entire consolidated watchlist during NICS background checks posed a security vulnerability. According to TSC officials, not all records in the consolidated watchlist are used during NICS background checks. The officials explained that in order for terrorist information to be exported to NCIC’s Known or Suspected Terrorist File, the biographic information associated with a record must contain sufficient identifying data so that a person being screened can be matched to or disassociated from an individual on the watchlist. The officials noted that since not all records in TSC’s consolidated watchlist contain this level of biographic information required for this type of screening, not all records from the watchlist can be used for NICS background checks. 
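As a rough illustration of the criterion just described (a watchlist record must carry enough biographic data for a screened person to be matched to, or ruled out against, that record), the sketch below shows one way an export-eligibility check might look. The specific fields and the minimum-data rule are assumptions made for illustration; they do not describe TSC's actual standard.

```python
from typing import Dict, List

# Hypothetical biographic fields; the real record layout is not described in this report.
REQUIRED_FIELD = "full_name"
DISCRIMINATING_FIELDS = ["date_of_birth", "passport_number", "alien_number"]

def eligible_for_export(record: Dict[str, str]) -> bool:
    """Return True if a watchlist record has enough identifying data,
    under this sketch's assumed rule (a name plus at least one other
    discriminating identifier), to support matching a screened person
    to, or clearing that person against, the record."""
    if not record.get(REQUIRED_FIELD):
        return False
    return any(record.get(f) for f in DISCRIMINATING_FIELDS)

def export_subset(records: List[Dict[str, str]]) -> List[Dict[str, str]]:
    """Filter a batch of records down to those eligible for export."""
    return [r for r in records if eligible_for_export(r)]

# Example: only the first record would be exported under the assumed rule.
sample = [
    {"full_name": "DOE, JOHN", "date_of_birth": "1970-01-01"},
    {"full_name": "DOE, JANE"},       # name only: not enough to disambiguate
    {"date_of_birth": "1980-05-05"},  # no name at all
]
print(len(export_subset(sample)), "of", len(sample), "records eligible for export")
```

Under a rule of this kind, records lacking basic biographic data simply cannot support a reliable match-or-clear decision during a NICS check.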
According to TSC officials, the majority of records that do not contain sufficient identifying data are related to foreign nationals who would not be prospective purchasers of firearms or explosives within the United States and therefore would not be subject to NICS checks. We are continuing to review this issue as part of our ongoing review of the terrorist watchlist. FBI Has Taken Actions to Use Information from NICS Checks to Support Counterterrorism Efforts The FBI has taken additional actions to use information obtained from NICS background checks to support investigations and other counterterrorism activities. These actions include providing guidance to FBI case agents on how to obtain information related to NICS checks and efforts to analyze and share information on individuals matched to the terrorist watchlist. FBI Has Provided Guidance to Case Agents The FBI has provided guidance to its case agents on how to obtain information on individuals matched to the terrorist watchlist during NICS background checks. According to FBI Counterterrorism Division officials, TSC notifies the division when a NICS background check is matched to an individual on the terrorist watchlist. After verifying the accuracy of the match, the Counterterrorism Division will advise the FBI case agent that the individual attempted to purchase a firearm or obtain a firearm or explosives license or permit. The division will also provide the agent with contact information for the NICS Section and advise the agent to contact the section to answer additional questions. According to Counterterrorism Division officials, the case agent is also advised to contact ATF to obtain a copy of the form the individual used to initiate the transaction. For verified matches, NICS Section personnel are to determine if FBI case agents have information that may disqualify the individual from possessing a firearm or explosives—such as information that has been recently acquired but not yet available in the automated databases searched by NICS. To assist the division in searching for prohibiting information, NICS Section personnel are to share all available information that is captured in the NICS database with the case agent—name, date of birth, place of birth, height, weight, sex, race, country of citizenship, alien or admission number, type of firearm involved in the check (handgun, long gun, or other), and any exceptions to disqualifying factors claimed by an alien. According to FBI officials, these procedures have been successful in enabling the NICS Section to deny several gun transactions involving individuals on the terrorist watchlist based on disqualifying factors under current law. The FBI did not maintain specific data on the number of such denials. In response to a recommendation made in our January 2005 report, FBI headquarters provided guidance to its field offices in April 2005 on the types of additional information available to a field office and the process for obtaining that information if a known or suspected terrorist attempts to obtain a firearm from a gun dealer or a firearm or explosives license or permit. 
Regarding gun purchases, the guidance notes that if requested by an FBI field office, NICS personnel have been instructed to contact the gun dealer to obtain additional information about the prospective purchaser—such as the purchaser's residence address and the government-issued photo identification used by the purchaser (e.g., driver's license)—and the transaction, including the make, model, and serial number of any firearm purchased. According to the guidance, gun dealers are not legally obligated under either NICS or ATF regulations to provide this additional information to NICS personnel. If the gun dealer refuses, the guidance notes that FBI field offices are encouraged to coordinate with ATF to obtain this information. ATF can obtain a copy of the form individuals must fill out to purchase firearms (ATF Form 4473), which contains additional information that may be useful to FBI counterterrorism officials. Regarding firearm or explosives permits, the FBI's April 2005 guidance also addresses state permits that are approved by ATF as alternative permits that can be used to purchase firearms. Specifically, if requested by an FBI field office, NICS personnel have been instructed to contact the gun dealer to obtain all information from the permit application. Further, the guidance notes that the use and dissemination of state permit information is governed by state law, and that the FBI has advised state and local agencies that also issue firearm or explosives permits to share all information with FBI field personnel to the fullest extent allowable under state law. According to the guidance, any information that FBI field offices obtain related to NICS background checks can be shared with other law enforcement, counterterrorism, or counterintelligence agencies, including members of an FBI Joint Terrorism Task Force who are from other federal or state law enforcement agencies. In general, under current regulations, all personal identifying information in the NICS database related to firearms transfers that are allowed to proceed (e.g., name and date of birth) is to be destroyed within 24 hours after the FBI advises the gun dealer that the transfer may proceed. Nonidentifying information related to each background check that is allowed to proceed (e.g., NICS transaction number, date of the transaction, and gun dealer identification number) is retained for up to 90 days. By retaining this information, the NICS Section can notify ATF when new information reveals that an individual who was approved to purchase a firearm should have been denied. ATF can then initiate any firearm retrievals that may be necessary. According to NICS Section officials, the section has made no firearm-retrieval referrals to ATF related to transactions involving individuals on the terrorist watchlist to date. Under provisions in NICS regulations, personal identifying information and other details related to denied transactions are retained indefinitely. The 24-hour destruction requirement does not apply to permit checks. Rather, information related to these checks is retained in the NICS database for up to 90 days after the background check is initiated. FBI Is Analyzing and Sharing Information from NICS Checks The FBI is analyzing and sharing information on individuals matched to the terrorist watchlist to support investigations and other counterterrorism activities.
In our May 2009 report, we noted that the FBI is utilizing a TSC database to capture information on individuals who attempted to purchase a firearm and were a match to the watchlist. Specifically, the FBI began analyzing each separate instance to develop intelligence and support ongoing counterterrorism investigations. Further, we reported that in October 2008, the FBI's Counterterrorism Division conducted—for the first time—a proactive analysis of the information related to NICS background checks involving individuals on the terrorist watchlist that is captured in the TSC database. This analysis was conducted to identify individuals who could potentially impact presidential inauguration activities. Based on the value derived from conducting this analysis, the Counterterrorism Division decided to conduct similar analyses and produce quarterly reports that summarize these analytical activities beginning in May 2009. In updating our work, we found that the FBI's Counterterrorism Division is now issuing these analytic reports on a monthly basis. According to division officials, the reports contain an analysis of all NICS background checks during the month that involve individuals on the terrorist watchlist. The officials noted that the individuals discussed in the reports range from those who are somewhat of a concern to those who represent a significant threat. The reports are classified and distributed internally to various components within the FBI, including all FBI field offices and Joint Terrorism Task Forces. The officials stated that these reports have played a key role in a number of FBI counterterrorism investigations. According to Counterterrorism Division officials, the names of individuals discussed in the reports are shared with other members of the intelligence community for situational awareness and follow-on analytical activity. TSC also generates reports that cover all instances of screening agencies coming in contact with an individual on the terrorist watchlist, including those related to NICS transactions. TSC provides the reports to numerous entities, including FBI components, other federal agencies, and state and local information fusion centers. These reports are distributed via the FBI's Law Enforcement Online system. At the time of our updated review, TSC was exploring the possibility of electronically communicating this information to the intelligence community as well. According to officials from the FBI's Counterterrorism Division, for investigative purposes, FBI and other counterterrorism officials are generally allowed to collect, retain, and share information on individuals on the watchlist who attempt to purchase firearms or explosives. If the Attorney General Is Given Statutory Authority to Deny Transactions, Guidelines Would Help to Ensure Accountability and Civil Liberties Protections In our May 2009 report, we noted that the Department of Justice (DOJ) provided legislative language to Congress in April 2007 that would have given the Attorney General discretionary authority to deny the transfer of firearms or the issuance of a firearm or explosives license or permit under certain conditions. Specifically, such transactions could be denied when a background check on an individual reveals that the person is a known or suspected terrorist and the Attorney General reasonably believes that the person may use the firearm or explosives in connection with terrorism.
The legislative language also provided due process safeguards that would afford an affected person an opportunity to challenge an Attorney General denial. At the time of our 2009 report, neither DOJ’s proposed legislative language nor then pending related legislation included provisions for the development of guidelines further delineating the circumstances under which the Attorney General could exercise this authority. We suggested that Congress consider including a provision in any relevant legislation to require that the Attorney General establish such guidelines, and this provision was included in a subsequent legislative proposal. Such a provision would help DOJ and its component agencies provide accountability and a basis for monitoring to ensure that the intended goals for, and expected results of, the background checks are being achieved. Guidelines would also help to ensure compliance with Homeland Security Presidential Directive 11, which requires that terrorist-related screening— including use of the terrorist watchlist—be done in a manner that safeguards legal rights, including freedoms, civil liberties, and information privacy guaranteed by federal law. Furthermore, establishing such guidelines would be consistent with the development of standards, criteria, and examples governing nominations to, and the use of, the watchlist for other screening purposes. Because individuals are nominated to the terrorist watchlist based on a “reasonable suspicion” standard, the government generally has not used their inclusion on the watchlist to automatically deny certain actions, such as automatically prohibiting an individual from entering the United States or boarding an aircraft. Rather, when an agency identifies an individual on the terrorist watchlist, agency officials are to assess the threat the person poses to determine what action to take, if any, in accordance with applicable laws or other guidelines. For example, the Immigration and Nationality Act, as amended, establishes conditions under which an alien may be deemed inadmissible to the United States. Also, the former White House Homeland Security Council established criteria for determining which individuals on the terrorist watchlist are deemed to be a threat to civil aviation or national security and, therefore, should be precluded from boarding an aircraft. Subsequent to the December 25, 2009, attempted terrorist attack, the President tasked the FBI and TSC to work with other relevant departments and agencies—including the Department of Homeland Security, the Department of State, and the Central Intelligence Agency—to develop recommendations on whether adjustments are needed to the watchlisting nominations guidance, including the No-Fly criteria. These efforts are ongoing. At the time of our May 2009 report, DOJ was noncommittal on whether it would develop guidelines if legislation providing the Attorney General with discretionary authority to deny firearms or explosives transactions involving individuals on the terrorist watchlist was enacted. Subsequent to that report, Senator Lautenberg introduced S. 1317 that, among other things, would require DOJ to develop such guidelines. We continue to maintain that guidelines should be a part of any statutory or administrative initiative governing the use of the terrorist watchlist for firearms or explosives transactions. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions that you or other Members of the Committee may have. 
Contacts and Acknowledgments For additional information on this statement, please contact Eileen Larence at (202) 512-6510 or [email protected]. In addition, Eric Erdman, Assistant Director; Jeffrey DeMarco; and Geoffrey Hamilton made key contributions to this statement. Contact points for our offices of Congressional Relations and Public Affairs may be found on the last page of this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Membership in a terrorist organization does not prohibit a person from possessing firearms or explosives under current federal law. However, for homeland security and other purposes, the FBI is notified when a firearm or explosives background check involves an individual on the terrorist watchlist. This statement addresses (1) how many checks have resulted in matches with the terrorist watchlist, (2) how the FBI uses information from these checks for counterterrorism purposes, and (3) pending legislation that would give the Attorney General authority to deny certain checks. GAO's testimony is based on products issued in January 2005 and May 2009 and selected updates in March and April 2010. For these updates, GAO reviewed policies and other documentation and interviewed officials at FBI components involved with terrorism-related background checks. From February 2004 through February 2010, FBI data show that individuals on the terrorist watchlist were involved in firearm or explosives background checks 1,225 times; 1,116 (about 91 percent) of these transactions were allowed to proceed because no prohibiting information was found--such as felony convictions, illegal immigrant status, or other disqualifying factors--and 109 of the transactions were denied. In response to a recommendation in GAO's January 2005 report, the FBI began processing all background checks involving the terrorist watchlist in July 2005--including those generated via state operations--to ensure consistency in handling and ensure that relevant FBI components and field agents are contacted during the resolution of the checks so they can search for prohibiting information. Based on another recommendation in GAO's 2005 report, the FBI has taken actions to collect and analyze information from these background checks for counterterrorism purposes. For example, in April 2005, the FBI issued guidance to its field offices on the availability and use of information collected as a result of firearm and explosives background checks involving the terrorist watchlist. The guidance discusses the process for FBI field offices to work with FBI personnel who conduct the checks and the Bureau of Alcohol, Tobacco, Firearms and Explosives to obtain information about the checks, such as the purchaser's residence address and the make, model, and serial number of any firearm purchased. The guidance states that any information that FBI field offices obtain related to these background checks can be shared with other counterterrorism and law enforcement agencies. The FBI is also preparing monthly reports on these checks that are disseminated throughout the FBI to support counterterrorism efforts. In April 2007, the Department of Justice proposed legislative language to Congress that would provide the Attorney General with discretionary authority to deny the transfer of firearms or explosives to known or suspected "dangerous terrorists." At the time of GAO's May 2009 report, neither the department's proposed legislative language nor related proposed legislation included provisions for the development of guidelines further delineating the circumstances under which the Attorney General could exercise this authority. GAO suggested that Congress consider including a provision in any relevant legislation that would require the Attorney General to establish such guidelines; and this provision was included in a subsequent legislative proposal. 
If Congress gives the Attorney General authority to deny firearms or explosives based on terrorist watchlist concerns, guidelines for making such denials would help to provide accountability for ensuring that the expected results of the background checks are being achieved. Guidelines would also help ensure that the watchlist is used in a manner that safeguards legal rights, including freedoms, civil liberties, and information privacy guaranteed by federal law and that its use is consistent with other screening processes. For example, criteria have been developed for determining when an individual should be denied the boarding of an aircraft.
Background Role of the UN Secretariat and Its Professional Staff The UN Secretariat administers the programs and policies established by the other principal entities of the UN, including the General Assembly and the Security Council. The duties of the Secretariat include administering peacekeeping operations, mediating international disputes, surveying economic and social trends and problems, and preparing studies on human rights and sustainable development. The Secretariat is headquartered in New York City and has employees at other locations throughout the world. According to the UN, as of June 30, 2013, the Secretariat employed 41,273 people, of which about 30 percent (12,220) were professional staff. Elements of the UN Total Compensation Package The total compensation package for UN employees consists of salary, benefits, and allowances. UN salaries consist of a base salary and post adjustment added to account for the cost of living at individual posts. The UN administers a staff assessment, which is used to contribute to a tax equalization fund, and which does not affect take-home pay. This fund is used to compensate UN employees from countries that require them to pay taxes on their UN income. UN benefits consist of items including retirement, health insurance, and retiree health insurance. In addition, about 20 allowances are available to UN professional Secretariat staff, including danger pay, hardship pay, mobility allowance, education grants, and others. Receipt of these allowances is based on whether a UN staff member meets the eligibility criteria established by the UN for each allowance. Our 2013 report on UN compensation provides more detailed information on benefits and allowances available to UN professional staff. Administration of UN Compensation Elements Separate bodies within the UN administer the different elements of total compensation or provide information to the General Assembly about staff demographics and trends in compensation. International Civil Service Commission (ICSC). The General Assembly established the ICSC in 1974 as an independent expert body with a mandate to regulate and coordinate the conditions of service of staff in the UN common system. As a part of its mandate, the ICSC makes recommendations or determinations for certain compensation elements for employees within the UN common system, such as salary and allowances. On some matters, the ICSC can act independently, while on others its decisions are subject to General Assembly approval. For example, the ICSC can determine changes to the post adjustment, or cost-of-living adjustment portion of UN salaries, without prior approval from the General Assembly. In addition, the ICSC can set some allowances, such as the hardship allowance for employees serving at posts where living conditions are considered difficult, without General Assembly approval. However, proposed ICSC changes to UN base salary must be approved by the General Assembly. The ICSC is also responsible for annually reviewing the percentage difference between UN professional salaries and those of U.S. civil service employees in a process called the margin calculation. The UN’s process uses net salary, rather than total compensation, to compare UN and U.S. staff salaries because the General Assembly has directed the ICSC to conduct the margin calculation only using salaries. Joint Staff Pension Fund Board. 
The UN Joint Staff Pension Fund provides retirement and related benefits to UN staff and is administered by the Joint Staff Pension Fund Board in accordance with regulations adopted by the General Assembly. The Joint Staff Pension Fund is a defined benefit fund that provides benefits to more than 67,000 retirees and beneficiaries. As of 2013, approximately 121,000 active participants from 23 organizations in the UN system are accumulating pension benefits under the Joint Staff Pension Fund. Health and Life Insurance Section. This office is part of the UN Office of Program Planning, Budget, and Accounts, and administers health plans, after-service health insurance, and life insurance for UN Secretariat staff. United Nations System Chief Executives Board for Coordination (CEB). The CEB provides coordination and strategic guidance, and prepares reports on staff demographics and other compensation issues for use by UN organizations. The Human Resources Network of the CEB brings together directors of human resources departments from more than 30 UN and related organizations at biannual meetings. While the UN and the U.S. Government Offer Generally Similar Benefits and Allowances, Each Provides Certain Benefits and Allowances with Higher Monetary Values than the Other Benefits and allowances offered by the UN and the U.S. government are generally similar, and each provides certain benefits or allowances with greater monetary value than the other. Similar UN and U.S. benefits include retirement plans and health insurance for staff and retirees, while similar allowances include hardship pay and education grants. Where comparable data were available, our comparisons of UN and U.S. government benefits and allowances show that each entity provided some benefits or allowances of greater monetary value than those provided by the other. We did not compare some benefits and allowances, such as danger pay and retiree health benefits, because of the limited availability of comparable data. Benefits and Allowances Offered to UN Staff Are Generally Similar to Those Offered to U.S. Employees, with Some Differences in Structure The UN and the U.S. government offer generally similar benefits, with some minor differences. For example, both the UN and the U.S. government offer health, dental, and retiree health insurance to their staff. Table 1 compares the benefits available to eligible UN professional staff and U.S. civil service employees. While similar in purpose, some of the benefits offered by the UN and the U.S. government differ in design and structure. For instance, while both the UN and the U.S. government offer retirement benefits, the UN offers its professional staff a pension plan that provides retirees a defined benefit, based on factors including an employee’s age, years of participation in the plan, and salary history. The UN pension plan provides benefits similar to the Civil Service Retirement System (CSRS), a retirement plan providing U.S. federal civilian retirees with a defined benefit. CSRS was created in 1920 and was the only retirement plan for most federal civilian employees until 1984. However, most current U.S. 
civil service employees, including all those hired after 1984, are covered by the Federal Employees Retirement System (FERS). FERS is a three-part retirement plan consisting of (1) a basic defined benefit pension, (2) Social Security benefits, and (3) the Thrift Savings Plan (TSP)—a retirement savings and investment plan in which employee contributions are matched by the employer up to a certain percentage of salary. Compared with participants in the UN Pension Fund or CSRS, U.S. civil service employees participating in the FERS plan receive a smaller defined benefit pension, but receive Social Security and earnings from accumulated TSP investments upon retirement. Further information on eligibility requirements and key plan provisions of the UN Pension Fund and FERS can be found in appendix II. Many allowances offered by the UN to its professional staff are similar to those offered by the U.S. government, but both entities also offer allowances unique to their own employees. Figure 1 compares similar allowances provided by both the UN and the U.S. government to eligible staff, as well as allowances offered by one but not the other. As shown in figure 1, both the UN and the U.S. government, for example, provide allowances to compensate staff for serving in a dangerous or hardship duty station and to account for the cost of living at particular duty stations. However, the UN and the U.S. government each offer some allowances to staff that are not offered by the other. For example, eligible UN professional staff may receive a dependents allowance, to compensate for having a child or other qualifying dependent; or a mobility allowance, to provide incentive for staff to change duty stations. Similar allowances are not part of the compensation package for U.S. Foreign Service employees and civil service employees serving overseas. In addition, U.S. government employees can be eligible for student loan repayments, at the discretion of each agency, if the employee signs a service agreement to remain with the agency for at least 3 years. Appendix III provides more detailed information on UN and U.S. allowances. The UN and the U.S. Government Each Provided Certain Benefits and Allowances of Greater Monetary Value than Those Provided by the Other Using available data, we compared specific benefits and allowances provided by the UN and the U.S. government, and found that each entity provided certain benefits and allowances of greater monetary value than those provided by the other. However, the lack of available or comparable data prevented us from making a monetary comparison of all UN and U.S. benefits and allowances. UN and U.S. Benefits and Allowances That Were Comparable We were able to compare certain UN and U.S. compensation elements, including retirement benefits, health insurance, and allowances such as hardship pay. Table 2 summarizes the benefits and allowances that we compared and the results of our analysis. To compare the UN and U.S. civil service retirement systems, we estimated income replacement rates for UN employees and U.S. civil service employees covered by FERS under different scenarios, including differing work histories and retirement contribution rates. Income replacement rates provide a method of comparing retirement programs by describing how much of a worker's preretirement salary is replaced in the first year of retirement by a retirement plan.
For example, an employee with a preretirement salary of $100,000 who received $60,000 in the first year of retirement from his or her retirement plan has an income replacement rate of 60 percent. Our scenarios had varying results. One scenario showed that income replacement rates under FERS were higher than for the UN Pension Fund, given the economic conditions of the time periods we analyzed. In another scenario, the income replacement rates for the two systems were similar. Both scenarios are summarized in table 3. One of our scenarios compares UN staff and U.S. civil service employees covered by FERS who retire at age 62, contribute an equivalent percentage of salary to their retirement, and have 30 years of service. Our estimates show that the FERS retirement package replaces between 63 and 69 percent of salary, while the UN Pension Fund replaces between 63 and 68 percent of salary. We also compared UN staff and U.S. employees with 20 years of service who retire at age 62 and contribute an equivalent percentage of salary to their retirement. Our estimates show that FERS replaces between 48 and 55 percent of salary, and the UN Pension Fund replaces between 40 and 44 percent of salary. While our analysis of income replacement rates showed varying results, several factors affected our estimates. First, to achieve higher income replacement rates than UN staff, U.S. employees have to make voluntary contributions to their retirement plan and accept a higher degree of risk in their retirement income because of their TSP investments. TSP investment risks include an individual's ability and willingness to defer income and to make appropriate asset allocation choices, as well as market risk on returns. Because the amount that U.S. civil service employees contribute to their TSPs affects the income replacement rate, we also analyzed scenarios for U.S. employees who make different contributions to their TSPs. For example, we analyzed a scenario in which U.S. civil service employees have 30 years of government service, retire at age 62, and contribute the average contribution rate among FERS employees who made elective contributions to the TSP in 2012; we found that FERS replaced between 77 and 82 percent of their salaries. In contrast, if employees make no elective contribution to the TSP throughout their careers, our estimates show income replacement rates from 57 to 64 percent. Second, UN staff generally earn higher salaries than U.S. civil service employees in comparable jobs. Therefore, UN pensions may not replace as great a percentage of preretirement salary as the FERS retirement plan, but in some cases may have the same or greater estimated monetary value. Last, the income replacement rates we obtained for FERS employees reflect the economic conditions over the 30-year period from 1983 to 2012, with relatively high wage growth and asset fund returns in the early years of this period. Scenarios using different economic conditions would have obtained different income replacement rates. We compared the costs of health care plans for UN staff based in New York City, New York, and U.S. civil service employees, and found that the average organizational cost per employee in 2012 was 5 percent higher for U.S. employees than for UN staff. The average health care subsidy for UN professional staff in New York who participated in a plan in 2012 was $6,228, according to UN payroll data.
Approximately 98 percent of UN professional staff in New York received a health care subsidy, and when we included both participants and nonparticipants, we found that the overall average cost per UN staff member in New York was approximately $6,100. According to OPM, the average agency cost of providing health care plans to U.S. employees who participated in federal employee health benefit plans in 2012 was $8,022, excluding employees of the U.S. Postal Service. When we accounted for the 80 percent participation rate in federal employee health benefit plans, we found that the average agency cost per employee, excluding U.S. Postal Service employees, was approximately $6,417, including both participants and nonparticipants. Comparing allowances, we found that the UN and the U.S. government both provide certain allowances with higher monetary value than those provided by the other. Using available data, we were able to compare three allowances with similar purposes. Table 4 provides a summary of our analysis of these allowances. As shown in table 4, UN staff received higher average dollar amounts for the additional hardship allowance for nonfamily duty stations than State and DOD staff received for the separate maintenance allowance. UN staff also received a higher average dollar amount for the nonremoval allowance than DOD staff did for the foreign transfer allowance. Finally, State staff received higher average dollar amounts for hardship differentials than UN staff did for hardship pay, but UN dollar amounts were higher than those paid to DOD staff. We also compared other allowances that are unique to either the UN or the U.S. government. Table 5 shows the allowances, total number of recipients, total spent by the UN or U.S. government on each allowance in 2012, and average monetary value per recipient. As shown in table 5, the UN provides allowances to its staff that are not provided by the U.S. government to Foreign Service employees and civil service employees serving overseas. For example, the UN provides a mobility allowance as an incentive for its staff to move among duty stations, including moving to more difficult duty stations. UN staff who received this allowance received an average amount of $8,036 in 2012. As shown in table 5, the U.S. government also provides some eligible Foreign Service employees and civil service employees serving overseas, in addition to employees serving domestically, with a student loan repayment incentive, but the UN does not provide a similar allowance to its professional staff. According to State officials, the agency spent approximately $12 million in 2012 on 1,337 employees serving both domestically and abroad for student loan repayments. According to DOD, the agency spent approximately $20.9 million in 2012 on 3,306 employees for student loan repayments. Other Benefits and Allowances That Could Not Be Compared The lack of available and, in some cases, comparable data prevented us from comparing certain other UN and U.S. benefits and allowances, including the costs to fund retirement benefits and retiree health insurance, certain allowances, and leave benefits. Costs to fund retirement benefits and retiree health insurance. While we compared income replacement rates for the UN Pension Fund and FERS, we were unable to estimate future costs to the UN and the U.S. government of providing retirement benefits or retiree health insurance because of the lack of comparable data.
For instance, while both the UN and OPM conduct regular studies to estimate the future costs of their respective retirement systems, these studies use differing methods and assumptions to determine future costs, including different assumptions on key factors such as investment growth rates and rates of inflation as well as different actuarial methods of assigning retirement costs across years of employee service. Because of these differences, the UN's and OPM's current estimates cannot be used to produce a valid comparison of future costs. Allowances, including danger pay and education grants. Comparable data were not available for certain allowances, including danger pay and education grants. Payroll data on these allowances are maintained at individual duty stations and are not linked to the State, UN, and DOD central payroll systems. Therefore, these allowances were not reflected in the payroll data that we collected. Leave benefits. While we previously reported that UN Secretariat staff are eligible for more generous leave benefits than those received by U.S. civil service employees, we were unable to compare the monetary value of leave used by UN staff and U.S. employees because the UN and U.S. agencies were unable to provide comparable data on leave amounts used by their employees. See appendix IV for more information on UN and U.S. civil service leave benefits. The UN Has Begun to Address Concerns about the Long-Term Sustainability of Rising Compensation Costs, but Its Review of Total Compensation Does Not Incorporate Key Elements The UN has begun to take action to address concerns about the long-term sustainability of its rising total compensation costs, but its ongoing effort to review total compensation does not incorporate the costs of key elements, such as pensions and health insurance. Staff-related expenditures rose steadily from $1.95 billion in 2002-2003 to $2.98 billion in 2010-2011, the most recent period for which data were available, at an average rate of about 7 percent per 2-year budget, when adjusted for inflation. Concerns about the level of total compensation costs and long-term sustainability have been raised by the Secretary-General, General Assembly, member states, and other UN organizations. In response, the General Assembly, the ICSC, the UN Joint Staff Pension Fund, and others have taken actions aimed at addressing these concerns, such as freezing current allowance amounts. Efforts to study and revise the total compensation package include the ICSC's review of total compensation and the CEB's baseline study of compensation costs. The General Assembly has called upon the ICSC to include all elements of total compensation in its review. However, we found that the ICSC review does not incorporate key elements of total compensation, such as retiree health insurance. The UN Has Begun to Take Action to Address the Long-Term Sustainability of Its Rising Total Compensation Costs The UN has recently begun taking action in response to concerns about its total compensation costs raised by the General Assembly, member nations including the United States, and various UN organizations. These concerns have focused on the long-term sustainability of UN total compensation, as well as the present and historical costs of specific elements of UN compensation.
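To make the pace of this growth concrete, the average change per 2-year budget can be computed directly from the two endpoint figures. The short sketch below is illustrative only: it uses the $1.95 billion (2002-2003) and $2.98 billion (2010-2011) figures from the text and shows the nominal rate, before the inflation adjustment that underlies the report's figure of about 7 percent.

```python
# Illustrative sketch of the average growth per biennial budget implied by the
# staff-cost figures above. The endpoint amounts come from the report; this is
# the nominal rate, not the inflation-adjusted rate cited in the text.

start, end = 1.95, 2.98   # staff-related expenditures, billions of dollars
budget_periods = 4        # biennial steps from 2002-2003 to 2010-2011

nominal_growth_per_biennium = (end / start) ** (1 / budget_periods) - 1
print(round(nominal_growth_per_biennium, 3))  # about 0.112, i.e., roughly 11 percent nominal per 2-year budget
```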
According to budget data provided by the UN Secretariat, staff-related expenditures rose steadily from $1.95 billion in 2002-2003 to $2.98 billion in 2010-2011, at an average rate of about 7 percent per 2-year budget, when adjusted for inflation. Figure 2 shows that staff-related expenditures accounted for over half of the regular budget during this period. Between 2004-2005 and 2010-2011, growth in staff-related expenditure was accompanied by faster growth in the Secretariat's regular budget, causing a decline in the share of staff-related expenditures in the regular budget over this time period. Concerns have also been raised in regard to specific elements of the UN total compensation package. For instance, the General Assembly has brought attention to the growing margin, or percentage difference, between average UN salaries and average U.S. civil service salaries. The General Assembly has stipulated that UN salaries should be between 110 and 120 percent of U.S. civil servant salaries, with a desirable midpoint of 115 percent over 5 years. We previously reported that the margin between UN and U.S. civil service salaries increased from 109.3 percent in 2002 to 116.9 percent in 2012. In 2013, the margin rose again to 119.6 percent, also raising the 5-year average above 115 percent, to 115.7 percent. Additionally, the General Assembly has raised questions regarding the long-term sustainability of other elements of UN compensation, such as retiree health insurance. In 2013, the Secretary-General reported the unfunded liability for the UN's after-service health insurance program to be almost $4 billion. The General Assembly expressed "deep concern" over these costs, and the Secretary-General noted that the UN lacks the assets to settle this liability. The Secretary-General further reported that a long-term funding strategy is needed. In 2012, a General Assembly resolution noted that the UN Joint Staff Pension Fund ran a deficit for a second consecutive biennium, and emphasized the need to ensure the long-term sustainability of the fund. The General Assembly, the ICSC, and others have taken actions to address these concerns. For instance, to address the rising costs of salaries and allowances, the General Assembly and the ICSC have taken steps such as freezing allowance amounts for at least 1 year, freezing the post adjustment for New York in 2014, raising the retirement age for new hires from 62 to 65, and conducting a review of the UN total compensation package for professional staff. UN actions taken in response to specific concerns are summarized in table 6. In addition, the UN has initiated a review of its total compensation package. Specifically, in 2012, the ICSC began a review of UN total compensation, and the General Assembly endorsed this action. The ICSC added the total compensation review to its 2013-2014 work plan and has begun collecting data related to many elements of the UN compensation package. The CEB has assisted the ICSC with this data collection effort. As a result of its review, the ICSC plans to issue several recommendations to the General Assembly on changes to the overall compensation structure. The study is scheduled for completion in 2015. The ICSC's Total Compensation Review Does Not Incorporate Key Elements of Total Compensation While the UN is undertaking efforts to examine its compensation package, we found that the ICSC's review does not incorporate all key elements of total compensation.
The General Assembly’s 2013 resolution commenting on the ICSC’s total compensation review noted that the ICSC should review all elements of total compensation holistically, including both monetary and nonmonetary elements. Further, the ICSC has reported that a holistic review of all elements of compensation is important to prevent fragmentation of the UN compensation system. However, according to ICSC officials and documents, the ICSC’s review of total compensation will not incorporate all key elements of total compensation. Instead, the ICSC review will focus on certain compensation elements, such as salary and allowances. Other key elements of compensation with significant costs, including benefits such as pensions, health insurance, and after-service health insurance, will not be incorporated into the ICSC review. According to the ICSC, the compensation review will result in the development of a compensation calculator and a series of recommendations to the General Assembly about possible changes to the UN compensation structure. The calculator will be based on a series of estimates about the current and future costs of individual elements of compensation. For example, an estimate of danger pay might multiply the total number of staff serving at duty stations eligible for that allowance by the danger pay rate of $1,600 per month as of 2013. However, according to ICSC officials, the calculator will not generate estimates for key elements of compensation, including pensions, health insurance, and retiree health insurance. ICSC officials noted that because the various elements of compensation affect one another, their study may have effects on elements of compensation not directly included in their review. For example, any changes proposed to the salary structure could affect other items that are linked to salary, such as pensions. However, ICSC officials stated that, as part of their review, they will not make specific recommendations related to compensation elements outside of their area of administrative responsibility. Within the UN system, several different entities have administrative responsibility for the various elements of total compensation. The ICSC is responsible for matters pertaining to salary and allowances, the Joint Staff Pension Fund Board administers the pension fund, and the Health and Life Insurance Section administers health and retiree health insurance plans. ICSC officials told us that their review will focus only on the elements of compensation—salary and allowances—that are within their area of responsibility. In addition, ICSC officials told us that issues related to elements outside their responsibility will be flagged for separate review by the UN entities with responsibility for their administration, such as the UN Joint Staff Pension Fund and the UN Health and Life Insurance Section. Until all aspects of UN total compensation have been reviewed by the ICSC and other relevant entities, the General Assembly and member states will not have a comprehensive set of information with which to make fully informed decisions about proposed changes to address concerns about the compensation system. The cost to the UN of some of the elements not fully incorporated in either study is significant. For example, the unfunded liability of the UN’s retiree health insurance plan was estimated in 2012 to be almost $4 billion. 
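The calculator-style estimate described above (eligible staff multiplied by the $1,600 monthly danger pay rate) amounts to a one-line calculation. The sketch below is illustrative only; the $1,600 monthly rate comes from the text, while the staff count shown is a hypothetical placeholder rather than an actual ICSC figure.

```python
# Illustrative sketch of the calculator-style danger pay estimate described
# above: eligible staff x $1,600 per month x 12 months. The staff count is a
# hypothetical placeholder.

DANGER_PAY_PER_MONTH = 1_600  # rate for internationally recruited staff, as of 2013

def annual_danger_pay_cost(eligible_staff: int, months: int = 12) -> int:
    """Estimated yearly cost of danger pay for a given number of eligible staff."""
    return eligible_staff * DANGER_PAY_PER_MONTH * months

print(annual_danger_pay_cost(500))  # hypothetical 500 eligible staff -> 9,600,000 dollars per year
```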
In addition, according to the UN Secretariat, contributions to the Joint Staff Pension Fund and health insurance plans totaled $419 million in 2010-2011, which was 8 percent of the 2-year regular budget. Conclusions We found that the UN and the U.S. government offer generally similar benefits and allowances to their employees, with some differences. Our comparisons of the monetary value of certain UN and U.S. benefits and allowances show that each offered compensation elements of higher value than the other. However, the lack of available or comparable data prevented a comparison of other UN and U.S. benefits and allowances. The Secretary-General; General Assembly; member states, including the United States; and other UN organizations have expressed concerns about the rising costs and long-term sustainability of the UN’s total compensation package. Many of these concerns relate to the organization’s retiree health insurance system and its pension fund. The retiree health insurance system, for example, has a large unfunded liability that may place the long-term viability of the program at risk. The UN has recognized these issues and begun taking actions to address the costs of its compensation package. The ICSC’s ongoing review of total compensation and the CEB’s baseline compensation cost study are important steps in understanding the current costs of the compensation package, with the ICSC study making recommendations about possible changes to the structure of the system and developing a cost calculator that could be used to estimate the impact of these possible changes. However, because various entities within the UN system have administrative responsibility over different elements of the compensation package, the ICSC review will not include key elements of compensation, particularly pensions, health insurance, and retiree health insurance. Without a holistic evaluation of its compensation package that incorporates all key elements of compensation, the General Assembly and member states will not be able to make fully informed decisions about proposed changes to the compensation system. Recommendation for Executive Action To assist member states in their oversight of the budgetary implications and financial sustainability of UN total compensation, the Secretary of State should work with other member states to ensure that the costs of key elements of total compensation are reviewed to address rising staff costs and sustainability. Agency Comments and Our Evaluation We provided a draft of this report for comment to the Secretary of State, the Secretary of Defense, the Director of OPM, the Executive Director of the Federal Retirement Thrift Investment Board (FRTIB), and the UN. State, OPM, and the UN provided us with technical comments, which we incorporated into the report as appropriate. State also provided written comments, which we reprinted in appendix V. State agreed with our recommendation and stated that it generally accepts and endorses our findings. State noted that our report reveals that several elements of compensation, including pension benefits and after-service health benefits, are not included in the ICSC’s compensation review. State further commented that the ICSC review faces inherent challenges, including complexities associated with the Noblemaire Principle, which requires that compensation for UN professional staff be set in comparison to the highest compensated national service, which the UN has considered to be the U.S. federal civil service. 
State commented on ambiguities with the Noblemaire Principle, including ambiguities over the comparison group and which elements of compensation should be included in the comparison. We agree that there are ambiguities associated with the Noblemaire Principle and therefore we did not use it as the basis for our comparison of UN and U.S. government benefits and allowances. As we discuss in the report, the UN General Assembly has directed that only salaries be used as part of the margin calculation, rather than total compensation, which would include benefits and allowances in addition to salaries. It was beyond the scope of our review to comment on how the Noblemaire Principle should be applied, and our comparisons of UN and U.S. government benefits and allowances should not be interpreted as a statement or opinion on how Noblemaire comparisons should be conducted. We are sending copies of this report to the appropriate congressional members, the Secretary of State, the Secretary of Defense, the Director of OPM, the Executive Director of the FRTIB, the UN, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-9601 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. Appendix I: Objectives, Scope, and Methodology We were asked to review the structure of United Nations (UN) total compensation, including benefits and allowances, and how it compares with that of U.S. government employees. In this report, we (1) examine similarities between the UN and U.S. government benefits and allowances and compare their monetary values, and (2) examine UN efforts to address concerns about the sustainability of total compensation costs. To compare UN and U.S. government benefits and allowances, we reviewed relevant UN data and interviewed UN officials to determine the elements of the UN total compensation package available to UN Secretariat professional staff in 2012. Similarly, we reviewed relevant U.S. data and interviewed U.S. officials to determine similar elements of total compensation offered in 2012 to U.S. civil service employees, and for U.S. civil service employees serving overseas for the Departments of State (State) and Defense (DOD), and State Foreign Service employees. We examined the similarity of these compensation elements by comparing the purpose, structure, and eligibility requirements in the UN and U.S. systems. To compare the monetary value of elements of the UN total compensation package with similar elements of the U.S. government total compensation package, we collected available salary, benefit, and allowance data for UN Secretariat professional staff from the UN’s Payroll Operations Unit for calendar year 2012. We also collected available data for U.S. government employees, including civil service employees serving overseas and Foreign Service employees, from the Office of Personnel Management (OPM), State, and DOD. For data on health insurance, life insurance, and retiree health care benefits, we collected data from OPM on U.S. civil service employees, because OPM does not disaggregate these data by salary scale, occupation, or location. 
For data on retirement benefits, we collected data on the cost of the Federal Employees Retirement System (FERS) pension plan from OPM on U.S. civil service employees hired under FERS, because OPM does not disaggregate these data by salary scale or occupation; and we collected data on Thrift Savings Plan (TSP) participation rates and deferral rates from the Federal Retirement Thrift Investment Board (FRTIB) for U.S. civil service employees who are enrolled in the FERS retirement plan, because the FRTIB does not disaggregate these data by salary scale or occupation. For data on allowances, we collected data from State and DOD for Foreign Service employees and civilians serving overseas, because these employees are comparable to UN Secretariat professional staff. Data from State and DOD were for calendar year 2012. Because of variations in the structure, administration of payments, and data availability for individual compensation elements, we calculated monetary value for individual elements using different approaches. As a result, individual calculations cannot be summed into a single total for all benefits, allowances, or compensation. To assess the reliability of UN data, we interviewed UN officials, reviewed available technical documentation, and performed basic reasonableness checks of the data for obvious inconsistency errors and completeness. When we found discrepancies, we brought them to the attention of relevant agency officials and worked with these officials to correct the discrepancies before conducting our analyses. We determined that these data were sufficiently reliable for our analyses, including determining the monetary value of available UN payroll transactions and the number of employees receiving these benefits and allowances in calendar year 2012; determining the monetary value of retirement contributions for UN staff for the income replacement rate scenarios that we conducted and for determining the UN pension benefit formula; and discussing UN staff-related costs and the level of these costs as a percentage of the regular budget. However, these data do not include complete information on certain allowances that are not captured in the UN's central payroll system, such as danger pay allowances. To assess the reliability of U.S. data, we interviewed officials from OPM, State, DOD, and the FRTIB; reviewed available technical documentation; and performed basic reasonableness checks for obvious inconsistency errors and completeness. When we found discrepancies, we brought them to the attention of relevant agency officials and worked with officials to correct the discrepancies before conducting our analyses. We determined that these data were sufficiently reliable for our analyses. However, these data do not include information on certain allowances that are not captured by State's and DOD's central payroll systems, such as danger pay. Using these data, we calculated benefit and allowance amounts provided to UN staff and U.S. government employees. We also estimated benefit amounts UN and U.S. retirement programs would pay to employees—which we expressed in the form of income replacement rates—under different scenarios, including differing years of service and contributions toward retirement. We note, however, that these are illustrative examples, which do not represent actual or average benefits received by UN staff or U.S. government employees. We calculated these income replacement rates for UN staff and for U.S. civil service employees covered by FERS.
The income replacement rates we calculated divided a worker's gross retirement income in the first year of retirement by the worker's net salary in his or her final year of work. We use this measure for the purpose of comparing FERS and the UN Joint Staff Pension Fund; it is not meant to assess the absolute generosity or appropriateness of either retirement plan. For more information on our assumptions and methodology in estimating income replacement rates, see appendix II. To identify health care costs for the UN, we used data on health care subsidies from the UN payroll system. UN payroll data contain the amount of health care subsidy that each employee receives from the organization. We examined the mean and median values of the health care subsidy for professional staff located in New York, first for the entire staff population, and then for staff members who participate in a health care plan. Because approximately 98 percent of UN staff members receive a health care subsidy, there is little difference between average plan costs for the entire staff population and average plan costs per participant. We also examined the mean and median values of the health care subsidy for participants with and without dependents. To identify health care costs for U.S. civil service employees, we used data provided by OPM about the average cost to the agency of providing health insurance. OPM provided the average cost per participant, overall, and broken down into participants in self-only plans and participants in self-and-family plans. We multiplied this average cost by the participation rate in 2012, approximately 80 percent, to obtain the average cost to federal agencies of providing health insurance to U.S. civil service employees. A limitation of using OPM health care cost data to compare U.S. civil service employees with UN staff is that OPM's data are not restricted to professional employees. To the extent that professional civil service employees choose different plans or have different patterns of health care usage than nonprofessional employees, federal agency costs to provide health insurance to professional employees may differ from the overall average. To show growth in UN staff costs over time, we collected data on staff-related expenses from the UN Secretariat. We adjusted these figures for inflation using the annual U.S. gross domestic product deflator from the U.S. Bureau of Economic Analysis, which we normalized to use 2011 as the base year. To examine UN efforts to address concerns about the long-term sustainability of total compensation costs, we reviewed UN documents and interviewed UN officials regarding the long-term costs of UN benefits and allowances and actions taken to address them, including the International Civil Service Commission's total compensation review and the UN Chief Executives Board's baseline compensation cost study. In addition, we analyzed UN General Assembly resolutions that direct other UN organizations to perform further actions to address these concerns. We conducted this performance audit from June 2013 to June 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
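The health plan cost averaging described in this appendix reduces to multiplying the per-participant cost by the participation rate. The sketch below reruns that arithmetic with the figures cited earlier in the report and is illustrative only; rounding differs slightly from the published approximations.

```python
# Illustrative sketch of the per-employee health plan cost averaging described
# in this appendix. The subsidy, cost, and participation figures come from the
# report text.

def avg_cost_per_employee(cost_per_participant: float, participation_rate: float) -> float:
    """Spread the employer's per-participant cost over participants and nonparticipants."""
    return cost_per_participant * participation_rate

un_overall = avg_cost_per_employee(6_228, 0.98)   # about $6,103 (report: about $6,100)
us_overall = avg_cost_per_employee(8_022, 0.80)   # about $6,418 (report: about $6,417)
print(round(us_overall / un_overall - 1, 2))      # about 0.05, i.e., roughly 5 percent higher for U.S. employees
```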
Appendix II: Additional Information on GAO's Estimates of Income Replacement Rates for United Nations Staff and U.S. Federal Civilian Employees To compare the UN and U.S. government retirement systems, we estimated income replacement rates for UN staff and U.S. federal civilian employees at various salary levels and under different scenarios, including varying years of UN or U.S. government employment and retirement contribution rates for U.S. federal civilian employees. Income replacement rates are a method of comparing retirement programs by describing how much of a worker's preretirement salary is replaced in the first year of retirement by a retirement plan. We focused our analysis on U.S. federal civilian employees hired under FERS, which covers approximately 89 percent of current federal civilian employees and all federal civilian employees hired since 1984. Eligibility Requirements and Plan Provisions for the UN Pension Fund and FERS Retirees from the UN receive a pension, while the vast majority of active U.S. federal civilian employees receive retirement benefits under FERS. The UN Joint Staff Pension Fund and FERS have some structural differences. The UN Joint Staff Pension Fund provides retirees a defined benefit, based on factors including an employee's age, length of participation in the plan, and salary history. The UN pension is similar to the Civil Service Retirement System (CSRS), which also provides U.S. federal civilian retirees with a defined benefit. FERS is a three-part retirement plan consisting of (1) a basic defined benefit pension, (2) Social Security, and (3) the TSP—a tax-advantaged retirement savings and investment plan in which employee contributions are matched by the employer up to a certain percentage of salary. Compared with participants in the UN Pension Fund or CSRS, U.S. federal civilian employees participating in the FERS plan receive a smaller defined benefit pension, but receive Social Security and earnings from accumulated TSP investments upon retirement. Table 7 provides additional information on the eligibility requirements and other plan provisions of the UN Pension Fund and FERS, including retirement age, contribution rates, and benefits formulas. Key Assumptions for Estimates of Income Replacement Rates When analyzing income replacement rates for UN staff and U.S. federal civilian employees under FERS, we made several simplifying assumptions. We assumed that staff (1) retired at the end of December 2012 at age 62, and (2) started collecting retirement income in January 2013. Additionally, we assumed wages grew at the same rate for both UN staff and U.S. federal civilian employees. In certain scenarios, we assumed that staff worked for 30 years at their respective UN or U.S. government employers, and in others we assumed that staff had 20-year work histories. Both UN and U.S. staff are assumed to be married at retirement, and U.S. employees are assumed to select a FERS pension with a 50 percent benefit for their surviving spouse. For Social Security benefits, we assumed that all employees had a 35-year Social Security work history, as Social Security benefits are based on a person's highest 35 years of earnings. Thus, we assumed that employees receiving Social Security have a work history prior to their years of service with the federal government or the UN. We made the following additional assumptions for estimating retirement income from TSP portfolios for U.S. federal civilian employees: 1.
We assumed that employees participated in the TSP for each year of their 20- or 30-year work histories. 2. We assumed that TSP investments were fully invested in the TSP's G fund, and we allowed the TSP rate of return to vary each year, reflecting the annual rate of return of the G fund for the years 1988-2012. We applied these rates of return to an individual's TSP contributions to calculate his or her final TSP balance. 3. We assumed that FERS retirees purchased an annuity using their entire TSP balance at the end of 2012. A TSP life annuity is an insurance product that provides guaranteed monthly payments for as long as the retiree, or his or her designated survivor, is alive. We used an annuity interest rate of 1.95 percent, which was the annual average rate in 2012. We also assumed that the retiree's spouse was the same age as the retiree and that the retiree selected an annuity with a 50 percent survivor benefit with increasing payments to adjust for the cost of living. Methodology for Estimating Income Replacement Rates In our scenarios, we define "salary" for UN staff as the net remuneration for a staff member of the assigned grade and step, living in New York City, New York, receiving salary at the dependent rate. Net remuneration comes from the UN's published salary scales, and is equal to the base salary for an employee's assigned grade and step, an allowance for staff members with dependents, and a post adjustment reflecting the cost of living for the location where the staff member is posted. For U.S. federal civilian employees, take-home pay is calculated by applying federal and New York state tax rates to the General Schedule gross salary scale for New York. Estimating Equivalent Contribution Rates First, we estimated the income replacement rates for UN staff and U.S. federal civilian employees who contribute an equivalent percentage of salary to their retirement. Obtaining this equivalent contribution rate involves identifying the percentage of net remuneration that UN staff contribute to their retirement, and identifying the comparable contribution rate from take-home pay made by U.S. employees. To identify the percentage of net remuneration that UN staff contribute to their retirement, we first calculated the dollar value of the pension contribution that UN staff at each grade and step must make, and then divided that dollar value by the appropriate level of net remuneration. UN staff are required to contribute 7.9 percent of their pensionable remuneration to their pension plan. To calculate the dollar value of the pension contribution at each grade and step, we multiplied the appropriate level of pensionable remuneration by 7.9 percent. We then divided this dollar value by the appropriate level of net remuneration, resulting in the percentage of net remuneration contributed to pensions by UN workers who live in New York City and are paid at the dependent rate, at various grade and step levels. UN workers who are posted in New York City and paid at the dependent rate contribute between 8.9 percent and 9.8 percent of net remuneration to their pension plans. The mean and median contribution rate, across all grade and step levels (not weighted by population), is 9.4 percent. Because UN pensions are subject to taxation in the country where the staff member receives the pension benefit, they are established in gross terms to account for the amount of taxes UN retirees will have to pay on their pension benefits.
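The UN-side contribution calculation just described can be sketched in a few lines. The 7.9 percent contribution rate and the 8.9 to 9.8 percent range come from the report; the pensionable remuneration and net remuneration amounts for the single illustrative grade and step shown are hypothetical placeholders, not actual UN salary-scale values.

```python
# Illustrative sketch of the UN contribution-rate calculation described above.
# The 7.9 percent rate comes from the report; the remuneration amounts are
# hypothetical placeholders for one grade and step.

PENSION_CONTRIBUTION_RATE = 0.079  # share of pensionable remuneration

def contribution_share_of_net(pensionable_remuneration: float,
                              net_remuneration: float) -> float:
    """Pension contribution expressed as a share of net remuneration."""
    dollar_contribution = pensionable_remuneration * PENSION_CONTRIBUTION_RATE
    return dollar_contribution / net_remuneration

# Hypothetical grade/step: $120,000 pensionable remuneration and $100,000 net
# remuneration give a contribution of about 9.5 percent of net remuneration,
# within the 8.9-9.8 percent range reported for New York staff.
print(round(contribution_share_of_net(120_000, 100_000), 3))  # 0.095
```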
"Pensionable remuneration" is a term used by the UN to describe the "grossed-up" salary scale on which pension contributions and subsequent gross pensions are based. Thus pensionable remuneration is higher than net remuneration for UN staff. To obtain an equivalent contribution rate for U.S. civil service employees, we estimated income replacement scenarios for U.S. civil service employees who contribute 9.4 percent of their net salary to retirement. We defined net salary for U.S. employees as gross salary, net of U.S. federal and New York state taxes. We used net salary as a measure for U.S. employees, as it was more comparable to UN net remuneration, or take-home pay. After we converted gross salary to net salary, we obtained the corresponding dollar value U.S. employees contributed to retirement. When applying the U.S. federal tax code and the New York tax code, we assumed that our U.S. civil service employees were married, filing jointly, with no children. Once we obtained the dollar value of the worker's contribution to retirement, we converted this into a percentage of gross income that each worker contributes to retirement. TSP Contributions Once we obtained the percentage of gross salary that each U.S. civil service worker in our scenarios contributes to retirement, we were able to determine the worker's contribution rate to the TSP in each year. In our scenarios in which UN staff and U.S. federal civilian employees have equivalent contribution rates, we determined the TSP contribution rate as a residual result of the other calculations. As a result, the TSP contribution rate equals the total contribution rate to retirement, minus the mandatory Social Security contribution rate each year, minus the mandatory 0.8 percent contribution rate to the FERS pension plan. For example, in the scenarios in which we set equivalent retirement contributions between UN staff and U.S. federal civilian employees, an employee is assumed to contribute 0.8 percent of gross pay to a FERS pension, 4.2 percent of gross pay to Social Security, and between 2.7 and 3.5 percent of gross pay to the TSP, depending on salary. In addition to this indirect approach, where TSP contributions are identified as a residual, we also present scenarios where we directly set the level of TSP contribution. In the direct approach shown in tables 12 and 13, we entered a TSP contribution as a constant percentage of gross salary and multiplied this contribution rate by each year's salary, to obtain an annual contribution to the TSP for each employee. Once we calculated each employee's annual contribution rate to the TSP, through either the indirect or the direct method, we then calculated the employing agency's matching contribution rate to the TSP. We then summed the employee and employer contribution to the TSP, to obtain the total dollar value of contributions to the TSP for each employee in each year. Once we determined each employee's annual contributions to the TSP fund, including both the employee and employer contributions, we calculated the employee's final TSP balance. As stated earlier, we assumed that employees invest their entire TSP balance in the G fund. Once the final TSP balance was obtained, we calculated a lifetime TSP annuity for each worker using the TSP Annuity Calculator Worksheet, published by the FRTIB, along with accompanying tables from the FRTIB, including monthly annuity factors and interest adjustment factors.
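A simplified sketch of the two TSP steps described above follows: deriving the TSP rate as a residual, and accumulating employee plus employer contributions with yearly G fund returns. The residual formula and the 0.8 percent FERS employee contribution come from the report; the employer match rate, return series, and salaries shown are placeholders, and the report's final step of converting the balance to an annuity with FRTIB worksheets is not reproduced here.

```python
# Illustrative sketch of the TSP steps described above, under simplifying
# assumptions. The residual formula and the 0.8 percent FERS employee rate come
# from the report; the match rate, G fund returns, and salaries are placeholders.

def residual_tsp_rate(total_retirement_rate: float,
                      social_security_rate: float,
                      fers_pension_rate: float = 0.008) -> float:
    """TSP contribution rate left over after Social Security and FERS pension contributions."""
    return total_retirement_rate - social_security_rate - fers_pension_rate

def final_tsp_balance(salaries, employee_rate, employer_match_rate, g_fund_returns):
    """Accumulate employee plus employer TSP contributions with yearly G fund returns."""
    balance = 0.0
    for salary, annual_return in zip(salaries, g_fund_returns):
        balance *= (1 + annual_return)                               # grow the prior balance
        balance += salary * (employee_rate + employer_match_rate)    # add this year's contributions
    return balance

print(round(residual_tsp_rate(0.078, 0.042), 3))  # 0.028, i.e., 2.8 percent, within the 2.7-3.5 percent range cited
print(round(final_tsp_balance([70_000, 72_000, 74_000],
                              0.03, 0.04, [0.03, 0.028, 0.025]), 2))  # placeholder 3-year history
```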
In directly setting the level of TSP contribution, we presented income replacement rates for U.S. federal civilian employees with two different TSP contribution levels: 8.5 percent of gross salary, and 0 percent of gross salary. We selected 8.5 percent of gross salary because, according to an FRTIB report, the contribution deferral rate to the TSP among FERS participants in 2012 was 8.5 percent of salary. We selected 0 percent in order to present the lower bound on income replacement rates for U.S. FERS employees. Our scenarios examine workers retiring at the end of December 2012. However, changes have been made to the FERS retirement system for U.S. federal civilian employees hired on January 1, 2013, or later. Employees enrolled in FERS and first hired in 2013 contribute 3.1 percent of gross salary toward their pension plans, while employees first hired in 2014 or later contribute 4.4 percent. The pension benefit itself is unchanged; thus, employees are funding a larger portion of their retirement income. In addition, although employees may choose not to contribute to the TSP, the default contribution rate was set at 3 percent for employees hired after August 2010. FERS employees who contribute 4.4 percent of gross salary toward their pension plan contribute more toward their retirement plans than UN employees, even when they contribute nothing to their TSP accounts. Under our scenario 4 below, we estimated income replacement rates for U.S. federal civilian employees who did not defer any of their salary to the TSP in any years of service. OPM estimates the cost of the FERS basic annuity at an amount equal to 12.7 percent of pay. For FERS employees first hired before 2013, the federal government contributes 11.9 percent of pay and employees contribute 0.8 percent of pay. For FERS employees hired in 2013 or later, the federal government pays 9.6 percent of pay. Employees first hired in 2013 pay the remaining 3.1 percent, and employees hired after 2013 pay 4.4 percent. Social Security Benefits To estimate Social Security benefits, we constructed a 35-year earnings history for each employee, starting with salaries in 2012, and then determining salaries for earlier years. To set wage growth rates, we used the annual salary growth rate assumptions for various years that are listed in OPM's assumptions for the Civil Service Retirement and Disability Fund. Wage growth rates varied over the 35-year period, ranging from 5.5 percent in 1979 to 3.25 percent in 2012. We applied the same salary growth rates to both the UN staff and U.S. civil service employees. To estimate Social Security contributions, we calculated the dollar value of employee contributions to Social Security each year by multiplying each year's Old-Age, Survivors, and Disability Insurance (OASDI) tax rate by the lesser of the employee's salary in that year or the Social Security maximum taxable earnings. We show income replacement rates for U.S. federal civilian employees who retire at age 62. Employees who retire at age 62 in 2013 have not reached Social Security's full retirement age for 2013, and therefore receive a reduced Social Security benefit that is 25 percent lower than the benefit received by employees retiring at the full retirement age (66 in 2013) with equivalent earnings histories. Calculating Income Replacement Rates Our income replacement rate estimates are defined as gross retirement income divided by take-home pay in the final year of work. Specifically, we estimated the income replacement rate as follows: The income replacement rate for U.S. employees equals gross retirement income in the first year of retirement divided by net salary in the final year of work.
The income replacement rate for UN staff equals gross retirement income in the first year of retirement divided by net remuneration in the final year of work. In order to compare the employee and employer contributions to retirement plans, we also calculated the amount of retirement income attributable to employee contributions and the amount of retirement income attributable to employer contributions. In conducting these calculations, we made the following attributions: For the UN pension, we attributed one-third of the pension benefit to employee contributions and two-thirds of the pension benefit to employer contributions, based on the UN contribution rates. For the FERS pension, we attributed 6 percent of pension income to employee contributions and 94 percent to employer contributions, based on the percentage of salary that employees contribute (0.8 percent) and the percentage of salary that agencies contribute (11.9 percent). For Social Security, we attributed one-half of the Social Security income to employee contributions and one-half to employer contributions, because in most years of the program, employees and employers contribute equally to Social Security. For the TSP annuity, we attributed a percentage of the annuity to employee contributions that equaled the sum of all TSP contributions made by the employee over his or her working life divided by the sum of all TSP contributions made by the employer and the employee combined over the employee's working life. We attributed a percentage of the annuity to employer contributions that equaled the sum of all TSP contributions made by the employer over the employee's working life divided by the sum of all TSP contributions made by the employer and the employee combined over the employee's working life. Using these calculations, we constructed the income replacement rates for the employee and employer, where the income replacement rate (employee) equals retirement income attributable to the employee's contributions divided by final year salary, and the income replacement rate (employer) equals retirement income attributable to the employer's contributions divided by final year salary. Below we present income replacement rates for UN staff or U.S. federal civilian employees under five scenarios. Scenario 1—UN Staff and U.S. Federal Civilian Employees with 30-Year Work Histories Making Equivalent Contributions In our first scenario, UN staff and U.S. federal civilian employees have worked for 30 years at their respective UN organization or U.S. government agency and contribute an equivalent percentage of their salaries to their retirement. Because UN staff contribute, on average, 9.4 percent of net remuneration to their pensions, in this scenario we assume that U.S. federal civilian employees also contribute 9.4 percent of take-home pay to their retirement. This corresponds to 7.3 to 8.3 percent of gross salary, allocated across contributions to the FERS pension, Social Security, and the TSP. U.S. federal civilian employees' TSP contributions in this scenario therefore range from 2.7 to 3.5 percent of 2012 gross pay. Table 8 shows our estimates of income replacement rates for UN staff who retire at age 62, while table 9 shows our estimates of income replacement rates for U.S. FERS employees who retire at age 62. In each table, we also show the percentage of salary replaced by the employee's and employer's contributions to the retirement package.
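The employee and employer attribution rules described above can be expressed compactly. In the sketch below, the attribution shares come from the report (6 percent employee for the FERS pension, an even split for Social Security, and a contribution-proportional share for the TSP annuity); the retirement income components and final salary are hypothetical placeholders.

```python
# Illustrative sketch of the employee/employer split of the income replacement
# rate described above. Attribution shares come from the report; the dollar
# amounts are hypothetical placeholders.

def split_replacement_rate(components, final_year_salary):
    """components: list of (income_amount, employee_share) pairs for one retiree."""
    employee_income = sum(amount * share for amount, share in components)
    employer_income = sum(amount * (1 - share) for amount, share in components)
    return employee_income / final_year_salary, employer_income / final_year_salary

# Hypothetical FERS retiree: pension, Social Security, and TSP annuity amounts
# are placeholders; the TSP employee share assumes employee contributions were
# 40 percent of total TSP contributions.
fers_components = [(15_000, 0.06),   # FERS pension: 6 percent attributed to the employee
                   (18_000, 0.50),   # Social Security: split evenly
                   (9_000, 0.40)]    # TSP annuity: proportional to contributions
emp_rate, org_rate = split_replacement_rate(fers_components, 70_000)
print(round(emp_rate, 2), round(org_rate, 2))  # 0.19 0.41
```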
As can be seen in table 8, the total income replacement rates for UN staff range from 63 percent to 68 percent, and the portion of income replaced by the UN ranges from 42 percent to 45 percent. As shown in table 9, the total income replacement rates for U.S. federal civilian employees who retire at age 62 range from 63 percent to 69 percent, and the portion of income replaced by the U.S. government ranges from 48 percent to 49 percent.

Scenario 2—UN Staff and U.S. Federal Civilian Employees with 20-Year Work Histories Who Make Equivalent Retirement Contributions

Tables 10 and 11 reflect the assumptions made under our second scenario, where UN staff and U.S. federal civilian employees have worked for 20 years at their respective employers and contribute an equivalent percentage of their salaries to their retirement. Table 10 provides our estimates of income replacement rates for UN staff who retire at age 62, and table 11 shows our estimates of income replacement rates for U.S. FERS employees who retire at age 62. In each table, we also show the percentage of salary replaced by the employee's and employer's contributions to the retirement package. As can be seen in table 10, the total income replacement rates for UN staff range from 40 percent to 44 percent, and the portion of income replaced by the UN ranges from 27 percent to 29 percent. As shown in table 11, the total income replacement rates for U.S. federal civilian employees who retire at age 62 range from 48 percent to 55 percent, and the portion of income replaced by the U.S. government ranges from 35 percent to 38 percent.

Scenario 3—U.S. Federal Civilian Employees with 30 Years of Service Who Contribute the Average Amount of Gross Salary to the TSP

In scenario 3, we estimate income replacement rates for U.S. federal civilian employees who contribute the 2012 U.S. average contribution to the TSP in each year of service, which was 8.5 percent of gross salary for those FERS employees who made any elective contributions. We estimated this scenario for U.S. employees who retire at age 62. As shown in table 12, income replacement rates for employees who retire at age 62 range from 77 to 82 percent, with the portion of income replaced by the U.S. government ranging from 52 to 53 percent.

Scenario 4—U.S. Federal Civilian Employees with 30 Years of Service Who Contribute 0 Percent of Gross Salary to the TSP

Under scenario 4, we estimate income replacement rates for U.S. federal civilian employees who did not defer any of their salary to the TSP in any years of service. TSP portfolios for these employees consist entirely of their agencies' automatic 1 percent contribution to the TSP each year. As shown in table 13, the income replacement rates for employees who retire at age 62 range from 57 to 64 percent, with the portion of income replaced by the employer ranging from 46 to 47 percent.

Scenario 5—UN Staff Who Are U.S. Citizens Employed in the United States with a 20-Year UN Work History

In our final scenario, we provide estimates of income replacement rates for UN staff members who are U.S. citizens employed in the United States. Like other UN staff members, U.S. citizens participate in the UN pension plan. In addition, they must also contribute to Social Security, and therefore are eligible for Social Security benefits. For the purposes of taxation, the U.S. Internal Revenue Service (IRS) treats UN staff who are U.S. citizens working in the United States as "self-employed" workers.
According to IRS rules governing the taxation of such workers, UN staff who are U.S. citizens working in the United States must pay both the employer portion and the employee portion of Social Security taxes. However, the UN normally reimburses its U.S. citizen staff for one-half of the Social Security tax. Therefore, UN staff who are U.S. citizens effectively pay the same rate of Social Security taxes as other salaried employees in the United States. Our estimates show that UN staff who are U.S. citizens have higher income replacement rates than other UN staff who retire with 20 years of service because they contribute to and receive Social Security benefits. As seen in table 14, the income replacement rate for UN employees who are U.S. citizens and retire at age 62 ranges from 56 to 68 percent, and the portion of income replaced by the employer ranges from 38 to 45 percent. However, U.S. citizen employees of the UN who work in the United States have to contribute both to the UN pension plan and to Social Security. A UN official commented that UN staff who are U.S. citizens and who are working in the United States make up only a small percentage of total UN staff, and very few of these staff spend their entire career with the UN working in the United States.

Appendix III: Comparison of Allowances for United Nations Secretariat Staff and U.S. Civil Service Staff Serving Overseas and Foreign Service Employees

This appendix compares allowances offered to eligible United Nations (UN) staff with those offered to U.S. civil service employees serving overseas and U.S. Foreign Service employees.

Serving in a dangerous duty station

Danger pay is a special allowance that has been established for internationally and locally recruited staff members who are required to work in a duty station where very dangerous conditions prevail, including those where staff face high risk of becoming collateral damage in a war or active armed conflict or where medical staff risk their lives when deployed to deal with a public health emergency. The Chairman of the International Civil Service Commission is responsible for authorizing the application of danger pay to a duty station based on the recommendations of the UN Department of Safety and Security or the World Health Organization. Danger pay is granted for up to 3 months at a time, subject to ongoing review. As of the publication of this report, the UN offered danger pay to staff in 15 duty stations. For internationally recruited staff, the allowance is $1,600 per month. For locally recruited staff, the allowance is based on the local salary scale.

The U.S. government provides danger pay to all civilian employees serving in places where conditions of civil insurrection, civil war, terrorism, or wartime conditions that threaten physical harm or imminent danger to the health or well-being of an employee exist. Danger pay is additional compensation of up to 35 percent over basic compensation to staff, for service at places in foreign areas where dangerous conditions that could threaten the health or well-being of an employee exist. As of the publication of this report, the U.S. government offered danger pay to employees in 29 locations.

The UN pays a post adjustment to staff to ensure equity in purchasing power of staff members across duty stations. The post adjustment is a part of the staff's salary and is not considered an allowance. The post adjustment is higher for staff with dependents. The U.S. government grants a post allowance to staff officially stationed at a post in a foreign area where the cost of living is substantially higher than in Washington, D.C. The post allowance is designed to permit staff to spend the same portion of their salaries for their standard living expenses as they would if they were residing in Washington, D.C. The amount paid is a flat rate that varies by basic salary, size of family, and post.

The UN pays travel expenses for staff when they are initially appointed; when they change their duty station; when they separate from service; when they travel on official business; when they travel for home leave; when they travel to visit family members; and for rest and recuperation. In special circumstances requiring evacuation of staff members and their families for medical or security reasons, the UN also covers certain travel and travel-related costs. The UN pays travel expenses for staff dependents on initial appointment, on separation from service, and for education grant travel and home leave. Staff also receive a daily allowance while on travel for official business. UN staff are also entitled to travel expenses for their child for one return journey from the educational institution to their duty station, if the educational institution is outside the country of the duty station. At some duty stations, the UN allows an additional round-trip journey in a non-home leave year. The U.S. government pays travel and related expenses for members of the Foreign Service and their families under a number of circumstances, including when they are proceeding to and returning from assigned posts of duty; for authorized or required home leave; for family members to accompany, precede, or follow a foreign service member to a place of temporary duty; for representation travel; medical travel; rest and recuperation travel; evacuation travel; or other travel as authorized. In addition, the U.S. government pays the expenses for a child to travel to and from a secondary school or post-secondary school, once each way annually. The age limitation for secondary education travel is 20 (before the 21st birthday), and for post-secondary education the age limitation is 22 (before the 23rd birthday). The U.S. government may also grant Foreign Service staff and their eligible family members rest and recuperation travel to the United States, its territories, or other locations abroad.

The UN provides an Additional Hardship Allowance for staff serving in certain duty stations where they are involuntarily separated from their families. The Additional Hardship Allowance is paid in addition to the normal hardship allowance. The amount of the allowance varies according to grade and dependency status, and ranged from $6,540 to $23,250 as of 2013. In addition, for a UN staff member located in a duty station that lacks appropriate schools and medical facilities to meet family members' needs, and who is obliged to pay rent in another city for their family, the staff member's rent at the duty station and the rent for the family in the capital city can be considered one combined rent for the purposes of determining the rental subsidy. The U.S. government offers a separate maintenance allowance to assist staff who are required to maintain family members at locations other than their overseas post of assignment due to (a) dangerous, notably unhealthful, or excessively adverse living conditions at the post; (b) special needs or hardship involving the employee or family member; or (c) the family's need to stay temporarily in commercial quarters, such as a hotel.

Serving in a hardship duty station

The UN provides an annual hardship allowance to staff on assignment in duty stations where living and working conditions are difficult. In determining the hardship allowance, the UN considers a duty station's conditions of safety and security, health care, housing, climate, isolation, and conveniences of life. The hardship allowance varies depending on the duty station, salary level, and whether the staff member has dependents. Duty stations are categorized on a scale of difficulty from A to E, based on security conditions and quality of life at the duty station. Staff serving in more difficult duty stations receive higher allowance amounts. As of May 2013, the hardship allowance ranged from $4,360 to $23,250. The U.S. government provides a post hardship differential, which is additional compensation of 25, 30, or 35 percent of salary to staff for service at places in foreign areas where conditions of environment differ substantially from conditions of environment in the continental United States and warrant additional compensation as a recruitment and retention incentive. A hardship differential is established for locations where the living conditions are extraordinarily difficult, involve excessive physical hardship, or are notably unhealthy. A U.S. government agency may also grant a difficult-to-staff incentive differential to staff assigned to a hardship post upon a determination that additional pay is warranted to recruit and retain staff at that post. The allowance is an additional 15 percent of salary.

The U.S. government grants a foreign transfer allowance to staff for extraordinary, necessary, and reasonable expenses incurred by staff transferring to any post of assignment in a foreign area, prior to departure. This allowance includes a miscellaneous expense portion, a wardrobe expense portion, a pre-departure subsistence expense portion, and a lease penalty expense portion. The U.S. government offers a home service transfer allowance for extraordinary, necessary, and reasonable expenses incurred by staff prior to transferring back to a post in the United States. This allowance includes a miscellaneous expense portion, a wardrobe expense portion, a subsistence expense portion, and a lease penalty expense portion. UN staff are eligible for an assignment grant that is intended to provide staff with a reasonable cash amount at the beginning of the assignment for the costs incurred as a result of appointment or reassignment. The amount of the grant varies by duty station and whether the staff member has dependents. For example, a staff member with two dependents assigned to headquarters for a period of two years might earn an assignment grant of $7,200, to compensate for 30 days at the beginning of the assignment. The UN also pays removal and shipment costs for staff. Staff may ship personal effects only, or household goods and personal effects in some cases. The UN has established weight limits for this allowance, which depend on the staff member's number of dependents.

Some UN staff receive a small representation allowance, which permits them to extend official hospitality to individuals outside of the UN. For the purpose of official hospitality, heads of departments or offices may also authorize the reimbursement of reasonable expenditures incurred by staff who do not receive a representation allowance. The U.S. government provides representation allowances intended to cover allowable items of expenditure by staff whose official positions entail responsibility for establishing and maintaining relationships of value to the United States in foreign countries. Staff may submit vouchers to be reimbursed for allowable expenses, or payments may be made on their behalf.

The UN may pay a termination indemnity to a staff member whose appointment is terminated by the employing organization for any of the following reasons: abolition of post or reduction of staff, health, unsatisfactory service, or agreed termination. In cases of unsatisfactory performance, the termination indemnity is at the discretion of the Secretary-General and up to half of what is otherwise payable. The U.S. government authorizes severance pay for full-time and part-time staff who are involuntarily separated from Federal service and who meet other conditions of eligibility. To be eligible for severance pay, staff must be serving under a qualifying appointment, have a regularly scheduled tour of duty, have completed at least 12 months of continuous service, and be removed from Federal service by involuntary separation for reasons other than inefficiency (i.e., unacceptable performance or conduct).

UN staff serving outside their home country are eligible for an education grant to cover part of the cost of educating children in full-time attendance at an educational institution. The amount of the grant is equivalent to 75 percent of allowable costs, subject to a maximum that varies from country to country. Staff are eligible for the grant up to the fourth year of their child's postsecondary education, or age 25. For UN staff in the United States, the maximum education grant in May 2013 was $43,589. The UN also covers up to 100 percent of boarding costs, up to a maximum amount, for children at the primary or secondary level if educational facilities are inadequate in the staff member's duty station. The U.S. government provides an allowance to assist staff in meeting the extraordinary and necessary expenses in providing adequate elementary and secondary education for dependent children at an overseas post of assignment. The amount of the grant depends on whether the child is in a school at post, a school away from the post, or in home study or at a private institution.

UN staff are eligible for a rental subsidy intended to provide equity in accommodation expenses among UN staff in duty stations where rents vary considerably and to alleviate hardships staff may face if their rent is higher than average for reasonable standard accommodations. For duty stations in Europe and North America, the UN determines a reasonable maximum rent level (or threshold) that is used to determine how much a staff member should pay, taking into account their rent, their income, and whether they have dependents. Newly hired staff are eligible to receive a subsidy for the portion of their rent that exceeds the threshold, up to a maximum of 40 percent of rent. They can receive the subsidy for up to seven years, and it declines over time. In years 1-4, the subsidy is 80 percent, in year 5 the subsidy is 60 percent, in year 6 it is 40 percent, and in year 7 it is 20 percent. For duty stations outside Europe and North America, the standard rental subsidy is 80 percent of the rent exceeding the threshold, up to a maximum of 40 percent of rent. U.S. civilian staff are eligible for housing subsidies, called quarters allowances, that are designed to reimburse staff for substantially all costs of residing in either temporary or permanent living quarters. A quarters allowance is not granted when Government housing is provided. A temporary quarters subsistence allowance is granted to staff for the reasonable cost of temporary quarters, meals, and laundry expenses incurred by staff and/or family members at the foreign post upon initial arrival or preceding final departure. A living quarters allowance is granted to staff for the annual cost of suitable, adequate living quarters for the staff and his or her family. An extraordinary quarters allowance is granted to staff who must vacate permanent quarters due to renovations or unhealthy or dangerous conditions.

UN Allowances Not Offered to U.S. Employees

Support of dependents

The UN provides eligible staff members an annual children's allowance of $2,929 per child under age 18 (or under 21 if a full-time student), and this amount is doubled for staff with disabled children. The UN also pays employees with dependents at a higher salary rate than those without dependents. According to the UN, this is similar to the practice of member states that provide a tax advantage for having dependents. Many UN employees are not eligible for these tax advantages, as they might be if they were employed in their national civil service, because most UN employees are not required to pay income taxes on their UN earnings in their home countries. To qualify for the dependents salary rate, UN staff must have a primary dependent (i.e., one dependent spouse or a first dependent child, if there is no dependent spouse). For staff with no primary dependent, the UN also provides a secondary dependent's allowance for eligible staff members caring for a dependent parent or sibling. The U.S. government offers no comparable allowance.

To encourage movement from one duty station to another, the UN provides an annual mobility allowance to staff on an assignment of one year or more who have had 5 consecutive years of service in the UN system. The amount of this allowance varies by the staff member's number of assignments, duty station, and whether the staff member has dependents. As of August 2012, this allowance ranged from $2,020 to $16,900. The U.S. government offers no comparable allowance, though it offers a difficult-to-staff incentive (see "Serving in a hardship duty station" above).

The UN provides, upon separation, a repatriation grant to staff members whom the organization is obligated to repatriate and who at the time of separation are residing, by virtue of their service with the United Nations, outside their home country. The UN determines the amount based on the salary scale, and the amount varies according to family status and length of service outside the home country. The U.S. government offers no comparable allowance.

Conversely, the UN offers no allowance comparable to the U.S. government's student loan repayment program. Some U.S. government agencies have a program to repay certain types of Federally made, insured, or guaranteed student loans as an incentive to recruit or retain highly qualified personnel. Agencies may make payments to a loan holder of up to $10,000 in a calendar year, up to an aggregate maximum of $60,000 for any one employee. In return, employees must sign an agreement to remain in the service of the paying agency for at least 3 years. If the employee separates voluntarily or is separated involuntarily for misconduct, unacceptable performance, or a negative suitability determination under 5 CFR part 731 before fulfilling the service agreement, he or she must reimburse the paying agency for all student loan repayment benefits received.

Appendix IV: Leave Benefits

As we previously reported, UN staff are eligible for more generous leave benefits than U.S. civil service employees. For example, UN staff on fixed-term contracts earn more annual leave than U.S. civil service employees. UN staff earn 30 days of annual leave per year, while U.S. civil service employees earn 26 days a year once they have 15 or more years of service. U.S. civil service employees with less than 3 years of service earn 13 days per year, and those with 3 but less than 15 years of service earn 20 days per year. In addition, UN staff can be eligible for more sick leave than U.S. civil service employees, depending on the length of service. UN Secretariat staff do not earn sick leave the way they earn annual leave. Those with a sick leave need who have worked for the UN for less than 3 years are entitled to sick leave of up to 3 months on full salary and 3 months on half salary. UN staff who have completed 3 or more years of service are entitled to up to 9 months of sick leave. In contrast, U.S. civil service employees earn 4 hours of sick leave per pay period, or 1 day per month, and may carry over unlimited amounts of sick leave into subsequent years. In addition, UN staff are entitled to paid maternity and paternity leave and are eligible for paid leave if they adopt a child; these benefits are not offered to U.S. civil service employees. U.S. civil service employees are entitled to take certain amounts of time away from work for these purposes, but must use either their paid leave or unpaid leave under the Family and Medical Leave Act to account for their absences. Both UN staff and U.S. civil service employees have 10 holidays per year, though the dates may vary for UN staff depending on their duty station. Table 15 compares leave benefits for UN staff and U.S. civil service employees.

Appendix V: Comments from the Department of State

GAO Comment

State commented that the ICSC review faces inherent challenges, including complexities associated with the Noblemaire Principle, which requires that compensation for UN professional staff be set in comparison to the highest compensated national service, which the UN has considered to be the U.S. federal civil service. State further commented on ambiguities with the Noblemaire Principle, including ambiguities over the comparison group and which elements of compensation should be included in the comparison. We agree that there are ambiguities associated with the Noblemaire Principle and therefore we did not use it as the basis for our comparison of UN and U.S. government benefits and allowances. As we discuss in the report, the UN General Assembly has directed that only salaries be used as part of its margin calculation, rather than total compensation, which would include benefits and allowances in addition to salaries. It was beyond the scope of our review to comment on how the Noblemaire Principle should be applied, and our comparisons of UN and U.S.
government benefits and allowances should not be interpreted as a statement or opinion on how Noblemaire comparisons should be conducted.

Appendix VI: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the staff named above, Elizabeth Repko (Assistant Director), Debbie Chung, Leah DeWolf, Mark Dowling, Jeremy Latimer, Valérie Nowak, John O'Trakoun, Rhiannon Patterson, Steven Putansu, Jerry Sandau, David Schneider, Frank Todisco, and Ozzy Trevino made significant contributions to this report.
The UN General Assembly has expressed concerns about the relatively large and growing portion of the UN budget spent on total compensation. The United States contributes 22 percent of the UN's regular budget. UN total compensation consists of salary, benefits, and allowances. Since its inception in 1945, the UN has based salaries for its professional employees on salaries for the U.S. civil service. In 2013, GAO reported that the UN sets its salaries between 110 to 120 percent of U.S. civil service salaries, and that UN salaries were 116.9 percent of U.S. civil service salaries in 2012. UN salaries increased to 119.6 percent in 2013. GAO also recommended that the UN clarify its process for comparing salaries for UN professional staff with U.S. civil service salaries. GAO was asked to review the structure of UN total compensation and how it compares with that of U.S. federal employees. This report (1) examines similarities between UN and U.S. government benefits and allowances and compares their monetary values, and (2) examines UN efforts to address concerns about the sustainability of total compensation costs. GAO reviewed UN and U.S. government documents and data, and interviewed UN and U.S. officials. Benefits and allowances offered by the United Nations (UN) and the U.S. government are generally similar, and GAO found that each provided certain benefits or allowances with greater monetary value than the other. Similar UN and U.S. benefits include retirement plans and health insurance, while similar allowances include hardship and danger pay. Where comparable data were available, GAO found that each organization provided some benefits or allowances of greater monetary value than the other. For example, under a scenario where UN and U.S. staff retire at 62 with 20 years of service, the U.S. Federal Employees Retirement System replaces a higher percentage of pre-retirement salary than the UN Joint Staff Pension Fund. However, when these staff retire with 30 years of service, similar percentages of pre-retirement salary are replaced. In contrast, the UN allowance for staff serving in dangerous duty stations with families at separate locations had a higher average monetary value per recipient in 2012 than the comparable U.S. allowance. The UN has begun to address concerns about the sustainability of its rising total compensation costs, including initiating a review of total compensation, but that review does not include key elements. GAO analysis of UN data shows that staff-related expenditures rose steadily from $1.95 billion in 2002-2003 to $2.98 billion in 2010-2011, at an average rate of about 7 percent per 2-year budget, when adjusted for inflation. To help address costs, the UN raised its retirement age from 62 to 65 for new hires and froze rates paid for allowances for at least 1 year. In addition, the UN's International Civil Service Commission (ICSC) began a review of UN compensation, to be completed in 2015. The UN General Assembly requested that the ICSC review consider all elements of UN total compensation holistically, including both monetary and non-monetary elements. However, according to ICSC officials and documents, the review focuses only on elements within the ICSC's area of administrative responsibility, such as salary and allowances. Other key elements with significant costs, such as pensions and retiree health insurance, fall outside the ICSC's authority and therefore the study's focus. 
For example, the unfunded liability of the UN's retiree health insurance plan was estimated in 2012 at almost $4 billion. Without including all elements of total compensation in its review, member states will not have a complete set of information with which to make fully informed decisions about changes to the compensation system.
Background The Homeland Security Act, as well as other statutes, provide legal authority for both cross-sector and sector-specific protection and resiliency programs. For example, the purpose of the Public Health Security and Bioterrorism Preparedness and Response Act of 2002 is to improve the ability of the United States to prevent, prepare for, and respond to acts of bioterrorism and other public health emergencies, and the Pandemic and All-Hazards Preparedness Act of 2006 addresses public health security and all-hazards preparedness and response. Also, the Cyber Security Research and Development Act of 2002 authorized funding for the National Institute of Standards and Technology and the National Science Foundation to facilitate increased research and development for computer and network security and to support research fellowships and training. CIKR protection issues are also covered under various presidential directives, including HSPD-5 and HSPD-8. HSPD-5 calls for coordination among all levels of government as well as between the government and the private sector for domestic incident management, and HSPD-8 establishes policies to strengthen national preparedness to prevent, detect, respond to, and recover from threatened domestic terrorist attacks and other emergencies. These separate authorities and directives are tied together as part of the national approach for CIKR protection through the unifying framework established in HSPD-7. The NIPP outlines the roles and responsibilities of DHS and its partners— including other federal agencies, state, local, territorial, and tribal governments, and private companies. Within the NIPP framework, DHS is responsible for leading and coordinating the overall national effort to enhance protection via 18 CIKR sectors. HSPD-7 and the NIPP assign responsibility for CIKR sectors to SSAs. As an SSA, DHS has direct responsibility for leading, integrating, and coordinating efforts of sector partners to protect 11 of the 18 CIKR sectors. The remaining sectors are coordinated by 8 other federal agencies. Table 1 lists the SSAs and their sectors. The DHS’s Office of Infrastructure Protection (IP), located in the National Protection and Programs Directorate, is responsible for working with public- and private-sector CIKR partners and leads the coordinated national effort to mitigate risk to the nation’s CIKR through the development and implementation of the CIKR protection program. Using a sector partnership model, IP’s Partnership and Outreach Division (POD) works with owners and operators of the nation’s CIKR to develop, facilitate, and sustain strategic relationships and information sharing, including the sharing of best practices. The POD also works with public and private partners to coordinate efforts to establish and operate various councils intended to protect CIKR and provide CIKR functions to strengthen incident response. These councils include the aforementioned SCCs, which coordinate sectorwide CIKR activities and initiatives among private sector owners, operators, and trade associations in each of the 18 sectors, and the GCCs that represent federal, state, and local government and tribal interests to support the effort of SCCs to develop collaborative strategies for CIKR protection for each of the 18 sectors. 
The partnership model also includes various cross-sector councils, including the CIKR Cross-Sector Council, which addresses cross-sector issues and interdependencies among SCCs; the NIPP Federal Senior Leadership Council, which focuses on enhanced communication and coordination between and among federal departments and agencies responsible for implementing the NIPP and HSPD-7; and the State, Local, Tribal, and Territorial Government Coordinating Council, which promotes coordination across state and local jurisdictions. The model also includes a Regional Consortium Coordinating Council, which brings together representatives of regional partnerships, groupings, and governance bodies to foster coordination among CIKR partners within and across geographical areas and sectors. Figure 1 illustrates the sector partnership model and the interrelationships among the various councils, sectors, and asset owners and operators. IP's Protective Security Coordination Division (PSCD) also operates the Protective Security Advisor Program, which deploys critical infrastructure protection and security specialists, called PSAs, to local communities throughout the country. Established in 2004, the program has 93 PSAs serving in 74 districts in 50 states and Puerto Rico, with deployment locations based on population density and major concentrations of CIKR throughout the United States. PSAs lead IP's efforts in these locations and act as the link between state, local, tribal, and territorial organizations and DHS infrastructure mission partners. PSAs are to assist with ongoing state and local CIKR security efforts by establishing and maintaining relationships with state Homeland Security Advisors, State Critical Infrastructure Protection stakeholders, and other state, local, tribal, territorial, and private-sector organizations. PSAs are to support the development of the national risk picture by conducting vulnerability and security assessments to identify security gaps and potential vulnerabilities in the nation's most critical infrastructures. PSAs also are to share vulnerability information and protective measure suggestions with local partners and asset owners and operators. In addition, PSAs are to coordinate training for private- and public-sector officials in the communities in which they are located; support incident management; and serve as a channel of communication for state, local, tribal, and territorial officials and asset owners and operators seeking to communicate with DHS.

Critical Infrastructure and the Concept of Resiliency

"Despite ongoing vigilance and efforts to protect this country and its citizens, major accidents and disasters, as well as deliberate attacks, will occur. The challenge is to build the capacity of American society to be resilient in the face of disruptions, disasters, and other crises.
Our vision is a Nation that understands the hazards and risks we face; is prepared for disasters; can withstand the disruptions disasters may cause; can sustain social trust, economic, and other functions under adverse conditions; can manage itself effectively during a crisis; can recover quickly and effectively; and can adapt to conditions that have changed as a result of the event." The report also articulates that one of the goals for this mission is to "Rapidly Recover." The two objectives associated with this goal are to (1) enhance recovery capabilities: establish and maintain nationwide capabilities for recovery from major disasters, and (2) ensure continuity of essential services and functions: improve capabilities of families, communities, private-sector organizations, and all levels of government to sustain essential services and functions.

DHS Efforts to Incorporate Resiliency into Programs Used to Work with Asset Owners and Operators Are Evolving, but Program Management Could Be Strengthened

Consistent with recent changes to the NIPP, DHS has begun to increase its emphasis on resiliency in the various programs it uses to assess vulnerability and risk at and among CIKR facilities so that it can help asset owners and operators identify resiliency characteristics of their facilities and provide suggested actions, called options for consideration, to help them mitigate gaps that have been identified. However, DHS has not developed an approach to measure owners' and operators' actions to address resiliency gaps identified as a result of these assessments. DHS has also begun to train PSAs about resiliency and how it applies to asset owners and operators, but it has not updated guidance that discusses PSAs' roles and responsibilities to explicitly include resiliency and resiliency strategies.

DHS Has Increased Emphasis on Resiliency in Programs, but Has Not Developed an Approach to Measure Performance

In March 2010 we reported that DHS has increased its emphasis on resiliency in the 2009 NIPP by, among other things, generally pairing it with the concept of protection. We further stated that DHS has encouraged SSAs to emphasize resiliency in guidance provided to them in updating their sector-specific plans. Consistent with these efforts, DHS has also taken action to develop or enhance the programs it uses to work with asset owners and operators to bring a stronger focus to resiliency.

The Regional Resiliency Assessment Program (RRAP) and the Mini-Resiliency Assessment Program (Mini-RAP)

In 2009 DHS developed the RRAP to assess vulnerability and risk associated with resiliency. The RRAP is an analysis of groups of related infrastructure, regions, and systems in major metropolitan areas. The RRAP evaluates CIKR on a regional level to examine vulnerabilities, threats, and potential consequences from an all-hazards perspective to identify dependencies, interdependencies, cascading effects, resiliency characteristics, and gaps.
In conducting the RRAP, DHS does an analysis of a region’s CIKR and protection and prevention capabilities and focuses on (1) integrating vulnerability and capability assessments and infrastructure protection planning efforts; (2) identifying security gaps and corresponding options for considerations to improve prevention, protection, and resiliency; (3) analyzing system recovery capabilities and providing options to secure operability during long-term recovery; and (4) assessing state and regional resiliency, mutual aid, coordination, and interoperable communication capabilities. RRAP assessments are to be conducted by DHS officials, including PSAs, in collaboration with SSAs: other federal officials; state, local, tribal, and territorial officials; and the private sector depending upon the sectors and facilities selected as well as a resiliency subject matter expert(s) deployed by the state’s homeland security agency. The results of the RRAP are to be used to enhance the overall security posture of the facilities, surrounding communities, and the geographic region covered by the project and are shared with the state. According to DHS officials, the results of specific asset-level assessments conducted as part of the RRAP are made available to asset owners and operators and other partners (as appropriate), but the final analysis and report is delivered to the state where the RRAP was conducted. One of the assessment tools DHS developed for the RRAP analysis is a “resiliency assessment builder,” which contains a series of questions designed to help officials identify resiliency issues associated with facilities included in the RRAP. The resiliency assessment builder, among other things, focuses on: the impact of loss associated with the facility, including any national security, sociopolitical, and economic impacts; interdependencies between the facility under review and other infrastructure—such as electrical power or natural gas suppliers, water, and supply chain systems—that if disrupted, could cause deterioration or cessation of facility operations; the impact of the loss of significant assets—such as an electrical substation to provide power or a rail spur to transport supplies— critical to the operation of the facility and backup systems available to maintain operations if losses occur; and specific vulnerabilities, unusual conditions, threats, or events—such as hurricanes, transportation chokepoints, or hazardous materials issues—that could disrupt operations and whether the facility is prepared to address the situation via specific capabilities or an action plan. Senior IP officials told us that they believe the RRAP has been successful in helping DHS understand resiliency in the context of interdependencies among individual assets. For example, while the focus of the Tennessee Valley Authority RRAP was energy sector sites and resources, DHS and its partners examined sites and resources in those sectors, like water and dams, which appeared to be obvious interdependencies. However, they also found that they needed to examine sites and resources in those sectors that appeared less obvious but were interdependent because they were intricately connected to the Tennessee Valley Authority operations, like sites and resources in the transportation sector. Also, in fiscal year 2010, DHS started an RRAP in Atlanta that focused primarily on commercial facilities. 
DHS’s related vulnerability assessment of sites (see the discussion below for additional details of these assessments) and resources associated with the water sector in Atlanta showed that an accident or attack involving one component of the water sector could disrupt the operations of sites or resources of other sectors in the geographic area covered by the RRAP. By discovering this vulnerability, and taking steps to address it, asset owners and operators in various sectors that were provided this information were better positioned to be able to work together to mitigate this potential problem. Senior IP officials said that the overall RRAP effort was piloted in five projects, but they no longer consider it a pilot program. They added that they plan to conduct five other RRAPs in 2010 in addition to the one already started in Atlanta. They further stated that because the program focuses only on areas with a high density of critical assets, they plan to develop a new “mini-RAP.” According to these officials, the mini-RAP is intended to provide assessments similar to those provided during an RRAP (but on a reduced scale) to groups of related infrastructure or assets that are not selected to receive an RRAP. An IP official stated that he anticipates that the mini- RAP, which is under development, will be finalized in October 2010. Site Assistance Visits (SAVs) DHS is also revising another vulnerability assessment called the SAV to foster greater emphasis on resiliency at individual CIKR sites. The SAV, which is a facility-specific “inside-the-fence” vulnerability assessment conducted at the request of asset owners and operators, is intended to identify security gaps and provide options for consideration to mitigate these identified gaps. SAVs are conducted at individual facilities or as part of an RRAP and are conducted by IP assessment teams in coordination with PSAs, SSAs, state and local government organizations (including law enforcement and emergency management officials), asset owners and operators, and the National Guard, which is engaged as part of a joint initiative between DHS and the National Guard Bureau. The National Guard provides teams of subject matter experts experienced in conducting vulnerability assessments. The private sector asset owners and operators that volunteer for the SAV are the primary recipient of the SAV analysis, which produces options for consideration to increase their ability to detect and prevent terrorist attacks. In addition, it provides mitigating options that address the identified vulnerabilities of the facility. The SAV is developed using a questionnaire that focuses on various aspects of the security of a facility, such as vulnerabilities associated with access to facility air handling systems; physical security; and the ability to deter or withstand a blast or explosion. Our review of the SAV questionnaire showed that it focuses primarily on vulnerability issues related to the protection of the facility. The SAV questionnaire also contains some questions that focus on resiliency issues because it asks questions about backup systems or contingencies for key systems, such as electrical power, transportation, natural gas, water, and telecommunications systems. Officials with IP’s PSCD said that they are working with IP’s Field Operations Branch to update the SAV to include more questions intended to capture the resiliency of a facility, especially since the SAV is used during the RRAP. 
They said that the effort is ongoing and, as of June 8, 2010, DHS had developed a time line showing the revised SAV is to be introduced in October or November 2010. Enhanced Critical Infrastructure Protection (ECIP) Security Survey DHS is also revising its ECIP security survey to further focus on resiliency at individual facilities. Under the ECIP survey, PSAs meet with facility owners and operators in order to provide awareness of the many programs, assessments, and training opportunities available to the private sector; educate owners and operators on security; and promote communication and information sharing among asset owners and operators, DHS, and state governments. ECIP visits are also used to conduct security surveys using the ECIP security survey, a Web-based tool developed by DHS to collect, process, and analyze vulnerability and protective measures information during the course of a survey. The ECIP security survey is also used to develop metrics; conduct sector-by-sector and cross-sector vulnerability comparisons; identify security gaps and trends across CIKR sectors and sub-sectors; establish sector baseline security survey scores; and track progress toward improving CIKR security through activities, programs, outreach, and training. Our review of the ECIP security survey showed that the original version of the survey made references to resiliency-related concepts—business continuity plans and continuity of operations. The newest version of the survey, published in June 2009, contains additional references to resiliency and resiliency- related concepts, including identifying whether or not a facility has backup plans for key resources such as electrical power, natural gas, telecommunications, and information technology systems. It is also used to identify key dependencies critical to the operation of the facility, such as water and wastewater, and to state whether backup plans exist for service or access to these dependencies in the event of an interruption. Further, senior IP officials told us that in addition to the updates on resiliency in the latest version of the ECIP security survey, they plan to incorporate 22 additional questions to a subsequent update of the survey that will focus on determining the level of resiliency of a facility. According to these officials, DHS also intends to use the updated survey to develop a resiliency “dashboard” for CIKR owners and operators that is intended to provide them a computerized tool that shows how the resiliency of their facility compares with other similar facilities (see the discussion below for a more detailed discussion of DHS’s ECIP dashboard). A DHS document on revisions to the SAV showed that the revised ECIP security survey is to be introduced at the same time as the revised SAV (October or November 2010) so that data collection associated with each remains compatible. DHS’s current projected release of the updated ECIP security survey is planned for October 2010. Program Management Could Be Improved by Measuring Efforts to Mitigate Resiliency Gaps Identified during Vulnerability Assessments DHS intends to take further actions to enhance the programs and tools it uses to work with asset owners and operators when assessing resiliency, but it has not developed an approach to measure its effectiveness in working with asset owners and operators in their efforts to adopt measures to mitigate resiliency gaps identified during the various vulnerability assessments. 
According to the NIPP, the use of performance measures is a critical step in the NIPP risk management process to enable DHS and the SSAs to objectively and quantitatively assess improvement in CIKR protection and resiliency at the sector and national levels. The NIPP states that while the results of risk analyses help sectors set priorities, performance metrics allow NIPP partners to track progress against these priorities and provide a basis for DHS and the SSAs to establish accountability, document actual performance, facilitate diagnoses, promote effective management, and provide a feedback mechanism to decision makers. Consistent with the NIPP, senior DHS officials told us that they have recently begun to measure the rate of asset owner and operator implementation of protective measures following the conduct of the ECIP security survey. Specifically, in a June 2010 memorandum to the Assistant Secretary for NPPD, the Acting Director of PSCD stated that 234 (49 percent) of 473 sites where the ECIP security survey had been conducted implemented protective measures during the 180-day period following the conduct of the ECIP survey. The Acting Director reported that the 234 sites made a total of 497 improvements across the various categories covered by the ECIP security survey, including information sharing, security management, security force, physical security, and dependencies, while 239 sites reported no improvements during the period. The Acting Director stated that the metrics were the first that were produced demonstrating the impact of the ECIP program, but noted that PSCD is reexamining the collection process to determine whether additional details should be gathered during the update to the ECIP security survey planned for October 2010. However, because DHS has not completed its efforts to include resiliency material as part of its vulnerability assessment programs, it does not currently have performance metrics of resiliency measures taken by asset owners and operators. Moving forward, as DHS's efforts to emphasize resiliency evolve through the introduction of new or revised assessment programs and tools, it has the opportunity to consider including additional metrics of resiliency measures adopted at the facilities it assesses for vulnerability and risk, particularly as it revises the ECIP security survey and develops the resiliency dashboard. Moreover, DHS could consider developing similar metrics for the SAV at individual facilities and for the RRAP and mini-RAP in the areas covered by RRAPs and mini-RAPs. By doing so, DHS would be better able to demonstrate its effectiveness in promoting resiliency among the asset owners and operators it works with and would have a basis for analyzing performance gaps. Regarding the latter, DHS managers would have a valuable tool to help them assess where problems might be occurring or, alternatively, gain insights into whether the tools used to assess vulnerability and risk were focusing on the correct elements of resiliency at individual facilities or groups of facilities.

DHS Has Made Training on Resiliency Available to PSAs, but Guidelines on PSA Roles and Responsibilities Do Not Reflect DHS's Growing Emphasis on Resiliency

DHS uses PSAs to provide assistance to asset owners and operators on CIKR protection strategies.
Although DHS had begun to train PSAs about resiliency and how it applies to the owners and operators they interact with, DHS has not updated PSAs’ guidance that outlines their roles and responsibilities to reflect DHS’s growing emphasis on resiliency. In April 2010, DHS provided a 1-hour training course called “An Introduction to Resilience” to all PSAs at a conference in Washington, D.C. The training was designed to define resilience; present resilience concepts, including information on how resilience is tied to risk analysis and its link to infrastructure dependencies and interdependencies; discuss how resilience applies to PSAs, including a discussion of the aforementioned updates to programs and tools used to do vulnerability assessments; and explain how DHS’s focus on resilience can benefit asset owners and operators. According to the Acting Deputy Director of PSCD, PSCD is expected to deliver the training to PSAs again during regional conferences to foster further discussions about resiliency and to give PSAs an additional opportunity to ask questions about the training they received in April 2010. Although DHS’s training discusses how resiliency applies to PSAs and how it can benefit asset owners and operators, DHS has not updated guidance that discusses PSA roles and responsibilities related to resiliency. The guidance DHS has provided to PSAs on certain key job tasks, issued in 2008, includes discussions about how PSAs are to (1) implement their role and responsibilities during a disaster; (2) conduct vulnerability assessments; and (3) establish or enhance existing strong relationships between asset owners and operators and DHS, federal, state, and local law enforcement personnel. However, the guidance does not articulate the role of PSAs with regard to resiliency issues, or how PSAs are to promote resiliency strategies and practices to asset owners and operators. For example, our review of DHS’s engagement guidance for PSAs showed that the guidance does not explicitly discuss resiliency; rather, it focuses primarily on protection. Specifically, the executive summary of the guidance states that one of the key infrastructure protection roles for DHS in fiscal year 2008 was to form partnerships with the owners and operators of the nation’s identified high-priority CIKR, known as level 1 and level 2 assets and systems. The guidance describes particular PSA responsibilities with regard to partnerships, including (1) identifying protective measures currently in place at these facilities and tracking the implementation of any new measures into the future; (2) informing owners and operators of the importance of their facilities in light of the ever-present threat of terrorism; and (3) establishing or enhancing existing relationships between owners and operators, DHS, and federal, state, and local law enforcement personnel to provide increased situational awareness regarding potential threats, knowledge of the current security posture at each facility, and a federal resource to asset owners and operators. 
There is one reference to a resiliency-related concept in an appendix where DHS indicated that the criteria to identify level 2 assets in the Information Technology sector should be “those assets that provide incident management capabilities, specifically, sites needed for rapid restoration or continuity of operations.” PSA program officials said that they are currently developing guidelines on a number of issues as DHS transitions from a CIKR program heavily focused on protection to one that incorporates and promotes resiliency. They said that PSAs do not currently have roles and responsibilities specific to “resiliency” because resiliency is a concept that has only recently gained significant and specific attention. They added that PSA roles and responsibilities, while not specifically mentioning resiliency, include component topics that comprise or otherwise contribute to resiliency as it is now defined. Nonetheless, the Acting Deputy Director of IP’s PSCD said that he envisions updating PSA guidance to incorporate resiliency concepts and that he intends to outline his plan for doing so in October 2010 as part of IP’s program planning process. However, he was not specific about the changes he plans to make to address resiliency concepts or whether the PSA’s roles and responsibilities related to resiliency would be articulated. According to standards for internal control in the federal government, management is responsible for developing and documenting the detailed policies and procedures to ensure that they are an integral part of operations. By updating PSA guidance that discusses the role PSAs play in assisting asset owners and operators, including how PSAs can work with them to mitigate vulnerabilities and strengthen their security, PSA program officials would be better positioned to help asset owners and operators have the tools they need to develop resilience strategies. This would be consistent with DHS efforts to train PSAs about resiliency and how it affects asset owners and operators. Updating PSA guidelines to address resiliency issues would also be consistent with DHS’s efforts to treat resiliency on an equal footing with protection, and would comport with DHS guidance that calls for SSAs to enhance their discussion of resiliency and resiliency strategies in SSPs. DHS Could Better Position Itself to Disseminate Information about Resiliency Practices with Asset Owners and Operators within and across Sectors DHS’s efforts to emphasize resiliency in the programs and tools it uses to work with asset owners and operators also creates an opportunity for DHS to better position itself to disseminate information about resiliency practices to asset owners and operators within and across sectors. Currently, DHS shares information on vulnerabilities and protective measures on a case-by-case basis. However, while it is uniquely positioned and has considered disseminating information about resiliency practices, DHS faces barriers in doing so and has not developed an approach for sharing this information more broadly, across sectors. DHS Shares Information on Vulnerabilities and Protective Measures on a Case-by-Case Basis According to the NIPP, its effective implementation is predicated on active participation by government and private-sector partners in meaningful, multidirectional information sharing. 
The NIPP states that when asset owners and operators are provided with a comprehensive picture of threats or hazards to CIKR and participate in ongoing multidirectional information flow, their ability to assess risks, make prudent security investments, and develop appropriate resiliency strategies is substantially enhanced. Similarly, according to the NIPP, when the government is provided with an understanding of private-sector information needs, it can adjust its information collection, analysis, synthesis, and dissemination accordingly. Consistent with the NIPP, DHS shares information on vulnerabilities and potential protective measures with asset owners and operators after it has collected and analyzed information during SAVs and ECIP security surveys performed at their individual facilities. This information includes vulnerabilities DHS has identified and corresponding steps these owners and operators can take to mitigate them, including options for consideration, which are suggestions presented to owners and operators to help them resolve vulnerabilities identified during DHS’s assessments. For example, DHS issues SAV reports to owners and operators that, among other things, identify vulnerabilities; help them identify their security posture; provide options for consideration to increase their ability to detect and prevent terrorist attacks; and enhance their ability to mitigate vulnerabilities. Regarding the ECIP security survey, DHS provides owners and operators with an ECIP “dashboard,” which shows the results for each component of the survey for a facility using an index, called the Protective Measures Index (PMI), composed of scores DHS prepares for the facility and its individual components that can be compared with the scores of other, similar facilities. SAV reports and the ECIP dashboard generally focus on similar protection issues, such as facility or physical security, security personnel, and access control. The SAV reports and the ECIP dashboard discuss some continuity of operations issues that could be considered resiliency related. For example, the ECIP dashboard contains PMIs focused on whether the facility has a continuity plan and conducts continuity exercises, while the SAV report discusses whether the facility would be able to operate if resources such as electricity, water, or natural gas were not available. As discussed earlier, DHS is currently updating the SAV to include, among other things, an assessment of resiliency characteristics and gaps, and is taking action to develop a resiliency dashboard similar to that used under the ECIP security survey. Senior IP officials also stated that they share information on steps owners and operators can take to protect their facilities via Common Vulnerabilities, Potential Indicators, and Protective Measures (CV/PI/PM) reports. DHS develops and disseminates these reports to various stakeholders, generally on a need-to-know basis, including specific owners and operators, such as those that have been included in assessments by PSAs; law enforcement officials, emergency responders, and state homeland security officials; and others who request access to the reports. These reports, which focus on vulnerabilities and security measures associated with terrorist attacks, are intended to provide information on potential vulnerabilities and specific protective measures that various stakeholders can implement to increase their security posture.
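To illustrate the kind of component-level comparison the ECIP dashboard described above supports, the sketch below compares a facility’s scores against peer averages for a handful of components. It is a minimal, hypothetical example: the component names, scoring scale, and all values are assumptions made for illustration only and do not reflect DHS’s actual PMI methodology or data.

    # Hypothetical dashboard-style comparison of a facility's component scores
    # against peer averages; all component names and values are invented.
    facility_pmi = {
        "physical_security": 62,
        "security_management": 71,
        "security_force": 55,
        "information_sharing": 48,
    }
    peer_average_pmi = {
        "physical_security": 58,
        "security_management": 65,
        "security_force": 60,
        "information_sharing": 52,
    }

    def compare_to_peers(facility, peers):
        """Return each component's facility score, peer average, and gap."""
        return {
            component: {
                "facility": score,
                "peer_average": peers[component],
                "gap": score - peers[component],
            }
            for component, score in facility.items()
        }

    for component, result in compare_to_peers(facility_pmi, peer_average_pmi).items():
        print(f"{component}: facility {result['facility']}, "
              f"peer average {result['peer_average']}, gap {result['gap']:+d}")

In a comparison of this kind, a negative gap would flag a component on which the facility trails similar facilities, which is the sort of signal a dashboard can surface for owners and operators.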
According to DHS, these reports are developed from DHS’s experiences and observations gathered over time from a range of security-related vulnerability assessments, including SAVs, performed at infrastructure in various sectors and subsectors, such as the chemical and commercial facilities sectors, and at asset types within those sectors, such as the chemical hazardous storage industry and the restaurant industry, respectively. For example, like other CV/PI/PM reports, DHS’s report on the restaurant industry gives a brief overview of the industry; potential indicators of terrorist activity; common vulnerabilities; and protective measures. Common vulnerabilities include unrestricted public access and open access to food; potential indicators of terrorist activity include arson, small arms attack, persons wearing unusually bulky clothing to conceal explosives, and unattended packages; and protective measures include developing a comprehensive security plan to prepare for and respond to food tampering and providing appropriate signage to restrict access to nonpublic areas. The CV/PI/PM reports discuss aspects of resiliency such as infrastructure interdependencies and incident response, but they do not discuss other aspects of resiliency. For example, the report on restaurants discusses protective measures including providing security and backup for critical utility services, such as power or water––efforts that may also enhance the resiliency of restaurants. Moving forward, as its efforts to emphasize resiliency evolve, DHS could consider including other aspects of resiliency in the CV/PI/PM reports. DHS Is Uniquely Positioned to Disseminate Information about Resiliency Practices but Faces Barriers Senior IP officials told us that they have considered ways to disseminate information that DHS currently collects or plans to collect with regard to resiliency. However, they have not explored the feasibility of developing an approach for doing so. Senior IP officials explained that, given the voluntary nature of the CIKR partnership, DHS should not be viewed as identifying or promoting practices, particularly best practices, which could be construed to be standards or requirements. They said that DHS goes to great lengths to provide assurance to owners and operators that the information gathered during assessments will not be provided to regulators. They also stated that they provide owners and operators assurance that they will not share proprietary information with competitors. For example, certain information that they collect is protected under the Protected Critical Infrastructure Information (PCII) program, which institutes a means for the voluntary sharing of certain private sector, state, and local CIKR information with the federal government while providing assurance that the information will be exempt from disclosure under the Freedom of Information Act, among other things, and will be properly safeguarded. DHS has established a PCII program office, which, among other things, is responsible for validating information provided by CIKR partners as PCII and developing protocols to access and safeguard information that is deemed PCII. IP senior officials further explained that DHS relies on its private-sector partners to develop and share information on practices they use to enhance their protection and resilience.
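As a rough sketch of how the four-part report structure described above could be represented as a simple data record, the example below mirrors the restaurant report (overview, potential indicators, common vulnerabilities, and protective measures). The field names and record layout are assumptions made for illustration only and are not DHS’s actual report format.

    from dataclasses import dataclass, field
    from typing import List

    # Hypothetical record mirroring the four parts of a CV/PI/PM-style report;
    # the layout is illustrative only, not DHS's actual format.
    @dataclass
    class CvPiPmReport:
        sector_or_asset_type: str
        overview: str
        potential_indicators: List[str] = field(default_factory=list)
        common_vulnerabilities: List[str] = field(default_factory=list)
        protective_measures: List[str] = field(default_factory=list)

    restaurant_report = CvPiPmReport(
        sector_or_asset_type="Restaurant industry",
        overview="Brief overview of the industry.",
        potential_indicators=["arson", "small arms attack",
                              "unusually bulky clothing", "unattended packages"],
        common_vulnerabilities=["unrestricted public access", "open access to food"],
        protective_measures=["comprehensive security plan for food tampering",
                             "signage restricting access to nonpublic areas"],
    )

    print(restaurant_report.sector_or_asset_type, "report lists",
          len(restaurant_report.common_vulnerabilities), "common vulnerabilities")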
They said that the practices shared by sector partners, including best practices, are largely identified and developed by the private sector, at times with the support of its partners in government, such as the SSAs. DHS facilitates this process by making various mechanisms available for information sharing, including information that partners deem to be best practices. For example, according to senior IP officials, DHS’s Homeland Security Information Network-Critical Sectors (HSIN-CS) was designed to provide each sector a portal to post useful or important information, such as activities or concepts that private-sector partners discern to be best practices on protection and resiliency topics. They also said that one factor to consider is that resiliency can mean different things to different sectors, as measures or strategies that are applicable or inherent to one sector may not be applicable to another given the unique characteristics of each sector. For example, the energy sector, which includes oil refineries, is inherently different from the government facilities sector, which includes government office buildings. In our March 2010 report on DHS’s increased emphasis on resilience in the NIPP, we reported that DHS officials told us that the balance between protection and resiliency is unique to each sector and that the extent to which any one sector increases the emphasis on resiliency in its sector-specific plans will depend on the nature of the sector and the risks to its CIKR. Further, the Branch Chief of IP’s Office of Information Coordination and Analysis explained that differences in corporate cultures across the spectrum of companies could be a barrier to widely disseminating information on resiliency practices because it is often challenging to translate information, such as what constitutes a success or failure, from one company to another. He further stated that differences in the regulatory structures affecting different industries may be a factor that could limit the extent to which certain types of information could be disseminated. We recognize that DHS faces barriers to sharing information it gathers on resiliency practices within and among sectors. However, as the primary federal agency responsible for coordinating and enhancing the protection and resiliency of critical infrastructure across the spectrum of CIKR sectors, DHS is uniquely positioned to disseminate this information, which would be consistent with the NIPP’s emphasis on information sharing. By working to explore ways to address any challenges or barriers to sharing resiliency information, DHS could build upon the partnering and information-sharing arrangements that CIKR owners and operators use in their own communities. For example, our work at CIKR assets along the Gulf Coast in Texas and in southern California showed that asset owners and operators viewed resiliency as critical to their facilities because it is in their best interests either to keep a facility operating during and after an event or to rebound as quickly as possible following an event. They said that they rely on a variety of sources for information to enhance their ability to be more resilient if a catastrophic event occurs, including information-sharing or partnering arrangements within and among CIKR partners and their local communities.
Each of the 15 owners and operators we contacted in Texas and California said that they have partnering relationships with their sector coordinating councils, local/state government, law enforcement, emergency management, or mutual aid organizations. Furthermore, 14 of the 15 said that they work with these organizations to share information, including best practices and lessons learned from recent disasters. Among the owners and operators we contacted:

Representatives of one facility said that following a recent event, their company shared lessons learned with the local mutual aid association and various trade associations. These officials said that they also share best practices within the industry and across their facilities in other locations on an ongoing basis and that the company is currently organizing a committee made up of security staff from each facility within the organization whose primary responsibility is expected to be the sharing of best practices.

Officials representing another facility told us that following an event or a drill, they critique the event and their response to garner any lessons learned or best practices. They said that they share information with the local fire department and a regional trade association. These officials stated that they will share information with other trade association members if they believe that it would be beneficial to others, but will not discuss proprietary information.

Officials representing a different facility said that, following a hurricane in the same area, the company’s managers from various facilities met to share lessons learned and adopted best practices from other facilities within the same company and with external partners, including a mutual aid organization and local emergency responders. They said that they also have learned from the experiences of others—after an explosion at a similar company’s facility, they became aware that the other company had located its administration building too close to its own operations, thereby jeopardizing employee safety.

By developing an approach for disseminating information it gathers or intends to gather with regard to resiliency, DHS would then be in a position to reach a broader audience across sectors or in different geographic locations. Senior IP officials said that they agree that disseminating information on resiliency practices broadly across the CIKR community would be a worthwhile exercise, but questioned whether IP would be the right organization within DHS to develop an approach for sharing resiliency information. They said that IP does not currently have the resources to perform this function and suggested that an organization like the Federal Emergency Management Agency (FEMA) might be more appropriate for sharing information on resiliency because it already has mechanisms in place to share information on practices organizations can adopt to deal with all-hazards events, including terrorism. For example, FEMA manages DHS’s Lessons Learned Information Sharing portal, called LLIS.gov, which is a national online network of lessons learned and best practices designed to help emergency response providers and homeland security officials prevent, prepare for, and respond to all hazards, including terrorism.
According to FEMA officials, LLIS.gov contains information on critical infrastructure protection and resiliency, and system users, such as state and local government officials, are encouraged to submit content, which is then vetted and validated by subject matter experts before being posted to the system. FEMA officials explained that FEMA does not actively collect information from system users, but encourages them to submit documents for review and possible inclusion into LLIS.gov. According to FEMA, access to LLIS.gov is restricted to members that request access to the system, particularly emergency response providers and homeland security officials. In March 2010, FEMA’s Outreach and Partnerships Coordinator for Lessons Learned Information Sharing told us that LLIS.gov had about 55,000 members, of which approximately 89 percent were representatives of state and local government; about 6 percent were representatives of private-sector organizations; and about 5 percent were representatives of the federal government. Regardless of which DHS organization would be responsible for disseminating information on resiliency practices, we recognize that DHS will face challenges in addressing any barriers it believes could hinder its ability to disseminate resiliency information. As part of this effort, DHS would have to determine what resiliency information it is collecting or plans to collect that might be most appropriate to share and what safeguards would be needed to protect against the disclosure of proprietary information within the confines of the voluntary nature of the CIKR partnership. Also, in doing so, DHS could consider some of the following questions:

What additional actions, if any, would DHS need to take to convey that the information is being gathered within the voluntary framework of the CIKR partnership?

To what extent does DHS need to take additional actions, if any, to provide assurance that the information being disseminated is nonregulatory and nonbinding on the owners and operators that access it?

What additional mechanisms, if any, does DHS need to establish to provide assurance that reinforces the PCII process, and how can resiliency practices information be presented to avoid disclosures of information that is PCII, security sensitive, or proprietary in nature?

What mechanism or information system is most suitable for disseminating resiliency practices information, and which DHS component would be responsible for managing this mechanism or system?

What approach should DHS take to review the information before it is disseminated to ensure that resiliency practices identified by DHS at one facility or in one sector are valid and viable, and applicable across facilities and sectors?

What additional resources, and at what additional cost, if any, would DHS need to devote to gathering and broadly disseminating information about resiliency practices across facilities and sectors?

What actions can DHS take to measure the extent to which asset owners and operators are using resiliency information provided by DHS, and how can DHS use this information to make improvements, if needed?

By determining the feasibility of overcoming barriers and developing an approach for disseminating resiliency information, DHS could better position itself to help asset owners and operators consider and adopt resiliency strategies, and provide them with information on potential security investments, based on the practices and experiences of their peers both within and across sectors.
Conclusions In the wake of concerns by stakeholders, including members of Congress, academia, and the private sector, that DHS was placing emphasis on protection rather than resilience, DHS has increased its emphasis on critical infrastructure resiliency in the NIPP. Consistent with these changes, DHS has also taken actions to increase its emphasis on resilience in the programs and tools it uses to assess vulnerability and risk that are designed to help asset owners and operators identify resiliency characteristics and gaps. These actions continue to evolve and could be improved if DHS were to strengthen program management by developing measures to assess the extent to which asset owners and operators are taking actions to address resiliency gaps identified during vulnerability assessments, and by updating PSA guidelines to articulate PSA roles and responsibilities with regard to resiliency during their interactions with asset owners and operators. By developing performance measures to assess the extent to which asset owners and operators are taking actions to resolve resiliency gaps identified during the various vulnerability assessments, DHS would, consistent with the NIPP, be better positioned to demonstrate effectiveness in promoting resiliency among the asset owners and operators it works with and would have a basis for analyzing performance gaps. DHS managers would also have a valuable tool to help them assess where problems might be occurring, or alternatively provide insights into the tools used to assess vulnerability and risk and whether they were focusing on the correct elements of resiliency at individual facilities or groups of facilities. Furthermore, by updating PSA guidance to discuss the role PSAs play during interactions with asset owners and operators, including how PSAs can work with them to mitigate vulnerabilities and strengthen their security, DHS would have greater assurance that PSAs are equipped to help asset owners and operators obtain the tools they need to develop resilience strategies. This would also be consistent with DHS efforts to train PSAs about resiliency and how it affects asset owners and operators. Related to its efforts to develop or update its programs designed to assess vulnerability at asset owners’ and operators’ individual facilities and groups of facilities, DHS has considered how it can disseminate the information on resiliency practices it gathers or plans to gather to asset owners and operators within and across sectors. However, it faces barriers in doing so because it would have to overcome perceptions that it is advancing or promoting standards that have to be adopted, as well as concerns about sharing proprietary information. We recognize that DHS would face challenges disseminating information about resiliency practices within and across sectors, especially since resiliency can mean different things to different sectors. Nonetheless, as the primary federal agency responsible for coordinating and enhancing the protection and resiliency of critical infrastructure across the spectrum of CIKR sectors, DHS is uniquely positioned to disseminate this information.
By determining the feasibility of overcoming barriers and developing an approach for disseminating resiliency information, DHS could better position itself to help asset owners and operators consider and adopt resiliency strategies, and provide them with information on potential security investments, based on the practices and experiences of their peers within the CIKR community, both within and across sectors. Recommendations for Executive Action To better ensure that DHS’s efforts to incorporate resiliency into its overall CIKR protection efforts are effective and completed in a timely and consistent fashion, we recommend that the Assistant Secretary for Infrastructure Protection take the following two actions:

develop performance measures to assess the extent to which asset owners and operators are taking actions to resolve resiliency gaps identified during the various vulnerability assessments; and

update PSA guidance to discuss the role PSAs play during interactions with asset owners and operators with regard to resiliency, which could include how PSAs work with them to emphasize how resiliency strategies could help them mitigate vulnerabilities and strengthen their security posture, and how PSAs can provide suggestions for enhancing resiliency at particular facilities.

Furthermore, we recommend that the Secretary of Homeland Security assign responsibility to one or more organizations within DHS to determine the feasibility of overcoming barriers and developing an approach for disseminating information on resiliency practices to CIKR owners and operators within and across sectors. Agency Comments and Our Evaluation We provided a draft of this report to the Secretary of Homeland Security for review and comment. In written comments, DHS agreed with two of our recommendations and said that it needed additional time to consider the third internally. Regarding our first recommendation that IP develop performance measures to assess the extent to which asset owners and operators are taking actions to resolve resiliency gaps identified during vulnerability assessments, DHS said that IP had developed measures on owners’ and operators’ efforts to implement enhancements to security and resilience, and that National Protection and Programs Directorate (NPPD) officials are reviewing these new performance metrics. With regard to our second recommendation to update guidance that discusses the role PSAs play during interactions with asset owners and operators about resiliency, DHS said that IP is actively updating PSA program guidance to reflect the evolving concept of resilience and will include information on resilience in the next revision to the PSA program management plan. Finally, regarding our third recommendation that DHS assign responsibility to one or more organizations within DHS to determine the feasibility of developing an approach for disseminating information on resiliency practices, DHS said that its components need time to further consider the recommendation and will respond to GAO and Congress at a later date. DHS also provided technical comments, which we incorporated as appropriate. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Secretary of Homeland Security, the Under Secretary for the National Protection and Programs Directorate, appropriate congressional committees, and other interested parties.
If you have any further questions about this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. Appendix I: Comments from the Department of Homeland Security Appendix II: GAO Contact and Staff Acknowledgments Staff Acknowledgments In addition to the contact named above, John F. Mortin, Assistant Director, and Katrina R. Moss, Analyst-in-Charge, managed this assignment. Katherine M. Davis, Anthony J. DeFrank, Michele C. Fejfar, Tracey L. King, Landis L. Lindsey, Thomas F. Lombardi, Lara R. Miklozek, Steven R. Putansu, Edith N. Sohna, and Alex M. Winograd made significant contributions to the work. Related GAO Products Critical Infrastructure Protection: Updates to the 2009 National Infrastructure Protection Plan and Resiliency in Planning. GAO-10-296. Washington, D.C.: March 5, 2010. The Department of Homeland Security’s (DHS) Critical Infrastructure Protection Cost-Benefit Report. GAO-09-654R. Washington, D.C.: June 26, 2009. Influenza Pandemic: Opportunities Exist to Address Critical Infrastructure Protection Challenges That Require Federal and Private Sector Coordination. GAO-08-36. Washington, D.C.: October 31, 2007. Critical Infrastructure: Sector Plans Complete and Sector Councils Evolving. GAO-07-1075T. Washington, D.C.: July 12, 2007. Critical Infrastructure Protection: Sector Plans and Sector Councils Continue to Evolve. GAO-07-706R. Washington, D.C.: July 10, 2007. Critical Infrastructure: Challenges Remain in Protecting Key Sectors. GAO-07-626T. Washington, D.C.: March 20, 2007. Critical Infrastructure Protection: Progress Coordinating Government and Private Sector Efforts Varies by Sectors’ Characteristics. GAO-07-39. Washington, D.C.: October 16, 2006. Critical Infrastructure Protection: Challenges for Selected Agencies and Industry Sectors. GAO-03-233. Washington, D.C.: February 28, 2003. Critical Infrastructure Protection: Commercial Satellite Security Should Be More Fully Addressed. GAO-02-781. Washington, D.C.: August 30, 2002. Cyber Security Critical Infrastructure Protection: Current Cyber Sector-Specific Planning Approach Needs Reassessment. GAO-09-969. Washington, D.C.: September 24, 2009. Cybersecurity: Continued Federal Efforts Are Needed to Protect Critical Systems and Information. GAO-09-835T. Washington, D.C.: June 25, 2009. Information Security: Cyber Threats and Vulnerabilities Place Federal Systems at Risk. GAO-09-661T. Washington, D.C.: May 5, 2009. National Cybersecurity Strategy: Key Improvements Are Needed to Strengthen the Nation’s Posture. GAO-09-432T. Washington, D.C.: March 10, 2009. Critical Infrastructure Protection: DHS Needs to Better Address Its Cybersecurity Responsibilities. GAO-08-1157T. Washington, D.C.: September 16, 2008. Critical Infrastructure Protection: DHS Needs to Fully Address Lessons Learned from Its First Cyber Storm Exercise. GAO-08-825. Washington, D.C.: September 9, 2008. Cyber Analysis and Warning: DHS Faces Challenges in Establishing a Comprehensive National Capability. GAO-08-588. Washington, D.C.: July 31, 2008. Critical Infrastructure Protection: Further Efforts Needed to Integrate Planning for and Response to Disruptions on Converged Voice and Data Networks. GAO-08-607. Washington, D.C.: June 26, 2008. Information Security: TVA Needs to Address Weaknesses in Control Systems and Networks. GAO-08-526. Washington, D.C.: May 21, 2008. 
Critical Infrastructure Protection: Sector-Specific Plans’ Coverage of Key Cyber Security Elements Varies. GAO-08-64T. Washington, D.C.: October 31, 2007. Critical Infrastructure Protection: Sector-Specific Plans’ Coverage of Key Cyber Security Elements Varies. GAO-08-113. Washington, D.C.: October 31, 2007. Critical Infrastructure Protection: Multiple Efforts to Secure Control Systems are Under Way, but Challenges Remain. GAO-07-1036. Washington, D.C.: September 10, 2007. Critical Infrastructure Protection: DHS Leadership Needed to Enhance Cybersecurity. GAO-06-1087T. Washington, D.C.: September 13, 2006. Critical Infrastructure Protection: Challenges in Addressing Cybersecurity. GAO-05-827T. Washington, D.C.: July 19, 2005. Critical Infrastructure Protection: Department of Homeland Security Faces Challenges in Fulfilling Cybersecurity Responsibilities. GAO-05-434. Washington, D.C.: May 26, 2005. Critical Infrastructure Protection: Improving Information Sharing with Infrastructure Sectors. GAO-04-780. Washington, D.C.: July 9, 2004. Technology Assessment: Cybersecurity for Critical Infrastructure Protection. GAO-04-321. Washington, D.C.: May 28, 2004. Critical Infrastructure Protection: Establishing Effective Information Sharing with Infrastructure Sectors. GAO-04-699T. Washington, D.C.: April 21, 2004. Critical Infrastructure Protection: Challenges and Efforts to Secure Control Systems. GAO-04-628T. Washington, D.C.: March 30, 2004. Critical Infrastructure Protection: Challenges and Efforts to Secure Control Systems. GAO-04-354. Washington, D.C.: March 15, 2004. Posthearing Questions from the September 17, 2003, Hearing on “Implications of Power Blackouts for the Nation’s Cybersecurity and Critical Infrastructure Protection: The Electric Grid, Critical Interdependencies, Vulnerabilities, and Readiness”. GAO-04-300R. Washington, D.C.: December 8, 2003. Critical Infrastructure Protection: Challenges in Securing Control Systems. GAO-04-140T. Washington, D.C.: October 1, 2003. Critical Infrastructure Protection: Efforts of the Financial Services Sector to Address Cyber Threats. GAO-03-173. Washington, D.C.: January 30, 2003. High-Risk Series: Protecting Information Systems Supporting the Federal Government and the Nation’s Critical Infrastructures. GAO-03-121. Washington, D.C.: January 1, 2003. Critical Infrastructure Protection: Federal Efforts Require a More Coordinated and Comprehensive Approach for Protecting Information Systems. GAO-02-474. Washington, D.C.: July 15, 2002. Critical Infrastructure Protection: Significant Challenges in Safeguarding Government and Privately Controlled Systems from Computer-Based Attacks. GAO-01-1168T. Washington, D.C.: September 26, 2001. Critical Infrastructure Protection: Significant Challenges in Protecting Federal Systems and Developing Analysis and Warning Capabilities. GAO-01-1132T. Washington, D.C.: September 12, 2001. Critical Infrastructure Protection: Significant Challenges in Developing Analysis, Warning, and Response Capabilities. GAO-01-1005T. Washington, D.C.: July 25, 2001. Critical Infrastructure Protection: Significant Challenges in Developing Analysis, Warning, and Response Capabilities. GAO-01-769T. Washington, D.C.: May 22, 2001. Critical Infrastructure Protection: Significant Challenges in Developing National Capabilities. GAO-01-323. Washington, D.C.: April 25, 2001. Critical Infrastructure Protection: Challenges to Building a Comprehensive Strategy for Information Sharing and Coordination. GAO/T-AIMD-00-268. Washington, D.C.: July 26, 2000.
Critical Infrastructure Protection: Comments on the Proposed Cyber Security Information Act of 2000. GAO/T-AIMD-00-229. Washington, D.C.: June 22, 2000. Critical Infrastructure Protection: “ILOVEYOU” Computer Virus Highlights Need for Improved Alert and Coordination Capabilities. GAO/T-AIMD-00-181. Washington, D.C.: May 18, 2000. Critical Infrastructure Protection: National Plan for Information Systems Protection. GAO/AIMD-00-90R. Washington, D.C.: February 11, 2000. Critical Infrastructure Protection: Comments on the National Plan for Information Systems Protection. GAO/T-AIMD-00-72. Washington, D.C.: February 1, 2000. Critical Infrastructure Protection: Fundamental Improvements Needed to Assure Security of Federal Operations. GAO/T-AIMD-00-7. Washington, D.C.: October 6, 1999. Critical Infrastructure Protection: Comprehensive Strategy Can Draw on Year 2000 Experiences. GAO/AIMD-00-1. Washington, D.C.: October 1, 1999. Defense Critical Infrastructure Protection Defense Critical Infrastructure: Actions Needed to Improve Identification and Management of Electrical Power Risks and Vulnerabilities to DoD Critical Assets. GAO-10-147. Washington, D.C.: October 23, 2009. Defense Critical Infrastructure: Actions Needed to Improve the Consistency, Reliability, and Usefulness of DOD’s Tier 1 Task Critical Asset List. GAO-09-740R. Washington, D.C.: July 17, 2009. Defense Critical Infrastructure: Developing Training Standards and an Awareness of Existing Expertise Would Help DOD Assure the Availability of Critical Infrastructure. GAO-09-42. Washington, D.C.: October 30, 2008. Defense Critical Infrastructure: Adherence to Guidance Would Improve DOD’s Approach to Identifying and Assuring the Availability of Critical Transportation Assets. GAO-08-851. Washington, D.C.: August 15, 2008. Defense Critical Infrastructure: DOD’s Risk Analysis of Its Critical Infrastructure Omits Highly Sensitive Assets. GAO-08-373R. Washington, D.C.: April 2, 2008. Defense Infrastructure: Management Actions Needed to Ensure Effectiveness of DOD’s Risk Management Approach for the Defense Industrial Base. GAO-07-1077. Washington, D.C.: August 31, 2007. Defense Infrastructure: Actions Needed to Guide DOD’s Efforts to Identify, Prioritize, and Assess Its Critical Infrastructure. GAO-07-461. Washington, D.C.: May 24, 2007. Electrical Power Electricity Restructuring: FERC Could Take Additional Steps to Analyze Regional Transmission Organizations’ Benefits and Performance. GAO-08-987. Washington, D.C.: September 22, 2008. Department of Energy, Federal Energy Regulatory Commission: Mandatory Reliability Standards for Critical Infrastructure Protection. GAO-08-493R. Washington, D.C.: February 21, 2008. Electricity Restructuring: Key Challenges Remain. GAO-06-237. Washington, D.C.: November 15, 2005. Meeting Energy Demand in the 21st Century: Many Challenges and Key Questions. GAO-05-414T. Washington, D.C.: March 16, 2005. Electricity Restructuring: Action Needed to Address Emerging Gaps in Federal Information Collection. GAO-03-586. Washington, D.C.: June 30, 2003. Restructured Electricity Markets: Three States’ Experiences in Adding Generating Capacity. GAO-02-427. Washington, D.C.: May 24, 2002. Energy Markets: Results of FERC Outage Study and Other Market Power Studies. GAO-01-1019T. Washington, D.C.: August 2, 2001. Other Combating Terrorism: Observations on National Strategies Related to Terrorism. GAO-03-519T. Washington, D.C.: March 3, 2003. Critical Infrastructure Protection: Significant Challenges Need to Be Addressed. GAO-02-961T.
Washington, D.C.: July 24, 2002. Critical Infrastructure Protection: Significant Homeland Security Challenges Need to Be Addressed. GAO-02-918T. Washington, D.C.: July 9, 2002.
According to the Department of Homeland Security (DHS), protecting and ensuring the resiliency (the ability to resist, absorb, recover from, or successfully adapt to adversity or changing conditions) of critical infrastructure and key resources (CIKR) is essential to the nation's security. By law, DHS is to lead and coordinate efforts to protect several thousand CIKR assets deemed vital to the nation's security, public health, and economy. In 2006, DHS created the National Infrastructure Protection Plan (NIPP) to outline the approach for integrating CIKR and increased its emphasis on resiliency in its 2009 update. GAO was asked to assess the extent to which DHS (1) has incorporated resiliency into the programs it uses to work with asset owners and operators and (2) is positioned to disseminate information it gathers on resiliency practices to asset owners and operators. GAO reviewed DHS documents, such as the NIPP, and interviewed DHS officials and 15 owners and operators of assets selected on the basis of geographic diversity. The results of these interviews are not generalizable but provide insights. DHS's efforts to incorporate resiliency into the programs it uses to work with asset owners and operators are evolving, but program management could be strengthened. Specifically, DHS is developing or updating programs to assess vulnerability and risk at CIKR facilities and within groups of related infrastructure, regions, and systems to place greater emphasis on resiliency. However, DHS has not made commensurate efforts to measure asset owners' and operators' actions to address resiliency gaps. DHS operates its Protective Security Advisor Program, which deploys critical infrastructure protection and security specialists, called Protective Security Advisors (PSA), to assist asset owners and operators with CIKR protection strategies, and has provided guidelines to PSAs on key job tasks, such as how to establish relationships between asset owners and operators and DHS, federal, state, and local officials. DHS has provided training to PSAs on resiliency topics, but has not updated PSA guidelines to articulate the role of PSAs with regard to resiliency issues, or how PSAs are to promote resiliency strategies and practices to asset owners and operators. A senior DHS official described plans to update PSA guidelines and the intent to outline this plan in October 2010, but did not provide information on what changes would be made to articulate PSA roles and responsibilities with regard to resiliency. By developing measures to assess the extent to which asset owners and operators are addressing resiliency gaps and updating PSA guidance, DHS would be better positioned to manage its efforts to help asset owners and operators enhance their resiliency. DHS faces barriers to disseminating information about resiliency practices across the spectrum of asset owners and operators. DHS shares information on potential protective measures with asset owners and operators and others, including state and local officials, generally on a case-by-case basis, after it has completed vulnerability assessments at CIKR facilities. DHS officials told GAO that they have considered ways to disseminate information that they collect or plan to collect with regard to resiliency. However, DHS faces barriers to sharing information about resiliency strategies.
For example, given the voluntary nature of the CIKR partnership, DHS officials stated that DHS should not be viewed as identifying and promoting practices that could be construed by CIKR partners to be standards. Also, according to DHS officials, the need for and the emphasis on resiliency can vary across different types of facilities depending on the nature of the facility. For example, an oil refinery is inherently different from a government office building. DHS's efforts to emphasize resiliency when developing or updating the programs it uses to work with owners and operators create an opportunity for DHS to position itself to disseminate information about resiliency practices within and across the spectrum of asset owners and operators. By determining the feasibility of overcoming barriers and developing an approach for disseminating information on resiliency practices within and across sectors, DHS could better position itself to help asset owners and operators consider and adopt resiliency strategies.
Background Recent estimates indicate that illicit drug use in the United States remains a major problem. In 1996, an estimated 13 million people were current drug users—that is, they had used illicit drugs in the past month—which was down from a peak of 25 million in 1979. The number of current illicit drug users has remained relatively static since 1992. Marijuana is the most commonly used illicit drug, with about 10.1 million users in 1996. About half (54 percent) of the 1996 illicit drug users used marijuana only, while another 23 percent used marijuana and one or more other drugs. The remaining 23 percent of illicit drug users used only a drug other than marijuana. The number of current cocaine users declined from 5.7 million people in 1985 to 1.75 million in 1996. The estimated number of crack cocaine users in 1996 was about 668,000 and has remained steady at about this level since 1988. However, the use of heroin has been increasing recently, rising from 68,000 current users in 1993 to 216,000 current users in 1996. Among 12- to 17-year-old adolescents, current drug use rose from 5.3 percent in 1992 to 10.9 percent in 1995 but declined in 1996 to 9.0 percent. This decline is attributable to reductions in use among youth aged 12 to 15; for those aged 16 and 17, there was no change in current use from 1995 to 1996. The rate of marijuana use among adolescents more than doubled from 1992 to 1995. By 1996, 7.1 percent of adolescents had used marijuana in the past month. The same year, 0.6 percent of adolescents were current cocaine users, and 0.2 percent were current heroin users. Previous month use of hallucinogens nearly doubled from 1994 to 1996, from 1.1 percent to 2 percent. Billions of Federal Dollars Support Drug Abuse Treatment As part of its overall drug control effort, the federal government provides significant support for activities related to drug abuse treatment, including grants to states, direct services, and research. Fiscal year 1998 federal funding for treatment of drug abuse is approximately $3.2 billion, or about one-fifth of the total drug control budget. The Congress has authorized HHS and VA to spend the vast majority of federal drug abuse treatment funds. One-Fifth of Federal Spending on Drug Control Supports Treatment Activities Federal spending on drug control recognizes four general areas of emphasis: demand reduction (which includes prevention, treatment, and related research), domestic law enforcement, interdiction, and international cooperation. For fiscal year 1998, the federal government budgeted a total of about $16 billion for drug control activities. The largest share of this budget—53 percent—supported domestic law enforcement activities. Drug abuse treatment accounted for 20 percent and prevention, for 14 percent; the remainder was allocated to interdiction and international efforts. (See fig. 1.) The proportion of drug control spending to reduce the demand for drugs has remained fairly constant since the mid-1980s at about one-third of the total. Since the early 1990s, federal spending for drug control has grown steadily. Total federal drug control funding rose by 64 percent, from about $9.8 billion in 1990 to about $16 billion in 1998. (See fig. 2.) During this period, the drug treatment budget increased slightly faster, 78 percent, growing from about $1.8 billion in fiscal year 1990 to $3.2 billion in fiscal year 1998. An additional $237 million above the 1998 level was requested for fiscal year 1999 treatment funding. 
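As a quick check of the growth figures cited above, the following minimal sketch recomputes the percentage increases from the rounded dollar amounts given in the text; small differences from the stated percentages (for example, 63 versus 64 percent) reflect rounding of the underlying budget figures.

    # Recompute growth rates from the rounded figures cited in the text
    # (billions of dollars); differences from the stated percentages reflect rounding.
    total_1990, total_1998 = 9.8, 16.0         # total drug control funding
    treatment_1990, treatment_1998 = 1.8, 3.2  # drug treatment funding

    total_growth = (total_1998 - total_1990) / total_1990 * 100
    treatment_growth = (treatment_1998 - treatment_1990) / treatment_1990 * 100

    print(f"Total drug control growth, 1990-1998: {total_growth:.0f}%")     # about 63%
    print(f"Treatment funding growth, 1990-1998: {treatment_growth:.0f}%")  # about 78%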
HHS and VA Receive Most Federal Funds for Drug Abuse Treatment Activities Although a number of federal entities—including the Department of Justice, the Department of Education, and the Judiciary—receive treatment-related funding, HHS and VA receive the bulk of federal drug abuse treatment dollars (see table 1). For fiscal year 1998, HHS has been authorized to spend about $1.7 billion on drug abuse treatment—54 percent of all federal treatment dollars. For the same year, VA has received about $1.1 billion for drug abuse treatment and related costs, which is 34 percent of the federal treatment budget. Of the total growth in federal expenditures for drug abuse treatment between 1994 and 1998—about $557 million—increased funding to VA accounted for about 44 percent and to HHS, 33 percent. Of HHS’ $1.7 billion drug treatment budget for 1998, more than half ($944 million) was dedicated to the Substance Abuse and Mental Health Services Administration (SAMHSA) to support the treatment components of its Substance Abuse Performance Partnership Grants to states and the Knowledge Development and Application Program. Approximately 80 percent of SAMHSA’s total budget is distributed to the states through block grants and formula programs. SAMHSA has requested an increase of $143 million in fiscal year 1999 Substance Abuse Performance Partnership Grants funding to make treatment available to more of those who need it. The Health Care Financing Administration received $360 million in fiscal year 1998 to pay for drug abuse treatment services for Medicaid and Medicare beneficiaries. Eighty percent of this amount finances Medicaid treatment expenses, including all covered hospital and nonhospital services required. The remaining 20 percent covers Medicare hospital insurance treatment costs. The National Institutes of Health (NIH) received about one-sixth of HHS’s drug treatment funds to conduct research in the areas of drug abuse and underage alcohol use. For 1999, NIH has requested funding ($51 million) for its Drug and Underage Alcohol Research initiative to expand research on underage alcohol and drug addiction among children and adolescents, as well as chronic drug users, and to support increased dissemination of research findings. Recognizing the need to improve research on the infrastructure that delivers treatment, the Congress mandated in 1992 that the National Institute on Drug Abuse (NIDA) obligate at least 15 percent of its funding to support research on the impact of the organization, financing, and management of health services on issues such as access and quality of services. In 1998, VA was appropriated about $1.1 billion for inpatient and outpatient medical care provided to veterans with a diagnosis of drug abuse, as well as for drug abuse treatment services. Special substance abuse treatment services are available at 126 medical facilities. Additional monies support treatment research in coordination with NIDA. Other federal agencies that received drug treatment funds for fiscal year 1998 include the Departments of Education and Justice (each received more than $100 million), the federal Judiciary (about $75 million), and ONDCP (about $24 million). From 1994 to 1998, Justice’s funding rose 163 percent; moreover, its 1999 funding request would increase its funding by another third. 
The Department of Justice has requested about $83 million for fiscal year 1999 to support its Drug Intervention Program, a new program that would support drug testing, treatment, and graduated sanctions for drug offenders, in an effort to break the cycle of drug abuse and violence. The Government Performance and Results Act was enacted in 1993 in part as a means to improve performance measurement by federal agencies. It requires agencies to set goals, measure performance, and report on their accomplishments and thus should provide a useful framework for assessing the effectiveness of federally funded drug treatment efforts. However, demonstrating the efficient and effective use of federal drug abuse treatment funds is particularly challenging because most of these funds support services provided by state and local grantees, which are given broad discretion in how best to use them. Regardless, federal agencies are now required by the Results Act to hold states accountable for achieving federal goals for effective treatment outcomes. Drug Treatment Services Are Provided in a Variety of Settings Drug addiction is a complicated disorder that includes physiological, behavioral, and psychological aspects. For example, the environmental cues that have been associated with drug use can trigger craving and precipitate relapse, even after long periods of abstinence. Despite the potential for relapse to drug use, not all drug users require treatment services to discontinue use. For those who do require treatment, services may be provided in either outpatient or inpatient settings, and via two major approaches: pharmacotherapy and behavioral therapy, with many programs combining elements of both. Other treatment approaches, such as faith-based strategies, have yet to be rigorously examined by the research community. Nature of Drug Abuse In general, drug abuse is defined by the level and pattern of drug consumption and the severity and persistence of resulting functional problems. A diagnosis of drug abuse is generally made when drug use has led to social, legal, or interpersonal problems. A clinical diagnosis of drug dependence—or addiction—is based on a group of criteria including physiological, behavioral, and cognitive factors. In particular, drug addiction is characterized by compulsive drug-seeking behavior. People who are dependent on drugs often use multiple drugs and usually have substantial impairment of health and social functioning. Furthermore, addiction is generally accompanied by withdrawal symptoms and drug tolerance, resulting in the need to increase the amount of drugs consumed. Moreover, severe dependence is often associated with health conditions or impairments in social functioning, including mental health disorders that generally are serious and difficult to treat. Drug abusers are more likely than nonabusers to sustain injuries; be involved in violence and illegal activities; have chronic health problems, including a higher risk of contracting HIV (human immunodeficiency virus); and have difficulty holding a job. Most scientists agree that addiction is the result of chemical and physical changes in the brain caused by drug use. However, they recognize that addiction extends beyond physiological components to include significant behavioral and psychological aspects. For example, specific environmental cues that a drug abuser associates with drug use can trigger craving and precipitate relapse, even after long periods of abstinence. 
Therefore, people receiving treatment for drug abuse often enter treatment a number of times—sometimes in different approaches or settings, and sometimes in the same approach or even the same treatment facility. Often, the substance abuser reduces his or her drug use incrementally with each treatment episode. Experts recognize that not all drug users require treatment to forgo drug use because some drug users do not progress to abuse or dependence. Even among those who progress to the stage of abuse, some can stop drug use without treatment. This issue was addressed in a study of Vietnam veterans’ rapid recovery from heroin addiction. Forty-five percent of enlisted Army men had tried narcotics in Vietnam, and 20 percent reported the development of an addiction to narcotics. However, in the first year after their return home, only 5 percent of those addicted in Vietnam remained addicted in the United States. The author concluded that most addictions are relatively brief, and that most drug abusers are capable of discontinuing drug use without treatment. This view is controversial; others contend that the Vietnam veterans’ experience is an anomaly resulting from the drastic change in environment when they returned home. Drug Abuse Treatment Approaches and Settings Data from 1992-93 on use of drug treatment in the United States (the most current available) show that about 1.4 million people received drug treatment during the previous year. According to SAMHSA, the individuals in drug treatment were those with the most extreme patterns of drug use: the highest frequency of drug use, use of the least typical drug types, and early initiation of use. Most of the group in treatment had received treatment in multiple settings, most commonly in drug treatment facilities and self-help groups. Only about one-fourth of those who needed drug treatment in the previous year reported having received it during that year. Adolescents (aged 12 to 17) were even less likely to receive needed treatment, with 18 percent of those needing treatment receiving it. The treatment of drug addiction can be classified under two major approaches: pharmacotherapy and behavioral therapy. Pharmacotherapy relies on medications to block the euphoric effects or manage the withdrawal symptoms and cravings experienced with illicit drug use. One such widely used medication is methadone, a narcotic analgesic that blocks the euphoria of heroin, morphine, codeine, and other opiate drugs and suppresses withdrawal symptoms and craving between treatment doses. Methadone maintenance generally requires daily clinic visits to receive the methadone dose; over time, some clients are given take-home doses. Methadone maintenance can continue for as long as several years, and in some cases, maintenance may last a lifetime. A number of other drugs have also been shown to be safe and efficacious in the treatment of opiate addiction. Levo-alpha-acetylmethadol (LAAM) suppresses withdrawal symptoms for 72 to 96 hours and thus can reduce clients’ clinic visits to 3 days per week. Naltrexone, like LAAM, is long-acting and can be administered in small daily doses or in larger doses 3 times a week. Naltrexone is believed to be most effective for highly motivated clients, especially those with strong social supports. Buprenorphine has been effective in clinical trials in retaining patients in treatment and facilitating abstinence. In addition, buprenorphine has been shown to produce less physical dependence than methadone and LAAM. 
Behavioral therapy includes various forms of psychotherapy, contingency-based therapy, cognitive therapy, and other types of therapies. It may include skills training and a variety of counseling approaches, from highly structured individual or family counseling to more informal group counseling. Some programs combine elements of both pharmacotherapy and behavioral therapy. For example, many methadone maintenance programs are designed to also provide counseling services, which may include psychotherapy or individualized social assistance. Participation in counseling facilitates regular monitoring of client behavior, appearance, and drug use. Some outpatient nonmethadone programs also use pharmacological treatment, such as medications for initial detoxification, medications to control craving, or drugs that address psychiatric disorders such as depression or schizophrenia. Drug abusers receiving pharmacotherapy, behavioral therapy, or both may also participate in self-help groups, such as Alcoholics Anonymous, Narcotics Anonymous, or Rational Recovery, and are generally encouraged to continue participation in these groups after leaving formal treatment to help maintain abstinence and a healthy lifestyle. A number of other, less commonly used approaches to drug treatment offer alternatives to these established approaches. One such example is the use of spirituality as a component of treatment. Some researchers have acknowledged that people with a strong spiritual or religious involvement seem to be at lower risk for substance abuse, yet research in this area remains extremely limited. Experts have yet to agree on how to define faith-based drug treatment. Some define faith-based programs as those that are based on religious beliefs and practices, such as Teen Challenge, while others consider any treatment approach that recognizes spirituality, such as Narcotics Anonymous or Cocaine Anonymous, to be faith-based. Regardless of how faith-based treatment is defined, there has not been sufficient research to determine the results of this type of treatment. For example, a recent research conference assessed the evidence on spiritual treatment for alcohol and drug abuse. The panel found strong evidence for a few limited assertions: that better treatment outcomes correlate with Alcoholics Anonymous involvement after outpatient treatment and that meditation-based interventions are associated with reduced levels of alcohol and drug use. The panel concluded that the issues for future research in this area include the definition and measurement of spiritual variables and the possible spiritual factors that could play a role in recovery from substance abuse. Regardless of the approach used, drug treatment services are provided in both inpatient and outpatient settings. Most people are served by outpatient programs, where treatment can vary from psychotherapy at comprehensive health centers to informal group discussions at drop-in centers. People who enter outpatient drug-free treatment generally (though not always) have a less severe level of addiction and associated problems than those who receive treatment in inpatient settings. Although weekly counseling is the predominant treatment approach available in outpatient settings, some programs also offer pharmacological treatment and some give assistance with social needs, including education, job training, housing, and health care. Inpatient settings include hospitals as well as residential facilities, such as therapeutic communities.
Hospital-based drug treatment is used for detoxification from drugs and to provide other services for individuals with severe medical or psychiatric complications. Data from 1992-93 show that, of the group reporting drug treatment during the past year, 28 percent received treatment in an inpatient hospital setting. Chemical dependency programs, one type of inpatient treatment program, recognize drug problems as having multiple causes, including physiological, psychological, and sociocultural aspects. Treatment may last up to several weeks and may include pharmacological intervention, education about drug addiction, counseling, participation in self-help groups, and medical or psychiatric services. Long-term residential treatment programs are designed for people with more severe drug problems—those with dependence on one or more drugs who have failed previous treatment efforts. For example, therapeutic communities provide treatment that is generally planned for 6 to 12 months in a residential setting. Clients are generally chronic drug abusers who have failed at other forms of drug abuse treatment, while staff are largely former drug abusers. Strict behavioral expectations and responsibilities are enforced to emphasize appropriate social and vocational norms. Research Issues Make Assessment of Treatment Effectiveness Difficult The study of drug treatment programs is complicated by a number of challenging methodological and implementation issues. Evaluations of treatment effectiveness can use one of several methodologies, depending on the specific questions to be addressed. Thus, the appropriateness of the study design and how well the evaluation is conducted determine the confidence to be placed in the research findings. In particular, studies of the validity of self-reported data demonstrate that information on treatment outcomes collected by self-report should be interpreted with some caution. The ability to compare the results of effectiveness studies is also influenced, and often limited, by differences in how outcomes are measured, how programs are operated, and client variables. Quality of Evidence Varies by Study Design Drug treatment effectiveness research conducted over the past 2 decades has used a variety of designs, including randomized clinical trials, simple or controlled observation, and quasiexperimental designs. Selection of the study design depends on a number of factors, including the questions being addressed and the resources available to fund the study. Methodologists agree that randomized clinical trials are the most rigorous study designs and therefore offer the strongest support for their findings. Studies that rely on a simple observational design produce less definitive findings but can provide a good indication of the operation of drug treatment programs as well as information on treatment outcomes. A quasiexperimental design, the most frequently used in field settings, falls somewhere in between. Randomized clinical studies are designed to isolate the effects of a treatment by randomly assigning individuals either to a control group, which receives no treatment or an alternative treatment, or to a group that receives the treatment being studied. This study design has been used in the assessment of methadone maintenance for treating heroin addiction.
Randomized trials are often used to study the efficacy of a treatment, asking the question, “Can it work?” Although such studies provide the most definitive information about whether particular treatments are effective, they are not widely used in drug treatment research. According to an analysis by the Lewin Group, among the reasons cited for the limited use of randomized trials are the difficulties in obtaining informed consent from drug abusers and the perceived ethical issue of randomly assigning people who are seeking drug treatment to a control group in which no treatment or a treatment regimen not of the client’s choice is provided. Simple and controlled observation designs typically employ a repeated-measures methodology, whereby the researchers collect information on drug use patterns and other criteria from clients before, during, and after treatment. Generally, controlled observation studies examine multiple treatment groups, and simple observation studies follow a single treatment group without a nontreatment comparison group. Observational studies provide information about the effectiveness of treatments when implemented in uncontrolled, or real-world, conditions. Observational design has been used to assess treatment provided in all four of the major treatment settings: residential therapeutic communities and outpatient methadone maintenance, outpatient drug-free, and inpatient chemical dependency programs. Quasiexperimental study designs generally have a comparison group, a key feature of strong research design, but an investigator does not randomly assign individuals to treatment and comparison groups. Instead, comparisons are made between possibly nonequivalent client groups or by using statistical techniques that adjust for known differences in client characteristics. Even in a quasiexperimental design, a repeated-measures methodology might be used in comparing the behaviors of the same group of drug abusers before, during, and after treatment. A quasiexperimental design is often applied in evaluations of naturally occurring events, such as introducing a new treatment approach or closing a treatment program. Such a design allows greater confidence (than observation alone) that any differences detected are due to treatment but not as much confidence as random assignment of clients to treatment and comparison groups. Quasiexperimental study designs have been used to assess the effectiveness of both methadone maintenance programs and therapeutic communities as well as outpatient drug-free programs. Treatment Evaluations Define and Measure Outcomes Differently Treatment program goals generally include a wide range of issues, such as reducing drug use, reducing criminal behavior, and improving employment status. Most researchers have agreed that reducing drug use from the level it would have been without treatment (harm reduction) is a valid goal of drug treatment and an indication of program success. In addressing this issue, researchers acknowledge that abstinence from illicit drug consumption is the central goal of all drug treatment, but they contend it is not the only acceptable goal of treatment, since total abstinence from drug use may be unrealistic for many users. 
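To illustrate the repeated-measures and quasiexperimental logic described above, the brief sketch below contrasts a simple before-and-after comparison for one group with a comparison against a nonrandomized comparison group. The groups and rates are hypothetical placeholders, not data from any study discussed in this report.

```python
# Illustrative sketch only: hypothetical before/after rates of weekly drug
# use for a treated group and a nonrandomized comparison group. These
# numbers are placeholders, not findings from DARP, TOPS, DATOS, or NTIES.

rates = {
    "treated":    {"before": 0.70, "after": 0.40},
    "comparison": {"before": 0.68, "after": 0.60},
}

def pre_post_change(group):
    """Repeated-measures change for one group (after minus before)."""
    return rates[group]["after"] - rates[group]["before"]

# Simple observation: the treated group's own before/after change.
simple_estimate = pre_post_change("treated")

# Quasiexperimental contrast: the treated group's change relative to the
# comparison group's change, which nets out trends common to both groups.
adjusted_estimate = pre_post_change("treated") - pre_post_change("comparison")

print(round(simple_estimate, 2), round(adjusted_estimate, 2))   # -0.3 -0.22
```

The contrast shows why a comparison group matters: part of the apparent improvement in the treated group may reflect changes that would have occurred without treatment.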
According to the Institute of Medicine, “an extended abstinence, even if punctuated by slips and short relapses, is beneficial in itself and may serve as a critical intermediate step toward lifetime abstinence and recovery.” Even with harm reduction as the common objective, treatment outcome measures vary among—and sometimes within—treatment programs. Operationalizing the outcome measures is also done differently across programs, which makes it difficult to compare treatment outcomes of different programs. For example, one program may measure reduction in drug use by examining the frequency of drug use, while another may choose to focus on reduced relapse time. Major drug treatment studies use other outcomes as well to measure treatment effectiveness, ranging from reductions in criminal activity to increased productivity. Indicators for these outcome measures also vary by study. (See table 2.) Another issue related to measuring treatment outcomes is concern about the time frame for client follow-up. Since drug addiction is commonly viewed as a life-long disease, many argue that long-term follow-up is needed to fully assess treatment outcomes. However, many of those who complete treatment programs are lost in the follow-up assessment period. Treatment assessment periods vary considerably, ranging from a 1-year follow-up for most studies to a 12-year follow-up for a subset of clients in one of the major studies we reviewed. The research literature indicates difficulties in tracking drug abusers even for 1-year follow-up periods. For example, of the group selected for follow-up interviews in the Drug Abuse Treatment Outcome Study (DATOS), only 70 percent actually completed the interview. Reliance on Self-Reported Data Has Limitations With all types of study designs, data collection issues can hamper assessments of treatment effectiveness. The central debate regarding data collection on the use of illicit drugs surrounds the common use of self-reported data. A recent NIDA review of current research on the validity of self-reported drug use highlights the limitations of data collected in this manner. According to this review, recent studies conducted with criminal justice clients (such as people on parole, on probation, or awaiting trial) and former treatment clients suggest that 50 percent or fewer current users accurately report their drug use in confidential interviews. In general, self-reports are less valid for the more stigmatized drugs, such as cocaine; for more recent rather than past use; and for those involved with the criminal justice system. The largest studies of treatment effectiveness, which have evaluated the progress of thousands of people in drug treatment programs, have all relied on self-reported data. That is, the drug abuser is surveyed when entering treatment, and then again at a specified follow-up interval. In general, individuals are asked, orally or in writing, to report their drug use patterns during the previous year. Self-reports of drug use may be subject to bias both prior to and following treatment and can be either over- or understated. Drug abusers may inflate their current level of drug use when presenting for treatment if they believe that higher levels of use will increase the likelihood of acceptance into treatment. Drug use may also be underreported at treatment intake or follow-up. 
Motivations cited for underreporting include the client’s desire to reflect a positive outcome from treatment and the perception of a strong societal stigma associated with the use of particular drugs. As questions have developed about the accuracy of self-reported data, researchers have begun using objective means to validate the data collected in this manner, although these methods also have limitations. Generally, a subgroup of the individuals surveyed after treatment is asked to provide either a urine sample or a hair sample, which is then screened for evidence of drug use. The results from the urinalysis or hair analysis are then compared against self-reports of drug use. Some researchers believe that it may be possible to systematically adjust self-reported data to correct for the biases exposed by urinalysis or hair analysis, although this technique is not currently in use. Recent major studies of drug treatment effectiveness have used urinalysis to validate self-reported data. For example, the National Treatment Improvement Evaluation Study (NTIES) found that self-reports of recent drug use (in the past 30 days) for opiates and cocaine were lower than current drug use as revealed by urinalysis. However, the self-reports of substance use over the entire follow-up period (that is, use on at least five occasions) yielded an equivalent or higher rate of use than the results of analyzing urine specimens collected at the follow-up interview. (See table 3.) Other studies found similar underreporting of drug use. The Treatment Outcome Prospective Study (TOPS), which followed people entering treatment in the early 1980s, reported that only 40 percent of the individuals testing positive for cocaine 24 months after treatment had reported using the drug in the previous 3 days. Despite the discrepancies observed, each of the data collection methods used to measure treatment effectiveness has particular weaknesses. As shown above, validation studies indicate that self-reports of current drug use understate actual drug use. At the same time, researchers emphasize that client reporting on use of illicit drugs during the previous year (the outcome measure used in most effectiveness evaluations) has been shown to be more accurate than reporting on current drug use. In comparison, urine tests can accurately detect illicit drugs for about 48 hours following drug use. However, urinalysis does not provide any information about drug use during the previous year. In addition, individual differences in metabolism rates can affect the outcomes of urinalysis tests. Hair analysis has received attention because it can detect drug use over a longer time—up to several months. However, unresolved issues in hair testing include variability across drugs in the accuracy of detection, the potential for passive contamination, and the relative effect of different hair color or type on cocaine accumulation in the hair. To examine the validity of self-reported data on other outcome measures, NTIES researchers compared self-reports on arrests to official arrest records and found 80 percent agreement, with underreporting of arrest histories most frequent among individuals interviewed in prison or jail and among men under 25 years of age. Researchers also compared self-reports of treatment completion, primary drug use, and demographic data with program records and found high levels of concordance between records and individual self-reports; for example, 92 percent agreed on whether a client completed the prescribed treatment.
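The adjustment idea mentioned above, scaling a self-reported rate by the reporting rate observed in a urinalysis validation subsample, can be illustrated with simple arithmetic. This is a hypothetical sketch: as noted, no such adjustment is currently in use, and the counts below are placeholders rather than NTIES or TOPS data.

```python
# Hypothetical illustration of adjusting a self-reported prevalence using a
# urinalysis validation subsample. Not an established method; the report
# notes such adjustments are not currently in use. All numbers are placeholders.

def reporting_rate(confirmed_users, confirmed_users_who_reported):
    """Share of urinalysis-confirmed users who also self-reported use."""
    return confirmed_users_who_reported / confirmed_users

def adjusted_prevalence(self_reported_rate, rate_of_reporting):
    """Scale the self-reported prevalence up by the observed reporting rate,
    capped at 100 percent."""
    return min(1.0, self_reported_rate / rate_of_reporting)

# Placeholder validation subsample: 200 clients test positive at follow-up,
# and 80 of them reported recent use (a 40 percent reporting rate, similar
# in spirit to the TOPS figure cited above).
r = reporting_rate(confirmed_users=200, confirmed_users_who_reported=80)

# Placeholder self-reported prevalence of recent use in the full sample.
observed = 0.15

print(adjusted_prevalence(observed, r))   # 0.375
```

The sketch also makes the limitation plain: the quality of any such correction depends entirely on how representative the validation subsample is of the full follow-up sample.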
Variation in Program Operations and Client Factors Makes Comparisons Difficult Research results often do not account for the tremendous variation in program operations, such as differences in standards of treatment, staff levels and expertise, and level of coordination with other services. For example, surveys of the dosages used in methadone maintenance programs have shown that a large proportion of programs use suboptimal or even subthreshold dosages, which would likely result in poorer treatment outcomes than those of programs that provide optimal dosage levels to their clients. Similarly, outpatient drug-free programs operate with different numbers and quality of staff and have varying levels of coordination with local agencies that offer related services that are generally needed to support recovering abusers. An outpatient drug-free program that has close ties with local services, such as health clinics and job training programs, is likely to have better treatment outcomes than a program without such ties. Assessing treatment effectiveness is also complicated by differences in client factors. Researchers recognize that client motivation and readiness for treatment, as well as psychiatric status, can significantly affect the patient’s performance in treatment. For example, unmotivated clients are less likely than motivated ones to adhere to program protocols and to continue treatment. In studies of pharmacotherapy for opiate addiction, researchers have found that patients with high motivation to remain drug-free—such as health professionals, parolees, and work-release participants—have better treatment outcomes. Studies Indicate Benefits From Treatment, but Evidence Varies on Best Approaches for Specific Groups Major studies have shown that drug treatment is beneficial, although concerns about the validity of self-reported data suggest that the degree of success may be overstated. In large-scale evaluations conducted over the past 20 years, researchers have concluded that treatment reduces the number of regular drug users as well as criminal activity. In addition, these studies demonstrate that longer treatment episodes are more effective than shorter ones. Research also indicates that the amount and strength of evidence available to support particular treatment approaches for specific groups of drug abusers vary. Consistent Evidence Shows Drug Treatment Is Beneficial, but Outcomes May Be Overstated Numerous large-scale studies that examined the outcomes of treatment provided in a variety of settings have found drug treatment to be beneficial. Clients receiving treatment report reductions in drug use and criminal activity, with better treatment outcomes associated with longer treatment duration. However, studies examining the validity of self-reported data suggest that a large proportion of individuals do not report the full extent of drug use following treatment. Therefore, the findings from these major studies of treatment effectiveness—all of which relied on self-reported data as the primary data collection method—may be somewhat inflated. Major Studies Report Reductions in Drug Use and Crime Following Treatment Comprehensive analyses of the effectiveness of drug treatment have been conducted by several major studies over a period of nearly 30 years: DATOS, NTIES, TOPS, and the Drug Abuse Reporting Program (DARP) (see table 4). These large, multisite studies were designed to assess drug abusers on several measures before, during, and after treatment. 
These studies are generally considered by the Institute of Medicine and the drug treatment research community to be the major evaluations of drug treatment effectiveness, and much of what is known about typical drug abuse treatment outcomes comes from these studies. These federally funded studies were conducted by research organizations independent of the groups operating the treatment programs being assessed. Although the characteristics of the studies vary somewhat, all are based on observational or quasiexperimental designs. The most recently completed study, DATOS, is a longitudinal study that used a prospective design and a repeated-measures methodology to study the complex interactions of client characteristics and treatment elements as they occur in typical community-based programs. NTIES, completed in March 1997, was a congressionally mandated, 5-year study that examined the effectiveness of treatment provided in public programs supported by SAMHSA. All of these studies relied on self-report as the primary data collection method. That is, drug abusers were interviewed prior to entering treatment and again following treatment, and asked to report on their use of illicit drugs, their involvement in criminal activity, and other drug-related behaviors. As described previously in this report, studies examining the validity of self-reported data suggest that many individuals do not report the full extent of drug use following treatment. Since results from the major studies of treatment effectiveness were not adjusted for the likelihood of underreported drug use (as revealed by urinalysis substudies), the study results that follow may overstate reductions in drug use achieved by drug abusers. Researchers contend that the bias in self-reports on current drug use is greater than the bias in self-reports on past year use and that therefore the overall findings of treatment benefits are still valid. Each of these major studies attributed benefits to drug treatment when outcomes were assessed 1 year after treatment. They found that reported drug use declined when clients received treatment from any of three drug treatment approaches—residential long-term, outpatient drug-free, or outpatient methadone maintenance—regardless of the drug and client type. As shown in table 5, DATOS, the study most recently completed, found that the percentage of individuals reporting weekly or more frequent drug use or criminal activity declined following treatment. Previous studies found similar reductions in drug use. For example, researchers from the TOPS study found that across all types of drug treatment, 40 to 50 percent of regular heroin and cocaine users who spent at least 3 months in treatment reported near abstinence during the year after treatment, and an additional 30 percent reported reducing their use. DARP found that in the year after treatment, abstinence from daily opiate use was reported by 64 percent of clients in methadone programs, 61 percent in therapeutic communities, and 56 percent in outpatient drug-free programs. NTIES found that 50 percent of clients in treatment reported using crack cocaine five times or more during the year prior to entering treatment, while 25 percent reported such use during the year following treatment. The major studies also found that criminal activity declined after treatment. DATOS found that reports of criminal activity declined by 60 percent for cocaine users in long-term residential treatment at the 1-year follow-up. 
Only 17 percent of NTIES clients reported arrests in the year following treatment—down from 48 percent during the year before treatment entry. Additionally, the percentage of clients who reported supporting themselves primarily through illegal activities decreased from 17 percent before treatment to 9 percent after treatment. DARP found reported reductions in criminal activity for clients who stayed in treatment at least 3 months. Longer Treatment Episodes Have Better Outcomes, but Treatment Duration Is Limited by Client Drop-Out Another finding across these studies is that clients who stay in treatment longer report better outcomes. For the DATOS clients that reported drug use when entering treatment, fewer of those in treatment for more than 3 months reported continuing drug use than those in treatment for less than 3 months (see table 6). DATOS researchers also found that the most positive outcomes for clients in methadone maintenance were for those who remained in treatment for at least 12 months. Earlier studies reported similar results. Both DARP and TOPS found that reports of drug use were reduced most for clients who stayed in treatment at least 3 months, regardless of the treatment setting. In fact, DARP found that treatment lasting 90 days or less was no more effective than no treatment at facilitating complete abstinence from drug use and criminal behavior during the year following treatment. Although these studies show better results for longer treatment episodes, they found that many clients dropped out of treatment long before reaching the minimum length of treatment episode recommended by those operating the treatment program. For example, a study of a subset of DATOS clients found that all of the participating methadone maintenance programs recommend 2 or more years of treatment, but the median treatment episode by clients was about 1 year. Long-term residential programs participating in DATOS generally recommended a treatment duration of 9 months or longer, while outpatient drug-free programs recommended at least 6 months in treatment; for both program types, the median treatment episode was 3 months. TOPS found that in the first 3 months of treatment, 64 percent of outpatient drug-free program clients and 55 percent of therapeutic community clients discontinued treatment. For clients receiving methadone maintenance treatment, drop-out rates were somewhat lower—32 percent—in the first 3 months. Researchers note that drug abuse treatment outcomes should be considered comparable to those of other chronic diseases; therefore, significant dropout rates should not be unexpected. These results are similar to the levels of compliance with treatment regimens for people with chronic diseases such as diabetes and hypertension. A review of over 70 outcome studies of treatment for diabetes, hypertension, and asthma found that less than 50 percent of people with diabetes fully comply with their insulin treatment schedule, while less than 30 percent of patients with hypertension or asthma comply with their medication regimens. Research Suggests That Outpatient Treatment Reduces Drug Use as Much as Residential Treatment, but Costs Vary Widely A 1990 Institute of Medicine assessment of the treatment literature concluded that despite the heterogeneity of the programs and their clients, treatment outcomes are “qualitatively similar” regardless of whether treatment is provided in a residential or outpatient setting. 
In 1997, an ONDCP report showed that 34 percent of clients in outpatient treatment were no longer “heavy users” following treatment, while 38 percent of clients in residential settings reported the same. Evidence from the recent DATOS study confirmed that reported reductions in cocaine use were similar for outpatient drug-free and residential settings when clients remained in treatment for at least 3 months. Researchers point out, however, that drug abusers with more severe problems may be more likely to receive treatment in residential settings than in outpatient settings, making such comparisons difficult. However, analysis of the data from DATOS showed mixed results on the impact of treatment on drug-related criminal activity. Clients in long-term residential treatment for at least 6 months were significantly less likely than clients who did not complete more than 13 weeks of treatment to report engaging in an illegal activity in the year after treatment. In contrast, clients in methadone or drug-free treatment in an outpatient setting who remained for at least 6 months were not significantly less likely to report engaging in illegal activity than clients who did not complete more than 13 weeks of treatment in these settings. Although the available evidence does not show sharp differences in outcomes, studies do show wide variation in treatment costs for inpatient and outpatient settings. A recent NTIES study found that costs per day were lowest in outpatient settings, where the average treatment period is several months. In contrast, short-term (1 month) residential treatment costs were much higher, resulting in a cost per treatment episode that was double the cost of outpatient treatment episodes. (See table 7.) Despite the findings of similar outcomes and the wide variation in costs, there is still reason to support residential treatment for certain patients. In some cases, residential treatment may be required for optimum treatment outcomes, such as for drug abusers with severe substance-related problems, those who have failed in outpatient treatment, or those with severe psychosocial impairments. In contrast, patients with greater psychosocial stability and less substance-related impairment appear to benefit most from nonhospital and nonresidential treatment. Evidence Varies on the Best Treatment Approaches for Specific Groups of Drug Abusers Research provides strong evidence to support methadone maintenance as the most effective treatment for heroin addiction. However, research on the most effective treatment interventions for other groups of drug abusers is less definitive. Promising treatment approaches for other groups include cognitive-behavioral therapy for treatment of cocaine abuse and family-based therapy for adolescent drug users. Research Supports Methadone Maintenance as the Most Effective Treatment for Heroin Addiction A number of approaches have been used in treating heroin addiction. Methadone maintenance, however, is the treatment most commonly used, and numerous studies have shown that those receiving methadone maintenance treatment have better outcomes than those who go untreated or use other treatment approaches—including detoxification with methadone. Methadone maintenance has been shown to reduce heroin use and criminal activity and improve social functioning. HIV risk also declines, since needle use is reduced.
Proponents of methadone maintenance also argue that reductions in the use of illicit drugs and associated criminal behavior help recovering drug abusers focus on their social and vocational rehabilitation and become reintegrated into society. However, outcomes among methadone programs have varied greatly, in part because of the substantial variation in treatment practices across the nation. Many methadone clinics have routinely provided clients dosage levels that are lower than optimum—or even subthreshold—and have discontinued treatment too soon. In late 1997, an NIH consensus panel concluded that people who are addicted to heroin or other opiates should have broader access to methadone maintenance treatment programs and recommended that federal regulations allow additional physicians and pharmacies to prescribe and dispense methadone. Similarly, several studies conducted over the past decade show that when counseling, psychotherapy, health care, and social services are provided along with methadone maintenance, treatment outcomes improve significantly. However, the recent findings from DATOS suggest that the provision of these ancillary services—both the number and variety—has eroded considerably during the past 2 decades across all treatment settings. DATOS researchers also noted that the percentage of clients reporting unmet needs was higher than that in previous studies. There are other concerns associated with methadone maintenance. For example, methadone is often criticized for being a substitute drug for heroin, which does not address the underlying addiction. Additional concerns center on the extent to which take-home methadone doses are being sold or exchanged for heroin or other drugs. Cognitive-Behavioral Treatments Show Promise for Cocaine Addiction Evidence of treatment effectiveness is not as strong for cocaine addiction as it is for heroin addiction. No pharmacological agent for treating cocaine addiction or reducing cocaine craving has been found. However, an accumulating body of research points to cognitive-behavioral therapies as promising treatment approaches for cocaine addiction. In an earlier report, we noted that treatments used for other drug dependencies, such as methadone maintenance, have not proven useful for treating cocaine dependency. Although a number of pharmacotherapies have been studied and some have proven successful in one or more clinical trials, no medication has demonstrated substantial efficacy once subjected to several rigorously controlled trials. Nor has any medication used in combination with one or more cognitive-behavioral therapies proven effective in enhancing cocaine abstinence. Researchers are hopeful, however, that a pharmacological agent for treating cocaine addiction will be developed. Without a pharmacological agent, researchers have relied on psychotherapeutic approaches to treat cocaine addiction. Studies have shown that clients receiving three cognitive-behavioral therapies have demonstrated prolonged periods of abstinence and high rates of retention in treatment programs. 
The cognitive-behavioral therapies, based largely on counseling and education, include (1) relapse prevention, which focuses on teaching clients how to identify and manage high-risk, or “trigger,” situations that contribute to drug relapse; (2) community reinforcement/contingency management, which establishes a link between behavior and consequence by rewarding abstinence and reprimanding drug use; and (3) neurobehavioral therapy, which addresses a client’s behavioral, emotional, cognitive, and relational problems at each stage of recovery. These programs have shown promise in curbing drug use. One relapse prevention program showed cocaine-dependent clients were able to remain abstinent at least 70 percent of the time while in treatment. A community reinforcement/contingency management program showed that 42 percent of the participating cocaine-dependent clients were able to achieve nearly 4 months of continuous abstinence, while a neurobehavioral program showed that 38 percent of the clients were abstinent at the 6-month follow-up. Family Therapy Is Under Study for Adolescent Drug Abusers Adolescent drug abusers are similar to adult drug abusers in that they are likely to use more than one type of illicit drug and to have coexisting psychiatric conditions. In other ways, they differ from adult drug abusers. Adolescents may have a shorter history of drug abuse and thus less severe symptoms of tolerance, craving, and withdrawal. In addition, they usually do not show the long-term physical effects of drug abuse. Despite a number of studies on the topic, little is known about the best way to treat adolescent drug abusers. Researchers believe that adolescents have special treatment needs; however, research has not shown any one method or approach to be consistently superior to others in achieving better treatment outcomes for adolescents. Among the wide variety of treatment approaches and settings used for adolescents, family-based therapies show promise. Historically, adolescents have been referred to residential treatment settings, which may range from group-home living with minimal professional involvement to a setting that provides intensive medical, psychiatric, and psychosocial treatment 24 hours a day. Experts now recognize that many adolescents can be successfully treated in an outpatient treatment setting, where treatment may range from less than 9 hours per week to regular sessions after school to intensive day programs that provide more than 20 hours of treatment per week. Although not thoroughly evaluated, pharmacotherapy may also be used to treat adolescent drug abuse. Researchers believe that self-help or peer support groups, such as Alcoholics Anonymous, are important adjuncts to treatment for adolescents. The relative effectiveness of alternative approaches for treating adolescents remains uncertain. An earlier study of adolescents found that residential treatment resulted in more substantial and consistent reductions in drug use, drug-related problems, and illegal activity than did outpatient drug-free programs. In contrast, the American Academy of Child and Adolescent Psychiatry acknowledged in its 1997 treatment practice parameters that research on drug treatment for adolescents has failed to demonstrate the superiority of one treatment approach over another. Studies show that success in treatment seems to be linked to the characteristics of program staff, the availability of special services, and family participation. 
Many experts believe that family-based intervention shows promise as an effective treatment for adolescent drug abusers. Family-based intervention is based on the assumption that family behaviors contribute to the adolescent’s decision to use drugs. Many researchers believe that family interventions are critical to the success of any treatment approach for adolescent drug abusers, since family-related factors—such as parental substance use, poor parent-child relations, and poor parent supervision—have been identified as risk factors for the development of substance abuse among adolescents. Family relationships may be the primary target for intervention or one of many target areas. A 1995 literature review suggests that family intervention can engage and retain drug abusers and their families in treatment, significantly reducing drug use and related areas of problem behavior. Further, a 1997 meta-analysis and literature review held family therapy to be superior to other treatment modalities. However, NIDA points out in a soon-to-be published article that further research is needed to identify the best approach to treating adolescent drug abusers. Conclusions With an annual expenditure of more than $3 billion, the federal investment in drug abuse treatment is an important component of the nation’s drug control efforts, and monitoring the performance of treatment programs can help ensure that progress toward the nation’s goals is being achieved. Research on the effectiveness of drug abuse treatment, however, is highly problematic, given the methodological challenges and numerous factors that influence the results of treatment. Although studies conducted over nearly 3 decades consistently show that treatment reduces drug use and crime, current data collection techniques do not allow accurate measurement of the extent to which treatment reduces the use of illicit drugs. Furthermore, research literature has not yet yielded definitive evidence to identify which approaches work best for specific groups of drug abusers. Agency and Other Comments NIDA, SAMHSA, VA, and a private consultant with expertise in drug treatment issues generally acknowledged that methodological and implementation issues make the evaluation of treatment effectiveness difficult. SAMHSA and NIDA also provided extensive and helpful technical comments, which we incorporated into a substantially revised final report. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to interested parties and make copies available upon request. If you have any questions about this report, please call me at (202) 512-7119. Other contributors to this report include Rosamond Katz and Jenny Grover. Bibliographic References for Selected Studies For additional information on the four major studies that we reviewed, see the sources cited below. Drug Abuse Treatment Outcome Study “Drug Abuse Treatment Outcome Study (DATOS).” Psychology of Addictive Behaviors, Vol. 11, No. 4 (1997), pp. 211-323. The National Treatment Improvement Evaluation Study National Opinion Research Center at the University of Chicago. The National Treatment Improvement Evaluation Study—Final Report. Prepared for the Center for Substance Abuse Treatment, SAMHSA, in collaboration with the Research Triangle Institute, Mar. 1997. Treatment Outcome Prospective Study Hubbard, R.L. 
“Evaluation and Treatment Outcome.” Substance Abuse: A Comprehensive Textbook, 2nd ed. Baltimore, Md.: Williams & Wilkins, 1992, pp. 596-611. Hubbard, R.L., and others. Drug Abuse Treatment: A National Study of Effectiveness. Chapel Hill, N.C.: University of North Carolina Press, 1989. Ginzburg, H.M. “Defensive Research—The Treatment Outcome Prospective Study (TOPS).” Annals of the New York Academy of Sciences, Vol. 311 (1978), pp. 265-69. Drug Abuse Reporting Program Simpson, D.D. “Drug Treatment Evaluation Research in the United States.” Psychology of Addictive Behaviors, Vol. 7 (1993), pp. 120-28. Simpson, D.D., and S.B. Sells, eds. Opioid Addiction and Treatment: A 12-Year Follow-up. Malabar, Fla.: Robert E. Krieger, 1990. Simpson, D.D., and S.B. Sells. “Effectiveness of Treatment for Drug Abuse: An Overview of the DARP Research Program.” Advances in Alcohol and Substance Abuse, Vol. 2 (1992), pp. 7-29. Related GAO Products Drug Courts: Overview of Growth, Characteristics, and Results (GAO/GGD-97-106, July 31, 1997). Drug Control: Observations on Elements of the Federal Drug Control Strategy (GAO/GGD-97-42, Mar. 14, 1997). Substance Abuse Treatment: VA Programs Serve Psychologically and Economically Disadvantaged Veterans (GAO/HEHS-97-6, Nov. 5, 1996). Drug and Alcohol Abuse: Billions Spent Annually for Treatment and Prevention Activities (GAO/HEHS-97-12, Oct. 8, 1996). Cocaine Treatment: Early Results From Various Approaches (GAO/HEHS-96-80, June 7, 1996). At-Risk and Delinquent Youth: Multiple Federal Programs Raise Efficiency Questions (GAO/HEHS-96-34, Mar. 6, 1996).
Pursuant to a congressional request, GAO reported on: (1) the level of federal support for drug abuse treatment activities; (2) the treatment approaches and settings most commonly used and what is known about an alternative approach--faith-based treatment; (3) research issues affecting drug abuse treatment evaluations; and (4) research findings on the effectiveness of drug treatment overall as well as what is known about the effectiveness of treatment for heroin, cocaine, and adolescent drug addiction. GAO did not comprehensively analyze the extensive literature on drug treatment research methodologies and study results, independently evaluate the effectiveness of drug treatment programs, or verify the results reported in the studies it reviewed. GAO noted that: (1) billions of dollars are spent annually to support treatment for drug abuse and related research; (2) in 1998, 20 percent of the federal drug control budget, $3.2 billion, supported drug abuse treatment; (3) to meet the requirements of the Government Performance and Results Act, agencies are beginning to set goals and performance measures to monitor and assess the effectiveness of federally funded drug treatment efforts; (4) treatment services and research aim to reduce the number of current drug abusers; (5) experts recognize that not all drug users require treatment because some do not progress to the stage of abuse or dependence; (6) those who do need treatment can receive services in a variety of settings and via two major approaches: pharmacology and behavioral therapy, with many programs combining elements of both; (7) other treatment approaches, such as faith-based strategies, are sometimes used but have not been sufficiently evaluated to determine their effectiveness; (8) measuring the effectiveness of drug abuse treatment is a complex undertaking; (9) the most comprehensive studies have used an observational or quasiexperimental design, assessing effectiveness by measuring drug use before and after treatment; (10) few studies have used the most rigorous approach--random assignment to treatment and control groups--to isolate the particular effects of treatment on drug abuse; (11) in most studies, the conclusions researchers can draw are limited by factors such as reliance on self-reported data and the time frame planned for client followup; (12) furthermore, comparisons of study results are complicated by differences in how outcomes are defined and measured and differences in program operations and client factors; (13) a number of large, multisite, longitudinal studies provide evidence that drug abuse treatment is beneficial, but reliance on self-reported data may overstate effectiveness; (14) substantial numbers of clients report reductions in drug use and criminal activity following treatment; (15) research on treatment effectiveness relies heavily on client self-reports of drug use; (16) when examining recent drug use, objective tests, such as urinalysis, consistently identify more drug users than self-reports do; (17) the research evidence to support the relative effectiveness of specific treatment approaches or settings for particular groups of drug abusers is more varied; (18) methadone maintenance has been shown to be the most effective approach to treating heroin abusers; and (19) research on the best treatment approach or setting for other groups of drug abusers is less definitive.
GAO_GAO-01-625
Background The World Health Organization (WHO) established the U.N.’s first program to respond to HIV/AIDS in 1987. Later that same year, the U.N. General Assembly encouraged WHO to continue its efforts and urged all appropriate U.N. system organizations to support AIDS control efforts. In the early 1990s, U.N. officials and bilateral donors increasingly recognized the need for a multisectoral response to the complex challenges of the HIV/AIDS pandemic, including the social, economic, and development issues contributing to the spread of the virus. They realized that WHO’s medically based approach was insufficient to effectively combat the virus. In response, the United Nations Economic and Social Council established the Joint United Nations Programme on HIV/AIDS (UNAIDS), which began operations in 1996. The mission of UNAIDS is to strengthen and support an expanded response to HIV/AIDS aimed at preventing the transmission of HIV, providing care and support, reducing the vulnerability of individuals and communities to the worldwide epidemic, and alleviating its impact. UNAIDS was not expected to fund the efforts of the global community. Intended to be a model of U.N. reform, UNAIDS is the United Nations’ first joint, cosponsored program of its type. UNAIDS is composed of a Secretariat and seven U.N. cosponsors that act at the global, regional, and country levels. UNAIDS’ Programme Coordinating Board is the governing body for all programmatic issues concerning policy, strategy, finance, and monitoring and evaluation of UNAIDS. Through the Committee of Cosponsoring Organizations, the cosponsors’ executive heads or their representatives meet twice a year to consider matters concerning UNAIDS and to provide input into UNAIDS’ policies and strategies. The UNAIDS Secretariat, headquartered in Geneva, Switzerland, and acting primarily at the global level, is in charge of the overall coordination and management of UNAIDS and leads the International Partnership Against AIDS in Africa. The seven cosponsors are independent U.N. agencies that have programs in regions and countries worldwide. By joining the UNAIDS partnership, they committed to joint planning and action against HIV/AIDS. Cosponsors are charged with integrating HIV/AIDS-related strategies, policies, programs, and activities into the work of their respective agencies. Figure 1 shows the organizational structure of UNAIDS. The United Nations creates U.N. “theme groups” on specific issues to facilitate its efforts at the country level and to promote a more coherent U.N. response to national priorities. For example, one type of theme group focuses on the environment and sustainable development, and another on the empowerment of women. The U.N. has 132 theme groups on HIV/AIDS that serve as UNAIDS’ primary mechanism for assisting developing countries. They are composed primarily of the senior staff of UNAIDS’ cosponsors and are located in-country. The theme groups’ principal objectives are to coordinate the U.N. response at the country level and to support national governments and others in their efforts to mount an effective and comprehensive response to HIV/AIDS. Theme groups are expected to share information, plan, and monitor coordinated actions with their U.N. partners and, in some cases, jointly finance major AIDS-related activities in support of host governments and national partners, such as nongovernmental organizations.
In priority countries, the theme group may be supported by a Country Programme Advisor, a country-based Secretariat staff member. In addition to supporting the broader U.N. system, this advisor is expected to build national commitment to HIV/AIDS action by providing information and guidance to a range of host country partners, including government departments, nongovernmental and community-based organizations, and people living with HIV/AIDS. UNAIDS is funded through voluntary contributions from national governments, cosponsors’ cash contributions, and private donations. None of its funds comes from the U.N. budget or from U.N. member states’ assessed contributions. UNAIDS’ biennium budgets (including the Secretariat’s and cosponsors’ activities at the global and regional levels) were $120 million for both the 1996-1997 biennium and the 1998-1999 biennium. The budget for 2000-2001 is $140 million. Cosponsors also provide funding for their HIV/AIDS-related activities from their core budgets and solicit supplemental funding for their country-level activities from bilateral donors and other sources, such as foundations. The United States is the largest contributor to UNAIDS, providing $34 million for the 1996-1997 biennium, $31 million for the 1998-1999 biennium, and approximately $32 million for the 2000-2001 biennium. The State Department is the United States’ liaison with multilateral organizations such as the United Nations, and the U.S. Agency for International Development (USAID) manages U.S. funding to UNAIDS and coordinates and participates in the U.S. delegation to UNAIDS’ governing board. UNAIDS Has Made Progress Toward Increasing Coordination and Commitment to HIV/AIDS, but Country-Level Efforts Need Strengthening UNAIDS has made progress toward increasing global coordination and commitment to HIV/AIDS since we last reported in 1998. UNAIDS is developing a U.N. system strategic plan that will help coordinate the U.N.’s HIV/AIDS-related programs and activities. In addition, UNAIDS’ cosponsors have increased their commitment and efforts to integrate HIV/AIDS into the work of their agencies; however, progress varies from cosponsor to cosponsor. UNAIDS’ advocacy efforts, especially those of the UNAIDS Secretariat, have helped strengthen national and international commitment and approaches to the worldwide epidemic. Funding by U.N. and bilateral donors has also increased. However, UNAIDS’ efforts at the country level are weak. UNAIDS’ theme groups continue to have difficulty organizing a unified U.N. response and helping host countries combat HIV/AIDS. Country Programme Advisors—the Secretariat’s country-based staff—also have not been as effective as expected in supporting the HIV/AIDS efforts of the theme groups and host countries. UNAIDS Has Worked to Improve U.N. Coordination and the International Community’s Commitment and Approach to HIV/AIDS According to the UNAIDS governing board, the success of UNAIDS is highly dependent on collaboration within the U.N. system. However, half of UNAIDS’ donors surveyed did not believe that the Secretariat had been as successful as originally expected in facilitating the coordination of U.N. actions on HIV/AIDS. According to USAID officials, the Secretariat’s lack of clear guidance and coordination produced, in part, confusion within the U.N. system about the roles of the Secretariat and cosponsors. In response, the Secretariat is facilitating the development of the U.N. System Strategic Plan for HIV/AIDS for 2001-2005.
The plan is designed to provide a more coherent U.N. response to HIV/AIDS, documenting the efforts of the Secretariat, 7 U.N. cosponsors, and 21 other U.N. agencies, such as the International Labour Organization and the Food and Agriculture Organization. The Secretariat stated that the plan would be presented to UNAIDS’ governing board by June 2001. In addition, the Secretariat and cosponsors began conducting detailed reviews of each of the cosponsors’ HIV/AIDS programs in March 2000. These reviews profile each cosponsor’s mandate, structure, operations and budget, and HIV/AIDS-related work. The reviews are intended to improve UNAIDS’ strategic planning and collaboration and to increase understanding within UNAIDS about each of the cosponsors’ roles and responsibilities. UNAIDS cosponsors’ commitment to HIV/AIDS has increased since we last reported. Over the past 2 years, the executive boards of several cosponsors have issued statements to strengthen agency action on HIV/AIDS. For example, in January 2000, WHO’s Executive Board requested that the Director General strengthen the agency’s involvement in the UNAIDS effort and give HIV/AIDS priority in its budget. All UNAIDS cosponsors’ executive directors now speak at major international meetings and events, advocating for increased attention and activities to combat HIV/AIDS. Some cosponsors also have elevated the position of the HIV/AIDS unit or focal point organizationally to raise the visibility and importance of the issue within the agency. For example, in 1999, to focus on its HIV/AIDS efforts in sub-Saharan Africa, the World Bank created a new office that reports to the agency’s Office of the Regional Vice Presidents. The same year, the U.N. Children’s Fund established a senior-level post and unit at its headquarters. On the other hand, the cosponsors’ progress toward integrating HIV/AIDS into their agency strategies, programs, and activities has varied and continues to evolve. For example, an external evaluation of the U.N. Development Programme’s HIV/AIDS program, prepared in 2000, found that HIV/AIDS had not been fully integrated into the agency’s work. In response, the Development Programme made HIV/AIDS one of its top priorities and launched a resource mobilization campaign to support country-level activities, among other efforts. The U.N. Population Fund also evaluated its HIV/AIDS programs and concluded in its 1999 report that many of the agency’s efforts to integrate HIV/AIDS were superficial. In response, the Population Fund made HIV/AIDS a top priority as part of its 2001 agency realignment process—an action that the agency expects will accelerate efforts to integrate HIV/AIDS into its existing programs. The Executive Director of UNAIDS said that further strengthening cosponsor commitment and integration of HIV/AIDS is a top internal challenge for UNAIDS. Appendix III briefly describes the HIV/AIDS programs and key activities of each of UNAIDS’ cosponsors. UNAIDS’ major donors, U.S. government officials, cosponsor officials, and others credit UNAIDS, especially the Secretariat, with contributing to the national and international communities’ increased awareness of and commitment to the fight against HIV/AIDS. They also credit UNAIDS and the Secretariat with helping to reframe HIV/AIDS as an issue involving all sectors rather than an issue involving only the health sector. Many national governments around the world were slow to respond to the HIV/AIDS epidemic, even those in the most affected areas in sub-Saharan Africa.
In response, UNAIDS’ Executive Director visited 21 developing countries in 1999 and 2000, including 14 African countries. In those countries, the Executive Director stressed to presidents and other high-level national leaders the importance of mobilizing efforts to combat HIV/AIDS and of taking a multisectoral approach. For example, UNAIDS’ Executive Director met with the Prime Minister of Ethiopia in September 1999 to advocate for a high-level, expanded, and multisectoral response. In April 2000, the President of Ethiopia launched the National Council on AIDS, supported by a National Secretariat in the Office of the Prime Minister and composed of multisectoral subcommittees. With assistance from the Secretariat and the World Bank, some countries are incorporating responses to HIV/AIDS into their long-term multisectoral development plans. UNAIDS also has worked with the international community, including the private sector, to broaden and increase efforts to combat HIV/AIDS. In December 2000, the Secretariat, several cosponsors, and the Japanese government collaborated to develop detailed strategies, goals, and targets for the Group of Eight’s plan to address HIV/AIDS and other infectious diseases. In addition, in September 2000, the Secretariat, WHO, and the European Commission conducted a high-level meeting to explore additional multisectoral actions that the European Union could take against poverty and communicable diseases such as HIV. UNAIDS also worked to get the private sector more involved in international efforts to combat HIV/AIDS. The Secretariat and the World Bank, together with USAID and several U.S. foundations, convened 15 major U.S. foundations in January 2000 and presented data on the foundations’ limited expenditures on HIV/AIDS. According to the Secretariat, the foundations subsequently committed to providing more funding. In April 2000, one attendee, the Bill and Melinda Gates Foundation, announced a $57 million grant to expand national HIV/AIDS programs for youth in Botswana, Ghana, Uganda, and Tanzania. The Secretariat also has helped cultivate the involvement of the U.N. Foundation in global efforts against HIV/AIDS. Since 1998, the U.N. Foundation has allocated at least $25 million for HIV/AIDS-related activities implemented by UNAIDS’ cosponsors in southern Africa and Ukraine. U.N. and Bilateral Donor Funding for HIV/AIDS Has Increased Cosponsors reported that estimated spending for HIV/AIDS programs has increased significantly in the past 2 years. However, most of the increased spending came from the World Bank, which provides loans to national governments for specific HIV/AIDS-related projects. Bilateral donor funding increased slightly in 1998 over previous years, but funding has increased considerably among some donors since then. Despite these efforts, total funding for HIV/AIDS efforts is well below what experts estimate is needed to effectively combat HIV/AIDS around the world. Table 1 shows estimated spending for HIV/AIDS by UNAIDS’ cosponsors from 1996 to 1999. Overall, UNAIDS’ cosponsors have increased spending for HIV/AIDS programs and activities from $296.9 million in the 1996-1997 biennium to $658.1 million in the 1998-1999 biennium. Most of this increase (96 percent) came from the World Bank. Four other cosponsors increased spending for HIV/AIDS-related activities, although some did so only slightly. The U.N. Development Programme decreased its spending for HIV/AIDS.
Cosponsor officials cited several reasons that affected their ability to increase HIV/AIDS spending. First, several cosponsors’ budgets have either declined or remained stable over the past few years. For example, the U.N. Population Fund’s overall budget declined from $628.7 million in the 1996-1997 biennium to $581.7 million in the 1998-1999 biennium. Second, earmarked funds for activities other than HIV/AIDS have increased. For example, although the U.N. Development Programme’s overall agency budget has increased from $4.3 billion in the 1996-1997 biennium to $4.8 billion in the 1998-1999 biennium, the percentage of its budget that was earmarked for specific efforts increased from 62 percent to 70 percent. Finally, the strength of the U.S. dollar has led to poor exchange rates with other countries, reducing the value of bilateral donor contributions to overall agency budgets. For example, according to U.N. Population Fund officials, some bilateral donors made substantial increases in contributions to the agency from 1999 to 2000, but these increases were neutralized by the exchange rate. According to the UNAIDS Secretariat, while bilateral donors maintained their spending for HIV/AIDS in 1996 and 1997 at $273 million each year, funding increased slightly in 1998 to $293 million. As of May 2001, the Secretariat could not provide us with more current data, but evidence from specific countries suggests that funding has increased further. For example, the United States committed approximately $466 million in 2001 compared with the $293 million spent by all bilateral donors, including the United States, in 1998. Canada announced in June 2000 that, over the next 3 years, it would increase its international HIV/AIDS spending from $20 million to $60 million per year. According to the Secretariat, most major bilateral donors have increased their HIV/AIDS funding for Africa since 2000. However, these increases are much less than the minimum of $3 billion that UNAIDS estimates may be needed annually for basic HIV/AIDS prevention, treatment, and care in the sub-Saharan Africa region alone. The U.N. Secretary-General is currently advocating for the creation and funding of a global AIDS fund that would support HIV/AIDS activities in developing countries. The U.S. Administration pledged $200 million to the fund in May 2001. UNAIDS’ Country-Level Efforts Need Strengthening One of UNAIDS’ primary functions is to strengthen host nations’ capacities to plan, coordinate, implement, and monitor the overall response to HIV/AIDS. However, UNAIDS’ governing board, donors, and senior officials do not believe that UNAIDS has been as effective as expected at the country level. The performance of UNAIDS’ theme groups varies widely, and their overall performance in facilitating the U.N. response at the country level and in providing effective assistance to host countries’ efforts to combat HIV/AIDS has been weak. In addition, UNAIDS cosponsors and the Secretariat do not hold theme groups sufficiently accountable for their efforts. The Secretariat’s Country Programme Advisors have not been as effective as expected in supporting the theme groups’ and host countries’ HIV/AIDS efforts. The Secretariat has not provided the advisors with sufficient guidance and training and initially did not hire individuals with the right mix of skills. 
UNAIDS' Donors and Senior Officials Believe Country-Level Efforts Are Weak
According to UNAIDS' 2000 survey of its donors, UNAIDS has not been as successful as they expected in strengthening governments' HIV/AIDS activities and ensuring that appropriate and effective policies and strategies are implemented to address HIV/AIDS. In addition, the survey said that donors believe UNAIDS has been weak in promoting broad-based political and social commitment and action to prevent and respond to HIV/AIDS at the country level. According to the survey, donors' perception of UNAIDS' lack of sufficient relevance at the country level could be a threat to future funding. UNAIDS' governing board said that the ultimate test of UNAIDS' success lies in the degree to which it successfully helps host countries combat HIV/AIDS. However, at the latest meeting of UNAIDS' governing board in December 2000, both the governing board and UNAIDS' Executive Director noted that UNAIDS needed to improve its country-level response. The governing board emphasized that a coordinated, consistent U.N. response was needed and that improving the performance of UNAIDS' theme groups required urgent attention. UNAIDS' Executive Director concurred with the board's assessment, saying that these tasks are a formidable challenge and that strengthening UNAIDS' country-level efforts is one of UNAIDS' top internal challenges.

Theme Groups' Performance Varies and Accountability for Results Is Limited
UNAIDS' 132 theme groups on HIV/AIDS—composed primarily of cosponsors' senior in-country staff—are UNAIDS' primary mechanism at the country level to coordinate the U.N. response and support host countries' efforts against HIV/AIDS. However, overall theme group performance varies considerably. For example, in surveying 36 USAID missions worldwide, we asked about the extent to which the theme groups were strengthening the overall national government response to HIV/AIDS. Of the 24 missions responding, 8 said to a very or great extent, 7 said to a moderate extent, and 9 said to some, little, or no extent. In addition, UNAIDS' annual surveys of its theme groups from 1996 to 1999 indicate that they have made little progress in key areas, including developing an advocacy plan, mobilizing resources, and developing a U.N. system integrated plan on HIV/AIDS. According to the UNAIDS Secretariat, theme groups are expected to develop joint advocacy action plans to guide and manage joint advocacy work on HIV/AIDS and to clarify what the theme group is advocating, by whom, and how. UNAIDS' annual surveys show that, in 1997, 31 percent of theme groups surveyed had developed a systematic approach to advocacy in the form of a strategy or plan. In 1999, 37 percent of theme groups had developed an advocacy plan or strategy. Since UNAIDS is not a funding agency, mobilizing resources to support country-level efforts against the epidemic is another key role of the theme group. According to UNAIDS, in 1997, under one-half of UNAIDS' theme groups were mobilizing resources for HIV/AIDS activities, a figure that increased to about one-half in 1999. Most resource mobilization efforts were ad hoc, with only one-quarter of theme groups having developed a systematic approach to resource mobilization as expected. According to the UNAIDS Secretariat, a U.N. system integrated plan on HIV/AIDS is the basis for coordinated U.N. support to the national response and is the single most valuable indicator of the U.N.'s commitment at the country level.
However, according to UNAIDS, as of February 2000, only 18 out of 86 theme groups surveyed had completed an integrated plan, and one-half had yet to take any steps to begin the process of completing one.
In 1998, we found that UNAIDS' theme groups were ineffective for a number of reasons. The UNAIDS Secretariat did not provide timely guidance about operations or responsibilities. In addition, UNAIDS' cosponsor staff at the country level were not committed to the UNAIDS mandate, nor were they held accountable by their respective agencies for their participation in the theme groups or for the theme groups' results in supporting national HIV/AIDS efforts. In our most recent work, we found that some of the cosponsors and the Secretariat still do not hold theme group members accountable for results. For example, while the Director-General of WHO directed its country directors to participate in theme groups, WHO does not assess their involvement as part of their annual performance reviews. Neither the World Bank nor the U.N. International Drug Control Programme requires theme group involvement or includes it as a required element in annual performance reviews of senior country-level staff. The UNAIDS Secretariat also does not hold theme groups accountable for results. While the Secretariat has no organizational authority over the cosponsors' country-level representatives, the theme groups are expected to undertake a number of activities, including developing advocacy and resource mobilization plans. The Secretariat's annual surveys of theme groups are one way that UNAIDS obtains information on theme group operations. However, these surveys currently focus only on the internal operations and management of the theme group rather than on the implementation of these plans or the extent to which theme groups are successful in their other efforts to support host countries' HIV/AIDS efforts. The Secretariat said that it is improving the annual surveys to allow for tracking of theme group results. Also, in recognition of the continuing challenges with theme groups, UNAIDS created the Interagency Task Team on Theme Group Operations, and the Secretariat created a new Theme Group Support Unit.

UNAIDS' Country Programme Advisors Have Not Been as Effective as Expected
According to U.S. officials and officials from both the UNAIDS Secretariat and cosponsors, Country Programme Advisors—the Secretariat's country-based staff—have not been as effective as expected in supporting the HIV/AIDS efforts of the theme groups and host countries. For example, guidance provided by the UNAIDS Secretariat instructs the advisors to advocate to national governments for expanded efforts on HIV/AIDS but provides no guidance on what to do or how to do it. Without adequate guidance or training, an advisor's success depends on his or her personal talents and skills. According to the Secretariat, many advisors have not been successful because they lack crucial diplomatic skills and were not hired at a rank high enough to successfully interact with and influence U.N. and host country government officials. In some instances, the Secretariat has increased the grade level at which these advisors are hired and is in the process of hiring new advisors with the right skills. UNAIDS also held a meeting on developing a plan of action to better focus its recruiting efforts and support the advisors in their work.
Theme Groups and Advisors Are Not Actively Submitting Funding Proposals for Country-Level Activities
While the UNAIDS Secretariat was not intended to fund or implement HIV/AIDS activities, it does provide small amounts of funding to support theme group proposals for projects to stimulate national HIV/AIDS efforts. These funds are also expected to help theme groups leverage funds from other sources. These funds could be used, for example, to support activities to design and develop national strategic plans or to support the development of major grants or loans to address HIV/AIDS. UNAIDS provided $22.9 million in these funds for the 1998-1999 biennium and allocated about $23.5 million for the 2000-2001 biennium. In a June 1999 evaluation of the funding process, UNAIDS found that 65 percent of projects receiving such funds succeeded in leveraging additional funds and, in some cases, in involving new sectors and partners. However, the evaluation also found that theme groups generally were not committed to submitting proposal requests, were not adequately involved in the proposal process, and did not always possess the technical expertise needed to develop a quality proposal. In addition, the evaluation found that the Country Programme Advisors had not assisted theme groups in preparing proposals to the extent that the Secretariat had expected. According to the UNAIDS Secretariat, the proposal process has been streamlined for the current biennium.

UNAIDS Has Made Mixed Progress in Improving Technical Support, Best Practices, and Other Information to Enhance the Response to the Pandemic
UNAIDS is charged with developing and providing information to enhance the U.N. and global response to the worldwide HIV/AIDS epidemic. The UNAIDS Secretariat has continued to improve its technical support and best practice materials since we last reported, but the best practice materials have not been sufficiently distributed. The Secretariat also has made progress in tracking the pandemic but has encountered difficulties in tracking the national and international response to the pandemic with regard to funding and activities. In addition, the Secretariat's monitoring and evaluation efforts have various weaknesses, and UNAIDS still cannot report overall results or measure progress towards its objectives, especially at the country level. As a result, UNAIDS is constrained in its ability to make management decisions based on data or to assure its donors that it is using program resources productively.

UNAIDS Has Made Some Progress Improving Technical Support and Best Practices Materials
A key function of the UNAIDS Secretariat is to arrange for and provide selected technical support and to identify, develop, and disseminate best practices. In our 1998 report, we said that the Secretariat had not adequately mobilized regional resources to provide technical support. Since then, the UNAIDS Secretariat has established and supported Technical Resource Networks to help arrange the technical support needed by U.N. organizations and others working on HIV/AIDS activities. These networks consist of groups of individuals, communities, and institutions that are expected to share information, provide peer support, and help identify sources of technical information and assistance for those working on HIV/AIDS issues.
The Secretariat has facilitated the creation of 13 networks since 1998 and has provided financial and technical support—such as facilitating discussions on technical issues related to HIV/AIDS—to 49 networks worldwide. For example, the Secretariat initiated the Forum of Condom Social Marketing network in 1998 and, with the cosponsors, has supported groups such as the Asian and European Harm Reduction Networks and the Religious Alliance Against AIDS in Africa. To help improve the technical capacity of U.N. cosponsors and others working on HIV/AIDS-related activities in a number of geographic regions, the Secretariat and cosponsors began establishing regional technical teams in 1996 to serve groups of countries. These intercountry teams—located in Abidjan, Cote d'Ivoire (western and central Africa); Pretoria, South Africa (eastern and southern Africa); and Bangkok, Thailand (Asia and the Pacific)—are expected to facilitate existing intercountry initiatives or networks and develop new mechanisms of exchange and collaboration; help arrange for technical assistance from other organizations, universities, and private consultants; and mobilize additional resources for subregional HIV/AIDS efforts. To help determine whether these teams were meeting their objectives, the Secretariat commissioned an evaluation of the Inter-country Team for Western and Central Africa, published in January 2001, which assessed the team's relevance, effectiveness, and efficiency. The evaluation found that the team was very useful in exchanging and disseminating information but less successful in arranging for technical assistance.
UNAIDS' best practice collection includes a series of technical updates, key materials, and case studies that provide strategies and approaches to prevent infection, provide care to those already infected, and alleviate the impact of HIV/AIDS on those affected. Topics include improving the safety of blood products, caring for individuals infected by HIV and tuberculosis, and increasing access to HIV-related drugs. In 1998, we reported that these materials were too general and lacked "how-to" guidance. In 1999, the Secretariat commissioned an independent evaluation of the effectiveness, relevance, and efficiency of the best practice materials. The review surveyed 164 users, who considered the best practice materials to be authoritative, high quality, user friendly, and comprehensive in coverage. However, the review concluded that the Secretariat should develop materials more suited to local circumstances. Some steps have been taken to increase local specificity in best practice materials. The UNAIDS Secretariat has worked with some countries, such as Brazil, to develop best practices that focus on successful approaches and activities taken by organizations in that country. The review also concluded that the distribution of the materials should be improved. The review found, for example, that the Country Programme Advisors—the Secretariat's country-based staff—had not systematically distributed the materials and may not have been sufficiently aware of their responsibilities in this regard. In January 2001, a senior Secretariat official noted that, while distribution was still a problem, the Secretariat was trying to address this issue.
UNAIDS Has Made Some Progress in Tracking the Pandemic but Has Encountered Difficulties in Tracking the Response
The UNAIDS Secretariat is responsible for developing accurate, up-to-date information on the worldwide epidemic and for tracking the international community's response. According to UNAIDS' 2000 donor survey, donors believe that the Secretariat has done well in tracking the pandemic. For example, the Secretariat and WHO participate in the UNAIDS/WHO Working Group on Global HIV/AIDS and Sexually Transmitted Infection Surveillance to compile the best epidemiological information available. From these data, the Secretariat calculates national HIV infection rates, which are helpful in raising awareness about the spread of the virus and in stimulating action. The working group also established the Reference Group on HIV/AIDS Estimates, Modeling and Projections, which, according to UNAIDS, has helped set clearer international standards for assessing AIDS and its impact and is expected to improve the production of country-specific estimates of HIV prevalence. However, according to the Secretariat, efforts still need to be increased to support HIV surveillance activities at the country level. The Secretariat also noted that WHO has taken steps to increase its efforts in this area.
The UNAIDS Secretariat is also expected to track national and international responses to the pandemic. Various problems, however, have hindered its efforts in this area. To track funding, the Secretariat conducted a study with Harvard University in 1996 and then with the Netherlands Interdisciplinary Demographic Institute's Resource Flows Project in 1999 to obtain data on HIV/AIDS spending by major bilateral donors, the United Nations, and developing countries. According to the Secretariat, getting these entities to report data to the contractor has been a major challenge, as has been reaching consensus on what counts as an HIV/AIDS project or activity. In addition, developing countries do not systematically track HIV/AIDS spending. To improve the monitoring and tracking of international and national resource flows, the Secretariat has established a specific unit with dedicated staff resources. The Secretariat also has been developing and implementing the Country Response Information System since 2000. This database is intended to facilitate the compilation, analysis, and dissemination of relevant information by country on HIV epidemics and on HIV/AIDS-related programs and activities by all relevant in-country partners. According to the Secretariat, compiling this information has been extremely difficult and more complex than originally envisioned, and the Secretariat is behind schedule in this area. It expects to complete a prototype in the second quarter of 2001.

UNAIDS' Monitoring and Evaluation Efforts Need Improvement
UNAIDS' governing board directed UNAIDS at its creation to implement the principles of performance-based programming and to use measurable objectives in assessing its performance. We reported in 1998 that the Secretariat was in the process of developing a monitoring and evaluation plan. UNAIDS' governing board approved a plan in December 1998 that consisted of multiple elements, including a draft conceptual framework, theme group surveys, and one-time evaluations of several of the Secretariat's specific functions, such as the best practice collection. Since then, a unified budget and workplan with performance indicators has been added.
Key elements of the overall plan—the conceptual framework and the unified budget and workplan—need to be improved. Furthermore, despite these evaluative efforts, UNAIDS still cannot measure progress towards achieving its objectives or overall results, especially at the country level. Although the United Nations is not required to comply with the U.S. Government Performance and Results Act, we used the principles laid out in the act to identify the elements of a successful performance-based system. These include (1) a clearly defined mission, (2) establishment of long-term strategic and annual goals, (3) measurement of performance against the goals, and (4) public reporting of results. The act seeks to link resources and performance so that an organization can show what it has accomplished compared with the money it has spent and so that it can be held accountable for the levels of performance achieved. Using the Results Act as a guide, we identified four major weaknesses in UNAIDS’ Monitoring and Evaluation Framework. First, the Framework primarily addresses the Secretariat’s outputs even though the Framework’s outcomes and impacts also apply to the cosponsors. Second, because the Framework’s outputs focus on the Secretariat, which acts primarily at the global level, the Framework does not sufficiently address UNAIDS’ performance at the country level. Third, the Framework’s outputs, outcomes, and impacts are not clearly linked, making it difficult to assess the cause and effect of UNAIDS’ specific activities. Fourth, the Framework does not establish specific performance baselines, targets, or other quantitative measures that could help UNAIDS measure overall results and progress towards achieving its objectives or expected outcomes. UNAIDS’ Unified Budget and Workplan 2000-2001, a separate performance-related instrument, provides additional documentation that compensates for some of the shortcomings of the monitoring and evaluation framework. For example, the Workplan provides UNAIDS’ mission statement, goals, and the strategic objectives leading to those goals. It also provides information on the Secretariat’s and cosponsors’ global and regional activities; includes more specific linkages between outputs, indicators, and objectives; and better accounts for the respective roles and responsibilities of the Secretariat and cosponsors. However, the Workplan also has a number of weaknesses. For example, the Workplan does not include quantifiable performance targets that would define success and help UNAIDS to measure its progress. The Workplan also does not always indicate what is needed to accomplish the stated objectives. For example, one objective is to “mobilize political and public support for UNAIDS’ priority themes and initiatives and to provide leadership and guidance in advocacy, public information, and resource mobilization efforts.” The only output for this objective—communication activities—is vague. Furthermore, like the Framework, the Workplan does not always sufficiently link its components, making it difficult to assess the cause and effect of UNAIDS’ actions. Senior Secretariat officials acknowledge that the Unified Budget and Workplan 2000-2001 has deficiencies. They said that it was the first document of its kind, compiled quickly, and did not have high-quality indicators. In addition, because it is organized thematically rather than functionally, they said it is difficult to track or assess UNAIDS’ progress in achieving its overall objectives. 
They also said that developing a performance-based plan with quality indicators has been especially challenging because the U.N. system lacks an evaluative culture. However, they believe the Unified Budget and Workplan 2000-2001 is an important first step. UNAIDS Secretariat officials said that evaluation efforts overall have been hampered by inadequate and inconsistent resources. Changes in personnel and reliance on consultants over the past several years have resulted in a lack of continuity and variable levels of effort. It was not until early 1998 that a staff person was hired to lead a performance evaluation unit. The unit is currently authorized three full-time professional staff and is supplemented periodically by staff on part-time loan from other agencies. Because all Secretariat positions are time-limited, there is greater turnover than normal and difficulty in recruiting and retaining skilled staff.

Key Factors Have Hindered UNAIDS' Progress
UNAIDS and U.S. government officials told us that, although UNAIDS has certain advantages in the fight against HIV/AIDS, a number of key factors, some of which are external to the organization, have hindered its progress. UNAIDS was established to be the primary advocate for global action on HIV/AIDS and has advantages over other organizations, such as bilateral donor agencies, that combat HIV/AIDS. For example, as a U.N. organization, UNAIDS may have more credibility than other organizations, and thus be more effective, because it is seen as a neutral entity that does not represent any one government. In addition, UNAIDS often has access to higher-level government officials than do bilateral development agencies, and it sometimes operates in countries where bilateral agencies and other organizations do not because of conflict, political tension, or lack of compelling interest. However, UNAIDS' broad mission, organizational structure, initial lack of a political mandate, and lack of timely follow-through have hampered its progress.
While UNAIDS has a broad and challenging mission, its progress depends on actions taken by other entities, such as international donors, nongovernmental organizations, the private sector, and national governments. National government leadership is particularly essential to an effective response to HIV/AIDS, but many national governments around the world have been slow to respond to the crisis. For example, until 1999, the President of Zimbabwe denied that HIV/AIDS was a problem in his country; the government of India was similarly slow to respond. HIV/AIDS is also an extraordinarily complex disease for which there is no cure. Combating the pandemic requires a multisectoral approach that involves addressing the many medical, cultural, behavioral, economic, social, and political aspects that surround the virus and contribute to its impact.
As a joint and cosponsored program, UNAIDS' structure is complicated, and progress depends heavily on the collegiality, cooperation, and consensus of the Secretariat and seven cosponsors. According to UNAIDS and U.S. government officials, these qualities were not evident during UNAIDS' first several years. They noted that, even though UNAIDS is a joint program, it was created without the buy-in of the cosponsors. According to senior Secretariat and cosponsor officials, because UNAIDS was imposed on the cosponsors, there was a certain amount of hostility within the program.
Furthermore, the cosponsors viewed the Secretariat as competing for funding and were confused about their role within the joint program. As a result, until recently, cosponsors were not fully committed either to incorporating HIV/AIDS into their respective mandates or to participating in UNAIDS. Since each cosponsor is accountable only to its own independent executive board, neither the Secretariat nor UNAIDS' governing board had controlling organizational authority over the cosponsors. Thus, little could be done to exert pressure on the cosponsors to become effective partners within UNAIDS.
UNAIDS' effectiveness was further hampered, according to U.S. government officials, because it was created without the necessary political mandate or funding from the major bilateral donors or the United Nations. According to a senior Secretariat official, the bilateral donors heavily influenced the creation of UNAIDS; however, when political pressure was needed to intensify and fund UNAIDS' cosponsors' HIV/AIDS programs, bilateral donors provided little assistance. In addition, according to U.S. officials, the United Nations, particularly the Secretary-General, had other priorities on which to focus. The bilateral donors and the United Nations are beginning to provide needed political and financial support. For example, in January 2000, the U.N. Security Council held a session, in part due to U.S. influence, to address the impact of AIDS on global peace and security—the first session ever held on a health-related matter.
Finally, according to U.S. officials, while UNAIDS initiates many activities, it does not always execute them in a timely way, further delaying an effective response. For example, according to USAID officials, UNAIDS has initiated various regional strategies to address HIV/AIDS, such as the International Partnership Against AIDS in Africa and the Eastern European Regional Strategy, but did not facilitate timely efforts to move these agreements forward. According to the Secretariat, it does not have sufficient capacity to always follow through in a timely manner on the efforts it initiates, such as the International Partnership Against AIDS in Africa.

Conclusions
UNAIDS was given an enormous challenge when it was created to lead and expand U.N. and global efforts to combat HIV/AIDS. Intended to be a model of U.N. reform, UNAIDS was the U.N.'s first joint and cosponsored program of its type. Because there was no precedent, UNAIDS had to learn to function effectively, depending heavily on the collegiality and cooperation of the Secretariat and seven cosponsors. Despite these challenges, UNAIDS has made progress in many areas, especially in improving U.N. coordination and advocating for an enhanced global response to the HIV/AIDS pandemic. However, while UNAIDS' cosponsors have recently intensified their commitment and efforts to integrate HIV/AIDS into their strategies and programs, their slow response has made it more difficult for UNAIDS to achieve its mission. UNAIDS has not lived up to expectations with regard to its efforts at the country level. Overall, UNAIDS' Secretariat and cosponsors' representatives in developing countries continue to have difficulty organizing their efforts and providing assistance to host governments and others, and UNAIDS does not hold them accountable for results.
Some cosponsors still do not require their senior country-level representatives to actively participate in theme groups or have not established performance expectations related to theme group activities. In addition, while the Secretariat surveys theme group activities annually, oversight is limited because the surveys do not focus on results. Five years after its creation, the Secretariat has yet to implement a monitoring and evaluation plan that would enable UNAIDS to determine the important results of its overall efforts and measure progress toward achieving its objectives. A quality performance evaluation plan is critical to assure UNAIDS' donors and others in the international community that UNAIDS is using its resources productively, that it is relevant, and that it is achieving its mission, especially at the country level. This is particularly important because UNAIDS' donors have indicated that future funding increases for UNAIDS may depend on its effectiveness in showing results at the country level.

Recommendations for Executive Action
To help UNAIDS achieve progress toward its mission and to help demonstrate this progress, we recommend that the Secretary of State direct U.S. representatives on the cosponsors' executive boards to request the respective cosponsor to accelerate its efforts to integrate HIV/AIDS into the work of its agency and to hold country-level staff accountable for (1) participation in theme groups and (2) the results of theme groups' efforts to help host countries combat HIV/AIDS. We also recommend that the Secretary of State and the Administrator of USAID request that the UNAIDS Secretariat and cosponsors improve UNAIDS' monitoring and evaluation efforts in order to determine the results of its overall efforts and measure progress, especially at the country level.

Agency Comments and Our Evaluation
We received written comments on a draft of this report from the Department of State, USAID, and UNAIDS, which are reprinted in appendixes IV-VI. At our request, the UNAIDS Secretariat requested and received comments from UNAIDS cosponsors that were included in UNAIDS' written comments. In addition, USAID and UNAIDS provided technical comments to update or clarify key information, which we incorporated where appropriate. USAID and the Department of State generally agreed that the program improvements we recommended were needed. USAID stated that it found the report to be fair and accurate and that, as a member of the U.S. delegation to UNAIDS' governing board, it will focus its efforts on the recommendations and other issues cited in our report. In addition, USAID said that it had recently provided extensive written comments to UNAIDS on the draft U.N. System Strategic Plan 2001-2005 to help ensure that the plan resulted in increased accountability and improvements at the country level. While USAID said that it appreciated our acknowledgment of the impact of external factors on UNAIDS' progress, it noted that the lack of bilateral government support following UNAIDS' creation did not apply to USAID. In responding to our recommendations, the Department of State stated that it would instruct its delegations to encourage the cosponsors to cooperate more fully with UNAIDS, especially at the country level. In addition, the Department noted that our report will be of immense value to the evaluation commissioned by UNAIDS' governing board, currently in progress, which is reviewing the entire scope of UNAIDS activities after 5 years of effort.
UNAIDS generally agreed with our findings and recommendations and noted that the report will provide valuable input to the commission that UNAIDS' governing board established to review UNAIDS' progress. However, UNAIDS stated that our report did not give the Secretariat and the cosponsors sufficient credit for the many accomplishments they have made since we last reported in 1998. Accordingly, UNAIDS' comments detailed numerous examples of activities undertaken, including high-level statements made, information flows improved, documents written, and processes improved, to demonstrate further the collective accomplishments of the Secretariat and the cosponsors since 1998. We disagree that our report did not provide UNAIDS with sufficient credit for its accomplishments since 1998. We believe that our report provides a fair assessment of UNAIDS' progress. Our report affirms that UNAIDS has contributed to increased commitments and funding for AIDS efforts by the U.N. and national and international entities. Through UNAIDS, the international community's response to AIDS has broadened from one that is focused exclusively on health to one that focuses on multiple sectors. Further, we note the progress UNAIDS has made in providing countries with technical support and best practices materials, tracking the epidemic, and increasing U.N. coordination. Where there are deficiencies in UNAIDS' efforts—at the country level and with its monitoring and evaluation framework—they are deficiencies that UNAIDS, the State Department, and USAID collectively agree are in critical areas that need improvement. While we have included, where appropriate, additional information to address UNAIDS' comments, our overall conclusions remain unchanged.
As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 15 days after the date of this letter. At that time, we will send copies to appropriate congressional committees; the Honorable Colin Powell, the Secretary of State; the Honorable Andrew S. Natsios, Administrator of USAID; and the Executive Director of UNAIDS. We will also make copies available to interested parties upon request. If you or your staff have any questions concerning this report, please call me at (202) 512-8979. An additional GAO contact and staff acknowledgments are listed in appendix VII.

Appendix I: Status of the International Partnership Against AIDS in Africa
Because of the catastrophic HIV/AIDS epidemic in Africa and the inadequate national and international response, the Joint United Nations Programme on HIV/AIDS (UNAIDS) initiated the International Partnership Against AIDS in Africa (the Partnership) in 1999. The Partnership is made up of African governments, the U.N., donors, the private sector, and the community sector. The objective of the Partnership is to increase coordination among the five partners and to expand their efforts against HIV/AIDS in each African country. To achieve this objective, the Partnership aims to establish and maintain processes through which these groups can collaborate more effectively at the country level to curtail the spread of HIV and to sharply reduce the human suffering and the declines in human, social, and economic development caused by AIDS. The vision of the Partnership is that African nations, with the support of the international community, will implement and sustain larger-scale, more effective multisectoral national responses to HIV/AIDS within the next decade than they have in the past.
According to the Partnership's guiding document, The International Partnership Against AIDS in Africa: A Framework for Action, dated May 2000, each partner has a specific role to play. African governments are expected to provide national leadership and adequate resources to fight HIV/AIDS in their respective countries. U.N. organizations are expected to enhance U.N. coordination and the global response and to provide program and financial support. Donors are expected to mobilize national and international efforts and to provide the necessary financial assistance to support the Partnership's actions to address HIV/AIDS. The private sector is expected to provide expertise and resources, and the community sector is expected to enhance local ownership of the Partnership. In addition, all partners have a role in advocacy, policy development, and resource mobilization. The UNAIDS Secretariat facilitated the development of the Partnership's framework and is responsible for coordinating the implementation of the Partnership. The Secretariat is not responsible for providing funding to the Partnership.
According to the UNAIDS Secretariat, the Partnership has achieved many of its milestones and has made some progress toward achieving its objectives. For example, one of the Partnership's milestones was that, by the end of 2000, at least 12 countries were to have developed national strategic plans for HIV/AIDS, and, according to the Secretariat, a total of 13 countries had achieved this goal. The Partnership helped develop national strategic plans for HIV/AIDS in Ghana and Burkina Faso and helped revise the national strategic plans of Ethiopia, Malawi, Zambia, and Mozambique. According to the Secretariat, these plans have resulted in the formation of wider and more effective partnerships to combat HIV/AIDS and have encouraged increased internal and external mobilization of financial resources. Also, UNAIDS' intercountry team in eastern and southern Africa helped establish technical networks on five subjects, including traditional medicine and AIDS counseling, and the intercountry team in Abidjan, Cote d'Ivoire, helped establish networks on three subjects, as expected, by December 2000. According to the Secretariat, progress is still being made toward milestones that had not been met as of January 2001.
However, several respondents to our survey of U.S. Agency for International Development (USAID) missions expressed reservations about whether HIV/AIDS-related events occurring in a country could be directly attributed to the Partnership, since the Partnership is an enhancement of UNAIDS' and other partners' ongoing efforts in Africa. For example, USAID officials in Malawi stated that the Partnership's collaborative principles have been implemented in that country since 1997, which was prior to the Partnership's inception. The Secretariat also gives the Partnership credit for increases in World Bank loans and bilateral funding that have been announced by several bilateral donor countries, including the United States, Sweden, Canada, Norway, and Japan. While these events may have coincided with the implementation of the Partnership, a true cause-and-effect relationship is difficult to establish. Officials from USAID and the cosponsors have said that there is confusion about the Partnership and concern about its implementation.
USAID agency officials said that the Partnership is poorly implemented and that there is general confusion within their own and other agencies, especially about how the Partnership will be implemented in country. For example, they had recently spoken to one cosponsor's representative to UNAIDS who thought that the Partnership had ended. A member of the U.S. delegation to UNAIDS' governing board told us that the Partnership generally lacked coordination among the five partners. Several cosponsor officials also indicated that there was confusion about the Partnership. One cosponsor told us that the Partnership did not have much substance beyond its guiding document and that their country-level offices in sub-Saharan Africa may be unaware of the Partnership. Agency officials stated that the UNAIDS Secretariat needs to provide the Partnership with greater leadership.
In our survey of USAID missions in African countries, key partners in the Partnership's coordination efforts, we asked whether the Partnership had achieved its objective to increase coordination among the five partners and expand their efforts against HIV/AIDS in Africa. Two of the 10 USAID/Africa missions that responded to this inquiry said that the Partnership had resulted in better coordination, 3 said it had not, and 5 did not know. Of those that did not know, the USAID mission in Kenya said that the Partnership was not well understood and that they had not heard much about it. We also asked whether the Partnership had resulted in an expanded response to HIV/AIDS. Of the 10 responding, 4 answered yes, 3 said no, and 3 did not know. The USAID mission in Ghana reported that the Partnership had contributed to increased media attention on HIV/AIDS and more programs addressing the epidemic. However, the USAID mission in Tanzania reported that the Partnership was duplicating existing national programs and hindering constructive efforts to combat HIV/AIDS in that country.
One factor that may contribute to the confusion and lack of coordination among partners is that, while the framework identifies the partners, their responsibilities, and the deadlines of some objectives and activities, it does not identify who is responsible or accountable for initiating the Partnership at the country level or the actions that should be taken if this leadership is not forthcoming. For example, a respondent to our survey from the USAID mission in Zimbabwe said that no one person or organization is leading the Partnership at the country level and thus nothing is being accomplished. A senior Secretariat official agreed that the Secretariat has been weak in communicating effectively about the Partnership. However, according to this official, the Secretariat is in the process of developing additional guidance on coordination for country-level partners, which will be based on lessons learned by partners in several countries, such as Burkina Faso and Tanzania, that have task forces to lead coordination efforts. The Secretariat is synthesizing these experiences as it develops this guidance.

Appendix II: Objectives, Scope, and Methodology
The Chairman of the Senate Subcommittee on African Affairs, Senate Foreign Relations Committee, requested that we (1) assess the progress of the Joint United Nations Programme on HIV/AIDS, especially at the country level, toward increasing the coordination and commitment of the U.N.
and global community; (2) assess UNAIDS' progress in providing technical support and information and in developing a monitoring and evaluation plan to measure results; and (3) identify factors that may have affected UNAIDS' progress. In addition, we were asked to provide information on the status of the International Partnership Against AIDS in Africa.
To identify whether UNAIDS has made progress toward increasing U.N. coordination and commitment, especially at the country level, we interviewed senior officials from the UNAIDS Secretariat in Geneva, Switzerland, and the HIV/AIDS staff from each of the seven cosponsors. We also spoke with key officials from the U.S. Agency for International Development (USAID); the White House Office of National AIDS Policy; the Department of Health and Human Services; the State Department; the U.S. missions to the United Nations in New York City and Geneva; and Family Health International, a U.S.-based contractor working on HIV/AIDS issues. We reviewed extensive documentation from the Secretariat and from each of the seven UNAIDS cosponsors, including strategic plans, annual and biennial reports, progress reports, the Unified Budget and Workplan 2000-2001, evaluations of the Secretariat's and cosponsors' HIV/AIDS programs and activities, budget and financial data, UNAIDS governing board documents, general HIV/AIDS program description documents, press releases, interagency memorandums of understanding, and memorandums to staff and major public speeches of the cosponsors' executive directors. We also reviewed a UNAIDS-commissioned survey of 12 of its leading bilateral donors, issued in September 2000, that solicited perspectives on the extent to which UNAIDS has been successful in its roles and responsibilities. To obtain additional information on UNAIDS' efforts at the country level, we reviewed the Secretariat's annual surveys of theme group operations from 1996 to 1999. In addition, we conducted a survey of 36 USAID missions worldwide and received 27 responses that provided perspectives on the theme groups' effectiveness in assisting host country efforts to combat HIV/AIDS. Of the total 82 USAID missions worldwide, we selected the 36 missions to survey on the basis that they had been involved in HIV/AIDS activities for at least 2 years.
To determine UNAIDS' progress in providing technical support and information and in developing a monitoring and evaluation plan to measure results, we interviewed senior officials from the UNAIDS Secretariat in Geneva and key officials from USAID, the U.S. mission to Geneva, the Department of Health and Human Services, and Family Health International. We reviewed extensive documentation from UNAIDS, including governing board documents reporting on annual and biennial progress; monitoring and evaluation documents, including the Unified Budget and Workplan 2000-2001, the monitoring and evaluation framework, and commissioned evaluations of the Inter-country Team in West and Central Africa; the Secretariat's best practice materials; and the Secretariat's strategic planning and development fund process. We also reviewed a UNAIDS-commissioned survey of 12 of its leading bilateral donors, issued in September 2000, that solicited perspectives on the extent to which UNAIDS has been successful in its roles and responsibilities, as well as a UNAIDS biannual epidemiological report.
In addition, in assessing UNAIDS' monitoring and evaluation efforts, we used the principles contained in the Government Performance and Results Act of 1993 to identify the key elements of a successful performance-based system. To identify factors that may have affected UNAIDS' progress, we interviewed key officials from the UNAIDS Secretariat, cosponsors, USAID, the Department of Health and Human Services, the State Department, the U.S. missions to the United Nations in New York and Geneva, and Family Health International.
To determine the status of the International Partnership Against AIDS in Africa, we held discussions with UNAIDS Secretariat and cosponsor officials and also with officials from USAID, the U.S. mission to the United Nations in Geneva, and the Department of Health and Human Services. We reviewed key documents, such as the Partnership's framework for action, progress reports, weekly bulletins, and meeting reports. In addition, we reviewed an analysis completed by the Secretariat in January 2001 on the Partnership's progress toward its milestones, as outlined in the framework. As part of our survey of UNAIDS' efforts at the country level, we asked USAID mission officials whether the Partnership had achieved its objectives. From the 22 missions surveyed in Africa, we received 12 responses, 10 that answered our survey questions and 2 that provided other comments. We conducted our work from August 2000 through May 2001 in accordance with generally accepted government auditing standards.

Appendix III: Cosponsors' HIV/AIDS Programs and Activities
UNAIDS is expected to bring together the efforts and resources of seven U.N. system organizations to help prevent new HIV infections, care for those already infected, and mitigate the impact of the pandemic. Each cosponsor is to contribute to UNAIDS' work according to its comparative advantage and expertise. The following briefly describes the seven cosponsors' HIV/AIDS programs and selected activities, according to information they provided.

United Nations Children's Fund
The mission of the U.N. Children's Fund (UNICEF) is to advocate for the protection of children's rights, to help meet their basic needs, and to expand their opportunities to reach their full potential. UNICEF supports services to the poor, rebuilds schools in war-torn societies, and promotes equal rights for girls and women. Within UNAIDS, UNICEF is the chief advocate for children and their families. UNICEF's goal is to address the underlying causes of the AIDS epidemic; reduce the vulnerability of children, adolescents, and women to HIV/AIDS; and mitigate the impact of disease and death due to AIDS. According to UNICEF, it supports HIV/AIDS programs in 160 countries and focuses its efforts in five areas: (1) breaking the conspiracy of silence about HIV/AIDS, (2) providing primary prevention to young people, (3) reducing mother-to-child HIV transmission, (4) caring for orphans and children living in families affected by HIV/AIDS, and (5) supporting UNICEF staff members affected by HIV/AIDS. For example, in the area of primary prevention to young people, UNICEF is funding scouting groups in Cote d'Ivoire to disseminate HIV/AIDS prevention messages through games, songs, and popular drama and to provide counseling to their peers.
In 1999, to help reduce mother-to-child transmission, 11 countries took part in a UNICEF-supported pilot program that offers voluntary and confidential counseling and testing to women and their partners, administers anti-retroviral medication to pregnant HIV-positive women, and provides information about infant feeding options. In Malawi, UNICEF has assisted the government in developing its national orphan policy and the National Orphan Care Programme, which emphasizes family-based care and provides support to extended families for the care of orphans.

United Nations Development Programme
The goal of the U.N. Development Programme is to eradicate poverty through sustainable human development. The Programme serves more than 170 countries and territories around the world through 132 country offices and technical networks. The Programme contributes to UNAIDS by helping developing countries meet the governance challenge posed by HIV/AIDS and by helping them mitigate the impact of the disease on the poor. The Programme provides advice and development services to developing country governments and civil society groups in the following areas: (1) promoting top-level political commitment through advocacy and policy dialogue; (2) strengthening countries' capacity to plan, fund, manage, and implement national responses to the HIV/AIDS epidemic; (3) providing guidance on integrating HIV/AIDS priorities into the core of development planning; and (4) providing policy advice to the most affected countries on maintaining governance structures and essential services affected by HIV/AIDS. In addition, the Programme promotes a human rights approach that includes helping national governments formulate anti-discrimination laws and supports public information and media campaigns on HIV/AIDS in developing countries such as Bangladesh, Peru, Laos, and Turkmenistan. In several sub-Saharan African countries, the Programme is sponsoring policy studies to help governments deal with HIV/AIDS' impact on specific sectors, poverty reduction efforts, and macroeconomic planning. In Botswana, the Programme supported the publication of a National Human Development Report that focused on how HIV/AIDS is reducing economic growth and increasing poverty in that country.

United Nations International Drug Control Programme
The mission of the U.N. International Drug Control Programme is to work with nations and people worldwide to tackle the global drug problem and its consequences. Through its 22 field offices, the Programme contributes to UNAIDS' work by helping to prevent the spread of HIV through drug abuse. The Programme's prevention activities have focused primarily on children and adolescents and emphasize the prevention of both drug use and the risky sexual behaviors associated with drug use. For example, in Brazil, the Drug Control Programme developed short prevention videos, which are shown in the streets in regions with the highest crack use, to target drug abuse among street children. In Thailand, in coordination with the U.N. Population Fund, the Programme is supporting activities that are aimed at educating Muslim adolescents on reproductive health, drug abuse prevention, and HIV/AIDS.

United Nations Educational, Scientific, and Cultural Organization
The mandate of the U.N. Educational, Scientific, and Cultural Organization (UNESCO) is to foster international cooperation in intellectual activities designed to promote human rights, establish a just and lasting peace, and further the general welfare of mankind.
UNESCO has 73 field offices and units in different parts of the world. In the context of UNAIDS, UNESCO focuses its efforts on five major areas: (1) education, (2) basic research, (3) culture, (4) human rights and social and human sciences, and (5) public information and awareness. For example, in Brazil, UNESCO is currently cooperating with the U.N. International Drug Control Programme and the Brazilian Health Ministry to provide HIV education in schools to heighten awareness of HIV and prevent its transmission. In south Asia, UNESCO published a media handbook on AIDS in eight different south Asian languages. UNESCO also has been active in promoting research on AIDS in cooperation with the World Foundation for AIDS Research and Prevention.

United Nations Population Fund
The primary mandate of the U.N. Population Fund is to help ensure universal access by all couples and individuals to high-quality reproductive health services by 2015. In developing countries, the Fund works to improve reproductive health and family planning services on the basis of individual choice and to formulate population policies in support of sustainable development. The Population Fund supports HIV/AIDS activities in 138 countries. The Fund addresses the prevention of HIV transmission and focuses on (1) supporting information, education, and communication programs for youth and adolescents both in and out of schools; (2) providing young people greater access to youth-friendly reproductive health information, counseling, and services; (3) advocating for relevant youth policies that recognize the rights of young people and promote their reproductive health; and (4) addressing gender equity issues. The Population Fund is the largest international supplier of condoms and is UNAIDS' focal point for condom programming. The Fund manages a database on reproductive health commodities and administers the Global Contraceptive Commodity Programme, which maintains stocks of condoms to expedite delivery to requesting countries. The Fund also works to promote the greater involvement of men in HIV prevention. For example, in parts of Africa, Asia, and Central America, the Fund supports services, information, and counseling to encourage long-distance truck drivers to adopt safer sexual practices. In addition, the Fund has been working with government and national partners to promote programs and policies that advance reproductive health and well-being. For example, in the Islamic Republic of Iran, the Fund, in conjunction with the Ministry of Education, helped distribute to schools nationwide 700,000 copies of a poster on HIV/AIDS transmission and prevention, along with 200,000 copies of a pamphlet designed for teachers.

World Health Organization
WHO's objective is to attain the highest possible level of health for all peoples. WHO performs a range of advisory, technical, and policy-setting functions, including (1) providing evidence-based guidance in health; (2) setting global standards for health; (3) cooperating with governments in strengthening national health systems; and (4) developing and transferring appropriate health technology, information, and standards. As a UNAIDS cosponsor and the leading international health agency, WHO works to strengthen the health sector's response to the worldwide HIV/AIDS epidemic and provide technical assistance to countries to improve their health policies, planning, and implementation of HIV/AIDS prevention and care interventions.
For example, according to WHO, it has supported and coordinated research and provided technical support on HIV/AIDS-related issues such as the prevention and treatment of sexually transmitted infections, reproductive health, essential drugs, vaccine development, blood safety, and substance use. WHO has also developed a generic protocol for planning and implementing pilot projects to prevent mother-to-child transmission of HIV in low-income countries in Africa, Asia, and Latin America. In addition, WHO has projects in several countries with high HIV prevalence to develop national plans and implement activities for strengthening care and psychosocial support to people living with HIV/AIDS. WHO is a key partner in global surveillance of HIV infection and its behavioral determinants, including developing surveillance guidelines, updating the global database on HIV/AIDS, and producing fact sheets and reports on HIV/AIDS.

The World Bank
The mandate of the World Bank, the world's largest source of development assistance, is to alleviate poverty and improve the quality of life. Through its loans, policy advice, and technical assistance, the World Bank supports a broad range of programs aimed at reducing poverty and improving living standards in the developing world. As a UNAIDS cosponsor, the World Bank provides loans and credits to national governments to implement HIV/AIDS programs. The World Bank committed more than $1.3 billion to 109 HIV/AIDS-related projects in 57 countries from 1986 to the end of January 2001. A recent innovation in the Bank's support to HIV/AIDS is its multicountry program approach to lending. In September 2000, the World Bank approved the Multi-Country HIV/AIDS Program for Africa, providing $500 million in flexible and rapid funding for projects to fight the epidemic in sub-Saharan Africa. A similar multicountry program totaling about $100 million in loans and credits for the Caribbean is under way. To strengthen the Bank's capacity to respond to HIV/AIDS as a major development issue in Africa, the Bank created ACTAfrica, a dedicated HIV/AIDS unit directly under the Office of the Regional Vice Presidents. In addition to lending in all regions of the world, the Bank is also involved in policy dialogue about HIV/AIDS with high-level officials in government and civil society. It is also working with the U.S. Treasury to establish the International AIDS Trust Fund for HIV/AIDS activities in those countries hardest hit by the epidemic or at high risk of being so. The United States is providing $20 million to initially capitalize the fund, and contributions will be sought from other donors.

Appendix IV: Comments From the Department of State

Appendix V: Comments From the U.S. Agency for International Development

GAO Comment
The following is GAO's comment on USAID's letter dated May 11, 2001. In commenting on our first recommendation, USAID suggested that it is not the U.S. representatives' role on the cosponsors' executive boards to "propose" initiatives to the cosponsors but rather to "request" them to take action. We modified the recommendation to address this point.

Appendix VI: Comments From the Joint United Nations Programme on HIV/AIDS

GAO Comments
The following are GAO's comments on UNAIDS' letter dated May 14, 2001.
1. UNAIDS commented that it disagreed with our use of a response from the donor survey to support our finding that its efforts at the country level were weak.
The donor survey stated that half of the donors responding (the survey was sent to 16 of UNAIDS' leading bilateral donors, and 12 responded) believed that UNAIDS was not as successful as expected in promoting broad-based political and social commitment at the country level. We did not rely solely on the donor survey; other evidence corroborates the donors' concern about UNAIDS' performance at the country level. First, the donor survey also found that donors believed that UNAIDS had not been as successful as they expected in strengthening governments' HIV/AIDS activities and ensuring that appropriate and effective policies and strategies are implemented to address HIV/AIDS. Second, the Secretariat's latest annual surveys of theme groups showed that, between 1997 and 1999, theme groups had made little progress in key areas, such as joint advocacy action plans and developing a U.N. system integrated plan on HIV/AIDS. Our December 2000 survey of USAID missions showed that, after 5 years of experience, theme groups' performance in strengthening the overall national government response to HIV/AIDS varied widely. Third, senior UNAIDS officials and members of the UNAIDS governing board stated in December 2000 that UNAIDS needed to improve its country-level response. The governing board said that the performance of UNAIDS' theme groups required urgent attention, and UNAIDS' Executive Director said that strengthening UNAIDS' country-level efforts is one of UNAIDS' top internal challenges. This collective evidence demonstrates that UNAIDS must strengthen its efforts at the country level.

2. While UNAIDS agreed with our finding that country-level efforts need to be strengthened, it also commented that we placed too much emphasis on theme group efforts at the country level without considering broader U.N. systemwide efforts. We recognize that there are broader U.N. efforts, such as the Resident Coordinator System and the Common Country Assessment/United Nations Development Assistance Framework process. However, UNAIDS' documents state that UNAIDS' theme groups are its "main mechanism" for coordinating HIV/AIDS activities at the country level. Our analysis therefore focused on this mechanism.

3. UNAIDS commented that we did not credit the U.N. Development Programme for actions taken as a result of an HIV/AIDS program evaluation, prepared in 2000, which found that the agency had not fully integrated HIV/AIDS into its strategies, programs, and activities. We revised the report to include updated information on action taken in response to the evaluation.

4. UNAIDS was concerned that we did not reflect the cosponsors' creation of new positions and units focused on HIV/AIDS and cited numerous examples of these changes. While we may not have cited every example of actions taken by the cosponsors, we did recognize that some cosponsors had elevated the position of the HIV/AIDS issue organizationally and provided an example. We revised the report to include an additional example of steps taken by the U.N. Children's Fund.

5. UNAIDS commented that, while they agreed that country-level coordination and implementation need strengthening, we had downplayed how much progress the United Nations has achieved in coordinating action at the country level. UNAIDS stated that we did not sufficiently credit them for the Global Strategy Framework, regional strategy development processes, partner programme reviews, improved cosponsor responses to HIV/AIDS, and a greater understanding of the epidemic at the country level.
UNAIDS' comments also provided additional examples of activities they believed contributed to an enhanced country-level response. We disagree that we downplayed UNAIDS' efforts. For example, our report credits UNAIDS for facilitating the development of the U.N. System Strategic Plan and conducting the detailed reviews of the cosponsors' HIV/AIDS programs (Partner Programme Reviews), as well as for the cosponsors' improved commitment and response to HIV/AIDS. The report does not discuss the Global Strategy Framework on HIV/AIDS because it has only recently been finalized and thus it is too soon to gauge whether this document will increase international commitment, action, or results. Also, in the absence of an effective monitoring and evaluation plan that has clear performance indicators, it is difficult to isolate UNAIDS' contributions from those of the many entities working at the country level to combat HIV/AIDS, including national governments, bilateral donors, nongovernmental organizations, and foundations.

6. UNAIDS stated that we characterized theme group responsibilities too broadly and that it was never envisioned that U.N. theme groups would serve as an operational entity or as the primary mechanism for assisting developing countries. Our report clearly explains the role of the theme groups in the background section and elsewhere as, among other things, a facilitator for coordinating the U.N. response at the country level. This characterization came from UNAIDS documents that state: "In developing countries, UNAIDS operates mainly through the country-based staff of its seven cosponsors. Meeting as the host country's U.N. Theme Group on HIV/AIDS, representatives of the cosponsoring organizations share information, plans and monitor coordinated action…."

7. UNAIDS commented that theme groups are not responsible for resource mobilization. However, UNAIDS provided us the Resource Guide for Theme Groups, which devotes one of its five sections to resource mobilization. This section states that "resource mobilization at the country level is a key role of the Theme Group." To avoid any confusion, we modified the text.

8. UNAIDS noted that our report lacked clarity with regard to the role of the Country Programme Advisor and the operation of the Programme Acceleration Funds. To avoid any confusion about the Country Programme Advisor's role, we modified the text. The information we presented in the report on the operation of the Programme Acceleration Funds was taken directly from UNAIDS documents—primarily the 1999 evaluation of the funding process.

9. UNAIDS provided information on the additional number of integrated U.N. workplans that have been prepared, to demonstrate the progress theme groups have made in developing a more unified U.N. response to HIV/AIDS. However, we were not able to corroborate this information. In addition, while the information UNAIDS presented shows the number of workplans completed, it does not indicate the quality and content of the plans and the extent to which they have been implemented.

10. UNAIDS provided more current information on action taken to strengthen the performance of theme groups and Country Programme Advisors, the Secretariat's country-based staff. We revised the report to highlight some of these actions.

11. UNAIDS stated that the Unified Budget and Workplan 2000-2001 includes quantifiable performance targets. However, UNAIDS did not provide specific examples of such targets with its comments.
In examining UNAIDS' Unified Budget and Workplan in detail during our review, we noted that it contained outcome indicators. However, the workplan did not identify specific performance baselines, targets, or other measures that would enable UNAIDS to determine whether it had succeeded in its efforts and measure progress toward its objectives.

12. UNAIDS commented that its overall monitoring and evaluation plan included several one-time evaluations of specific efforts, such as UNAIDS' development of best practices. We revised the report to clarify that UNAIDS considers these one-time evaluations part of its overall monitoring and evaluation plan.

13. UNAIDS raised several concerns about the report's methodology and presentation. First, UNAIDS commented that the report focused too much on the findings contained in our 1998 report and did not adequately credit UNAIDS for the progress it has made. We disagree. We believe we have given credit to UNAIDS for progress in a number of areas, several of which were of specific concern in our 1998 report. For example, the report highlights increased U.N. and international commitment and funding to HIV/AIDS efforts, as well as a broadened approach to addressing HIV/AIDS from one that was exclusively health oriented to one that is now multisectoral. Further, the report notes the progress made on technical support and best practices, tracking the epidemic, and increasing U.N. coordination. However, our report also focused on those areas most needing improvement—namely, UNAIDS' country-level efforts and monitoring and evaluation of UNAIDS' progress and results. These are areas that the Department of State, USAID, and UNAIDS agree need improvement. Where appropriate, we have modified our report and included some additional information. Second, UNAIDS commented that the report will be out of date by the time it is issued. We disagree. The changing political climate surrounding HIV/AIDS issues does not negate the report's conclusions and recommendations. For example, UNAIDS' comments stated that not only did they agree that HIV/AIDS-related efforts at the country level need strengthening but that these efforts will certainly remain the central theme for "at least the next decade." Furthermore, the current debate to establish a $7 billion to $10 billion global trust fund to address the HIV/AIDS crisis in developing countries makes the issues cited in our report even more timely and critical. The challenges UNAIDS faced in mobilizing international support for HIV/AIDS efforts, marshalling donors' financial commitments, and establishing a system to evaluate program results are important lessons learned that should inform the current debate on a new global AIDS trust fund. UNAIDS' comments also noted that documentation used to support the report was largely constructed with data compiled from the previous year. We used the most current data supplied by UNAIDS and other information to conduct our analysis, including several of UNAIDS' and its governing board's commissioned evaluations. In addition, we conducted our own survey of USAID missions to obtain perspective on UNAIDS' country-level efforts in December 2000. Third, UNAIDS noted that the report contained selective quotations from several of UNAIDS' evaluations and surveys of specific functions, at the same time pointing out that UNAIDS' monitoring and evaluation efforts are insufficient.
We believe our use of available data and information contained in UNAIDS' evaluations was appropriate for depicting both the steps taken in UNAIDS' efforts and the weaknesses of those efforts. However, while this information was useful, it does not provide the results of UNAIDS' overall efforts or progress made toward its objectives. With bilateral and other donors responding to UNAIDS' call for increased resources to combat HIV/AIDS, a quality monitoring and evaluation effort, which includes a clearly defined mission, long-term strategic and short-term goals, measurement of performance against defined goals, and public reporting of results, is even more important.

Appendix VII: GAO Contact and Staff Acknowledgments

GAO Contact

Acknowledgments

In addition to Mr. Hutton, Leslie Bharadwaja, Sharon Caudle, Lynn Cothern, Francisco Enriquez, Aleta Hancock, Lynne Holloway, Stanley Kostyla, and Hector Wong made key contributions to this report.
Despite efforts by the international community to reduce the spread of the human immunodeficiency virus, AIDS is now the fourth leading cause of death in the world and the primary cause of death in sub-Saharan Africa. The Joint United Nations Programme on HIV/AIDS (UNAIDS), funded in part by the United States, is one important international effort against the disease. UNAIDS was established by the United Nations (U.N.) in 1996 to provide coordinated U.N. action and to lead and promote an expanded global response to the worldwide epidemic. This report (1) assesses UNAIDS' progress, especially at the country level, toward increasing the coordination and commitment of the U.N. and global community; (2) assesses UNAIDS' progress in providing technical assistance and information and in developing a monitoring and evaluation plan to measure results; and (3) identifies factors that may have affected UNAIDS' progress. GAO found that UNAIDS has made progress in increasing U.N. coordination and enhancing the global response to the worldwide HIV/AIDS epidemic, but its country-level efforts need to be strengthened. UNAIDS has provided financial and technical support to about 50 HIV/AIDS technical networks worldwide, but has not been as successful in tracking the funding and actions host governments and others have taken to address the AIDS problem. UNAIDS has also been unable to follow its intended model of U.N. reform, whereby a single Secretariat together with several U.N. agencies would marshal the U.N. and global community's resources to address the AIDS epidemic.
Background

FAA is an agency of the Department of Transportation (DOT); one of its central missions is to ensure safe, orderly, and efficient air travel in the national airspace system. FAA's quarterly administrator's fact book for March 2005 reports that, in 2004, air traffic in the national airspace system exceeded 46 million flights and 647 million people. According to the agency's 2004 annual performance report for its air traffic organization, Year One—Taking Flight, at any one time as many as 7,000 aircraft—both civilian and military—could be aloft over the United States (see fig. 1). More than 36,000 employees support the operations that help move aircraft through the national airspace system. The agency's ability to fulfill its mission depends on the adequacy and reliability of its air traffic control systems, a vast network of computer hardware, software, and communications equipment. These systems reside at, or are associated with, several types of facilities: air traffic control towers, Terminal Radar Approach Control facilities, Air Route Traffic Control Centers (or en route centers), and the Air Traffic Control System Command Center. According to FAA,

- Four hundred eighty-eight air traffic control towers (see fig. 2) manage and control the airspace within about 5 miles of an airport. They control departures and landings as well as ground operations on airport taxiways and runways.

- One hundred seventy Terminal Radar Approach Control facilities provide air traffic control services for airspace that is located within approximately 40 miles of an airport and generally up to 10,000 feet above the airport, where en route centers' control begins. Terminal controllers establish and maintain the sequence and separation of aircraft.

- Twenty-one en route centers control planes over the United States—in transit and during approaches to some airports. Each center handles a different region of airspace. En route centers operate the computer suite that processes radar surveillance and flight planning data, reformats it for presentation purposes, and sends it to display equipment that is used by controllers to track aircraft. The centers control the switching of voice communications between aircraft and the center as well as between the center and other air traffic control facilities. Two en route centers also control air traffic over the oceans.

- The Air Traffic Control System Command Center (see fig. 3) manages the flow of air traffic within the United States. This facility regulates air traffic when weather, equipment, runway closures, or other conditions place stress on the national airspace system. In these instances, traffic management specialists at the command center take action to modify traffic demands in order to keep traffic within system capacity.

As aircraft move across the national airspace system, controllers manage their movements during each phase of flight. See figure 4 for a visual summary of air traffic control over the United States and its oceans. The air traffic control systems are very complex and highly automated. These systems process a wide range of information, including radar, weather, flight plans, surveillance, navigation/landing guidance, traffic management, air-to-ground communication, voice, network management, and other information—such as airspace restrictions—that is required to support the agency's mission. To support its operational management functions, the agency relies on several interconnected systems to process and track flights around the world.
In order to successfully carry out air traffic control operations, it is essential that FAA's systems interoperate, functioning both within and across facilities as one integrated system of systems. Each type of facility that we described in the previous section consists of numerous interrelated systems. For example, each of the en route centers, according to FAA officials, relies on 16 systems to perform mission-critical information processing and display, navigation, surveillance, communications, and weather functions. In addition, systems from different facilities interact with each other so that together they can successfully execute the entire air traffic control process. For example, systems integrate data on aircraft position from surveillance radars with data on flight destination from flight planning data systems, for use on controllers' displays.

As FAA modernizes its air traffic control systems, information security will become even more critical. The agency's modernization efforts are designed to enhance the safety, capacity, and efficiency of the national airspace system through the acquisition of a vast network of radar, navigation, communications, and information processing systems. Newer systems use digital computer networking and telecommunications technologies that can create new vulnerabilities and expose the systems to risks that must be assessed and mitigated to ensure adequate protection. New vulnerabilities may also result from FAA's increasing reliance on commercially available hardware and software and from growing interconnectivity among computer and communication systems. Increasing interconnection increases the extent to which systems become vulnerable to intruders, who may severely disrupt operations or manipulate sensitive information.

The administrator has designated the CIO as the focal point for information system security within the agency. The CIO is responsible for overseeing the development of the information security program, including oversight of information security policies, architectures, concepts of operation, procedures, processes, standards, training, and plans. This responsibility is delegated to the Office of Information Systems Security, whose mission is to protect the agency's infrastructure through leadership in innovative information assurance initiatives. In addition, the agency has established Information System Security Manager positions, with more detailed information security responsibilities, within FAA's various lines of business, such as the air traffic organization.

We have previously reported information security weaknesses at FAA. For instance, in December 2000, we reported that the agency had physical security vulnerabilities, ineffective operational systems security, inadequate service continuity efforts, an ineffective intrusion detection capability, and ineffective personnel security. We also noted that the agency had not yet implemented its information security program.

Information system controls are an important consideration for any organization that depends on computerized systems and networks to carry out its mission or business. These controls should provide adequate protections against outside as well as inside threats. Such protection is especially important for government organizations, such as FAA, for which maintaining the public trust is essential.
Inadequately protected systems are at risk of intrusion by individuals or groups with malicious intent, who could use their illegitimate access to obtain sensitive information, disrupt operations, or launch attacks against other computer systems and networks. Since 1997, we have designated information security as a governmentwide high-risk area. Our previous reports, and those of agency inspectors general, describe persistent information security weaknesses that place a variety of federal operations at risk of disruption, fraud, and inappropriate disclosure.

Congress and the executive branch have taken actions to address the risks associated with persistent information security weaknesses. In December 2002, Congress enacted the Federal Information Security Management Act (FISMA), which is intended to strengthen the information security of federal systems. In addition, the administration has taken important steps to improve information security, such as integrating it into the President's Management Agenda Scorecard. Moreover, the Office of Management and Budget (OMB) and the National Institute of Standards and Technology (NIST) have issued security guidance to federal agencies.

Objective, Scope, and Methodology

The objective of our review was to determine the extent to which FAA had implemented information security for its air traffic control systems. Our evaluation was based on (1) our Federal Information System Controls Audit Manual, which contains guidance for reviewing information system controls that affect the integrity, confidentiality, and availability of computerized data; (2) previous reports from DOT's Office of Inspector General (OIG); and (3) FISMA, which sets key elements that are required for an effective information security program. Specifically, we evaluated information system controls that are intended to

- protect resources, data, and software from unauthorized access;

- prevent the introduction of unauthorized changes to application and system software;

- provide segregation of duties in the areas of application programming, system programming, computer operations, information security, and quality assurance;

- ensure recovery of computer processing operations in case of disaster or other unexpected interruption; and

- ensure an adequate information security program.

To evaluate these controls, we identified and reviewed pertinent DOT and FAA security policies and procedures. In addition, to determine whether information system general controls were in place, adequately designed, and operating effectively, we conducted vulnerability testing and assessments of systems from within the agency's network. We also held discussions with agency staff to gain an understanding of FAA's processes and controls. In addition, in order to take advantage of their prior work in this area, we held discussions with OIG staff and reviewed recent information security reports pertaining to air traffic control systems. Because the OIG had recently reviewed the system used by controllers to ensure the safe separation of aircraft, we did not include that system in our review. We performed our review at FAA headquarters and tested operational and management controls at three other sites. At two additional sites, we tested these controls and, in addition, tested technical controls for three critical air traffic control systems. The limited distribution report contains further details on the scope of our review.
This review was performed from March 2004 through June 2005 in accordance with generally accepted government auditing standards.

Although Progress Has Been Made, Air Traffic Control Systems Remain Vulnerable

Although FAA has made progress in implementing information security for its air traffic control systems by establishing an agencywide information security program and addressing many of its previously identified security weaknesses, significant control weaknesses threaten the integrity, confidentiality, and availability of those systems and information. In the systems we reviewed, we identified 36 weaknesses in electronic access controls and in other areas such as physical security, background investigations, segregation of duties, and application change controls. A key reason for these weaknesses is that the agency has not yet fully implemented an information security program. As a result, FAA's air traffic control systems remain vulnerable to unauthorized access, use, modification, and destruction that could disrupt aviation operations.

Electronic Access Controls Were Inadequate

A basic management objective for any organization is to protect the resources that support its critical operations from unauthorized access. Organizations accomplish this objective by designing and implementing electronic controls that are intended to prevent, limit, and detect unauthorized access to computing resources, programs, and information. Electronic access controls include those related to network management, patch management, user accounts and passwords, user rights and file permissions, and audit and monitoring of security-relevant events. Inadequate electronic access controls diminish the reliability of computerized information, and they increase the risk of unauthorized disclosure, modification, and destruction of sensitive information and of disruption of service.

Network Management

Networks are collections of interconnected computer systems and devices that allow individuals to share resources such as computer programs and information. Because sensitive programs and information are stored on or transmitted along networks, effectively securing networks is essential to protecting computing resources and data from unauthorized access, manipulation, and use. Organizations secure their networks, in part, by installing and configuring network devices that permit authorized network service requests, deny unauthorized requests, and limit the services that are available on the network. Devices used to secure networks include (1) firewalls that prevent unauthorized access to the network, (2) routers that filter and forward data along the network, (3) switches that forward information among segments of a network, and (4) servers that host applications and data. Network services consist of protocols for transmitting data between network devices. Insecurely configured network services and devices can make a system vulnerable to internal or external threats, such as denial-of-service attacks. Because networks often include both external and internal access points for electronic information assets, failure to secure these assets increases the risk of unauthorized modification of sensitive information and systems, or disruption of service.
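The kind of service-limiting configuration review described above can be illustrated with a minimal sketch that probes hosts for listening services and flags anything outside an approved list. The host names, port numbers, and approved-service list below are hypothetical examples chosen for illustration; they are not FAA systems or settings.

```python
# Minimal sketch: flag network services that are reachable but not on an
# approved list. Hosts, ports, and the approved set are hypothetical examples.
import socket

APPROVED_PORTS = {22, 443}                     # e.g., SSH and HTTPS only
PORTS_TO_PROBE = [21, 22, 23, 80, 443, 3389]   # common service ports
CLEARTEXT_PORTS = {21: "ftp", 23: "telnet", 80: "http"}

def port_is_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def review_host(host: str) -> None:
    """Report any listening service that is not on the approved list."""
    for port in PORTS_TO_PROBE:
        if not port_is_open(host, port):
            continue
        if port not in APPROVED_PORTS:
            label = CLEARTEXT_PORTS.get(port, "unapproved service")
            print(f"{host}:{port} is listening ({label}) but is not approved")

if __name__ == "__main__":
    for host in ["ops-server.example.net", "web-gateway.example.net"]:  # hypothetical
        review_host(host)
```

A check of this sort would typically be run from inside the network and compared against a documented service baseline for each device.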
For the systems we reviewed, FAA did not consistently configure network services and devices securely to prevent unauthorized access to and ensure the integrity of computer systems operating on its networks. We identified weaknesses in the way the agency restricted network access, developed application software, segregated its network, protected information flow, and stored the certificates that are used for authentication. For example:

- Access for system administration was not always adequately restricted, and unnecessary services were available on several network systems.

- Application software exhibited several weaknesses that could lead to unauthorized access or to service disruptions.

- Although FAA implemented controls to segregate network traffic, weaknesses in the application and infrastructure systems could allow an external attacker to circumvent network controls in order to gain unauthorized access to the internal network.

- FAA did not encrypt certain information traversing its internal network. Instead, it used clear text protocols that made the network susceptible to eavesdropping.

- FAA did not comply with federal standards for protected handling of certificates and keys. Because certificates are a primary tool for controlling access to applications, this improper storage puts major applications at risk of intrusion.

Patch Management

Patch management is a critical process that can help to alleviate many of the challenges of securing computing systems. As vulnerabilities in a system are discovered, attackers may attempt to exploit them, possibly causing significant damage. Malicious acts can range from defacing Web sites to taking control of entire systems and thereby being able to read, modify, or delete sensitive information; destroy systems; disrupt operations; or launch attacks against other organizations' systems. After a vulnerability is validated, the software vendor develops and tests a patch or workaround. Incident response groups and software vendors issue information updates on the vulnerability and the availability of patches. FAA's patch management policy assigns organizational responsibilities for the patch management process—including the application of countermeasures to mitigate system vulnerability—and requires that patches be kept up to date or that officials otherwise accept the risk.

For the systems we reviewed, FAA did not consistently install patches in a timely manner. For example, patches that had been issued in 2002 had not been applied to certain servers that we reviewed. On another system, the operating system software, from 1991, was outdated and unpatched, although several vulnerabilities had been identified in the meantime. The agency did not believe that the system was vulnerable to unauthorized access and considered it to be at low risk of exposure to these vulnerabilities. Because FAA had not yet installed the latest patches at the time of our review, firewalls, Web servers, and servers used for other purposes were vulnerable to denial-of-service attacks and to external attackers' taking remote control of them.
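To make the patch-currency requirement concrete, the following minimal sketch compares installed software versions on a set of hosts against the minimum patched versions and reports how long a missing patch has been available. All host names, package names, versions, and dates are hypothetical illustrations, not FAA inventory data.

```python
# Minimal sketch: compare installed software versions against the minimum
# patched versions and flag hosts that are out of date. All names, versions,
# and release dates are hypothetical.
from datetime import date

REQUIRED = {  # package -> (minimum patched version, patch release date)
    "webserver": ((2, 4, 10), date(2002, 8, 15)),
    "os-kernel": ((5, 1, 0), date(2003, 1, 20)),
}

INVENTORY = {  # hypothetical inventory: host -> {package: installed version}
    "enroute-fileserver": {"webserver": (2, 3, 9), "os-kernel": (5, 1, 2)},
    "tower-gateway": {"webserver": (2, 4, 12), "os-kernel": (4, 9, 0)},
}

def report_missing_patches(as_of: date) -> None:
    """Print each package that is below its minimum patched version."""
    for host, packages in INVENTORY.items():
        for name, installed in packages.items():
            minimum, released = REQUIRED[name]
            if installed < minimum:
                age = (as_of - released).days
                print(f"{host}: {name} {installed} is below {minimum}; "
                      f"the patch has been available for {age} days")

if __name__ == "__main__":
    report_missing_patches(date.today())
```

In practice, the inventory side of such a check would be fed by an asset database or configuration management tool rather than a hard-coded dictionary.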
User Accounts and Passwords

A computer system must be able to identify and differentiate among users so that activities on the system can be linked to specific individuals. When an organization assigns unique user accounts to specific users, the system distinguishes one user from another—a process called identification. The system must also establish the validity of a user's claimed identity through some means of authentication, such as a password, that is known only to its owner. The combination of identification and authentication—such as user account/password combinations—provides the basis for establishing individual accountability and for controlling access to the system. Accordingly, agencies (1) establish password parameters, such as number of characters, type of characters, and the frequency with which users should change their passwords, in order to strengthen the effectiveness of passwords for authenticating the identity of users; (2) require encryption for passwords to prevent their disclosure to unauthorized individuals; and (3) implement procedures to control the use of user accounts. FAA policy identifies and prescribes minimum requirements for creating and managing passwords, including how complex the password must be and how to protect it. DOT policy also addresses the necessity to assign only one user to a given ID and password.

FAA did not adequately control user accounts and passwords to ensure that only authorized individuals were granted access to its systems. Because the agency did not always comply with complexity requirements, passwords on numerous accounts may be easy for an attacker to guess. Additionally, one of the databases we reviewed did not require strong passwords. We also identified database passwords that were not adequately protected because they were (1) readable by all system users on two Web servers, (2) in clear text format on multiple shared server directories, and (3) written into application program code. Such weaknesses increase the risk that passwords may be disclosed to unauthorized users and used to gain access to the system. Further, administrators and/or users shared user IDs and passwords on various devices, including servers, routers, and switches, thereby diminishing the effectiveness of the control for attributing system activity to individuals. As a result, FAA may not be able to hold users individually accountable for system activity.

User Rights and File Permissions

The concept of "least privilege" is a basic underlying principle for securing computer systems and data. It means that users are granted only those access rights and permissions that they need to perform their official duties. To restrict legitimate users' access to only those programs and files that they need to do their work, organizations establish access rights and permissions. "User rights" are allowable actions that can be assigned to users or to groups of users. File and directory permissions are rules that are associated with a particular file or directory and regulate which users can access them and the extent of that access. To avoid unintentionally giving users unnecessary access to sensitive files and directories, an organization must give careful consideration to its assignment of rights and permissions. DOT and FAA policies require that access privileges be granted to users at the minimum level required to perform their job-related duties.

FAA permitted excessive access to air traffic control systems, granting rights and permissions that allowed more access than users needed to perform their jobs. For example, FAA had granted users of a database system the access rights to create or change sensitive system files—even though they did not have a legitimate business need for this access. Further, the permissions for sensitive system files also inappropriately allowed all users to read, update, or execute them.
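The password-protection and least-privilege weaknesses described above lend themselves to simple automated checks. The sketch below walks a directory tree, flags files that are readable or writable by all users, and highlights file names that suggest stored credentials; the starting path and name patterns are hypothetical assumptions, not FAA paths.

```python
# Minimal sketch: walk a directory tree and flag files that are readable or
# writable by all users, and files whose names suggest stored credentials.
# The starting path and the name patterns are hypothetical examples.
import os
import stat

CREDENTIAL_HINTS = ("password", "passwd", "credential", ".key")

def audit_permissions(root: str) -> None:
    """Report world-accessible files, especially those that look sensitive."""
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mode = os.stat(path).st_mode
            except OSError:
                continue  # unreadable or removed file; skip it
            world_readable = bool(mode & stat.S_IROTH)
            world_writable = bool(mode & stat.S_IWOTH)
            looks_sensitive = any(h in name.lower() for h in CREDENTIAL_HINTS)
            if world_writable or (world_readable and looks_sensitive):
                note = " (possible stored credential)" if looks_sensitive else ""
                print(f"{path}: mode {stat.filemode(mode)}{note}")

if __name__ == "__main__":
    audit_permissions("/srv/app-shared")  # hypothetical shared directory
```

A review of this kind only surfaces candidates; confirming whether a flagged file actually holds credentials, and tightening its permissions, still requires administrator judgment.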
Audit and Monitoring of Security-Relevant Events

To establish individual accountability, monitor compliance with security policies, and investigate security violations, it is crucial to determine what, when, and by whom specific actions have been taken on a system. Organizations accomplish this by implementing system or security software that provides an audit trail that they can use to determine the source of a transaction or attempted transaction and to monitor users' activities. The way in which organizations configure system or security software determines the nature and extent of information that can be provided by the audit trail. To be effective, organizations should configure their software to collect and maintain audit trails that are sufficient to track security-relevant events. DOT policy requires that audit logging be enabled on systems so that these events can be monitored.

For the systems we reviewed, FAA did not consistently audit and monitor security-relevant system activity on its servers. For example, on key devices that we reviewed, logging either was disabled or configured to overwrite, or it did not collect information on important security-relevant events such as failed login attempts. As a result, if a system was modified or disrupted, the agency's capability to trace or recreate events would be diminished.
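As an illustration of the kind of monitoring such audit trails support, the following minimal sketch tallies failed login attempts per account from a syslog-style authentication log and flags accounts above a threshold. The log location, message format, and threshold are hypothetical assumptions made for the example.

```python
# Minimal sketch: count failed login attempts per account in a syslog-style
# authentication log and flag accounts above a threshold. The log path,
# message format, and threshold are hypothetical examples.
import re
from collections import Counter

FAILED_LOGIN = re.compile(r"Failed password for (?:invalid user )?(\S+)")
THRESHOLD = 5

def review_auth_log(path: str) -> None:
    """Print accounts with a suspicious number of failed login attempts."""
    failures = Counter()
    with open(path, encoding="utf-8", errors="replace") as log:
        for line in log:
            match = FAILED_LOGIN.search(line)
            if match:
                failures[match.group(1)] += 1
    for account, count in failures.most_common():
        if count >= THRESHOLD:
            print(f"{account}: {count} failed login attempts")

if __name__ == "__main__":
    review_auth_log("/var/log/auth.log")  # hypothetical log location
```

Reviews like this presuppose that logging is enabled, retained rather than overwritten, and configured to capture failed logins in the first place, which is precisely the gap noted above.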
In response to weaknesses that we identified in electronic access controls, FAA officials told us that they had already corrected many of the weaknesses. Agency officials also pointed out that because major portions of air traffic control systems consist of custom-built, older equipment with special-purpose operating systems, proprietary communication interfaces, and custom-built software, the possibilities for unauthorized access are limited, and that this mitigates the risks. However, as we noted in our 1998 report on FAA information security, one cannot conclude that old or obscure systems are secure simply because their configurations may not be commonly understood by external hackers. In addition, the systems' proprietary features do not provide protection from attack by disgruntled current and former employees who understand them, or from more sophisticated hackers. The weaknesses that we identified could allow unauthorized access to certain systems.

Other Information System Controls Were Not Sufficient

In addition to electronic access controls, other important controls should be in place to ensure the security and reliability of an organization's data. These controls include policies, procedures, and control techniques to physically secure computer resources, conduct suitable background investigations, provide appropriate segregation of duties, and prevent unauthorized changes to application software. However, weaknesses existed in each of these areas. These weaknesses increase the risk of unauthorized access to and modification of FAA's information systems and of disruption of service.

Physical Security

Physical security controls are important for protecting computer facilities and resources from espionage, sabotage, damage, and theft. These controls restrict physical access to computer resources, usually by limiting access to the buildings and rooms in which the resources are housed and by periodically reviewing the access granted, in order to ensure that access continues to be appropriate. At FAA, physical access control measures (such as guards, badges, and locks—used alone or in combination) are vital to protecting the agency's sensitive computing resources from both external and internal threats. FAA has implemented a facility security management program that requires all staffed facilities to undergo a physical security review. These physical security reviews are part of an overall facility accreditation program, which requires facilities to meet all required security measures in order to become accredited. Since our December 2000 report, FAA has made progress with this program and has accredited about 430 additional facilities, for a total of 64.8 percent of its staffed facilities (see fig. 5).

Although FAA had taken some actions to strengthen its physical security environment, certain weaknesses reduced its effectiveness in protecting and controlling physical access to sensitive areas such as server rooms. Facility reviews are supposed to determine the overall risk level at the facility, examine the facility's security procedures, and discover local threats and vulnerabilities. However, in 2004, DOT's OIG reported that these physical security reviews generally focused more on the facility's perimeter than on vulnerabilities within the facility. We also identified weaknesses in FAA's physical security controls. Specific examples are listed below:

- FAA did not consistently ensure that access to sensitive computing resources had been granted to only those who needed it to perform their jobs. At the time of our review, FAA did not have a policy in place requiring that (1) physical access logs be reviewed for suspicious activity or (2) access privileges be reviewed to ensure that employees and contractors who had been granted access to sensitive areas still needed it. As a result, none of the sites we visited could ensure that employees and contractors who were accessing sensitive areas had a legitimate need for access.

- Sensitive computing resources and critical operations areas were not always secured.

- FAA did not properly control the badging systems used for granting physical access to facilities. The required information security access controls regarding password protection were inconsistently implemented, and division of roles and responsibilities was not enforced in the automated system.

- The entrances to facilities were not always adequately protected. Visitor screening procedures were inconsistently implemented, and available tools were not being used properly or to their fullest capability.

These weaknesses in physical security increase the risk that unauthorized individuals could gain access to sensitive computing resources and data and could inadvertently or deliberately misuse or destroy them.

Background Investigations

According to OMB Circular A-130, it has long been recognized that the greatest harm to computing resources has been done by authorized individuals engaged in improper activities—whether intentionally or accidentally. Personnel controls (such as screening individuals in positions of trust) supplement technical, operational, and management controls, particularly where the risk and magnitude of potential harm is high. NIST guidelines suggest that agencies determine the sensitivity of particular positions, based on such factors as the type and degree of harm that the individual could cause by misusing the computer system and on more traditional factors, such as access to classified information and fiduciary responsibilities.
Background screenings (i.e., investigations) help an organization to determine whether a particular individual is suitable for a given position by attempting to ascertain the person's trustworthiness and appropriateness for the position. The exact type of screening that takes place depends on the sensitivity of the position and any applicable regulations by which the agency is bound.

In 2000, we testified that FAA had failed to conduct background investigations on thousands of contractor personnel. Further, according to the testimony, many reinvestigations—which are required every 5 years for top secret clearances—were never completed. Since our 2000 testimony, the agency has made improvements to its background investigation program. For example, according to agency officials, it has completed background investigations for 90 percent of its contractor personnel and has implemented an automated system to track and report when reinvestigations are required.

Although FAA has recently made improvements to its background investigation program, the agency has not always properly designated sensitivity levels for positions involving tasks that could have a major impact on automated information systems. According to the Office of Personnel Management (OPM), positions with major responsibility for the design, testing, maintenance, operation, monitoring, or management of systems hardware and software should be designated as "high risk." However, FAA has designated some of these types of positions as "moderate risk"; all 20 individuals that we identified as having system responsibilities with potentially significant access were designated as moderate risk or below. Further, OPM recommends a minimum background investigation for moderate risk positions. Nonetheless, FAA had been requiring only a National Agency Check and Inquiry, a less stringent investigation. Without properly designating position sensitivity levels and performing the appropriate background investigations, the agency faces an increased risk that inappropriate individuals could modify critical information and systems or disrupt operations.

Segregation of Duties

Segregation of duties refers to the policies, procedures, and organizational structure that help ensure that no single individual can independently control all key aspects of a process or computer-related operation and thereby gain unauthorized access to assets or records. Often segregation of duties is achieved by dividing responsibilities among two or more individuals or organizational groups. This diminishes the likelihood that errors and wrongful acts will go undetected, because the activities of one individual or group will serve as a check on the activities of the other. Inadequate segregation of duties increases the risk that erroneous or fraudulent transactions could be processed, improper program changes be implemented, and computer resources be damaged or destroyed.

For the systems we reviewed, FAA did not properly segregate incompatible duties in its computer-related operations. Key duties in a computer environment that are generally segregated include software design, development, and testing; software change control; computer operations; and computer production control. However, on one of the systems that we reviewed, FAA allowed software developers to place application code into the production environment. With access to production systems, software developers could intentionally introduce malicious code.
Additionally, FAA did not have mitigating controls; for example, there was no provision for reviewing code on production systems to ensure that only authorized code was placed into production. FAA officials told us that the agency plans to establish an independent production control group that would place code into production once resources become available for this particular system. Without adequate segregation of duties or appropriate mitigating controls, FAA is at increased risk that unauthorized code could be introduced into the production environment, possibly without detection.

Application Change Controls

It is important to ensure that only authorized and fully tested application programs are placed in operation. To ensure that changes to application programs are necessary, work as intended, and do not result in the loss of data or program integrity, such changes should be documented, authorized, tested, and independently reviewed. In addition, test procedures should be established to ensure that only authorized changes are made to the application's program code.

Application change control procedures that FAA's contractor used were incomplete. At one site, we reviewed change control and quality assurance documentation for 10 of 50 software changes that had been made by FAA's contractor in 2004. We determined that the contractor appropriately followed its own change control process, only omitting a few minor items in its documentation. However, although the contractor's change control process adequately addressed software testing, it did not include reviewing code after it had been installed on production systems to verify that the correct code had been placed into production. This issue is important, because developers are allowed access to production systems. With no mitigating controls in place, developers could introduce unauthorized code into production systems—without detection.
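One way to implement the kind of mitigating control described above is to compare what is installed in production against the approved release. The minimal sketch below verifies files on a production system against SHA-256 hashes recorded in an approved-release manifest; the manifest format and file paths are hypothetical assumptions, not FAA's or its contractor's actual process.

```python
# Minimal sketch: verify that files on a production system match the SHA-256
# hashes recorded in an approved-release manifest, so that unauthorized code
# can be detected. The manifest format and file paths are hypothetical.
import hashlib
import json

def sha256_of(path: str) -> str:
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

def verify_release(manifest_path: str) -> bool:
    """Return True only if every file listed in the manifest matches its hash."""
    with open(manifest_path, encoding="utf-8") as f:
        manifest = json.load(f)  # {"path/to/file": "expected sha256 hex", ...}
    clean = True
    for path, expected in manifest.items():
        try:
            actual = sha256_of(path)
        except OSError:
            print(f"{path}: missing from production")
            clean = False
            continue
        if actual != expected:
            print(f"{path}: hash differs from the approved release")
            clean = False
    return clean

if __name__ == "__main__":
    ok = verify_release("approved_release_manifest.json")  # hypothetical manifest
    print("production matches the approved release" if ok else "review required")
```

Such a check is only as trustworthy as the manifest itself, so the manifest would need to be produced and protected by a group independent of the developers who write the code.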
Information Security Program Is Not Yet Fully Implemented

A key reason for the information security weaknesses that we identified in FAA's air traffic control systems was that the agency had not yet fully implemented its information security program to help ensure that effective controls were established and maintained. FAA has implemented the foundation for an effective information security program with written policy and guiding procedures that designate responsibility for implementation throughout the agency.

FISMA requires agencies to implement an information security program that includes

- periodic assessments of the risk and the magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information and information systems;

- policies and procedures that (1) are based on risk assessments, (2) cost-effectively reduce risks, (3) ensure that information security is addressed throughout the life cycle of each system, and (4) ensure compliance with applicable requirements;

- plans for providing adequate information security for networks, facilities, and systems;

- security awareness training to inform personnel—including contractors and other users of information systems—of information security risks and of their responsibilities in complying with agency policies and procedures;

- at least annual testing and evaluation of the effectiveness of information security policies, procedures, and practices relating to management, operational, and technical controls of every major information system that is identified in the agencies' inventories;

- a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies in their information security policies, procedures, or practices;

- procedures for detecting, reporting, and responding to security incidents; and

- plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency.

FAA has made progress in implementing information security by establishing an agencywide information security program and addressing many of its previously identified security weaknesses. FAA's Information System Security Program Handbook requires each of these FISMA elements, and the agency has initiatives under way in all of these areas. In addition, the Office of Information Systems Security has developed a security management tool to monitor (1) the status of corrective actions, (2) the status of certifications and authorizations for all systems in FAA's inventory, (3) information security-related budgetary allocations and expenditures, and (4) training requirements for key security personnel. However, we identified instances in which the program had not been fully or consistently implemented for the air traffic control systems. Agency officials recognize that more work is needed to continue to improve their information security program.

Risk Assessments

Identifying and assessing information security risks are essential steps in determining what controls are required. Moreover, by increasing awareness of risks, these assessments can generate support for the policies and controls that are adopted in order to help ensure that these policies and controls operate as intended. Further, OMB Circular A-130, appendix III, prescribes that risk be reassessed when significant changes are made to computerized systems—or at least every 3 years, as does FAA policy. Consistent with NIST guidance, FAA requires that risk assessments include identifying system interconnections, information sensitivity, threats, and existing countermeasures, and analyzing vulnerabilities.

The risk assessments that we reviewed generally complied with FAA requirements. For the systems we reviewed, FAA provided five risk assessments. Four of the five included the required topics. However, the fifth was incomplete and did not always address countermeasures.
Inadequately assessing risk and identifying countermeasures can lead to implementing inadequate or inappropriate security controls that might not address the system's true risk, and to costly efforts to subsequently implement effective controls.

Policies and Procedures

Another key task in developing an effective information security program is to establish and implement risk-based policies, procedures, and technical standards that govern security over an agency's computing environment. If properly implemented, policies and procedures should help reduce the risk that could come from unauthorized access or disruption of services. Technical security standards provide consistent implementing guidance for each computing environment. Because security policies are the primary mechanism by which management communicates its views and requirements, it is important to establish and document them.

FAA's Office of Information Systems Security has developed systems security policies, with the intent to provide security commensurate with the risks of unauthorized access or disruption of service. For example, FAA has developed policies on an overall information system security program, background investigations, and password management. Further, the agency's Information System Security Program Handbook provides detailed information on certification and authorization of information systems. DOT has also developed technical standards that address various computing environments. However, FAA's policies and procedures did not address issues such as reviewing and monitoring physical access. In addition, the agency had not yet developed procedures to effectively implement patch management for its air traffic control systems. Also, as noted earlier, in some instances—such as password management—FAA was not following its own policies and procedures. Without effectively implementing policies and procedures, the agency has less assurance that its systems and information are protected.

Security Plans

The objective of system security planning is to improve the protection of information technology resources. A system security plan provides an overview of the system's security requirements and describes the controls that are in place—or planned—to meet those requirements. OMB Circular A-130 requires that agencies develop and implement system security plans for major applications and for general support systems and that these plans address policies and procedures for providing management, operational, and technical controls. Further, Circular A-130 requires that agencies' plans be consistent with guidance issued by NIST. FAA policy requires that security plans be developed, and its Information System Security Program Handbook provides guidance on developing security plans. According to both FAA and NIST, plans should include elements such as security controls currently in place or planned, the individual responsible for the security of the system, a description of the system and its interconnected environment, and rules of behavior.

Although the security plans that we reviewed generally complied with FAA policy and guidance, we identified instances where plans were incomplete or not up-to-date. All five of the information system security plans we reviewed were missing information required by FAA. Procedures outlining the individuals responsible for plan reviews and monitoring the status of planned controls were missing in each case. Also, no agency officials were identified to fulfill this responsibility.
Although a security plan had been developed for one of FAA's major applications, it was missing such required sections as rules of behavior and controls in place for public access. Another plan did not identify the system owner or the individual who had responsibility for system security. Further, some sections in one of the plans we reviewed were outdated. For example, security controls that existed at the time of our review were not described in the plan. Without complete and up-to-date security plans, FAA cannot ensure that appropriate controls are in place to protect its systems and critical information.

Security Awareness Training

Another FISMA requirement for an information security program is that it promote awareness and provide required training for users so that they can understand the system security risks and their role in implementing related policies and controls to mitigate those risks. Computer intrusions and security breakdowns often occur because computer users fail to take appropriate security measures. For this reason, it is vital that employees and contractors who use computer resources in their day-to-day operations be made aware of the importance and sensitivity of the information they handle, as well as the business and legal reasons for maintaining its confidentiality, integrity, and availability. FISMA mandates that all federal employees and contractors who use agency information systems be provided with periodic training in information security awareness and accepted information security practice. FAA has established a policy requiring employees and contractors to take annual security awareness training. Further, FISMA requires agency CIOs to ensure that personnel with significant information security responsibilities get specialized training. OMB and NIST also require agencies to implement system-specific security training.

In December 2000, we reported that FAA had not fully implemented a security awareness and training program. Since then, the agency has established its policy for annual training and has implemented an agencywide security awareness program that includes newsletters, posters, security awareness days, and a Web site. FAA has also implemented a Web-based security awareness training tool that not only meets the requirements of FISMA, but also records whether individuals have completed the training. The training records that we reviewed showed that personnel with significant information security responsibilities had received specialized training.

Despite the agency's progress in security awareness training, we identified shortcomings with the program. For example, although FAA implemented a Web-based training tool, the agency does not require all employees and contractors to use it. As a result, not all contractors and employees receive annual training, training is not appropriately tracked and reported, and the training provided in place of the tool is not always adequate. Although FAA reported in its most recent FISMA report that 100 percent of its employees and contractors had taken security awareness training, it was unable to provide documentation for more than one-third of selected employees and contractors. Further, the agency does not have an effective tracking mechanism for security awareness training. In some circumstances, management relies on verbal responses from employees and contractors on whether they have completed training, but it has no uniform reporting requirements.
In instances where the Web-based tool is not used, the awareness training may be inadequate. At one of the sites we visited, this training consisted of a briefing that did not cover information system security or the associated risks. Further, the agency had not developed guidance or procedures for system-specific security training, as required by OMB and NIST. Without adequate security awareness and training programs, security lapses are more likely to occur. As in our December 2000 report, we were again able to access sensitive security information on the Internet. FAA agreed that the information we identified was sensitive and took prompt action to remove the specific examples that we had provided. However, 8 months later, one of the examples was available on the Internet again, even though it was marked for “Internal Distribution Only.” Tests and Evaluations of Control Effectiveness Another key element of an information security program is testing and evaluating systems to ensure that they are in compliance with policies and that policies and controls are both appropriate and effective. This type of oversight is a fundamental element because it demonstrates management’s commitment to the security program, reminds employees of their roles and responsibilities, and identifies and mitigates areas of noncompliance and ineffectiveness. Although control tests and evaluations may encourage compliance with security policies, the full benefits are not achieved unless the results improve the security program. Analyzing the results of security reviews provides security specialists and business managers with a means of identifying new problem areas, reassessing the appropriateness of existing controls, and identifying the need for new controls. FISMA requires that the frequency of tests and evaluations be based on risk but occur no less often than annually. Security tests and evaluations are part of FAA’s certification and authorization process, which is required every 3 years or when significant changes to the system occur. According to agency officials, in each of the 2 years between certifications, FAA conducts a self-assessment based on NIST guidance. Although FAA had conducted system tests and evaluations, documentation and testing were not always adequate. For example: In three of the five test plan and results reports we reviewed, most of the test results were not included. Additionally, very little testing was conducted on the network and infrastructure components of any of the systems we reviewed. As of April 2005, the certifications and authorizations for about 24 percent of the air traffic control systems were either outdated or had not been completed. According to FAA officials, the agency’s risk-based approach focused on certifying and accrediting all of its systems; therefore, management accepted an extension beyond 3 years for some systems. DOT’s IG testified that some of the testing is being conducted only on developmental systems, rather than on operational systems. FAA’s practice was to perform system tests and evaluations annually without regard to criticality. Our tests of critical systems identified many weaknesses. Had FAA tested these systems more frequently, it might have identified and corrected many of the information security weaknesses discussed in this report. Without appropriate tests and evaluations, the agency cannot be assured that employees and contractors are complying with established policies or that policies and controls are appropriate and working as intended.
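The 3-year certification cycle discussed above lends itself to a simple currency check: any system whose last certification is older than the cycle is flagged as outdated. The sketch below is a hypothetical illustration of that check—system names, dates, and the use of a fixed 3-year window are our assumptions, not FAA data or tooling.

```python
# Hypothetical sketch of checking whether a system's certification and
# authorization is current under a 3-year cycle. All data shown are illustrative.
from datetime import date

def cert_outdated(last_certified: date, as_of: date, cycle_days: int = 3 * 365) -> bool:
    """Treat a certification as outdated once the 3-year cycle has elapsed."""
    return (as_of - last_certified).days > cycle_days

systems = {  # system -> date of last certification and authorization (illustrative)
    "system_x": date(2001, 5, 10),
    "system_y": date(2003, 9, 30),
}
as_of = date(2005, 4, 1)
for name, certified in systems.items():
    print(name, "outdated" if cert_outdated(certified, as_of) else "current")
```

A risk-based variant could shorten the window for the most critical systems, consistent with the point above that critical systems warrant more frequent testing.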
Remedial Actions Remedial action plans are a key program component described in FISMA. They assist agencies in identifying, assessing, prioritizing, and monitoring progress in correcting security weaknesses found in information systems. According to OMB Circular A-123, agencies should take timely and effective action to correct deficiencies that they have identified through a variety of information sources. To accomplish this, remedial action plans should be developed for each deficiency, and progress in correcting each should be tracked. FAA policy requires remediation reports to address the results of tests and evaluations. Although the agency has developed a remedial action tracking system—which includes remedial plans for weaknesses identified through previous reviews—to help it monitor progress in correcting security weaknesses, these plans did not address all identified weaknesses, and some deficiencies were not corrected in a timely manner. Incident Handling Even strong controls may not block all intrusions and misuse, but organizations can reduce the risks associated with such events if they promptly take steps to detect and respond to them before significant damage is done. In addition, accounting for and analyzing security problems and incidents are effective ways for organizations to gain a better understanding of threats to their information and of the costs of their security-related problems. Such analyses can pinpoint vulnerabilities that need to be eliminated so that they will not be exploited again. Problem and incident reports can provide valuable input for risk assessments, can help in prioritizing security improvement efforts, and can be used to illustrate risks and related trends for senior management. DOT has issued a policy for detecting, reporting, and responding to security incidents. In December 2000, we reported that FAA had not fully implemented an effective intrusion detection capability. Since then, FAA has established a Computer Security Incident Response Center, whose mission is to detect and respond to intrusions on FAA’s systems. The Center produces incident reports and provides agency management with various analyses. However, the following weaknesses prevent it from effectively detecting and responding to many potential threats: Although the agency has deployed intrusion detection systems, these systems do not cover all segments of the air traffic control system. According to FAA officials, the agency has a risk-based plan to further deploy intrusion detection capabilities. One of the intrusion detection systems that we reviewed was configured in such a way that it was unable to detect potential intrusions. While FAA has made progress, it remains at risk of not being able to detect or respond quickly to security incidents. Continuity of Operations Continuity of operations controls, sometimes referred to as service continuity, should be designed to ensure that when unexpected events occur, key operations continue without interruption or are promptly resumed, and critical and sensitive data are protected.
These controls include environmental controls and procedures designed to protect information resources and minimize the risk of unplanned interruptions, along with a plan to recover critical operations should interruptions occur. If continuity of operations controls are inadequate, even a relatively minor interruption could result in significant adverse nationwide impact on air traffic. FAA requires that continuity of operations plans be included as part of its certification and authorization process. Although FAA has various initiatives under way to address continuity of operations, shortcomings exist. For the systems we reviewed, FAA identified five continuity of operations plans. One plan was incomplete and FAA included the need to complete this plan in its remediation report. While four plans were completed, one of these did not contain accurate information. It described an operating environment to be used as a contingency, yet this environment did not exist at the time of our review. Further, in April 2005, DOT’s IG testified that FAA had not made sufficient progress in developing continuity plans to enable it to restore air traffic control services in case of a prolonged service disruption at the en route centers. Until the agency completes actions to address these weaknesses, it is at risk of not being able to appropriately recover in a timely manner from certain service disruptions. Conclusions Although FAA has made progress in implementing information security by establishing an agencywide information security program and addressing many of its previously identified security weaknesses, significant information security weaknesses remain that could potentially lead to disruption in aviation operations. These include weaknesses in electronic access controls, for example, in managing networks, system patches, user accounts and passwords, and user rights and in logging and auditing security-relevant events. Weaknesses in physical security, background investigations, segregation of duties, and application change controls increase the level of risk. A key reason for FAA’s weaknesses in information system controls is that it has not yet fully implemented an information security program to ensure that effective controls are established and maintained. Effective implementation of such a program provides for periodically assessing risks, establishing appropriate policies and procedures, developing and implementing security plans, promoting security awareness training, testing and evaluating the effectiveness of controls, implementing corrective actions, responding to incidents, and ensuring continuity of operations. Although FAA has initiatives under way to address these areas, further efforts are needed to fully implement them. Recommendations for Executive Action To help establish effective information security over air traffic control systems, we recommend that the Secretary of Transportation direct the FAA Administrator to take the following 12 actions to fully implement an information security program: Ensure that risk assessments are completed. Develop and implement policies and procedures to address such issues as patch management and the reviewing and monitoring of physical access. Review system security plans to ensure that they contain the information required by OMB A-130 and are up to date. 
Enhance the security awareness training program to ensure that all employees and contractors receive information security awareness training, as well as system specific training, and that completion of the training is appropriately reported and tracked. Develop a process to ensure that sensitive information is not publicly available on the Internet. Conduct tests and evaluations of the effectiveness of controls on operational systems, and document results. Perform more frequent testing of system controls on critical systems to ensure that the controls are operating as intended. Review remedial action plans to ensure that they address all of the weaknesses that have been identified. Prioritize weaknesses in the remedial action plans and establish appropriate, timely milestone dates for completing the planned actions. Implement FAA’s plan to deploy intrusion detection capabilities for portions of the network infrastructure that are not currently covered. Correct configuration issues in current intrusion detection systems to ensure that they are working as intended. Review service continuity plans to ensure that they appropriately reflect the current operating environment. We are also making recommendations in a separate report with limited distribution. These recommendations consist of actions to be taken to correct the specific information security weaknesses we identified that are related to network management, patch management, password management, user privileges, auditing and logging, physical security, background investigations, segregation of duties, and application change controls. Agency Comments and Our Evaluation In providing oral comments on a draft of this report, the FAA’s CIO agreed to consider our recommendations and emphasized several points. He stated that the issues we identified in the three individual systems we examined are not necessarily indicative of the security posture of the air traffic control system as a whole. We acknowledge that we focused our examination on the technical controls of three critical systems. In addition, we reviewed management and operational controls at five sites and FAA headquarters and relied on the OIG’s prior work pertaining to air traffic control systems. We concluded that significant information security weaknesses remain that could potentially lead to a disruption in aviation operations. The CIO also indicated that the implications of the findings in this report should be tempered by the understanding that individual system vulnerabilities are further mitigated by system redundancies and separate access controls that are built into the overall air traffic control system architecture to provide additional protection that is not considered within the context of this review. He was concerned that our report does not always balance the identification of individual system issues with consideration of the relative risk that an issue may pose to the overall system and that the public may be prone to infer from the report that the security risks to the air traffic control system are higher than they may actually be. We acknowledge that FAA may have other protections built into the overall system architecture. However, as noted in this report, the complex air traffic control system relies on several interconnected systems. As a result, the weaknesses we identified may increase the risk to other systems. 
For example, FAA did not consistently configure network services and devices securely to prevent unauthorized access to and ensure the integrity of computer systems operating on its networks. In addition, the CIO indicated that all security findings for air traffic control systems, including those from our report, are evaluated and prioritized for action and that FAA has established a sound track record for moving quickly to address priority issues—as demonstrated by the extensive actions the agency has taken on issues identified in our previous reports and in DOT OIG reports. For example, according to the CIO, FAA established an extensive information security training program; deployed intrusion detection systems; and established the Computer Security Incident Response Center as a prevention, detection, and reporting capability on a 24x7x365 basis. Finally, he stated that as a result of FAA’s information security actions, it achieved 100 percent of the President’s Management Agenda goals for certification and authorization of its systems, completed certification and authorization for over 90 percent of its systems in fiscal year 2004, and completed 100 percent of its certifications and authorizations by June 30, 2005. We acknowledge in our report that FAA has made progress in implementing its information security program and has initiatives under way; however, we identified weaknesses in key areas cited by the CIO. For example, as noted in this report, although FAA conducted tests and evaluations as part of its certification and authorization process, some of these were outdated, and documentation and testing were not always adequate. The CIO also provided specific technical comments, which we have incorporated, as appropriate, in the report. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to congressional committees with jurisdiction over FAA and executive branch agencies’ information security programs, the Secretary of Transportation, the FAA Administrator, the DOT Inspector General, and other interested parties. We also will make copies available to others on request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact me at (202) 512-6244 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix I. GAO Contact and Staff Acknowledgments In addition to the person named above, Edward Alexander, Mark Canter, Nicole Carpenter, Jason Carroll, Lon Chin, William Cook, Kirk Daubenspeck, Neil Doherty, Patrick Dugan, Joanne Fiorino, Edward Glagola, Steve Gosewehr, Jeffrey Knott, Carol Langelier, Harold Lewis, Duc Ngo, Eugene Stevens, and Chris Warweg made key contributions to this report.
The Federal Aviation Administration (FAA) performs critical functions that contribute to ensuring safe, orderly, and efficient air travel in the national airspace system. To that end, it operates and relies extensively on an array of interconnected automated information systems and networks that comprise the nation's air traffic control systems. These systems provide information to air traffic controllers and aircraft flight crews to help ensure the safe and expeditious movement of aircraft. Interruptions of service by these systems could have a significant adverse impact on air traffic nationwide. Effective information security controls are essential for ensuring that the nation's air traffic control systems are adequately protected from inadvertent or deliberate misuse, disruption, or destruction. Accordingly, GAO was asked to evaluate the extent to which FAA has implemented information security controls for these systems. FAA has made progress in implementing information security for its air traffic control information systems; however, GAO identified significant security weaknesses that threaten the integrity, confidentiality, and availability of FAA's systems--including weaknesses in controls that are designed to prevent, limit, and detect access to these systems. The agency has not adequately managed its networks, software updates, user accounts and passwords, and user privileges, nor has it consistently logged security-relevant events. Other information security controls--including physical security, background investigations, segregation of duties, and system changes--also exhibited weaknesses, increasing the risk that unauthorized users could breach FAA's air traffic control systems, potentially disrupting aviation operations. While acknowledging these weaknesses, agency officials stated that the possibilities for unauthorized access were limited, given that the systems are in part custom built and that they run on older equipment that employs special-purpose operating systems, proprietary communication interfaces, and custom-built software. Nevertheless, the proprietary features of these systems cannot fully protect them from attacks by disgruntled current or former employees who are familiar with these features, nor will they keep out more sophisticated hackers. A key reason for the information security weaknesses that GAO identified in FAA's air traffic control systems is that the agency had not yet fully implemented its information security program to help ensure that effective controls were established and maintained. Although the agency has initiatives under way to improve its information security, further efforts are needed. Weaknesses that need to be addressed include outdated security plans, inadequate security awareness training, inadequate system testing and evaluation programs, limited security incident-detection capabilities, and shortcomings in providing service continuity for disruptions in operations. Until FAA has resolved these issues, the information security weaknesses that GAO has identified will likely persist.
Background Elementary and secondary education, the nation’s largest public enterprise, is conducted in over 80,000 schools in about 15,000 districts. America’s public schools serve over 42 million students. About 70 percent of schools serve 27 million elementary students; 24 percent serve 13.8 million secondary students; and 6 percent serve 1.2 million students in combined elementary and secondary and other schools. America’s traditional one-room school houses have been replaced by larger facilities that may have more than one building. Comprising classroom, administrative, and other areas like gymnasiums and auditoriums, a school may have an original building, any number of permanent additions to that building, and a variety of temporary buildings—each constructed at different times. Buildings that have been well maintained and renovated at periodic intervals have a useful life equivalent to a new building. A number of state courts as well as the Congress have recognized that a high-quality learning environment is essential to educating the nation’s children. Crucial to establishing that learning environment is that children attend school in decent facilities. “Decent facilities” was specifically defined by one court as those that are “...structurally safe, contain fire safety measures, sufficient exits, an adequate and safe water supply, an adequate sewage disposal system, sufficient and sanitary toilet facilities and plumbing fixtures, adequate storage, adequate light, be in good repair and attractively painted as well as contain acoustics for noise control....”More recently, the Congress passed the Education Infrastructure Act of 1994, in which it stated that “improving the quality of public elementary and secondary schools will help our Nation meet the National Education Goals.” Despite these efforts, studies and media reports on school facilities since 1965 indicate that many public elementary and secondary schools are in substandard condition and need major repairs due to leaking roofs, plumbing problems, inadequate heating systems, or other system failures. Although localities generally finance construction and repair, with states playing a variety of roles, federal programs have monies to help localities offset the impact of federal activities, such as Impact Aid, improving accessibility for the disabled, and managing hazardous materials. However, these programs do not totally offset all costs. For example, prior GAO work found that federal assistance provided for asbestos management under the Asbestos School Hazard Abatement Act of 1984 did not meet the needs of all affected schools. From 1988 through 1991, the Environmental Protection Agency (EPA) received 1,746 qualified applications totaling $599 million but only awarded $157 million to 586 school districts it considered to have the worst asbestos problems. EPA was aware of the shortfall in federal assistance but believed that state and local governments should bear these costs. Because of the perception that federal programs—as well as current state and local financing mechanisms—did not begin to address the serious facilities needs of many of America’s schools, the Congress passed the Education Infrastructure Act of 1994. The Congress then appropriated $100 million for grants to schools for repair, renovation, alteration, or construction. 
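The shortfall in federal asbestos assistance described above can be quantified directly from the figures cited. The short sketch below simply restates that arithmetic using the reported totals; it is an illustration of the gap, not new data.

```python
# Arithmetic restatement of the asbestos assistance shortfall cited above (1988-1991).
requested = 599_000_000   # total of qualified applications received by EPA
awarded = 157_000_000     # amount actually awarded to 586 school districts

shortfall = requested - awarded
share_funded = awarded / requested
print(f"Unfunded need: ${shortfall:,}")                 # $442,000,000
print(f"Share of requests funded: {share_funded:.0%}")  # about 26 percent
```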
Scope and Methodology To determine the amount of funding needed to improve inadequate facilities and the overall physical condition and prevalence of schools that need major repairs, we surveyed a national sample of schools and augmented the survey with visits to selected school districts. We consulted various experts on the design and analysis of this project. (See app. III for a list of advisers.) We sent the survey to a nationally representative stratified random sample of about 10,000 schools in over 5,000 school districts. The sample was designed for the Department of Education’s 1994 Schools and Staffing Survey (SASS), which is sponsored by the National Center for Educational Statistics. We asked about (1) the physical condition of buildings and major building features, such as roofs; framing, floors, and foundations; exterior walls and interior finishes; plumbing; heating, ventilation, and air conditioning (HVAC); and electric power; (2) the status of environmental conditions, such as lighting, heating, and ventilation; (3) the amount districts and schools had spent in the last 3 years or plan to spend in the next 3 years due to federal mandates that require managing or correcting hazardous materials problems and providing access to all programs for all students; and (4) an estimate of the total cost of needed repairs, renovations, and modernizations to put all buildings in good overall condition. (See app. IV for a copy of the questionnaire.) We directed the survey to those officials who are most knowledgeable about facilities—such as facilities directors and other central office administrators of the districts that housed our sampled school buildings. Our analyses are based on responses from 78 percent of the schools sampled. Analyses of nonrespondent characteristics showed that nonrespondents were similar to respondents. Findings from the survey have been statistically adjusted (weighted) to produce nationally representative estimates. All of the data are self-reported, and we did not independently verify their accuracy. (See the forthcoming report on location and demographic analyses of schools in need of major repair for a detailed description of our data collection methods and analysis techniques, confidence intervals, and the like.) In addition, we visited 41 schools in 10 selected school districts varying in location, size, and minority composition. During these visits, we observed facility conditions and interviewed district and local school officials to obtain information on facilities assessment, maintenance programs, resources, and barriers encountered in reaching facility goals. (See app. I for profiles on the districts visited.) We conducted this study from April 1994 to December 1994 in accordance with generally accepted government auditing standards. Principal Findings Schools Report Needing Billions to Improve Facilities On the basis of our survey results, we estimate that the nation’s schools need $112 billion to complete all repairs, renovations, and modernizations required to restore facilities to good overall condition and to comply with federal mandates. (See fig. 1.) This amount includes $65 billion—about $2.8 million per school—needed by one-third of schools in which one or more entire buildings need major repairs or replacement.
Another 40 percent of schools (those in adequate or better condition) reported needing $36 billion—about $1.2 million per school—to repair or replace one or more building features, such as the plumbing or roof, or to make other corrective repairs. Almost two-thirds of the schools reported needing $11 billion—an average of $0.2 million per school—to comply with federal mandates over the next 3 years. Of this amount, about $5 billion (45 percent) is needed to correct or remove hazardous substances—such as asbestos, lead in water or paint, materials contained in underground storage tanks (UST), and radon—or to meet other requirements, while about $6 billion (55 percent) is needed by schools to make programs accessible to all students. (See fig. 1.) This $11 billion is in addition to the $3.8 billion reported spent by three-quarters of all schools in the last 3 years to comply with federal mandates. (See fig. 2.) Of the money schools reported that they spent to comply with federal mandates, $2.3 billion (60 percent) went to correct or remove hazardous substances—primarily asbestos—while $1.5 billion (40 percent) went to make all programs accessible to all students. District officials we spoke with reported that they must also comply with many state and local mandates. For example, one urban district reported how federal, state, and local regulations govern many of the same areas, such as hazardous materials management and some aspects of indoor air quality. In addition, officials cited numerous state health and sanitation codes and state safety inspections for building features, as well as city zoning ordinances, local building codes, and historic preservation regulations. By 1992, the sheer volume of requirements, combined with decades of underfunding of capital needs, had resulted in only the 2 newest of the district’s 123 schools complying with all current codes. The district further described how these regulations and the accompanying cost could apply to the installation of air conditioning. For example, air conditioning could be installed in a building for $500,000. However, this may also require an additional $100,000 in fire alarm/smoke detection and emergency lighting systems as well as $250,000 in architectural modifications for code compliance. Additionally, the location of outside chillers may be regulated by zoning and historic preservation ordinances. In our visits to selected districts, officials from major urban areas reported needing billions to put their schools into good overall condition. (See table 1.) Two-Thirds of Schools Adequate but Millions of Students Must Attend Other One-Third School officials reported that two-thirds of the nation’s schools are in adequate (or better) condition, at most needing only some preventive maintenance or corrective repair. However, about 14 million students must attend the remaining one-third (25,000 schools), in which at least one building is in need of extensive repair or replacement. Even more students, 28 million, attend schools nationwide that need one or more building features extensively repaired, overhauled, or replaced or that contain an environmentally unsatisfactory condition, such as poor ventilation. (See tables 2 and 3.) These schools are distributed nationwide. Physical Condition Specifically, about one-third of both elementary and secondary schools reported at least one entire building—original, addition, or temporary—in need of extensive repairs or replacement. (See fig. 3 and pictures in app. II.)
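The survey cost estimates reported above can be cross-checked arithmetically. The sketch below simply sums the rounded figures cited in this report ($65 billion, $36 billion, and $11 billion) and computes the hazardous-substance and accessibility shares of the mandate-related estimate; it uses only the numbers already reported and introduces no new data.

```python
# Arithmetic cross-check of the rounded survey estimates cited above (in billions).
major_repairs = 65      # schools with at least one building needing major repair or replacement
feature_repairs = 36    # schools needing one or more building features repaired or replaced
federal_mandates = 11   # compliance with federal mandates over the next 3 years

total = major_repairs + feature_repairs + federal_mandates
print(f"Total estimated need: ${total} billion")        # $112 billion

hazardous = 5           # correcting or removing hazardous substances
accessibility = 6       # making all programs accessible to all students
print(f"Hazardous-substance share: {hazardous / federal_mandates:.0%}")  # about 45 percent
print(f"Accessibility share: {accessibility / federal_mandates:.0%}")    # about 55 percent
```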
About 60 percent (including some schools in adequate condition) reported that at least one building feature needed extensive repair, overhauling, or replacement; and three-quarters of those schools needed multiple features repaired. Features most frequently reported in need of such repairs were HVAC; plumbing; roofs; exterior walls, finishes, windows, and doors; electrical power; electrical lighting; and interior finishes and trims. (See fig. 4 and pictures in app. II.) Further, while 41 percent of all schools reported unsatisfactory energy efficiency, 73 percent of schools whose exterior walls, windows, and doors needed major repair—and 64 percent of those whose roofs did—reported unsatisfactory energy efficiency. These unrepaired features not only reduce energy efficiency but may also have an adverse environmental effect on students. As one Chicago elementary school principal told us, “Heat escapes through holes in the roof; the windows leak (the ones that are not boarded up) and let in cold air in the winter so that children must wear coats to class.” Following are some other examples: In New Orleans, damage from Formosan termites has weakened the structure of many schools. In one elementary school, the termites even ate the books on the library shelves as well as the shelves themselves. (See app. II.) This, in combination with a leaking roof and rusted window wall, caused so much damage that a large portion of the 30-year-old school has been condemned. The whole school is projected to be closed in 1 year. At a Montgomery County, Alabama, elementary school, a ceiling weakened by leaking water collapsed 40 minutes after the children left for the day. Water damage from an old (original) boiler steam heating system at a 60-year-old junior high school in Washington, D.C., has caused such wall deterioration that an entire wing has been condemned and locked off from use. Steam damage is also causing lead-based wall paint to peel. Raw sewage backs up on the front lawn of a Montgomery County, Alabama, junior high school due to defective plumbing. A New York City high school built around the turn of the century has served as a stable, fire house, factory, and office building. The school is overcrowded with 580 students, far exceeding the building’s 400-student capacity. The building has little ventilation (no vents or blowers), despite many inside classrooms, and the windows cannot be opened, which makes the school unbearably hot in the summer. In the winter, heating depends on a fireman’s stoking the coal furnace by hand. In Ramona, California, where overcrowding is considered a problem, one elementary school is composed entirely of portable buildings. It has neither a cafeteria nor an auditorium and uses a single relocatable room as a library, computer lab, music room, and art room. Last year, during a windstorm in Raymond, Washington, the original windows of an elementary school built in 1925 were blown out, leaving shards of glass stuck in the floor. The children happened to be at the other end of the room. This wooden school is considered a fire hazard, and although hallways and staircases can act as chimneys for smoke and fire, the second floor has only one external exit. In rural Grandview, Washington, overcrowded facilities are a problem. At one middle school, the original building was meant to house 450 students. Two additions and three portables have been added to accommodate 700 students. The school has seven staggered lunch periods.
The portables have no lockers or bathrooms and are cold in the winter and hot in the spring and summer. In a high school in Chicago, the classroom floors are in terrible condition. Not only are the floors buckling, but so much tile is loose that students cannot walk in all parts of the school. The stairs are in poor condition and have been cited for safety violations. An outside door has been chained for 3 years to prevent students from falling on broken outside steps. Peeling paint has been cited as a fire hazard. Heating problems result in some rooms having no heat while other rooms are too warm. Leaks in the science lab caused by plumbing problems prevent the classes from doing experiments. Guards patrol the outside doors, and all students and visitors must walk through metal detectors before entering the school. (See app. II and fig. 6.) During our school visits, we found wide disparities between schools in the best or even average condition and schools in the worst condition, and these schools were sometimes in the same district. Environmental Conditions About 50 percent of the schools reported at least one unsatisfactory environmental condition, while 33 percent reported multiple unsatisfactory conditions. Of those, half reported four to six unsatisfactory conditions. The conditions most frequently reported to be unsatisfactory were acoustics for noise control, ventilation, and physical security. (See fig. 5.) Additionally, three-quarters of schools responding had already spent funds during the last 3 years on requirements to remove or correct hazardous substances such as asbestos (57 percent), lead in water or paint (25 percent), materials in USTs such as fuel oil (17 percent), radon (18 percent), or other requirements (9 percent). Still, two-thirds must spend funds in the next 3 years to comply with these same requirements—asbestos (45 percent), lead (18 percent), UST (12 percent), radon (12 percent), or other requirements (8 percent). We saw numerous examples of unsatisfactory environmental conditions during our school visits: In the Pomona, California, school district, the student body has increased 37 percent over the last 10 years. Some schools must have five staggered lunch periods to accommodate all students. As a result of overcrowding, in one elementary school, students are housed in temporary buildings installed in 1948 that are unattractive, termite-ridden, dark, and underequipped with electrical outlets. The temporary buildings get very hot as well as very cold at times because of poor insulation. A Raymond, Washington, high school—a three-story structure of unreinforced concrete whose roof and floors are not adequately secured to the walls and that may not withstand an earthquake—contains steam pipes that are not only extremely noisy but also provide too little or too much heat from room to room. In Richmond, Virginia, schools in the district close early in September and May because the heat, combined with poor ventilation and no air conditioning, creates health problems for students and teachers, especially those with asthma. A Chicago elementary school, built in 1893 and not painted for many years, has walls and ceilings with chipping and peeling lead-based paint and asbestos, as well as several boarded-up windows. Some rooms have inadequate lighting due to antiquated lighting fixtures that are no longer manufactured, so bulbs cannot be replaced when they burn out. One section of the school has been condemned due to structural problems.
However, the auditorium and gym in this area are still used. The school was scheduled for closure in 1972 but remained open because of community opposition to the closure and the district’s promises of renovation. (See app. II.) Insufficient Funds Contribute to Declining Physical Conditions District officials we spoke to attributed the declining physical condition of America’s schools primarily to insufficient funds, resulting in decisions to defer maintenance and repair expenditures from year to year. This has a domino effect. Deferred maintenance speeds up the deterioration of buildings, and costs escalate accordingly, further eroding the nation’s multibillion-dollar investment in school facilities. For example, in many schools we visited, unrepaired leaking roofs caused wall and floor damage that now must also be repaired. New York school officials told us that, while a typical roof repair is $600, a full roof replacement costs $300,000, and painting and plastering 10 rooms on a top floor that has been damaged by water infiltration costs $67,500 plus $4,500 to replace damaged floor tiles. In other words, for every $1 not invested, the system falls another $620 behind. In addition, unrepaired roofs cause energy costs to increase as heat escapes through holes, further depleting already limited funds. Further, due to lack of routine maintenance in the Chicago district, many schools have not been painted since lead-based paint was applied 20 years ago. In an elementary school in New York City, repair problems had not been addressed since the school was built 20 years ago. Problems that could have been fixed relatively inexpensively years ago have now led to major damage, such as sewage leaking into the first-grade classrooms, a leaking roof that is structurally unsound, and crumbling walls. Similarly, in Chicago, we visited an elementary school whose roof, the principal told us, had needed replacement for 20 years. Because it had only been superficially patched, rather than replaced, the persistent water damage had caused floors to buckle and plaster on the walls and ceilings to crumble. It had also flooded parts of the electrical wiring system. One teacher in this school would not turn on her lights during rainstorms for fear of electrical shock; in another classroom the public address system had been rendered unusable. Buckets had to be placed on the top floor of the school to catch the rain. Some district officials we spoke with reported that they had difficulty raising money for needed repairs and renovation because of anti-tax sentiment among voters, which has resulted in the failure of bond issues as well as the passage of property tax limitations. About one in three districts reported that they have had an average of two bond issues fail in the past 10 years. Further, school officials told us that often bond proceeds are far less than needed for repairs. For example, in Pomona, California, a $62.5 million bond issue was submitted to the voters after a survey indicated that the $200 million needed for repairs would be rejected. At the time of our survey, 6 percent of districts had a bond issue before the electorate. However, as one survey respondent commented, “the current public attitudes about the economy and education are generally so negative that passing a bond referendum is a fantasy.” In other states, the passage of property tax limitations has reduced school funding.
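The New York figures cited above imply the roughly 620-to-1 ratio the officials described. The sketch below simply reproduces that arithmetic from the numbers in this report; it is an illustration of the calculation, not an independent estimate.

```python
# Reproducing the deferred-maintenance arithmetic reported by New York officials.
timely_roof_repair = 600            # typical cost of repairing a roof promptly
roof_replacement = 300_000          # full roof replacement after deferral
paint_and_plaster = 67_500          # repainting/replastering 10 water-damaged rooms
floor_tiles = 4_500                 # replacing damaged floor tiles

deferred_cost = roof_replacement + paint_and_plaster + floor_tiles
print(f"Deferred cost: ${deferred_cost:,}")                          # $372,000
print(f"Cost ratio: {deferred_cost / timely_roof_repair:.0f} to 1")  # about 620 to 1
```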
One survey respondent reported, “The state’s contribution to local schools has dropped by 40 percent over the last few years...” According to another survey respondent, “This is a 1913 building which many of the taxpaying citizens feel was good enough for them...it is looked at as a monument in the community. Unless some form of outside funding is arranged, the citizens may never volunteer to replace this building since it will require raising their taxes.” Further, districts reported a lack of control over some spending priorities because they must fund a large portion of the costs of federal mandates for managing or correcting hazardous materials and for making all programs accessible to all students. A recurring theme in comments from survey respondents was, “Unfunded federal and state mandates are one of the prime causes of lack of funds for replacing worn-out heating and cooling equipment, roofs, etc....” Another survey respondent stated, “The ADA requirements were a major reason we had to replace two older schools. These costs, when added to other costs for renovations and modifications, resulted in overall costs for repairs which exceeded the costs for new facilities.” On the other hand, Chicago school officials told us that because funds are limited and a single elevator costs $150,000 to install, very few schools are able to provide program access to all students. Among the districts, the average amount of the most recently passed bond issue was $7 million. About 3 percent of the proceeds went to federal mandates, 54 percent to school construction, and 38 percent to repairing, renovating, and modernizing schools; the remaining 5 percent was spent on computers and telecommunications equipment. Districts also said that they must sometimes divert funds initially planned for facilities maintenance and repair to purchase additional facilities due to overcrowding. This has resulted from both demographic and mandated changes. For example, additional funds were required to construct or purchase portables because of large immigrant influxes, population shifts within districts, or climbing enrollment from overall population increases. Further, some mandated school programs, such as special education, require additional space for low pupil-teacher ratios. One survey respondent described the competing demands on limited funds as follows: “Our school facilities are not energy efficient or wired for modern technology. Our floor tile is worn out and the furniture is in poor shape. Our taxpayers don’t want to put any more in schools. Our teachers want better pay. Our students and parents want more programs and technology. HELP!!!” Building Age—by Itself—Is Not Significant While some studies cite building age as a major factor contributing to deteriorating conditions, older buildings often have a more sound infrastructure than newer buildings. Buildings built in the early years of this century—or before—frequently were built for a life span of 50 to 100 years, while more modern buildings, particularly those built after 1970, were designed to have a life span of only 20 to 30 years. A study of English school facilities found that the schools built during the 1960s and 1970s were built quickly and cheaply and have caused continuing maintenance problems.
As one survey respondent commented, “the buildings in this district are approximately 20 years old, but the exterior siding was inferior from the beginning...it has deteriorated and ruptured extensively....” A principal in Chicago stated about her 1970s building, “Our most pressing problem is that the school is crumbling down around us.... From the beginning, this building has had serious roof problems. Water leaks throughout the building from the roof and from the walls. Pools of water collect in the floors of the classrooms. One wall has buckled and is held in place with a steel stake. The windows leak and let cold air in....” According to some school officials, the misperception about the age factor has been reinforced because older buildings are sometimes not maintained but allowed to deteriorate until replaced. Three schools we visited in Chicago presented a good example of the difficulty of using age to define condition. All three were built between 1926 and 1930 and had the same design and basic structure. Today, their condition could not be more different. One school had been allowed to deteriorate (had received no renovation since the 1970s) until it reached a point where local school officials classified it as among those schools in the worst physical condition. The second school had received some recent renovation because of community complaints about its condition and was classified as a typical school for the school district. The third school had been well maintained throughout the years, and now school officials classified it as a school in the best physical condition. (See pictures contrasting the three schools in fig. 6.) Conclusions Two-thirds of America’s schools report that they are in adequate (or better) overall condition. Still, many of these schools need to repair or replace one or more building feature, manage or correct hazardous materials, or make all programs accessible to all students. Other schools have more serious problems. About 14 million students are required to attend the remaining one-third of schools that have one or more entire buildings in less-than-adequate condition, needing extensive repair or replacement. These schools are distributed nationwide. Our survey results indicate that to complete all repairs, renovations, or modernizations needed to put school buildings into good overall condition and comply with federal mandates would require a projected investment of $112 billion. Continuing to delay maintenance and repairs will defer some of these costs but will also lead to the need for greater expenditures as conditions deteriorate, further eroding the nation’s multibillion dollar investment in school infrastructure. In addition, if maintenance continues to be deferred, a large proportion of schools that are in only adequate condition and need preventive maintenance or corrective repair will soon deteriorate to less-than-adequate condition. As one survey respondent observed, “It is very difficult to get local communities to accept this burden (facilities construction/renovation). Our district, one of the wealthiest in the state, barely passed a bare bones budget to renovate. It must be a national crisis.” Agency Comments We spoke with Department of Education officials at the National Center for Educational Statistics who reviewed a draft of this report and found the report well done and generally approved of the approach. 
In addition, staff from the Office of the Undersecretary provided us with technical comments that we incorporated into our report. They did not comment, however, on our methodology, reserving judgment for the detailed technical appendix in our forthcoming report. Copies of this report are also being sent to appropriate House and Senate committees and all members, the Secretary of Education, and other interested parties. If you have any questions about this report, please contact Eleanor L. Johnson, Assistant Director, who may be reached at (202) 512-7209. A list of major contributors to this report can be found in appendix VII. District Profiles We visited 41 schools in 10 selected school districts that varied by location, size, and ethnic composition. During these visits, we observed facility conditions and interviewed district and local school officials to obtain information on facilities assessment, maintenance programs, resources, and barriers encountered in reaching facilities goals. We asked officials to show us examples of “best,” “typical,” and “worst” schools and verified the reliability of these designations with others. In some small districts, we visited all schools. Chicago, Illinois Overview Chicago is a large urban district whose school officials rated their school facilities, overall, as in fair to poor condition. Widespread disparities exist, however, between schools in the best and worst condition. About 15 percent of the schools were built before 1900, and over half are more than 50 years old. Slightly more than 25 percent were built during the fifties and sixties to handle the baby boom, and 20 percent were built during the last 25 years. However, a number of the newer structures are temporary buildings or “demountables” (large sections of prefabricated frames put together on a cement slab). These buildings now show major structural damage, and the seams of the buildings are splitting apart. Permanent buildings also have structural damage. For example, we visited two schools that had chained exit doors to prevent students from either being hit by debris from a cracking exterior brick wall—in a “typical” Chicago school—or falling on collapsing front steps—in a “worst” school. Schools in the worst condition need new exterior building envelopes (roofs, tuckpointing, windows, and doors), have asbestos or lead-based paint, suffer ceiling and floor problems from leaky roofs, and need to replace outdated electrical and plumbing systems. Schools in the best condition tend to be newer, need few or no repairs, have a more flexible space design, contain electrical systems capable of supporting new technology, have air conditioning, and offer brightly colored walls and low ceilings. However, condition does not depend on age alone; three schools we visited typifying best, worst, and typical were all over 60 years old. Officials report that their biggest facility issues are deferred maintenance and overcrowding. They say that a shortage of funds, caused by a lack of taxpayer support, hinders the district from either upgrading or maintaining its facilities. About 30 to 40 percent of needed repairs have been deferred from year to year for decades, with priority given to repairs that ensure student safety. Additionally, some federal mandates—particularly lead and asbestos removal and abatement programs—have required major expenditures, since most schools built between 1920 and 1979 contain asbestos and all schools were painted with lead-based paint before 1980.
Overcrowding began in the seventies with a great increase in the Hispanic population. However, in some instances, individual schools may be overcrowded, while neighboring schools remain underenrolled. One official told us that this is due in part to the problems caused by gang “turf” and the threat of extreme violence or even death to individuals who wander into “enemy” territory. School officials are reluctant to reassign students if the receiving schools are in territory controlled by a different gang than that of the overcrowded school the children presently attend. Facilities Financing Officials estimate that they need $2.9 billion to put schools in good overall condition. While the primary source of school funding is local property taxes, smaller amounts of state and federal funds are also used. Although the 1994 school facilities budget is $270 million (10 percent of the total education budget), only about $50 million is used for maintenance and repair. To obtain funds for building and renovating, the district relies on bonds, we were told, as politicians hesitate to ask anti-tax voters for even a minimal increase in taxes. Grandview, Washington Overview This small agricultural town in rural Washington has five schools. While the high school, built in 1978, is in excellent condition, the other four schools, built between 1936 and 1957, need to be totally renovated or replaced over the next 10 to 20 years. In addition, a student population increasing annually at about 4 percent since 1986 has resulted in overcrowding. Although Grandview’s middle school was built to house 475 students, current enrollment stands at about 700. One elementary school designed for 375 students now has 464. Another crowded elementary school converted the gymnasium into two classrooms. The district currently has 14 portable classrooms in use and anticipates needing 4 more in the next 3 years. Facilities Financing Grandview schools have an annual budget of $13.5 million, about 2 percent of which goes for maintenance. They receive funding from local tax levies and from the state and general apportionment of about $4,000 per student. They are also eligible for state equalization funding contingent on passing their levy. New construction and renovation are funded by bond issues and state funding assistance contingent on passing the bond issue. An $11 million bond issue to build a new middle school to alleviate crowding failed in February 1994 and again in the fall of 1994. Funding problems include public resistance to raising taxes and decreased state assistance due to a reduction in the timber sales on the public lands that support school construction funding. Montgomery County, Alabama Overview Many of Montgomery County school facilities are old but are generally in fair condition. However, approximately 10 percent of the schools need to be replaced. In the last 20 years, about 8 schools were built. The oldest building is a portion of an elementary school built in 1904. Schools built during the early 1900s are not air conditioned and need new roofs. At one elementary school we visited, a ceiling recently collapsed just 40 minutes after the children left for the day. Some schools have had students in “temporary” buildings for years. In addition, many repairs and renovations are needed to maintain schools, accommodate overcrowding and comply with federal mandates. Overcrowding problems have resulted in the use of 284 portable buildings to house students. 
In the 1980s, Montgomery County’s student population increased, creating the need for new elementary schools. Court-ordered desegregation also increased student populations at some schools through voluntary student movement, through a minority to majority transfer process. This process allowed minority students to attend any school in the county with a more than 50-percent majority of white students. Primarily, we were told, minority students chose to attend schools on the east side of town because the school facilities were better equipped and nicer. To provide adequate instructional space for the influx of children at the east side schools, portable rooms were added. Facilities Financing Lack of money prohibits the district from making needed facilities repairs. The operations and maintenance budget has dropped 10 percent in the past 3 to 4 years. The current facilities budget is $1 million of a $6 million total education budget. The district has no capital improvement budget. On June 28, 1994, voters defeated a local tax referendum for bond money the county had planned to use to remove all portable buildings, make all needed repairs and renovations, and build new schools located so that children from the west side of town would not have to travel so far for better school accommodations. New Orleans, Louisiana Overview New Orleans’ public schools are rotting away. Suffering from years of neglect due to lack of funds for repair and maintenance, New Orleans students attend schools suffering from hundreds of millions of dollars’ worth of uncorrected water and termite damage. Fire code violations are so numerous that school officials told us, “We don’t count them—we weigh them.” Most of the buildings have no air conditioning, though the average morning relative humidity in New Orleans is 87 percent. One high school recently had an electrical fire that started in the 80-year-old timbers in the roof. No one was hurt but the students were sent to other buildings for the rest of the year. An elementary school, built in 1964, was condemned and closed in 1994 due to water and termite damage. Facilities Financing New Orleans uses local property taxes and federal asbestos loans to upgrade its buildings. The district has submitted five bond issues to the voters in the last 20 years, for a total of $175 million, but only two of the bond issues have passed. The school facilities annual budget in 1994 is $6 million or 2 percent of the total education budget. This has decreased in the past 10 years from $9 million (4 percent of the education budget). New York, New York Overview New York has extremely diverse school facilities—while conditions are generally bad, some schools are models for 21st century learning. The “best” school we saw—a $151 million state-of-the-art science high school—was only blocks away from an example of the “worst”—another high school in a 100-year-old building that had served as a stable, fire house, factory, and office building. This high school’s elevators do not work, its interior classrooms have no windows, it has little ventilation and no air conditioning, and its heating depends on a fireman’s stoking the coal furnace by hand. Overcrowding and generally poor condition of the school buildings—many over 100 years old and in need of major renovation and repair—are New York’s main facilities problems. Since the fiscal crisis in the 1970s, maintenance and repair of the city’s school buildings have been largely neglected. 
Twenty years of neglect have compounded problems that could have been corrected much more cheaply had they been addressed earlier. As the city seeks the funds for repairing leaking roofs, plumbing problems that cause sewage to seep into elementary school classrooms, and ceilings that have caved in, its school enrollment is dramatically increasing. After the city lost more than 10 percent of its population in the 1960s, a vast migration of non-English-speaking residents in the last 3 years has resulted in overcrowding in 50 percent of New York’s schools. One school is operating at over 250 percent of capacity. Because classrooms are unavailable while under repair, in some cases improvements are postponed. Facilities Financing The New York City schools’ maintenance, repair, and capital improvement budget is approved annually by the city council. While the state provides some loan forgiveness, the city is largely responsible for all of the costs. Each school is allocated a maintenance and repair budget based solely on square footage. As a result, schools—even new schools—frequently cannot repair problems as they arise, which often leads to costly repairs in the future. In 1988, the estimated cost of upgrading, modernizing, and expanding the school system by the year 2000 was over $17 billion. The total capital backlog at that time was over $5 billion. The capital plan for fiscal year 1990 through fiscal year 1994 was funded at $4.3 billion—barely 20 percent of the amount requested. Pomona, California Overview Although district officials generally describe their school facilities overall as “adequate to fair,” some individual schools are excellent while others have severe problems. The oldest school was built in 1932. The worst schools were built in the mid-1950s to early 1960s and face many repair problems—poor plumbing, ventilation, and lighting; leaking roofs; and crumbling walls. In contrast, one new school that opened last fall is state of the art. Only three schools have been built in the last 20 years. Like many school districts in California, Pomona faces overcrowding as its biggest facilities issue. Because the student body has increased 37 percent in the last 10 years, the district relies on what school officials call “God-awful” portables—bungalows that are ugly, not air conditioned, termite-ridden, and dark and that have too few electrical outlets. The portables generally provide sufficient classroom space but leave schools suffering from a severe lack of common-use areas and space for student movement. For example, some schools have to schedule five lunch periods to handle overcrowded campuses. Facilities Financing In 1991, the district passed a $62.5 million bond measure—significantly short of the $200 million it says it needs to put its schools in good overall condition. Officials attribute their facilities’ financial problems to state cutbacks; the passage of Proposition 13 in 1978, which greatly reduced local tax revenues; and unfunded federal mandates that drain the district’s budget. As a result, the district must function without enough facilities staff and continue to defer maintenance and repair while using temporary “band-aid” measures. However, the passage of Pomona’s 1991 bond measure and two 1992 state bond measures increased the district’s capital improvement budget to $14 million, or about 16 percent of the district’s $85 million education budget. Pomona’s maintenance and repair budget is usually about 2 percent of the education budget.
Ramona, California Overview Ramona is a small but growing rural community in central San Diego County. Four of its nine schools are more than 25 years old; its oldest was built over 50 years ago. Although Ramona’s oldest schools tend to be well constructed, they suffer from seriously deteriorating wiring and plumbing and inadequate or nonexistent heating, ventilation, air conditioning, and communications systems. The school district also suffers from the lack of an adequate, stable funding source that would allow it to modernize and expand its facilities. Consequently, most of Ramona’s schools are underbuilt and must rely on portables to relieve overcrowding. One elementary school we visited consisted only of portables, with no cafeteria or auditorium. One portable served as a library, computer lab, music room, and art room. In contrast, two new schools built in the last 5 years are bright, have flexible space, and are wired for the latest technology. The portables are difficult to maintain, and repair costs are higher in the long run than if real additions had been built in the first place. The most common repair needs in Ramona’s schools are roofs, signal systems (alarms, bells, and intercoms), and paving. Facilities Financing Officials attribute the district’s facilities funding problems to the community’s inability to pass a bond issue—two attempts in the past 8 years have failed—the small rural district’s competitive disadvantage in applying for state funds, and the state’s emphasis on building new schools rather than retrofitting. The district’s facilities budget varies each year but comprises (1) a new building program that uses matching state funds, (2) a routine maintenance budget that is about 2 percent of the district’s $30 million education budget ($600,000), and (3) a deferred maintenance budget that is 0.5 percent of the education budget ($150,000) and is supposed to be matched by the state but rarely is matched in full. Raymond, Washington Overview Raymond is a western Washington town that has not recovered from the timber industry downturn of the early 1980s. The town and student populations have declined, and the demographics have changed dramatically. All three Raymond schools are old, and two may be unsafe. The high school was built in 1925. It is a three-story structure of unreinforced concrete that may not safely withstand the possible earthquakes in the area. In addition, the building’s systems are old and inadequate. Steam pipes are noisy and provide too little or too much heat from room to room. One 1924 elementary school is built of wood—a potential fire hazard—and will be closed in 2 years. A third school was built during the 1950s and will receive a major remodeling and new addition next year. Facilities Financing Raymond recently passed its first bond issue since the 1950s to fund the remodeling of and addition to an elementary school. A bond issue proposed in 1990 to build a new facility for grades kindergarten to 12 failed. The public does not want to spend money on school maintenance and construction, and the tax base is too low to raise adequate funding. According to the school superintendent, the Columbia Tower (a Seattle skyscraper) has a higher assessed value than the entire district of Raymond. The district’s budget is $4 million, which is made up of local levies and state funding. Over the next 2 years, district officials will ask for a levy increase of $75,000, specifically for needed repairs.
Richmond, Virginia Overview Renovation presents the biggest facility issue for the Richmond schools. Their 58 buildings are visually appealing yet old-fashioned compared with 21st century learning standards. Many, if not most, of the district’s renovation needs are due to the buildings’ age: The average building was built around the time of World War II. Ninety percent of the buildings lack central air conditioning; many schools close early in September and May/June because the heat and poor ventilation create breathing problems for the children. In the past 20 years, 20 schools have been closed; only 2 new schools have opened. Facilities Financing Richmond is a poor city: the average family income is $17,700. The facilities director says he usually asks for $18 million but gets only $3 million and about 3 percent of the education budget for maintenance. He says city planners and voters view the buildings as architectural landmarks and think of them in terms of 1950s standards of learning. Also, the money he would have used for renovations has been spent on meeting “federal codes.” The district has tried twice to get the state to match funds for deferred maintenance but was rejected each time. New construction gets funded through bond issues. Washington, D.C. Overview Washington has school facilities with a combined capacity of 110,000 students, many of which are old and underused. Only 22 of 164 schools—mainly elementary—have been built in the last 20 years. According to the district’s facilities manager, the average age of Washington’s schools is 50 years. While structurally sound, these older buildings house old—sometimes original—systems, such as the heating and air conditioning or electrical systems, which have major repair problems. Washington schools have many urgent repair needs, according to the district facilities manager. Old boiler systems have steam leakages causing such infrastructure erosion that whole school wings have been condemned and cordoned off; leaky roofs are causing ceilings to crumble on teachers’ and students’ desks; fire doors are warped and stick. In addition, the district was under court order to fix the most serious of an estimated $90 million worth of fire code violations by the start of the 1994-95 school year. These violations included locked or blocked exit doors, defective or missing fire doors, broken alarms, malfunctioning boilers, and unsafe electrical systems. Some of the schools also lack air conditioning and are so poorly insulated that children must wear coats to keep warm in winter weather. Facilities Financing From the school district’s total operating and capital budget of about $557 million in fiscal year 1994, about $100 million (18 percent) was allocated to school maintenance and capital improvement. Of this, approximately $25 million (including salaries) goes to the district’s facilities office, with the balance given directly to the schools for their on-site maintenance and operations. The building maintenance budget has declined from about 18 percent to 14 percent of the total school budget in the past 10 years. Funds for school maintenance and repair and capital improvements come from the District of Columbia’s general budget, over which the Congress has authority. Until 1985, the District’s capital improvement program was financed only through money borrowed from the U.S. Treasury. After 1985, the District was given authority to sell general obligation bonds in the capital markets.
From 1985 through 1994, the schools received $314 million to finance capital improvements: $232 million through general obligation bond issuances, $59 million borrowed from the U.S. Treasury, and $23 million from District tax revenue. The Worst of the Worst Project Advisers The following individuals advised this report either by (a) serving on our expert panel on January 31, 1994; (b) helping with the development of our questionnaire; or (c) reviewing a draft report.
Allen C. Abend (a, b, c), Chief, School Facilities Branch, Maryland State Department of Education
Phillip T. Chen, Construction Technician, Division of Construction, Department of Facilities Management, Board of Education of Montgomery County (Maryland)
Greg Coleman (a, b), Capital Asset Management Administrator, Office of Infrastructure Support Services, U.S. Department of Energy
Laurel Cornish, Director of Facilities, U.S. Department of Education, Impact Aid School Facilities Branch
(Mr.) Vivian A. D’Souza, Acting Director, Division of Maintenance, Department of Facilities Management, Board of Education of Montgomery County (Maryland)
Kenneth J. Ducote (b, c), Director, Department of Facility Planning, New Orleans Public Schools
Robert Feild, Director, Committee on Architecture for Education, American Institute of Architects
William Fowler (a, b, c), Education Statistician, U.S. Department of Education, National Center for Education Statistics
Lawrence Friedman (b, c), Associate Director, Regional Policy Information Center, North Central Regional Educational Laboratory
Thomas E. Glass, Professor, Department of Leadership and Educational Policy Studies, Northern Illinois University
Terence C. Golden, Chairman, Bailey Realty
Thomas Grooms, Program Manager, Federal Design Office, National Endowment for the Arts
Shirley J. Hansen, President, Hansen Associates
Alton C. Halavin, Assistant Superintendent for Facilities Services, Fairfax County Public Schools, Fairfax County, Virginia
Bruce Hunter, Executive Director, American Association of School Administrators
Eddie L. King, Auditor, Inspector General, Department of Education
Andrew Lemer, President, Matrix Group, Inc.
William H. McAfee III, Facilities Manager, Division of Facilities Management, District of Columbia Public Schools
Roger Scott (b, c), Program Director, Southwest Regional Laboratory
Richard L. Siegel, (Former) Director of Facilities Services, Smithsonian Institution
Lisa J. Walker, Executive Director, Education Writers Association
Tony J. Wall (b, c), Executive Director/CEO, The Council of Educational Facilities Planners International
William M. Wilder, Director, Department of Facilities Management, Board of Education of Montgomery County (Maryland)
GAO Questionnaire for Local Education Agencies The U.S. General Accounting Office (GAO) has been asked by the United States Congress to obtain information about school facilities, such as physical condition and capacity. While several limited studies have been done recently, no comprehensive national study of school facilities has been done in 30 years. The Congress needs this information to shape the details of federal policy, such as funding for the School Infrastructure Act of 1994. All responses are confidential. We will report your data only in statistical summaries so that individuals cannot be identified. This questionnaire should be answered by district-level personnel who are very familiar with the school facilities in this district. You may wish to consult with other district-level personnel or with school-level personnel, such as principals, in answering some questions.
We are conducting this study with only a sample of randomly selected schools, so the data on your school(s) is very important because it represents many other schools. Please respond even if the schools selected are new. If you have questions about the survey, please call Ms. Ella Cleveland at (202) 512-7066 or Ms. Edna Saltzman at (313) 256-8109. Mail your completed questionnaire in the enclosed envelope within 2 weeks to: Ms. Ella Cleveland, U.S. General Accounting Office, NGB, Suite 650, 441 G St., NW, Washington, DC 20548. Thank you for your cooperation in this very important effort. Linda G. Morra, Director, Education and Employment
INSTRUCTIONS FOR COMPLETING THIS QUESTIONNAIRE
1. Sometimes you will be asked to "Circle ALL that apply." When this instruction appears, you may circle the numbers next to more than one answer.
Example: If any of the following statements are true for this school, please circle the number of the appropriate answer. Circle ALL that apply.
This school is no longer in operation..................2
This school is a private school, not a public school...3
This institution or organization is not a school.......4
If your answers are "teaches only postsecondary" and "a private school," circle the numbers 1 and 3.
2. Sometimes you will be asked to "Circle one." When this instruction appears, circle the number next to the one best answer.
Example: Does this school currently house any of its students in instructional facilities located off of its site, such as rented space in another school, church, etc.? Circle one.
Yes................1
No.................2
If your answer is "No," circle the number 2.
3. Sometimes you will be asked to write in a number. Please round off to the nearest whole number. Do not use decimals or fractions. Please be sure your numbers are clearly printed so as not to be mistaken for another number.
Example: What is the total amount of this most recently passed bond issue?
$ ____________ total amount of most recently passed bond issue
If your answer is $8,500,435.67, write 8,500,436 in the space provided.
SECTION I. DISTRICT INFORMATION
1. What would probably be the total cost of all repairs/renovations/modernizations required to put all of this district's schools in good overall condition? Give your best estimate. If all of this district's schools are already in good (or better) overall condition, enter zero. Overall condition includes both physical condition and the ability of the schools to meet the functional requirements of instructional programs. Good condition means that only routine maintenance or minor repair is required.
2. On which of the sources listed below is this estimate based? Circle ALL that apply.
Facilities inspection(s)/assessment(s) performed within the last three years by licensed professionals.....................................1
Repair/renovation/modernization work already being performed and/or contracted for....................................................................2
Capital improvement/facilities master plan or schedule.................................3
My best professional judgment.....................4
Opinions of other district administrators.................................................5
Other (specify:_________________ ____________________________)...............6
3. During the last 3 years, how much money has been spent in this district on the federal mandates listed below? Include money spent in 1993-1994.
If exact amounts are not readily available, give your best estimate. Enter zero if none. Circle "1" if spending was not needed.
Accessibility for students with disabilities
Underground storage tanks (USTs)
Other (specify: ____________________)
5. Are these spending needs for federal mandates included in your answer to question 1? Circle one for each mandate listed.
Accessibility for students with disabilities
Underground storage tanks (USTs)
Other (specify: ____________________)
6. In what year was the most recent bond issue passed in this district? Enter the last two digits of the year.
7. What was the total amount of this most recently passed bond issue?
8. How much money did this most recently passed bond issue provide for the items listed below? Enter zero if none.
Removal of underground storage tanks (USTs)
Removal of other environmental conditions
Access for students with disabilities
9. During the last 10 years, how many bond issues have failed to pass? ______ bond issues failed to pass
10. Do you currently have a bond issue before the electorate? Circle one.
SECTION II. SCHOOL INFORMATION
This section asks about the first school shown on the Instruction Sheet enclosed with this survey.
1. NAME OF SCHOOL: Please enter the name of the first school shown on the Instruction Sheet.
SCHOOL'S SURVEY IDENTIFICATION NUMBER: Please enter the Survey Identification number of the first school shown on the Instruction Sheet.
2. If any of the following statements are true for this school, please circle the number of the appropriate answer. Circle ALL that apply.
This institution or organization is not a school
STOP! IF YOU MARKED ANY OF THE ABOVE STATEMENTS, GO TO THE NEXT SCHOOL INFORMATION SECTION.
3. Which of the following grades did this school offer around the first of October, 1993? Circle ALL that apply.
Grade 2.......................2
Grade 3.......................3
Grade 4.......................4
Grade 5.......................5
Grade 6.......................6
Grade 7.......................7
4. What was the total number of Full Time Equivalent (FTE) students enrolled in this school around the first of October, 1993?
5. Does this school house any of its students in instructional facilities located off of its site, such as rented space in another school, church, etc.? Circle one.
6. How many of this school's Full Time Equivalent (FTE) students are housed in off-site instructional facilities?
7. How many total square feet of off-site instructional facilities does this school have? If exact measurements are not readily available, give your best estimate.
8. How many original buildings, attached and/or detached permanent additions to the original buildings, and temporary buildings does this school have on-site? If this school does not have any permanent additions or any temporary buildings on-site, enter zero for these categories.
Attached and/or detached permanent additions to original buildings
9. How many total square feet do the original buildings, the attached and/or detached permanent additions, and the temporary buildings have? If exact measurements are not readily available, give your best estimate. If this school does not have any permanent additions or any temporary buildings on-site, enter zero for these categories.
Attached and/or detached permanent additions to original buildings
Temporary buildings
10. What is the overall condition of the original buildings, the attached and/or detached permanent additions, and the temporary buildings?
Refer to the rating scale shown below, and circle one for EACH category of building. If this school does not have any permanent additions or any temporary buildings on-site, circle "0." Overall condition includes both physical condition and the ability of the buildings to meet the functional requirements of instructional programs.
Excellent: new or easily restorable to "like new" condition; only minimal routine maintenance required.
Good: only routine maintenance or minor repair required.
Adequate: some preventive maintenance and/or corrective repair required.
Fair: fails to meet code and functional requirements in some cases; failure(s) are inconvenient; extensive corrective maintenance and repair required.
Poor: consistent substandard performance; failure(s) are disruptive and costly; fails most code and functional requirements; requires constant attention, renovation, or replacement; major corrective repair or overhaul required.
Replace: non-operational or significantly substandard performance; replacement required.
Attached and/or detached permanent additions to original buildings
11. What would probably be the total cost of all repairs/renovations/modernizations required to put this school's on-site buildings in good overall condition? Give your best estimate. If this school's on-site buildings are already in good (or better) overall condition, enter zero. $ ____________ .00
12. On which of the sources listed below is this estimate based? Circle ALL that apply.
Does not apply -- already in good (or better) overall condition
Facilities inspection(s)/assessment(s) performed within the last three years by licensed professionals
Repair/renovation/modernization work already being performed and/or contracted for
Capital improvement/facilities master plan or schedule
Opinions of other district administrators
Other (specify:__________________________)
13. During the last 3 years, how much money has been spent on the federal mandates listed below for this school's on-site buildings? Include money spent in 1993-1994. If exact amounts are not readily available, give your best estimate. Enter zero if none. Circle "1" if spending was not needed.
Accessibility for students with disabilities
Underground storage tanks (USTs)
Other (specify: _________________)
15. Are these spending needs for federal mandates included in your answer to question 11? Circle one for each mandate listed.
Accessibility for students with disabilities
Underground storage tanks (USTs)
Other (specify: ____________________)
16. Overall, what is the physical condition of each of the building features listed below for this school's on-site buildings? Refer to the rating scale shown below, and circle one for EACH building feature listed.
Excellent: new or easily restorable to "like new" condition; only minimal routine maintenance required.
Good: only routine maintenance or minor repair required.
Adequate: some preventive maintenance and/or corrective repair required.
Fair: fails to meet code or functional requirements in some cases; failure(s) are inconvenient; extensive corrective maintenance and repair required.
Poor: consistent substandard performance; failure(s) are disruptive and costly; fails most code and functional requirements; requires constant attention, renovation, or replacement; major corrective repair or overhaul required.
Replace: non-operational or significantly substandard performance; replacement required.
Framing, floors, foundations
Exterior walls, finishes, windows, doors
Heating, ventilation, air conditioning
Life safety codes
17. Do this school's on-site buildings have sufficient capability in each of the communications technology elements listed below to meet the functional requirements of modern educational technology? Circle one for EACH element listed.
Computer printers for instructional use
Computer networks for instructional use
Conduits/raceways for computer/computer network cables
Electrical wiring for computers/communications technology
Electrical power for computers/communications technology
18. How many computers for instructional use does this school have? Include computers at both on-site buildings and off-site instructional facilities.
19. How well do this school's on-site buildings meet the functional requirements of the activities listed below? Circle one for EACH activity listed.
Large group (50 or more students) instruction
Storage of alternative student assessment materials
Display of alternative student assessment materials
Parent support activities, such as tutoring, planning, making materials, etc.
Private areas for student counseling and testing
Before/after school care
20. How satisfactory or unsatisfactory is each of the following environmental factors in this school's on-site buildings? Circle one for EACH factor listed.
Flexibility of instructional space (e.g., expandability, convertibility, adaptability)
Physical security of buildings
21. Does this school have air conditioning in classrooms, administrative offices, and/or other areas? Circle ALL that apply.
Yes, in administrative offices
Yes, in other areas
No, no air conditioning in this school at all.......4 ---> GO TO QUESTION 23
22. How satisfactory or unsatisfactory is the air conditioning in classrooms, administrative offices, and/or other areas? Circle one for EACH category listed.
23. Does this school participate in the National School Lunch Program? Circle one.
24. Regardless of whether this school participates in the National School Lunch Program, around the first of October, 1993, were any students in this school ELIGIBLE for the program? Circle one.
2-----> GO TO QUESTION 27
3-----> GO TO QUESTION 27
25. Around the first of October, 1993, how many applicants in this school were approved for the National School Lunch Program? Enter zero if none. ______ applicants approved
26. Around the first of October, 1993, how many students in this school received free or reduced lunches through the National School Lunch Program? Enter zero if none.
27. How many students in this school were absent on the most recent school day? If none were absent, please enter zero.
28. What type of school is this? Circle one.
Elementary or secondary with SPECIAL PROGRAM EMPHASIS--for example, science/math school, performing arts high school, talented/gifted school, foreign language immersion school, etc.
SPECIAL EDUCATION--primarily serves students with disabilities
VOCATIONAL/TECHNICAL--primarily serves students being trained for occupations
ALTERNATIVE--offers a curriculum designed to provide alternative or nontraditional education; does not specifically fall into the categories of regular, special education, or vocational school
Does this school have a magnet program? Circle one.
Yes..................1
No....................2
IF THIS IS THE LAST SCHOOL LISTED ON YOUR INSTRUCTION SHEET, PLEASE GO DIRECTLY TO THE LAST PAGE OF THIS QUESTIONNAIRE.
Data Points for Report Figures Tables in this appendix provide data for the figures in the report.
GAO Contacts and Staff Acknowledgments GAO Contacts Staff Acknowledgments
D. Catherine Baltzell, Supervisory Social Science Analyst
Ella F. Cleveland, Subproject Manager
Harry M. Conley III, Statistician
Nancy Kintner-Meyer, Evaluator
Steven R. Machlin, Statistician
Deborah L. McCormick, Senior Social Science Analyst
Sara J. Peth, Technical Information Specialist
William G. Sievert, Technical Advisor
Kathleen Ward, Technical Advisor
The first copy of each GAO report and testimony is free. Additional copies are $2 each. Orders should be sent to the following address, accompanied by a check or money order made out to the Superintendent of Documents, when necessary. Orders for 100 or more copies to be mailed to a single address are discounted 25 percent. U.S. General Accounting Office, P.O. Box 6015, Gaithersburg, MD 20884-6015, or Room 1100, 700 4th St. NW (corner of 4th and G Sts. NW), U.S. General Accounting Office, Washington, DC. Orders may also be placed by calling (202) 512-6000 or by using fax number (301) 258-4066, or TDD (301) 413-0006. Each day, GAO issues a list of newly available reports and testimony. To receive facsimile copies of the daily list or any list from the past 30 days, please call (301) 258-4097 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the condition of the nation's school buildings, focusing on the: (1) amount of funding public elementary and secondary schools need to improve inadequate facilities; and (2) overall condition and prevalence of schools that need major repairs. GAO found that: (1) based on a national sample, the nation's schools need an estimated $112 billion to repair or upgrade their facilities to good condition; (2) two-thirds of the schools need $11 billion over the next 3 years to comply with federal mandates to make all programs accessible to all students, to remove hazardous substances, and to meet other requirements; (3) schools also need to comply with state and local health and sanitation codes, safety standards, and building codes; (4) two-thirds of the schools are in adequate or better condition and need only preventive maintenance or corrective repair; (5) about 14 million students nationwide attend the remaining one-third of schools that need extensive repair; (6) almost 60 percent of the nation's schools reported problems with at least one major building feature and about one-half of the schools have at least one unsatisfactory environmental condition such as poor ventilation or poor physical security; and (7) the major factors that contribute to inadequate school facilities include school districts' decisions to defer vital maintenance and repair expenditures because of funding constraints, unfunded federal and state mandates, and shifting population patterns.
Background Although wildland fires triggered by lightning are a natural, inevitable, and in many cases a necessary ecological process, past federal fire suppression policies have led to an accumulation of fuels and contributed to larger and more severe wildland fires. In recent years, both the number of acres burned by wildland fires and the costs to suppress fires have been increasing. From 1995 through 1999, wildland fires burned an average of 4.1 million acres each year; from 2000 through 2004, the fires burned an average of 6.1 million acres each year—an increase of almost 50 percent. During the same periods, the costs incurred by federal firefighting entities to suppress wildland fires more than doubled, from an average of $500 million annually to about $1.3 billion annually. Although efforts to fight these larger, more severe fires have accounted for much of the increase in suppression costs, the continuing development of homes and communities in areas at risk from wildland fires and the efforts to protect these structures also contribute to the increasing costs. Forest Service and university researchers estimate that about 44 million homes in the lower 48 states are located in the wildland-urban interface. When fire threatens the wildland-urban interface, firefighting entities often need to use substantial resources—including firefighters, fire engines, and aircraft to drop retardant—to fight the fire and protect homes. As wildland fire suppression costs have continued to rise, increasing attention has focused on how suppression costs for multijurisdictional fires are shared. To share suppression costs for a specific fire, local representatives of federal and nonfederal firefighting entities responsible for protecting lands and resources affected by the fire—guided by the terms of the master agreement—decide which costs will be shared and for what period. They document their decisions in a cost-sharing agreement for that fire. According to federal officials, cooperating entities traditionally shared suppression costs on the basis of the proportion of acres burned in each entity’s protection area because the method is relatively easy to apply and works well when the lands affected by a wildland fire are similar. Officials said that the use of alternative cost-sharing methods has been increasing in recent years. Unclear Guidance and Inconsistent Application of Cost-Sharing Methods Can Have Significant Financial Consequences for Entities Involved Federal and nonfederal entities included in our review used a variety of methods to share the costs of fighting fires that burned or threatened both federal and nonfederal lands and resources. Although master agreements between federal and nonfederal entities typically listed several cost-sharing methods, the agreements often lacked clear guidance for officials to follow in deciding which cost-sharing method to apply to a specific fire. Consequently, for eight fires we reviewed in four states, we found varied cost-sharing methods used and an inconsistent application of these methods within and among states, although the fires had similar characteristics. The type of cost-sharing method chosen is important because it can have significant financial consequences for the federal and nonfederal entities involved. Master Agreements Provided Cost-Sharing Framework, but Those We Reviewed Lacked Clear Guidance Master agreements provide the framework for federal and nonfederal entities to work together and share the costs of fighting wildland fires.
The master agreements we reviewed for 12 western states all directed federal and nonfederal entities to develop a separate agreement, documenting how costs were to be shared for each fire that burned—or, in some cases, threatened to burn—across multiple jurisdictions. The master agreements varied in the cost-sharing methods specified: The master agreement for 1 state (Idaho) did not identify any specific cost-sharing method to use. The master agreements for 3 states (Alaska, Arizona, New Mexico) listed the acres-burned method as the primary or only method to be used. Although two of these agreements allowed the use of alternative cost-sharing methods, they did not explicitly state under what circumstances an alternative method would be appropriate. The master agreements for the 8 remaining states listed multiple, alternative cost-sharing methods but did not provide clear guidance on when each method should be used. Cost-Sharing Methods Were Inconsistently Applied for the Eight Fires We Reviewed Federal and nonfederal entities used varied cost-sharing methods for the eight fires we reviewed, although the fires had similar characteristics. As shown in figure 1, the cost-sharing methods used sometimes varied within a state or from state to state. The costs for the two fires that we reviewed in Utah were shared using two different methods, although both fires had similar characteristics. For the Blue Springs Fire, federal and nonfederal officials agreed that aircraft and engine costs of protecting an area in the wildland-urban interface during a 2-day period would be assigned to the state and the remaining costs would be shared on the basis of acres burned. Federal and state officials explained that, because the Blue Springs Fire qualified for assistance from the Federal Emergency Management Agency (FEMA), state officials agreed to bear a larger portion of the total fire suppression costs. For the Sunrise Complex of fires, in contrast, state officials were reluctant to share costs in the same manner. Although these fires also threatened the wildland-urban interface, they did not meet the eligibility requirements for FEMA reimbursement of nonfederal costs. Consequently, federal and nonfederal officials agreed to share costs for the Sunrise Complex on the basis of acres burned. The costs for the two fires we reviewed in Arizona were also treated differently from each other. For the Cave Creek Complex of fires, federal and state officials agreed to share suppression costs using an acres-burned method for the southern portion of the complex, which encompassed federal, state, and city lands and required substantial efforts to protect the wildland-urban interface. The federal government paid the full costs for the northern portion of the fire. For the Florida Fire, federal and nonfederal officials were unable to reach an agreement on how to share costs. Officials from the affected national forest proposed a cost-sharing agreement, whereby the state would pay the costs of firefighting personnel, equipment, and aircraft used to protect the wildland-urban interface, and all other fire suppression costs would be paid by the federal government. The state official, however, did not agree with this proposal. He believed that the Forest Service, not the state, was responsible for protecting areas of the wildland-urban interface threatened by the Florida Fire and that he was not authorized to agree to the terms of the proposed agreement.
Methods used to share suppression costs for fires with similar characteristics also varied among states. For example, costs for the fires we reviewed in California and Colorado were shared using methods different from those used for similar fires we reviewed in Arizona and Utah. In California, federal and nonfederal officials agreed to share the costs of two fires using the cost-apportionment method—that is, costs were apportioned on the basis of where firefighting personnel and equipment were deployed. Officials said that they had often used this method since the mid-1980s because they believed that the benefit it provides in more equitable cost sharing among affected firefighting entities outweighs the additional time required to apportion the costs. In Colorado, federal and nonfederal officials agreed to share suppression costs for both of the fires we reviewed in that state using guidance they had developed and officially adopted in 2005, called “fire cost share principles.” Under these principles, aviation costs for fires burning in the wildland-urban interface are shared equally for 72 hours, and other fire suppression costs, such as firefighting personnel and equipment, are shared on the basis of acres burned. The Cost-Sharing Method Used Can Lead to Significantly Different Financial Outcomes Having clear guidance as to when particular cost-sharing methods should be used is important because the type of method ultimately agreed upon for any particular fire can have significant financial consequences for the firefighting entities involved. To illustrate the effect of the method chosen, we compared the distribution of federal and nonfederal costs for the five fires we reviewed in which the actual cost-sharing method used was not acres burned with what the distribution would have been if the method used had been acres burned. We found that the distribution of costs between federal and nonfederal entities differed, sometimes substantially, depending on the cost-sharing method used. The largest differences occurred in California, which used the cost apportionment method. For the Deep Fire, using the cost-apportionment method, federal entities paid $6.2 million, and nonfederal entities paid $2.2 million. Had the costs been shared on the basis of acres burned, federal entities would have paid an additional $1.7 million, and nonfederal entities would have paid that much less because most of the acres burned were on federal land. According to federal and state officials, the nonfederal entities bore a larger share of the cost than they would have under an acres-burned method because of the efforts to protect nonfederal lands and resources. For the Pine Fire, using cost apportionment, federal entities paid $5.2 million, and nonfederal entities paid $8.1 million. Had an acres-burned method been used, federal entities would have paid about $2 million less, and nonfederal entities would have paid that much more. According to a federal official who worked on apportioning costs for that fire, the higher costs that the federal entities paid under cost apportionment were largely due to extensive firefighting efforts on federal land to ensure that the fire was extinguished. In Colorado and Utah, the differences in federal and state entities’ shares between the methods used and the acres-burned method were less pronounced, likely because the cost-sharing methods used still relied heavily on acres burned. 
In each case, federal entities’ shares would have been more and nonfederal shares less had an acres-burned method been used, due to the efforts to protect the wildland-urban interface. For example, the federal share of costs for the Blue Springs Fire in Utah would have been about $400,000 more and the nonfederal share that much less if an acres-burned method had been used for the whole fire. In Colorado, we estimated that the federal share of costs for the Mason Gulch Fire would have been about $200,000 more and the nonfederal share that much less under an acres-burned method. Current Cost-Sharing Framework Raises Several Concerns Federal and nonfederal agency officials we interviewed raised a number of concerns about the current cost-sharing framework. First, some federal officials said that because master agreements and other policies do not provide clear guidance about which cost-sharing methods to use, it has sometimes been difficult to obtain a cost-sharing agreement that they believe shares suppression costs equitably. Second, nonfederal officials were concerned that the emergence of alternative cost-sharing methods has caused nonfederal entities to bear a greater share of fire suppression costs than in the past. Finally, some federal officials expressed concern that the current framework for sharing costs insulates state and local governments from the cost of protecting the wildland-urban interface, thereby reducing their incentive to take steps that could help mitigate fire risks and reduce suppression costs in the wildland-urban interface. We believe these concerns may reflect a more fundamental issue—that is, that federal and nonfederal entities have not clearly defined their financial responsibilities for wildland fire suppression, particularly for the wildland-urban interface. Lack of Clear Guidance Can Lead to Difficulties in Sharing Costs Some federal officials said that the lack of clear guidance can make it difficult to agree to use a cost-sharing method that they believe equitably distributes suppression costs between federal and nonfederal entities, particularly for fires that threaten the wildland-urban interface. As discussed, different cost-sharing methods were used for the two fires we reviewed in Utah, even though both fires required substantial suppression efforts to protect the wildland-urban interface. A federal official said that because of the state officials’ unwillingness to use a method other than acres burned on one of the fires and because of the lack of clear guidance about which cost-sharing method should be used, he agreed to use an acres-burned method and did not seek a cost-sharing agreement that would have assigned more of the costs to the nonfederal entities. Some federal officials in Arizona expressed similar views, saying that the lack of clear guidance on sharing costs can make it difficult to reach agreement with nonfederal officials. For example, federal and state officials in Arizona did not agree on whether to share costs for one fire we reviewed in that state. Officials from the Forest Service’s and the Department of the Interior’s national offices agreed that interagency policies for cost sharing could be clarified to indicate under what circumstances particular cost-sharing methods are most appropriate. They said that the acres-burned method, for example, is likely not the most equitable method to share costs in cases where fires threaten the wildland-urban interface.
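To make the arithmetic behind the comparisons described above concrete, the sketch below contrasts an acres-burned split with a deployment-based cost apportionment for a single hypothetical fire. The dollar amounts, acreages, and deployment totals are illustrative assumptions only; they are not drawn from the fires GAO reviewed or from any agency cost data.

```python
# Illustrative sketch only: a hypothetical $8.0 million fire in which 90 percent
# of the burned acres are federal land, but most of the personnel, engine, and
# aircraft effort is deployed to protect the nonfederal wildland-urban interface.

total_cost = 8_000_000  # total suppression cost (hypothetical)

# Acres-burned method: share costs in proportion to acres burned within each
# entity's fire protection area.
acres = {"federal": 18_000, "nonfederal": 2_000}  # hypothetical acreages
total_acres = sum(acres.values())
acres_burned_shares = {
    entity: total_cost * burned / total_acres for entity, burned in acres.items()
}

# Cost-apportionment method: assign costs according to where firefighting
# resources were actually deployed each day (hypothetical deployment totals
# that sum to the same $8.0 million bill).
apportioned_shares = {"federal": 2_500_000, "nonfederal": 5_500_000}

for label, shares in (("acres burned", acres_burned_shares),
                      ("cost apportionment", apportioned_shares)):
    print(f"{label:>18}: federal ${shares['federal']:,.0f}, "
          f"nonfederal ${shares['nonfederal']:,.0f}")

# Output:
#       acres burned: federal $7,200,000, nonfederal $800,000
# cost apportionment: federal $2,500,000, nonfederal $5,500,000
```

Under these assumed numbers, the same suppression bill shifts by several million dollars depending solely on which method local officials select—the kind of swing reflected, on a smaller scale, in the actual fires discussed above.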
Officials noted that the National Fire and Aviation Executive Board—made up of the fire directors from the five federal land management agencies and a representative from the National Association of State Foresters—was developing a template for both master and cost-sharing agreements. As of May 2006, this template had not been finalized, but our review of a draft version indicated that the template might not provide additional clarity about when each cost-sharing method should be used. Nonfederal Officials Were Concerned about Increased Costs and Equity among States While federal officials expressed the need for further guidance on how to share costs, nonfederal officials were concerned that the emergence of alternative cost-sharing methods was leading state and local entities to bear a greater share of suppression costs than in the past, and they questioned whether such an increase was appropriate. Nonfederal officials also said that wildland fire suppression costs already posed budgetary challenges for state and local entities and that using alternative cost-sharing methods more often could exacerbate the situation. State officials said that if a state’s suppression costs in a given year exceed the funds budgeted, they must seek additional state funds, which can be difficult. Moreover, they said, in many states, protecting structures is primarily a local responsibility, and many local entities are unable to pay the costs of fighting a large fire that threatens the wildland-urban interface. Although clarifying guidance about which cost-sharing methods are most appropriate for particular circumstances could cause nonfederal entities to bear more wildland fire suppression costs, over the long term, such clarification would also allow each entity to better determine its budgetary needs and take steps to meet them. In addition to their concerns about increased costs, nonfederal as well as federal officials were concerned that the federal government was treating nonfederal entities in different states differently, thereby creating inequities. Federal and nonfederal officials said that because some states use particular cost-sharing methods more often than other states, the proportion of costs borne by federal and nonfederal entities likely varies from state to state, resulting in nonfederal entities’ paying a higher proportion of costs in some states and a lower proportion in other states. Clarifying which cost-sharing methods should be used in particular situations could increase nonfederal officials’ assurance that the federal government is treating them equitably relative to other states. Cost-Sharing Framework May Reduce Incentives to Mitigate Fire Risks in the Wildland-Urban Interface Federal officials said that the current cost-sharing framework insulates state and local governments from the cost of protecting the wildland-urban interface. As we have previously reported, a variety of protective measures are available to help protect structures from wildland fire, including (1) reducing vegetation and flammable objects within an area of 30 to 100 feet around a structure and (2) using fire-resistant roofing materials and covering attic vents with mesh screens. However, some homeowners and homebuilders resist using these protective measures because they are concerned about aesthetics, time, or cost.
As a result, federal and nonfederal officials said, it can be politically difficult for state and local governments to adopt—and enforce—laws requiring such measures, and many at-risk areas have not done so. The states and communities we visited exhibited various degrees of progress in adopting laws requiring protective measures. For example, California requires homeowners in the wildland-urban interface to maintain 100 feet of defensible space and, in areas at particularly high risk from wildland fires, also requires new structures to be constructed with fire-resistant roofing materials and vents. The other states we visited do not have such statewide requirements, but they are taking a variety of steps to require or encourage protective measures. For example, Utah passed a law in 2004 requiring its counties to adopt standards for landscaping and building materials if they want to be eligible to receive state funds to assist with fire suppression costs. Other counties had efforts underway to educate homeowners about measures they could use to reduce their risk without requiring that such measures be used. Federal officials expressed concern—and some nonfederal officials acknowledged—that the use of cost-sharing methods that assign more costs to federal entities, and the availability of federal emergency assistance, insulate state and local governments from the cost of providing wildland fire protection. These federal officials pointed out that wildland fires threatening structures often require added suppression efforts. Under some cost-sharing methods, such as acres burned, federal entities often end up paying a large proportion of the costs for these efforts. Some federal and nonfederal officials also noted that the availability of FEMA assistance to nonfederal entities—which can amount to 75 percent of allowable fire suppression costs for eligible fires—further insulates state and local governments from the cost of protecting the wildland-urban interface. Of the eight fires included in our review, nonfederal officials were seeking reimbursement for the allowable costs of the five fires that FEMA determined met eligibility requirements. Federal officials suggested that to the extent that state and local governments are insulated from the cost of protecting the wildland-urban interface, these governments may have a reduced incentive to adopt laws requiring homeowners and homebuilders to use protective measures that could help mitigate fire risks. Some officials said that by requiring homeowners and homebuilders to take such measures, more of the cost of protecting the wildland-urban interface would then be borne by those who chose to live there. Officials’ Concerns May Reflect Ambiguity over Financial Responsibilities On the basis of our review of previous federal reports and interviews with federal and nonfederal officials, we believe that the concerns we identified may reflect a more fundamental issue—that federal and nonfederal firefighting entities have not clearly defined their fundamental financial responsibilities for wildland fire suppression, particularly those for protecting the wildland-urban interface. Federal officials said that the continuing expansion of the wildland-urban interface and rising fire suppression costs for protecting these areas have increased the importance of resolving these issues.
Federal wildland fire management policy states that protecting structures is the responsibility of state, tribal, and local entities, but the policy also says that, under a formal fire protection agreement specifying the financial responsibilities of each entity, federal agencies can assist nonfederal entities in protecting the exterior of structures threatened by wildland fire. Federal and nonfederal officials agreed that federal agencies can assist with such actions, but they did not agree on which entities are responsible for bearing the costs of these actions. Federal officials told us that the purpose of this policy is to allow federal agencies to use their personnel and equipment to help protect homes but not to bear the financial responsibility of providing that protection. Nonfederal officials, however, said that these actions are intended to keep a wildland fire from reaching structures, and financial responsibility should therefore be shared by both federal and nonfederal entities. Further, the presence of structures adjacent to federal lands can substantially alter fire suppression strategies and raise costs. A previous federal report and federal officials have questioned which entities are financially responsible for suppression actions taken on federal lands but intended primarily or exclusively to protect the adjacent wildland-urban interface. Fire managers typically use existing roads and geographic features, such as rivers and ridgelines, as firebreaks to help contain wildland fires. If, however, homes and other structures are located between a fire and such natural firebreaks, firefighters may have to construct other firebreaks and rely more than they otherwise would on aircraft to drop fire retardant to protect the structures, thereby increasing suppression costs. Nonfederal officials in several states, however, questioned the appropriateness of assigning to nonfederal entities the costs for suppression actions taken on federal lands. These officials, as well as officials from the National Association of State Foresters, said that accumulated fuels on federal lands are resulting in more severe wildland fires and contributing to the increased cost of fire suppression. They also said that federal agencies are responsible for keeping wildland fires from burning off federal land and should, therefore, bear the costs of doing so. Federal officials in the states we visited recognized this responsibility, but some also said that with the growing awareness that wildland fires are inevitable in many parts of the country, policy should recognize that wildland fires will occur and are likely to burn across jurisdictional boundaries. In their view, those who own property in areas at risk of wildland fires share a portion of the financial responsibility for protecting it. Previous federal agency reports also have recognized this issue and have called for clarifying financial responsibility for such actions. Conclusions Wildland fires are inevitable and will continue to affect both federal and nonfederal lands and resources. Federal, state, and local firefighting entities have taken great strides to develop a cooperative fire protection system so that these entities can effectively work together to respond to these fires. Efforts are now needed to address how best to share the costs of these cooperative fire protection efforts when the fires burn or threaten multiple jurisdictions, particularly when suppression efforts may focus more heavily on one entity’s lands and resources.
The need for clear guidance on when to use a particular cost-sharing method is becoming more acute as the wildland-urban interface continues to grow and wildland fire suppression costs continue to increase. Before such guidance can be developed, however, federal and nonfederal entities must agree on which entity is responsible for the costs of protecting areas where federal and nonfederal lands and resources are adjacent or intermingled, particularly in the wildland-urban interface. Without explicit delineation of financial responsibilities, federal and nonfederal entities’ concerns about how these costs are shared are likely to continue. Thus, to strengthen the framework for sharing wildland fire suppression costs, we recommended that the Secretaries of Agriculture and the Interior, working in conjunction with relevant state entities, provide more specific guidance as to when particular cost-sharing methods should be used and clarify the financial responsibilities for suppressing fires that burn, or threaten to burn, across multiple jurisdictions. In responding to our report, the Forest Service and the Department of the Interior generally agreed with the findings and recommendations. The National Association of State Foresters did not agree, stating that developing national guidance would not provide the flexibility needed to address the variability in local circumstances and state laws. Although we agree that a certain amount of flexibility is needed, without more explicit guidance to assist local federal and nonfederal officials responsible for developing cost-sharing agreements for individual fires, the inconsistencies in how suppression costs are shared within and among states are likely to continue, along with concerns about perceived inequities. Mr. Chairman, this concludes my prepared statement. I would be pleased to answer any questions that you or other Members of the Subcommittee may have at this time. GAO Contact and Staff Acknowledgments For further information about this testimony, please contact me at (202) 512-3841 or [email protected], or Robin M. Nazzaro at (202) 512-3841 or [email protected]. David P. Bixler, Assistant Director; Jonathan Dent; Janet Frisch; and Richard Johnson made key contributions to this statement. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Wildland fires can burn or threaten both federal and nonfederal lands and resources, including homes in or near wildlands, an area commonly called the wildland-urban interface. Agreements between federal and nonfederal firefighting entities provide the framework for working together and sharing the costs of fire suppression efforts. GAO was asked to (1) review how federal and nonfederal entities share the costs of suppressing fires that burn or threaten both of their lands and resources and (2) identify any concerns that these entities may have with the existing cost-sharing framework. This testimony is based on GAO's May 2006 report Wildland Fire Suppression: Lack of Clear Guidance Raises Concerns about Cost Sharing between Federal and Nonfederal Entities (GAO-06-570). Federal and nonfederal entities used a variety of methods to share the costs of fighting wildland fires affecting both of their lands and resources. Cooperative agreements between federal and nonfederal firefighting entities--which are developed and agreed to by the entities involved--provide the framework for cost sharing and typically list several cost-sharing methods available to the entities. The agreements GAO reviewed, however, often lacked clear guidance for federal and nonfederal officials to use in deciding which method to apply to a specific fire. As a result, cost-sharing methods were applied inconsistently within and among states, even for fires with similar characteristics. For example, GAO found that in one state, the costs for suppressing a large fire that threatened homes were shared solely according to the proportion of acres burned within each entity's area of fire protection responsibility, a method that traditionally has been used. Yet, costs for a similar fire within the same state were shared differently. For this fire, the state agreed to pay for certain aircraft and fire engines used to protect the wildland-urban interface, while the remaining costs were shared on the basis of acres burned. In contrast to the two methods used in this state, officials in another state used yet a different cost-sharing method for two similar large fires that threatened homes, apportioning costs each day for personnel, aircraft, and equipment deployed on particular lands, such as the wildland-urban interface. The type of cost-sharing method ultimately used is important because it can have significant financial consequences for the entities involved, potentially amounting to millions of dollars. Both federal and nonfederal agency officials raised a number of concerns about the current cost-sharing framework. First, some federal officials were concerned that because guidance is unclear about which cost-sharing methods are most appropriate in particular circumstances, it can be difficult to reach agreement with nonfederal officials on a method that all parties believe distributes suppression costs equitably. Second, some nonfederal officials expressed concerns that the emergence of alternative cost-sharing methods is causing nonfederal entities to bear a greater share of fire suppression costs than in the past. In addition, both federal and nonfederal officials believed that the inconsistent application of these cost-sharing methods has led to inequities among states in the proportion of costs borne by federal and nonfederal entities. 
Finally, some federal officials also expressed concern that the current framework for sharing costs insulates state and local governments from the increasing costs of protecting the wildland-urban interface. Therefore, nonfederal entities may have a reduced incentive to take steps that could help mitigate fire risks, such as requiring homeowners to use fire-resistant materials and landscaping. On the basis of a review of previous federal reports and interviews with federal and nonfederal officials, GAO believes that these concerns may reflect a more fundamental issue--that federal and nonfederal entities have not clearly defined their basic financial responsibilities for wildland fire suppression, particularly those for protecting the wildland-urban interface.
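To make the financial stakes of the method choice concrete, the following minimal sketch compares two stylized cost-sharing approaches for a single hypothetical fire: apportionment strictly by acres burned in each entity's protection area, and a hybrid in which the state first absorbs the cost of aircraft and engines used to protect the wildland-urban interface. All dollar and acreage figures are hypothetical placeholders, not amounts drawn from GAO's review.

```python
# Illustrative sketch only: hypothetical fire, hypothetical costs.
# Compares two stylized cost-sharing methods like those described in this testimony.

total_cost = 10_000_000                         # hypothetical total suppression cost ($)
acres = {"federal": 40_000, "state": 10_000}    # hypothetical acres burned, by protection area
interface_resources = 2_500_000                 # hypothetical cost of aircraft/engines protecting the interface

# Method 1: apportion all costs by the share of acres burned in each entity's area.
total_acres = sum(acres.values())
by_acres = {entity: total_cost * a / total_acres for entity, a in acres.items()}

# Method 2: the state pays for the interface-protection resources; the remainder is split by acres.
remainder = total_cost - interface_resources
hybrid = {entity: remainder * a / total_acres for entity, a in acres.items()}
hybrid["state"] += interface_resources

print(by_acres)   # federal $8.0M, state $2.0M under these assumed numbers
print(hybrid)     # federal $6.0M, state $4.0M under these assumed numbers
```

Under these assumed numbers, the same fire shifts roughly $2 million of cost from the federal side to the state, which is the kind of swing that motivates the call for clearer guidance.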
Background The United States is currently undergoing a transition from analog to digital broadcast television. With traditional analog technology, pictures and sounds are converted into “waveform” electrical signals for transmission through the radiofrequency spectrum, while digital technology converts these pictures and sounds into a stream of digits consisting of zeros and ones for transmission. Digital transmission of television signals provides several advantages compared to analog transmission, such as enabling better quality picture and sound reception as well as using the radiofrequency spectrum more efficiently than analog transmission. This increased efficiency makes multicasting—where several digital television signals are transmitted in the same amount of spectrum necessary for one analog television signal—and HDTV services possible. A primary goal of the DTV transition is for the federal government to reclaim spectrum that broadcasters currently use to provide analog television signals. The radiofrequency spectrum is a medium that enables many forms of wireless communications, such as mobile telephone, paging, broadcast television and radio, private radio systems, and satellite services. Because of the virtual explosion of wireless applications in recent years, there is considerable concern that future spectrum needs—both for commercial as well as government purposes—will not be met. The spectrum that will be cleared at the end of the DTV transition is considered highly valuable spectrum because of its particular technical properties. In all, the DTV transition will clear 108 megahertz (MHz) of spectrum—a fairly significant amount. In the Balanced Budget Act of 1997, the Congress directed FCC to reallocate 24 MHz of the reclaimed spectrum to public safety uses. Since the terrorist attacks of September 11, 2001, there has been a greater sense of urgency to free spectrum for public safety purposes. The remaining returned spectrum will be auctioned for use in advanced wireless services, such as wireless high-speed Internet access. To implement the DTV transition, television stations must provide a digital signal, which requires them to upgrade their transmission facilities, such as transmission lines, antennas, and digital transmitters and encoders. Depending on an individual station’s tower configuration, the digital conversion may require new towers or upgrades to existing towers. Most television stations throughout the country are now providing a digital broadcast signal in addition to their analog signal. After 2006, the transition will end in each market—that is, analog signals will no longer be provided—when at least 85 percent of households have the ability to receive digital broadcast signals. Americans Watch Television through Three Primary Modes The three primary means through which Americans view television signals are over the air, cable, and direct broadcast satellite (DBS). Over-the-air broadcast television, which began around 1940, uses radiofrequencies to transmit television signals from stations’ television towers to households’ television antennas mounted on rooftops, in attics, or directly on television sets. Over-the-air television is a free service. Cable television service, a pay television service, emerged in the late 1940s to fill a need for television service in areas with poor over-the-air reception, such as mountainous or remote areas. 
Cable providers run localized networks of cable lines that deliver television signals from cable facilities to subscribers’ homes. Cable operators provide their subscribers with, on average, approximately 73 analog television channels and 150 digital television channels. In 1994, a third primary means of providing television emerged: direct broadcast satellite (DBS). Subscribers to DBS service use small reception dishes that can be mounted on rooftops or windowsills to receive television programming beamed down from satellites that orbit over the equator. Like cable, DBS service is a subscription television service that provides consumers with many channels of programming. When the Congress enacted the Satellite Home Viewer Improvement Act of 1999, it allowed DBS carriers to provide local broadcast signals—such as the local affiliate of ABC or NBC—which they had previously not generally been able to provide. Over-the-Air Households. We found that 19 percent, or 20.8 million American households, rely exclusively on over-the-air transmissions for their television viewing. We recognize that others have estimated a lower value for the percent of households relying on over-the-air television. Our results were derived from a survey of over 2,400 households, from which we estimated with 95 percent certainty that between 17 and 21 percent of households rely on over-the-air television. Compared to households that purchase a subscription to cable or DBS service, we found that exclusive over-the-air viewers are somewhat different demographically. Overall, over-the-air households are more likely to have lower incomes than cable or satellite households. Approximately 48 percent of exclusive over-the-air viewers have household incomes less than $30,000, and 6 percent have household incomes over $100,000. Additionally, nonwhite and Hispanic households are more likely to rely on over-the-air television than are white and non-Hispanic households; over 23 percent of nonwhite households rely on over-the-air television compared to less than 16 percent of white households, and about 28 percent of Hispanic households rely on over-the-air television compared to about 17 percent of non-Hispanic households. Finally, we found that, on average, exclusive over-the-air households have 2.1 televisions, which is lower than the average for cable and satellite households. We asked the survey research firm to recontact approximately 100 of the respondents who exclusively watch television through over-the-air transmission to ask additional questions, including the primary reason the household does not purchase a subscription video service. Forty-one of these respondents said that it was too costly for them to purchase a subscription video service, and 44 said that they do not watch enough television to warrant paying for television service. Most of the recontacted households seemed unlikely to purchase a subscription service in the near future. Only 18 of the recontacted households said that they would be likely to purchase a subscription video service in the near future, and another 10 said that they might do so. Cable Households. We found that 57 percent, or 63.7 million American households, view television through a cable service. On average, cable households have 2.7 television sets. Sixteen percent of cable households have at least one television set in the home that is not connected to cable but instead receives only over-the-air television signals. 
Of the cable households surveyed, roughly 29 percent had household incomes of less than or equal to $30,000, and about 13 percent had incomes exceeding $100,000. We also found that 44 percent of the cable homes have at least one set-top box. Of those cable subscribers with a set-top box, about 67 percent reported that their box is capable of viewing channels the cable system sells on “digital cable tiers,” meaning that the channels are transmitted by their cable provider in a digital format. A subset of these “digital cable” customers have a special set-top box capable of receiving their providers’ transmission of high-definition digital signals. Because the existence of a set-top box in the home may be relevant for determining what equipment households would need to view broadcast digital television signals, we asked the survey research firm to recontact approximately 100 cable households that do not have a set-top box to ask questions about their likely purchase of digital cable tiers—which require a set-top box—in the near future. First, we asked the primary reason why the household did not currently purchase any cable digital tiers of programming. Fifty-one of the recontacted respondents said that they did not want to bear the extra expense of digital tiers of cable programming, and 33 said that they did not watch enough television to justify purchasing digital cable service. Only 9 of the recontacted respondents said that they would be likely to purchase digital cable service in the near future, and another 9 said that they might purchase such service in the near future. Finally, we asked these respondents whether they would be reluctant to change their service in any way that would require them to use a set-top box. Of the recontacted respondents, 37 said they would be very reluctant to change their service in a way that would require them to use a set-top box, and another 38 said that they would be somewhat reluctant to do so. DBS Households. We found that about 19 percent, or 21.7 million American households, have a subscription to a DBS service. These households have, on average, 2.7 television sets. About one-third of these households have at least one television set that is not hooked to their DBS dish and only receives over-the-air television signals. In terms of income, 29 percent of DBS subscribers have incomes less than or equal to $30,000, and 13 percent have incomes exceeding $100,000. One important difference between cable and DBS service is that not all DBS subscribers have the option of viewing local broadcast signals through their DBS provider. Although the DBS providers have been rolling out local broadcast stations in many markets around the country in the past few years, not all markets are served. DBS subscribers in markets without local broadcast signals available through their DBS provider usually obtain their local broadcast signals through an over-the-air antenna, or through a cable connection. This is important to the DTV transition because how households with DBS service view their local broadcast channels will play into the determination of their requirements to transition to broadcast DTV. We therefore requested that the survey research firm recontact approximately 100 DBS customers to ask how they receive their local broadcast channels. We found that when local channels are available to DBS subscribers, they are very likely to purchase those channels. 
Well more than half of the DBS subscribers who were recontacted viewed their local broadcast channels through their DBS service. Nearly one-fourth of the recontacted DBS subscribers view their local broadcast channels through free over-the-air television. As DBS providers continue to roll out local channels to more markets, the percentage of DBS subscribers relying on over-the-air transmissions to view local signals will likely decline. Households’ Equipment Needs for DTV Transition Will Depend on their Mode of Television Viewing and Current Equipment Status, and Will Also Be Affected by Regulatory Decisions The specific equipment needs for each household to transition to DTV—that is, to be able to view broadcast digital signals—depends on certain key factors: the method through which a household watches television, the television equipment the household currently has, and certain critical regulatory decisions yet to be made. In this section we discuss two cases regarding a key regulatory decision that will need to be made and the implications that decision will have on households’ DTV equipment needs. Before turning to the two cases, a key assumption underlying this analysis must be discussed. Currently, broadcasters have a right to insist that cable providers carry their analog television signals. This is known as the “must carry” rule, and dates to the Cable Television Consumer Protection and Competition Act of 1992. FCC made a determination that these must carry rules will apply to the digital local broadcast signals once a station is no longer transmitting an analog signal. In our analysis, we assume that the must carry right applies to broadcasters’ digital signals, and as such, cable providers are generally carrying those signals. DBS providers face some must carry rules as well, although they are different in some key respects from the requirements that apply to cable providers. For the purposes of this analysis, we assume that to the extent that DBS providers face must carry requirements, those requirements apply to the digital broadcast signals. For nearly all cable subscribers, and more than half of the DBS subscribers, local broadcast analog signals are provided by their subscription television provider. This means that these providers capture the broadcasters’ signals through an antenna or a wire and retransmit those signals by cable or DBS to subscribers. We make two disparate assumptions, which we call case one and case two, about how cable and DBS providers might provide digital broadcast signals to subscribers. We do not suggest that these are the only two possibilities regarding how the requirements for carriage of broadcast signals might ultimately be decided—these are simply two possible scenarios. Case One. In this case, we assume that cable and DBS providers will continue providing broadcasters’ signals as they currently do. This assumption would be realized if cable and DBS providers initially downconvert broadcasters’ digital signals at the providers’ facilities, which may require legislative or regulatory action. That is, cable providers would initially downconvert broadcasters’ high-definition digital signals to an analog format before they are transmitted to their subscribers. Similarly, DBS providers would initially downconvert broadcasters’ high-definition digital signals to a standard-definition digital format before they are transmitted to their subscribers. 
In this case, there would be no need for cable and DBS subscribers to acquire new equipment; only households viewing television using only an over-the-air antenna must take action to be able to view broadcasters’ digital signals. This case shares many attributes with the recently completed DTV transition in Berlin, Germany. All over-the-air households—which account for approximately 21 million households in the United States—must do one of two things to be able to view digital broadcast signals. First, they could purchase a digital television set that includes a tuner capable of receiving, processing, and displaying a digital signal. The survey data we used indicated that only about 1 percent of over-the-air viewers have, as of now, purchased a digital television that contains a tuner. However, some large televisions sold today are required to include such a tuner, and by July 2007, all television sets larger than 13 inches are required to include a tuner. After that time, consumers who purchase new television sets will automatically have the capability of viewing digital signals. Approximately 25 to 30 million new television sets are purchased each year in the United States. The second option available to over-the-air households is to purchase a digital-to-analog set-top box. That is, for those households that have not purchased a new television set, the set-top box will convert the digital broadcast signals to analog so that they can be viewed on an existing analog television set. Viewers with digital-to-analog set-top boxes would not actually see the broadcast digital signal in a digital format, but would be viewing that signal after it has been downconverted, by the set-top box, to be compatible with their existing analog television set. Currently, simple set-top boxes that only have the function of downconverting digital signals to analog are not on the market. More complex boxes that include a variety of functions and features, including digital-to-analog downconversion, are available, but at a substantial cost. However, manufacturers told us that simple, and less expensive, set-top boxes would come to the market when a demand for them develops. Case Two. In the second case, we assume that cable and DBS companies would be required to provide the broadcasters’ signals to their subscribers in substantially the same format as they were received from the broadcasters. Because some of the broadcasters’ signals are in a high-definition digital format, cable and DBS subscribers—just like over-the-air households—would need to have the equipment in place to be able to receive high-definition digital signals. There are several ways these subscribers could view these signals: Cable or DBS subscribers would be able to view digital broadcast television if they have purchased a digital television set with an over-the-air digital tuner. They would then have the capability of viewing local digital broadcast stations through a traditional television antenna—just like an over-the-air viewer. However, many cable and DBS households may want to continue to view broadcast television signals through their cable or DBS provider. Cable or DBS subscribers could purchase a digital television with a “cable card” slot. By inserting a “card” provided by the cable company into such a television, subscribers can receive and display the digital content transmitted by the cable provider. 
Only very recently, however, have cable-ready digital television sets—which allow cable subscribers to receive their providers’ digital signals directly into the television set—come to the market. Similar television sets with built-in tuners for satellite digital signals are not currently on the market. To view the high-definition signals transmitted by their subscription provider, the other possibility for cable and DBS households would be to have a set-top box that downconverts the signals so that they can be displayed on their existing analog television sets. That is, any downconversion in this scenario takes place at the subscribers’ household, as opposed to the subscription television providers’ facilities, as in case one. While all DBS subscribers and about a third of cable subscribers have set-top boxes that enable a digital signal from their provider to be converted to an analog signal for display on existing television sets, few of these set-top boxes are designed for handling high-definition digital signals. As such, if broadcasters’ signals are transmitted by cable and DBS providers in a high-definition format, not all cable and satellite subscribers would need new equipment, although most would. In case two, as in case one, all exclusively over-the-air households need a digital television set or a set-top box. Cost of Federal Subsidy for Set-Top Boxes Varies Considerably, Depending on Several Factors In this section we present the estimated cost of providing a subsidy to consumers for the purchase of a set-top box that would be designed to advance the digital television transition. The estimated subsidy costs presented here vary based on (1) the two cases discussed above about whether cable and DBS providers initially downconvert broadcasters’ digital signals at their facilities before transmitting them to subscribers; (2) varied assumptions about whether a means test is imposed and, if so, at what level; and (3) the expected cost of a simple digital-to-analog set-top box. All of the estimates presented here assume that only one television set is subsidized in each household that is determined to be eligible for the subsidy. Means test. Imposing a means test would limit the subsidy to households determined to be in financial need, that is, those with incomes lower than some specified limit. We employed two different levels of means tests. The scenarios with means tests are roughly based on 200 percent and 300 percent of the poverty level as the income threshold under which a household’s income must lie to be eligible for the subsidy. The poverty level is determined based on both income and the number of persons living in the household; for a family of four the official federal poverty level in 2004 was $18,850. Set-top boxes. We provide estimates based on two possible price levels for the boxes: $50 and $100. This range is based on conversations we had with consumer electronics manufacturers who will likely produce set-top boxes in the future. Set-top boxes for cable and DBS are often rented by subscribers, rather than purchased. Nevertheless, in cases where cable and DBS subscribers need new equipment, we assume that the financial support provided to them would be equivalent to that provided to over-the-air households. 
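The arithmetic behind such an estimate can be sketched as follows. The household counts in the sketch echo the rounded survey-based figures cited in this testimony, while the means-test eligibility shares and the resulting totals are illustrative assumptions only, not the values reported in tables 1 and 2.

```python
# Illustrative sketch of the subsidy arithmetic; not the figures in tables 1 and 2.
# Household counts roughly follow the survey-based estimates cited in this testimony;
# the means-test eligibility shares are hypothetical placeholders.

BOX_PRICES = (50, 100)  # dollar price levels discussed above

households = {
    "case one": 21_000_000,  # over-the-air households only
    # case two upper bound: treats essentially all television households as needing a box
    "case two": 21_000_000 + 64_000_000 + 22_000_000,
}

# Hypothetical share of households passing each means test (not GAO's estimates).
means_tests = {"no means test": 1.00, "300% of poverty": 0.45, "200% of poverty": 0.30}

for case, n_households in households.items():
    for test, eligible_share in means_tests.items():
        for price in BOX_PRICES:
            cost = n_households * eligible_share * price  # one subsidized box per eligible household
            print(f"{case}, {test}, ${price} box: ${cost / 1e9:.2f} billion")
```

At a $100 box price with no means test, the case one figure works out to roughly $2.1 billion, which is in the same ballpark as the upper end of the ranges summarized later in this statement; the means-tested rows fall well below that.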
Table 1 provides the cost of a subsidy program under the assumption that cable and DBS providers downconvert broadcasters’ signals at their facilities in a manner that enables them to continue to transmit those signals to subscribers as they currently transmit broadcasters’ signals. In this case, cable or DBS subscribers do not require any new equipment, so only over-the-air households—approximately 21 million American households—would need new equipment. As shown in table 1, there is considerable variation in the cost of the subsidy program depending on the level of a means test and the price of the set-top box. Table 2 provides the cost of a subsidy program under the assumption that cable and DBS providers are required to transmit broadcasters’ digital signals in the same format as they are received. Under this scenario, nearly all over-the-air households and most cable and DBS subscribers will not have the equipment in place to view high-definition digital broadcast signals. Although subscribers typically rent, rather than purchase, set-top boxes, we assume that the same level of subsidy is provided to these households as is provided to over-the-air households to defray the cost of having to obtain a new or upgraded set-top box from their provider. There are two issues that stand as important caveats to the analyses we have presented on estimated set-top box subsidy costs. The first is that we based the majority of the analyses on survey results that provide information on the status of American television households as of early 2004. Over the next several years, new households will be established, some households might change the means through which they watch television, television sets with integrated digital over-the-air tuners as well as digital cable compatibility will be purchased, and some cable and DBS households will have obtained set-top boxes capable of receiving high-definition digital signals from their providers. Households’ purchase of certain new equipment could obviate the need for a subsidy for new television equipment. For example, some households may purchase a digital television set with an over-the-air tuner and begin to view digital broadcast signals in this manner; some large televisions sold today are required to include such a tuner, and by July 2007, all television sets larger than 13 inches are required to include a tuner. In time, these factors could have the effect of reducing the cost of a set-top box subsidy because fewer households would need to be subsidized. The second caveat to these analyses is that these subsidy estimates do not include any costs associated with implementing a subsidy program. If the federal government determines that it would be worthwhile to provide this subsidy, the subsidy would need to be administered in some fashion, such as through a voucher system, a tax credit, a mail-in rebate, government distribution of equipment, or some other means. Any of these methods would impose costs that could be significant for the federal government and any other entities involved in administering the program. Such costs would be difficult to estimate until a host of decisions are made about how a subsidy program would be administered. As I mentioned earlier, our work on the DTV transition continues, and we will provide more information in a report later this year. We will discuss various ways that a subsidy program might be administered and provide some analysis of the benefits and drawbacks of these various methods. 
We will also provide a discussion of how information regarding the DTV transition and any associated subsidy program might best be provided to the American people. Mr. Chairman, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee may have at this time. Contact and Acknowledgments For questions regarding this testimony, please contact Mark L. Goldstein on (202) 512-2834 or [email protected]. Individuals making key contributions to this testimony included Amy Abramowitz, Dennis Amari, Michael Clements, Andy Clinton, Michele Fejfar, Simon Galed, Eric Hudson, Catherine Hurley, Bert Japikse, Sally Moino, Karen O’Conor, and Madhav Panwar. Appendix I: Methodology for Use of Survey Data Regarding Television Viewing To obtain information on the types of television service and equipment used by U.S. households, we purchased existing survey data from Knowledge Networks Statistical Research. Their survey was completed with 2,375 of the estimated 5,075 eligible sampled individuals for a response rate of 47 percent; partial interviews were conducted with an additional 96 people, for a total of 2,471 individuals completing some of the survey questions. The survey was conducted between February 23 and April 25, 2004. The study procedures yielded a sample of members of telephone households in the continental United States using a national random-digit dialing method. Survey Sampling Inc. (SSI) provided the sample of telephone numbers, which included both listed and unlisted numbers and excluded blocks of telephone numbers determined to be nonworking or business-only. At least five calls were made to each telephone number in the sample to attempt to interview a responsible person in the household. Special attempts were made to contact refusals and convert them into interviews; refusals were sent a letter explaining the purpose of the study and an incentive. Data were obtained from telephone households and are weighted by the number of household telephone numbers. As with all sample surveys, this survey is subject to both sampling and nonsampling errors. The effect of sampling errors due to the selection of a sample from a larger population can be expressed as a confidence interval based on statistical theory. The effects of nonsampling errors, such as nonresponse and errors in measurement, may be of greater or lesser significance but cannot be quantified on the basis of available data. Sampling errors arise because of the use of a sample of individuals to draw conclusions about a much larger population. The study’s sample of telephone numbers is based on a probability selection procedure. As a result, the sample was only one of a large number of samples that might have been drawn from the total telephone exchanges from throughout the country. If a different sample had been taken, the results might have been different. To recognize the possibility that other samples might have yielded other results, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval. We are 95 percent confident that when only sampling errors are considered each of the confidence intervals in this report will include the true values in the study population. All percentage estimates from the survey have margins of error of plus or minus 6 percentage points or less, unless otherwise noted. 
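For illustration, the margin of error for a single proportion under simple random sampling can be computed as follows. This simplified calculation ignores the survey's weighting and any design effects, so it understates the margins reported above; it is included only to show the mechanics behind a 95 percent confidence interval.

```python
import math

# Simple-random-sampling margin of error at 95 percent confidence for a proportion.
# Ignores weighting and design effects, so it understates the survey's reported margins.

def margin_of_error(p: float, n: int, z: float = 1.96) -> float:
    return z * math.sqrt(p * (1 - p) / n)

n_completed = 2_375        # completed interviews reported in this appendix
for p in (0.19, 0.57):     # e.g., the over-the-air and cable shares
    moe = margin_of_error(p, n_completed)
    print(f"p = {p:.2f}: +/- {moe * 100:.1f} percentage points")
```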
In addition to the reported sampling errors, the practical difficulties of conducting any survey introduce other types of errors, commonly referred to as nonsampling errors. For example, questions may be misinterpreted, some types of people may be more likely to be excluded from the study, errors could be made in recording the questionnaire responses into the computer-assisted telephone interview software, and the respondents’ answers may differ from those who did not respond. Knowledge Networks has been fielding versions of this survey for over 20 years. In addition, to reduce measurement error, Knowledge Networks employs interviewer training, supervision, and monitoring, as well as computer-assisted interviewing to reduce error in following skip patterns. For this survey, the 47 percent response rate is a potential source of nonsampling error; we do not know if the respondents’ answers are different from the 53 percent who did not respond. Knowledge Networks took steps to maximize the response rate—the questionnaire was carefully designed and tested through deployments over many years, at least five telephone calls were made at varied time periods to try to contact each telephone number, the interview period extended over about 8 weeks, and attempts were made to contact refusals and convert them into interviews. Because we did not have information on those contacted who chose not to participate in the survey, we could not estimate the impact of the nonresponse on our results. Our findings will be biased to the extent that the people at the 53 percent of the telephone numbers that did not yield an interview have different experiences with television service or equipment than did the 47 percent of our sample who responded. However, distributions of selected household characteristics (including presence of children, race, and household income) for the sample and the U.S. Census estimate of households show a similar pattern. To assess the reliability of these survey data, we reviewed documentation of survey procedures provided by Knowledge Networks, interviewed knowledgeable officials about the survey process and resulting data, and performed electronic testing of the data elements used in the report. We determined that the data were sufficiently reliable for the purposes of this report. Due to limitations in the data collected, we made several assumptions in the analysis. Number of televisions and number of people in the household were reported up to five; households exceeding four for either variable were all included in the category of five or more. For the purposes of our analyses, we assumed that households had no more than five televisions that would need to be transitioned and no more than five people. Number of people in the household was only used in calculating poverty, but may result in an underestimate of those households in poverty. Calculations of poverty were based on the 2004 Poverty Guidelines for the 48 contiguous states and the District of Columbia, published by the Department of Health and Human Services. We determined whether or not each responding household would be considered poor at roughly 200 percent and 300 percent of the poverty guidelines. Income data were reported in categories so the determination of whether or not a household met the 200 percent or 300 percent threshold required approximation, and for some cases this approximation may have resulted in an overestimate of the number of poor households. 
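The 200 percent and 300 percent screens can be sketched as a small routine. The 2004 guideline parameters used below, a base amount for a one-person household plus a fixed increment for each additional person, are recalled from the published 2004 HHS guidelines and should be treated as an assumption; they do reproduce the $18,850 family-of-four figure cited earlier in this statement.

```python
# Sketch of the 200/300 percent poverty-guideline screens described above.
# The 2004 parameters below (48 contiguous states and D.C.) are assumptions recalled
# from the published 2004 HHS guidelines; they reproduce the $18,850 figure cited above.

BASE_2004 = 9_310          # guideline for a one-person household
PER_PERSON_2004 = 3_180    # added for each additional person

def poverty_guideline(household_size: int) -> int:
    return BASE_2004 + PER_PERSON_2004 * (household_size - 1)

def eligible(income: float, household_size: int, multiple: float) -> bool:
    """True if income falls under the chosen multiple of the guideline."""
    return income < multiple * poverty_guideline(household_size)

print(poverty_guideline(4))         # 18850, matching the figure cited above
print(eligible(35_000, 4, 2.0))     # 200 percent threshold is 37,700 -> True
print(eligible(60_000, 4, 3.0))     # 300 percent threshold is 56,550 -> False
```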
In addition, income data were missing for 24 percent of the respondents. To conduct the analyses involving poverty, we assumed that the distribution of those in varying poverty status was the same for those reporting and not reporting income data. Comparisons of those reporting and not reporting income data show some possible differences on variables examined for this report; however, the income distribution is very close to the 2003 income estimates published by the U.S. Census Bureau. To determine total numbers of U.S. households affected by the transition and total cost estimates for various transition scenarios, we used the U.S. Census Bureau’s Current Population Survey estimate of the total number of households in the United States as of March 2004. To derive the total number of households covered by the various scenarios, we multiplied this estimate by the proportions of households covered by the scenarios derived from the survey data. The standard error for the total number of U.S. households was provided by the Census Bureau, and the standard errors of the total number of households covered by the scenarios take into account the variances of both the proportions from the survey data and the total household estimate (a sketch of this calculation appears at the end of this appendix). All cost estimates based on the survey data have margins of error of plus or minus 16 percent or less. In addition, we contracted with Knowledge Networks to recontact a sample of their original 2004 survey respondents in October 2004. Households were randomly selected from each of three groups: broadcast-only television reception, cable television service without a set-top box, and satellite television service. For each group, 102 interviews were completed, yielding 306 total respondents (for a 63 percent response rate). To reduce measurement error, the survey was pretested with nine respondents, and Knowledge Networks employed interviewer training, supervision, and monitoring, as well as computer-assisted interviewing, to reduce error in following skip patterns. Due to the small sample size, the findings of these questions are not generalizable to a larger population. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
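The scaling of a survey proportion to a national household count, together with one standard way of combining the two sources of sampling error, can be sketched as follows. The standard errors used in the example are hypothetical placeholders rather than GAO's or the Census Bureau's published values, and treating the two estimates as independent is an assumption.

```python
import math

# Scaling a survey proportion to a national household count, with a standard
# approximation for the variance of a product of independent estimates.
# The standard errors below are hypothetical placeholders.

def scaled_total(p, se_p, total_households, se_total):
    estimate = p * total_households
    # Goodman-style approximation: Var(p*H) ~ H^2*Var(p) + p^2*Var(H) + Var(p)*Var(H)
    variance = (total_households ** 2 * se_p ** 2
                + p ** 2 * se_total ** 2
                + se_p ** 2 * se_total ** 2)
    return estimate, math.sqrt(variance)

# Example: a 19 percent proportion applied to roughly 112 million households,
# the household base implied by the percentages and counts reported earlier.
est, se = scaled_total(p=0.19, se_p=0.01, total_households=112_000_000, se_total=400_000)
print(f"about {est / 1e6:.1f} million households, standard error about {se / 1e6:.1f} million")
```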
The digital television (DTV) transition offers the promise of enhanced television services. At the end of the transition, radiofrequency spectrum used for analog broadcast television will be used for other wireless services and for critical public safety services. To spur the digital transition, some industry participants and experts have suggested that the government may choose to provide a subsidy for set-top boxes, which can receive digital broadcast television signals and convert them into analog signals so that they can be displayed on existing television sets. This testimony provides information on (1) the current distribution of American households by television viewing methods and whether there are demographic differences among these groups; (2) the equipment required for households to receive digital broadcast signals; and (3) the estimated cost to the federal government, under various scenarios, of providing a subsidy for set-top boxes that would enable households to view digital broadcast signals. We developed estimates of the cost of a subsidy for set-top boxes using data on household television characteristics, expected set-top box costs, and varied assumptions about how certain key regulatory issues will be decided. The three primary means through which Americans view television signals are over the air, cable, and direct broadcast satellite (DBS). GAO found that 19 percent, or roughly 21 million American households, rely exclusively on free over-the-air television; 57 percent, or nearly 64 million households, view television via a cable service; and 19 percent, or about 22 million households, have a subscription to a direct broadcast satellite (DBS) service. On average, over-the-air households are more likely to have lower incomes compared to cable and DBS households. While 48 percent of over-the-air households have incomes under $30,000, roughly 29 percent of cable and DBS households have incomes less than that level. Also, 6 percent of over-the-air households have incomes over $100,000, while about 13 percent of cable and DBS households have incomes exceeding $100,000. The specific equipment that each household needs to transition to DTV--that is, to be able to view digital broadcast signals--depends on the method through which the household watches television, whether the household has already upgraded its television equipment to be compatible with DTV, and the resolution of certain key regulatory issues. GAO examined two key cases regarding the regulatory issues. The assumption for case one is that cable and DBS providers would continue providing broadcasters' signals as they currently do, thus eliminating the need for their subscribers to acquire new equipment. In this case, only households viewing television using only an over-the-air antenna would need to take action to be able to view broadcasters' digital signals. The assumption for the second case is that cable and DBS providers would be required to provide broadcasters' digital signals to subscribers in substantially the same format as broadcasters transmitted those signals. This would require cable and DBS subscribers, in addition to over-the-air households, to have equipment in place to be able to receive their providers' high-definition digital signals. 
If a subsidy for set-top boxes is only needed for over-the-air households (case one), GAO estimates that its cost could range from about $460 million to about $2 billion, depending on the price of the set-top boxes and whether a means test--which would limit eligibility to only those households with incomes lower than some specified limit--is employed. If cable and satellite subscribers also need new equipment (case two), the cost of providing the subsidy could range from about $1.8 billion to approximately $10.6 billion. We provided a draft of this testimony to the Federal Communications Commission (FCC) for their review and comment. FCC staff provided technical comments that we incorporated where appropriate.
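As a rough arithmetic cross-check of the upper ends of these ranges (the lower ends depend on means-test eligibility rates that are not restated here), multiplying the rounded household counts by the $100 box price comes close to the reported figures; the small differences presumably reflect unrounded counts and households that already own suitable equipment.

```python
# Back-of-the-envelope check of the upper ends of the subsidy ranges above,
# using the rounded household counts and the $100 box price from this testimony.
# This is a consistency check, not a reproduction of GAO's estimates.

box_price = 100
over_the_air = 21_000_000
cable = 64_000_000
dbs = 22_000_000

case_one_upper = over_the_air * box_price
case_two_upper = (over_the_air + cable + dbs) * box_price

print(f"case one upper bound: ${case_one_upper / 1e9:.1f} billion")  # ~2.1, vs. about $2 billion
print(f"case two upper bound: ${case_two_upper / 1e9:.1f} billion")  # ~10.7, vs. about $10.6 billion
```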
Background The Mint and BEP, which are part of the Treasury Department, produce the nation’s coins and currency. The Mint primarily produces circulating coins in Denver, Colorado, and Philadelphia. It also makes numismatic coins and medals and stores over $100 billion in government precious metals at facilities in Denver; Fort Knox; Philadelphia; San Francisco, California; Washington, D.C.; and West Point, New York. BEP produces (1) the nation’s currency for the Federal Reserve System, (2) many security documents that the federal government issues, and (3) some postage stamps. Its production facilities are in Washington, D.C., and Ft. Worth, Texas. During fiscal year 2002, the Mint produced and shipped about 15 billion circulating coins at a cost of $430.9 million, including $47.2 million for security. BEP produced and shipped about 7.1 billion Federal Reserve notes in 2002 at a cost of $376.7 million, including $33.2 million for security. The authority of the Mint and BEP to establish police forces is derived from 40 U.S.C. § 1315, which provides the Mint and BEP police with powers to enforce federal laws and regulations for the protection of individuals and property, including making arrests and carrying firearms. Prior to the enactment of the Homeland Security Act of 2002, the Administrator of the General Services Administration (GSA), through GSA’s Federal Protective Service (FPS), was responsible for policing government buildings under GSA’s control and had delegated this responsibility to the Secretary of the Treasury who redelegated it to the Mint and BEP. Although the Homeland Security Act amended 40 U.S.C. § 1315 by transferring responsibility for this policing authority to the Secretary of the Department of Homeland Security (DHS), the savings provisions in the act state that the existing delegations will continue to apply. Additional security legislation found in Public Law 104-208 (1996) provides Mint and BEP police officers with the authority to carry out their duties on Mint and BEP property and the surrounding areas and while transporting coins, currency, and other agency assets. The primary mission of the Secret Service is to protect the President and other individuals, enforce the nation’s counterfeiting laws, and investigate financial crimes. In carrying out this mission, the Secret Service’s Uniformed Division also protects the buildings in which the people it protects are located, such as the White House complex, the Treasury Department headquarters building and annex, the Vice President’s residence, and foreign diplomatic missions. The Uniformed Division has statutory authority to carry out its duties under 3 U.S.C. § 202 and 18 U.S.C. § 3056, including the power to make arrests, carry firearms, and execute warrants issued under the laws of the United States. The Secret Service’s jurisdiction extends throughout the United States on mission-related work. How Security Is Provided at the Mint, BEP, and Selected Other Organizations The Mint and BEP use their own police forces to protect their facilities and the money they produce. Eight of the 12 coin and currency organizations in the other G7 nations responded to our requests for information. Four organizations reported that they only used their own security forces; 2 organizations said they used their own security forces supplemented with contractor personnel; 1 organization said it used an outside agency to supplement its own security force; and 1 organization said that it used an outside agency to provide its security. 
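For perspective, the fiscal year 2002 production and security figures cited above imply rough per-unit costs and security shares. The simple division below is purely illustrative; these derived values are not reported figures.

```python
# Rough per-unit figures implied by the fiscal year 2002 totals cited above.
# Simple division of reported totals; the derived values are illustrative only.

mint = {"units": 15_000_000_000, "cost": 430_900_000, "security": 47_200_000}  # circulating coins
bep = {"units": 7_100_000_000, "cost": 376_700_000, "security": 33_200_000}    # Federal Reserve notes

for name, d in (("Mint (per coin)", mint), ("BEP (per note)", bep)):
    per_unit_cents = 100 * d["cost"] / d["units"]
    security_share = 100 * d["security"] / d["cost"]
    print(f"{name}: about {per_unit_cents:.1f} cents, security about {security_share:.0f}% of cost")
```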
The six casino and banking businesses that we contacted, which handle large amounts of cash, used either their own security staff or contract staff. In general, the businesses that used their own employees to provide security said they did so to maintain greater control over their security operations, while the businesses that used contract security personnel generally said they did so because it was less costly. Mint and BEP Police Forces As of March 2003, the Mint had 381 police officers. It also employed 38 people to provide administrative support for its security operations. BEP had 209 police officers as of March 2003. It also employed 36 people to provide administrative support for its security operations. In addition, BEP employed 79 security specialists, investigators, and security managers who BEP does not count as police officers, but who are licensed and trained to carry firearms and can provide back-up for the police. BEP conducts most of its own background investigations, while the Mint contracts out this work. The Mint and BEP police primarily provide security by guarding entry and exit at the agencies’ facilities and conducting electronic surveillance. In contrast to the Secret Service, which is concerned primarily with protecting individuals and, as part of that mission, controlling public access into protected facilities, the Mint and BEP police are focused on preventing employees from taking coins and currency from the facilities. Both the Mint and BEP police use outside experts to conduct threat assessments regarding their facilities and to make recommendations for security improvements. The Mint and BEP police provide security for production facilities that are not located in the same cities. The Mint police provide protection at the primary coin production facilities in Denver and Philadelphia; the facilities in San Francisco and West Point, which produce numismatic coins; the Ft. Knox facility, where gold and other precious metals are stored; and the Mint’s Washington, D.C., headquarters. The BEP police provide protection at BEP’s Washington, D.C., headquarters and at currency production facilities in Washington, D.C., and Ft. Worth. Because both the Mint and BEP protect money producing facilities, the two agencies have considered merging their police forces. According to the Mint, a combined police force could exercise greater flexibility in deploying security personnel in response to emergencies. However, the Mint also said that (1) because of the geographic dispersion of the Mint’s and BEP’s production facilities, the number of police positions that could be eliminated through a merger of the police forces would be limited and (2) all Mint and BEP police officers would have to be trained in the security aspects of both the coin and currency production processes. BEP management was opposed to merging the Mint and BEP police forces because the centralization of the forces would not necessarily lead to a more effective security effort, and these officials raised questions regarding managerial controls, allocation of resources and funds, and accountability. BEP management noted that because Mint and BEP production facilities are not located in the same cities, local supervision still would be needed at each facility. Although the Mint and BEP are not pursuing a merger of their police forces, they are considering sharing certain security-related functions. 
In April 2003, Mint and BEP officials met to discuss the sharing of security-related services and agreed to share intelligence information, and they are studying the feasibility of jointly conducting drug testing and background investigations. Appendix II provides specific information regarding Mint and BEP police forces in terms of the facilities they protect, job classifications, number of police, application requirements, starting salaries, attrition rates, and training requirements. Security Arrangements at Money Producing Facilities in Other Countries We sent questionnaires to both the coin and currency producing organizations in the six other G7 nations (Canada, France, Germany, Italy, Japan, and the United Kingdom) requesting information about who provides their security and whether they had experienced thefts from 1993 through 2002. Eight of the 12 coin and currency producing organizations responded to our requests for information. Four organizations reported that they only used their own security forces; 2 organizations said they used their own security forces supplemented with contractor personnel; 1 organization said it used its own security force and personnel from the country’s customs agency; and 1 organization said that the country’s Ministry of Defense provided its security. Two of the 8 organizations reported that they had experienced thefts of $1,000 or more over the last 10 years; 1 of those organizations was protected by its own security force, and the other was protected by the country’s Ministry of Defense. The organization that was protected by its own security force reported experiencing two thefts. One incident involved an employee’s theft of gold that was worth about $40,000. The other incident involved two employees’ theft of error coins worth about $1,000 to coin collectors. The second organization, which was protected by the country’s Ministry of Defense, reported that currency worth about $40,200 was stolen from its facilities. The other 6 organizations that responded said they had not experienced any thefts of $1,000 or more over the last 10 years. Security Arrangements at Businesses that Handle Large Amounts of Cash We contacted four banks and two casinos regarding who provides their security and why because, like the Mint and BEP, these entities also handle large amounts of cash. The security director for one banking company said that it only uses its own security guards in its major cash vault facilities, which may contain hundreds of millions of dollars. He said that from his company’s assessment of risk factors and experiences, it appeared that its own well-trained, well-paid security guards are more dependable, reliable, and honest than contract guards. The security directors at the three other banks we interviewed said that they used contract security personnel to provide their security because of the cost advantages compared with hiring in-house staff. Of those three companies that used contract guards, one also used in-house staff to supervise contract personnel and to guard its cash vault operations. Security directors from two major casino companies both said that they employ their own security staff, rather than using contract staff. The security director of the first company said that using its own security staff provides the company with more control, for example, by conducting background investigations on staff to ensure their suitability. 
Similarly, the security director of the second company said that it is difficult to maintain supervisory control or take corrective actions over contract security officers. Security Arrangements at the Federal Reserve System The Federal Reserve System, the nation’s central bank, employs its own police force. Security personnel were granted federal law enforcement authority under the Uniting and Strengthening America by Providing Appropriate Tools Required to Intercept and Obstruct Terrorism (USA Patriot) Act of 2001. A Federal Reserve security official said that the Federal Reserve preferred to use its own police force because it is important to (1) know the officers and their training and capabilities and (2) have the police force under its management control. The Federal Reserve reported that it had experienced 12 internal thefts by its employees in the past 10 years totaling $516,080, of which $239,562 has been recovered to date. Thefts at the Mint and BEP and Steps They Have Taken to Prevent Such Incidents We asked the Mint and BEP to provide information regarding thefts that occurred over the last decade. We reviewed these incidents with Mint and BEP security officials in terms of what happened, why, and how they occurred, and what steps have been taken to prevent thefts from recurring. According to Mint and BEP security officials, the thefts did not occur because of deficiencies in the existing security forces, but were the result of breaches of trust by employees. Further, both the Mint and BEP have had threat assessments conducted regarding their facilities and have generally implemented the recommendations that were made to improve security. As a result, it does not appear that the Mint and BEP have major security gaps that they are not currently addressing. Thefts at the Mint Although we requested that the Mint provide information on thefts that occurred during the past 10 years, the Mint indicated that it did not have records of thefts that occurred more than 5 years ago and provided records regarding incidents that occurred from 1998 through 2002. The Mint’s records indicated that about $92,862 of government property (primarily coins) was stolen during that time, including $80,000 total market value of coins with production errors (“error coins”) that a Mint employee stole from 1998 through 2000. The records reflected 74 reports of theft that occurred both inside and outside of Mint facilities. They also indicated that two employees were convicted of stealing from the Mint; an employee who stole 400 to 500 error coins was sentenced to prison, and another employee who was caught stealing 35 1-dollar coins was sentenced to probation. Mint records indicated that about $82,357 of property (primarily coins) was reported stolen from inside Mint facilities from 1998 through 2002, involving 28 incidents, including the following: the theft by an employee of 400 to 500 coins, including error coins with a total value to coin collectors of about $80,000, and 27 other incidents involving the theft of $2,357 of coins and government property, such as office and production equipment, including 10 incidents involving coins found on employees or contractors with a face value of at least $36. 
Outside of Mint facilities, about $10,505 in government property was reported stolen, involving 46 incidents, including the following: 33 reports from Mint customers who claimed that they did not receive coins sent through the mail, valued at $6,357; 9 reports of other stolen property, such as coins, coin blanks (coins that have not yet been stamped), and office equipment, valued at $1,356; 3 reports of penny blanks that were stolen from rail cars in 1999 and 2000, valued at about $592; and 1 report of $2,200 in nickels that were stolen while being transported by truck in 2000. We asked the Mint Police Chief whether the thefts occurred due to deficiencies in the police force and what has been done to prevent thefts from recurring. The Police Chief said that the incidents occurred because of an abuse of trust by employees, which he said that no police force could prevent. Regarding the most serious incident—the theft of 400 to 500 coins by a Mint employee from 1998 through 2000—the Police Chief said that this occurred when the Mint was producing a high volume of coins and new production equipment was installed at the Philadelphia facility, which began producing large numbers of error coins. He said that thousands of error coins were on the production floor during this period. He also said that because the first coin made in a batch was not being checked before continuing a production run, many error coins were produced before corrections were made to the equipment. The Police Chief said that the employee was able to take the coins out of the Mint facility over that 3-year period because he did not exceed the threshold set to trigger the metal detectors. The steps that the Mint took to prevent thefts from recurring did not involve improvements to the police force, but concerned improved internal controls and production procedures. For example, to prevent thefts of error coins, the Mint has required that the first coin produced in a batch be checked for errors; that new equipment be used to quickly destroy error coins once they are made, rather than having them brought to the metal fabricator to be melted; that a report be prepared and provided to the police chief each time an error coin is produced; and that error coins be locked up. The Mint also is in the process of sealing off the production areas from the rest of the production facility. In addition, the Mint is considering requiring production employees to wear uniforms, which would not have pockets or pants cuffs where coins could be hidden. Regarding the coins that Mint customers purchased, but claimed that they did not receive, the Mint’s Police Chief said the Mint has joined the U.S. Postal Service’s interagency fraud group, which helps to identify postal addresses that could be used to fraudulently order coins. Regarding the theft of coins while being transported, the Mint Police Chief said that coins are transported by contractors and that the government is fully insured for their loss. The Police Chief said that the Mint tries to minimize thefts by employees by having background investigations conducted on personnel hired and by severely punishing those who are caught stealing. The Mint indicated that it conducts threat assessments of its facilities every 5 years. In December 2000, Sandia National Laboratories assessed Mint facilities and made 42 recommendations to improve security in its report. None of these recommendations pertained to improvements in the police force, for example, in terms of the officers’ training or skills. 
In February and March 2003, we visited the Mint's Philadelphia and Ft. Knox facilities and found that 9 of the 13 recommendations contained in the Sandia report pertaining to those facilities were fully implemented. Of the 4 recommendations that had not been implemented, the Mint indicated that it plans to implement 3 of them. The fourth recommendation had not been implemented because the Mint believed, and we agreed, that it detracted from, rather than enhanced, security. The Mint is also in the process of conducting other security reviews in connection with countering possible terrorist threats. To avoid possibly compromising security, we are not discussing in this report the specific subjects of the ongoing reviews or the specific security recommendations contained in past assessments. We also contacted two coin dealers who specialize in buying and selling error coins to ask about the recent circulation of such coins. The head of one firm said that the number of error coins that he has seen has dropped significantly since the spring of 2001. The head of the other firm said that he is now seeing the smallest number of error coins in decades. 
Thefts at BEP 
BEP reported 11 incidents of theft from 1993 through 2002 involving about $1.8 million. According to BEP, seven employees were convicted of theft in connection with these incidents, including one employee who was sentenced to prison, and about $1.5 million of the stolen money was later recovered. The incidents included the theft of $1,630,000 in test $100 bills from BEP's Advanced Counterfeit Deterrence Vault by a program manager in 1994 ($1.3 million of the stolen money was later recovered); $60,000 from a Federal Reserve vault inside BEP facilities by three BEP employees; $30,000 in blank, engraved $100 bills in 2001 by a former BEP currency employee; $20,960 worth of stamps by a postage stamp worker in 1996, which was later recovered; $2,000 (100 $20 notes) by a machine operator in 1993 (most of which were later recovered); a 32-note sheet of $10 blank engraved notes by a contract cleaning employee in 1993; and $25 in worn and soiled currency in 1999 by a currency examiner, who also admitted to taking $250 on one occasion and $400 on two other occasions. No suspects were identified with respect to four other security incidents. Three of the four incidents involved $5,500 in currency that was reported missing from BEP facilities in 1997 and 1998. The fourth incident involved the recovery from Atlantic City casinos in 1996 and 1997 of $16,000 in unfinished notes produced by BEP. We asked BEP's Security Chief whether the thefts occurred because of deficiencies in the police force and what has been done to prevent thefts from recurring. The Security Chief said that the incidents did not occur because of deficiencies in the police force, but were due to a breach of trust by employees. Further, he said that bags and purses that employees carry to work may be searched without probable cause when employees leave the facilities, but that BEP police must establish probable cause before searching an employee. He also pointed out that in some cases, the currency and postage stamps that employees attempted to steal did not leave BEP facilities because the police were effective in preventing removal of the items. 
BEP's Security Chief said that the measures taken to prevent the recurrence of thefts include implementing video surveillance of production staff, reducing the amount of money in the vault where $1.6 million was stolen in 1994, increasing the number of layers of wrap surrounding the currency after it is produced, rewrapping currency in the presence of security personnel when the original wrap has been damaged due to handling, increasing the number of police patrols in certain areas, having currency transported by at least two authorized personnel, and having the word 'TEST' imprinted on test currency. In June 1994, following a BEP employee's theft of $1.6 million in test currency from BEP's Washington, D.C., production facility, the Treasury Department directed that steps be taken to improve the security and internal controls at BEP, including an in-depth physical security review to be conducted by the Secret Service. In December 1994, the Secret Service completed its review and recommended 343 security improvements at BEP. Also, BEP contracted with KPMG Peat Marwick to review internal controls at BEP's production facilities. In January 1995, KPMG made 134 recommendations for internal control improvements. Further, in September 1999, BEP contracted with the consulting firm Kelly, Anderson & Associates to review, evaluate, and document security and internal control corrective actions taken by BEP. Kelly Anderson reported in February 2000 that 19 of the Secret Service's recommendations and 7 of the KPMG recommendations needed additional effort. In February and March 2003, we found that BEP had fully implemented 14 of the 19 Secret Service recommendations and is in the process of implementing another. BEP indicated that it did not intend to fully implement the other 4 recommendations (3 of the 4 were partially implemented) for cost and other reasons; we did not believe these decisions represented major gaps in security. We also selected a random sample of 20 other high-risk Secret Service and KPMG recommendations pertaining to that facility and verified that they had been implemented. Three of the Secret Service recommendations directly pertained to the police force. Two of the recommendations were to improve police training, and the third was to improve background checks on police before they are hired. Kelly Anderson reported in 2000 that these recommendations were fully implemented. BEP's Security Chief said that, in addition to the agency's ongoing assessments of terrorist-related threats, BEP is planning to have a contractor further assess terrorist threats and possible countermeasures. To avoid possibly compromising security, we are not discussing in this report what the future threat assessment would encompass or the specific security recommendations contained in past assessments. 
Potential Benefits and Costs of Having the Secret Service Provide Mint and BEP Security 
According to the Secret Service, if it were given the responsibility of protecting the Mint and BEP, those agencies could benefit from the Secret Service's expertise in protection and criminal investigations. However, unlike the Secret Service's Uniformed Division, the Mint and BEP police are already familiar with the coin and currency production processes, which is an advantage in identifying security risks. 
In addition, the government would incur additional costs for the initial training of police and retirement benefits if the Secret Service assumed responsibility for protecting the Mint and BEP. Secret Service’s Uniformed Division The Secret Service’s Uniformed Division consists of police officers whose duties are focused on the agency’s protective responsibilities, which are to protect the President and other individuals. As of February 2003, the Uniformed Division had 1,106 officers. The Secret Service requires Uniformed Division officers to obtain top-secret security clearances and submit to a polygraph test, which the Mint and BEP do not. The Secret Service also requires its officers to receive more initial training than the Mint and BEP police, and the Secret Service’s training is focused on its protective mission. Appendix III provides Uniformed Division data regarding facilities that the Secret Service officers protect, number of police, application requirements, starting salaries, attrition rates, and training requirements. We asked the Secret Service to provide data on the number and types of crimes and arrests that had occurred at the White House complex (which includes the White House, the Eisenhower Executive Office Building, and the New Executive Office Building) and the adjacent Treasury Department headquarters building and annex during the last 10 years. It reported an average of 1,574 incidents each year at these facilities from 1993 through 2002. The Secret Service reported, for example, in 2002, 34 arrests, 30 bomb threats, 5 demonstrations, 177 incidents of weapons (not firearms) found during magnetometer checks, 3 fence jumpers and unlawful entries, and 44 suspicious packages and vehicles. We also asked the Secret Service to break down the types of arrests that were made at the White House complex and the Treasury Department headquarters and annex during the past 10 years. The data indicated that from 1993 through 2002, the Secret Service made 72 arrests for unlawful entry, 66 of which were in the White House complex, and 25 arrests for theft in the area surrounding the White House complex (none of the arrests for thefts were reported as having occurred within the White House complex or the Treasury Department building). In providing the data regarding the number of security incidents that occurred at facilities protected by the Secret Service, the Secret Service emphasized that the Uniformed Division has a different mission than the Mint and BEP. The Secret Service said that the Uniformed Division is concerned primarily with protecting individuals and, as part of that mission, controlling public entry into its protected facilities. By comparison, the Mint and BEP police forces are concerned primarily with the theft of coins and currency by their agencies’ own employees from their respective facilities. According to the Secret Service, this difference between the missions of the Uniformed Division and the Mint and BEP is substantial and unique, and to compare data regarding the number of security incidents that occurred at facilities protected by the Uniformed Division and the Mint and BEP would result in an unfair analysis of the abilities and actions of the Uniformed Division. 
We are not implying that these data are similar or comparable; we present these data to illustrate the differences between the types and number of security incidents that are handled by the Secret Service and the Mint and BEP, which reflect their different missions, and to show that facilities protected by the Secret Service are not crime-free. The Chief of the Uniformed Division said that assuming the additional responsibility of protecting the Mint and BEP would result in the dilution of the Secret Service’s core protective responsibilities. He said that giving the Secret Service responsibility for the security of Mint and BEP facilities would divert from the agency’s core protective mission and would cause a staffing shortage. Further, he said that it would not be in the Secret Service’s best interests to take on the additional responsibility of providing security for the Mint and BEP at a time when the effect of transferring the Secret Service from the Treasury Department to DHS is undetermined. Mint and BEP officials were opposed to having an outside law enforcement agency assume responsibility for their security functions because they said that security is best accomplished by their own employees who are familiar with the agencies’ internal operations and the coin and currency production processes. Mint and BEP officials also said that their police officers have opportunities for advancement through promotion to supervisory positions. BEP also said that police are encouraged to transfer into career security positions, such as general investigator and security specialist. However, they also said that a larger agency such as the Secret Service may offer more opportunities for advancement. We asked the Secret Service to provide data on the number of Uniformed Division officers who had become special agents at the agency from fiscal years 1998 to 2002 and found that relatively few officers had become agents. (Duties of special agents include investigation and protection, while the mission of Uniformed Division officers is focused on protection.) The data indicated that an average of 21 officers had become special agents each year during that 5-year period out of an average Uniformed Division workforce of about 1,040 officers, or about 2 percent. If the Mint and BEP police became part of the Uniformed Division and there was a rotation of duties, the Secret Service’s mission of protecting the president and providing security at national special security events could be more appealing to some police officers, compared with the routine nature of protecting Mint and BEP facilities. The Mint’s Police Chief said that, to provide variety in the work of Mint police officers and to increase morale, up to 50 Mint police officers a year help the Uniformed Division perform duties at special events—for example, at the Olympics. Potential Costs Associated with Having the Secret Service Protect the Mint and BEP If the Secret Service protected the Mint and BEP, the government could incur additional costs because the Secret Service requires more initial training for its officers than the Mint and BEP police, Uniformed Division officers can retire with less government service than the Mint and BEP police, and the Secret Service would have to increase management and overhead to handle the additional workforce. 
Further, it is unknown how many Mint and BEP police officers would be able to meet the Secret Service’s hiring standards or what the costs would be of absorbing these officers into the Secret Service’s retirement system. The Uniformed Division provides new hires with 6 more weeks of initial training than the Mint police and 1 more week of training than the BEP police. The Uniformed Division spends an average of $20,033 per officer for initial training, compared with $16,306 per officer at the Mint and $18,791 per officer at BEP. The government also could be expected to incur higher retirement costs if the Secret Service protected the Mint and BEP because Uniformed Division officers receive federal law enforcement retirement benefits, which allows them to retire after 20 years of service at age 50 or at any age with 25 years of service. By comparison, Mint and BEP police receive standard retirement benefits for federal employees, which generally allow them to retire after 30 years of service at age 55 if covered by the Civil Service Retirement System (CSRS) or after 30 years of service under the Federal Employees Retirement System (FERS). Agency contributions for employees receiving federal law enforcement retirement benefits are 31.4 percent for employees in CSRS and 22.7 percent for employees in FERS. By comparison, agency contributions for employees receiving standard retirement benefits are 17.4 percent for employees in CSRS and 10.7 percent for employees in FERS. Further, because employees receiving federal law enforcement retirement benefits may retire sooner than those who do not receive such benefits, it is likely that there would be higher turnover in the police force, resulting in the need to train more officers and, thus, in higher training costs over time. If the Secret Service assumed responsibility for protecting the Mint and BEP and added 590 officers to its Uniformed Division to carry out that responsibility, the size of the Uniformed Division’s police force of 1,106 officers would increase by about 50 percent. Such an increase would likely require the Secret Service to add additional overhead and resources to manage the additional workforce. However, there also could be an offset by reducing or possibly eliminating similar positions at the Mint and BEP. It was not possible to estimate during our review what additional people and facilities would be needed or what cost would be incurred. In addition, if the Secret Service assumed responsibility for protecting the Mint and BEP, it is unknown how many of the Mint and BEP police would qualify to become part of the Uniformed Division, considering that applicants to become Uniformed Division officers are required to submit to a polygraph test and obtain top-secret security clearances, which are not required for Mint and BEP police. According to the Secret Service, for example, a substantial number of applicants for the position of Uniformed Division officer are rejected at the polygraph stage of the process. The Secret Service also requires applicants to meet certain physical fitness standards. Lastly, for those Mint and BEP police hired by the Uniformed Division, there would be a cost of including them in the federal law enforcement retirement plan. According to the Office of Personnel Management, it could cost the government an estimated $72.7 million (in present value dollars) if the entire existing Mint and BEP police forces were given law enforcement retirement benefits. 
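These figures lend themselves to a quick back-of-the-envelope check. The sketch below, which is illustrative only and not an official cost model, uses the amounts cited above and assumes the 590-officer figure is the relevant combined headcount for the Mint and BEP forces.

```python
# Rough check of the cost figures cited above (illustrative only; not an
# official cost model). All dollar amounts and rates come from the report.

OPM_RETIREMENT_ESTIMATE = 72_700_000   # present-value estimate, entire Mint/BEP forces
MINT_BEP_OFFICERS = 590                # assumed combined Mint and BEP headcount

# Initial training cost per officer
TRAINING_COST = {"Uniformed Division": 20_033, "Mint": 16_306, "BEP": 18_791}

# Agency retirement contribution rates (percent of salary)
LAW_ENFORCEMENT_RATE = {"CSRS": 31.4, "FERS": 22.7}
STANDARD_RATE = {"CSRS": 17.4, "FERS": 10.7}

per_officer_retirement = OPM_RETIREMENT_ESTIMATE / MINT_BEP_OFFICERS
print(f"Average retirement cost per officer: ${per_officer_retirement:,.0f}")  # roughly $123,000

for agency in ("Mint", "BEP"):
    extra = TRAINING_COST["Uniformed Division"] - TRAINING_COST[agency]
    print(f"Extra initial training cost per {agency} officer: ${extra:,}")

for system in ("CSRS", "FERS"):
    diff = LAW_ENFORCEMENT_RATE[system] - STANDARD_RATE[system]
    print(f"Additional agency contribution under {system}: {diff:.1f} percentage points of salary")
```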
This computes to an average of about $123,000 per officer. Because it was not possible to determine how many of the existing Mint and BEP police officers would be absorbed by the Uniformed Division, we could not estimate how much this would cost. An alternative regarding the Mint and BEP police forces would be to transfer them to a new, separate unit of the Uniformed Division. Under this alternative, the existing Mint and BEP police forces would become a second tier of the Uniformed Division and would be trained, supervised, and managed by the Secret Service. One potential advantage of this arrangement would be that the separate unit possibly could be used as a stepping-stone for Mint and BEP police who would like to become Uniformed Division officers. Further, this arrangement could streamline activities, such as procurement, training, and recruitment, that may save the government money. For example, a unified police force could help recruiting efforts by being able to offer a variety of duties and duty stations. However, according to the Secret Service, because of the differences in the hiring standards between the Uniformed Division and the Mint and BEP police, the stepping-stone concept for the Mint and BEP police officers would be impractical and the Secret Service would not use them in fulfilling its other protective responsibilities. The Secret Service said that this alternative offers no advantages to the Secret Service; would place additional financial, manpower, and other administrative burdens on the agency; and would dilute the Uniformed Division’s protective mission. Further, Uniformed Division officers receive federal law enforcement retirement benefits, while Mint and BEP police do not. The Mint and BEP police are covered by the labor management and employee relations provisions set forth in Chapter 71 of Title 5 of the United States Code, while the Secret Service employees are exempt from these provisions pursuant to 5 U.S.C. § 7103 (a)(3)(H). According to the Secret Service, if the Mint and BEP forces became a separate unit of the Uniformed Division, this would create animosity in the agency because the Mint and BEP police would have collective bargaining rights while Uniformed Division officers would not. The Mint said that because Uniformed Division officers receive federal law enforcement retirement benefits and the Mint and BEP police do not, the substantial disparity in the compensation between the Mint and BEP police officers and the Uniformed Division would create problems with morale and performance. In addition, the Mint said that placing responsibility for security in a separate agency that is not part of the Treasury Department could hinder the responsiveness of the security personnel to the Mint and BEP. According to BEP, because of the difference in hiring standards between the Uniformed Division and the Mint and BEP police forces, the Mint and BEP police forces comprising the second tier would always feel less than equal, which would also affect morale and create poor job performance. Agency Comments We provided copies of a draft of this report to the Directors of the Mint, BEP, and Secret Service for comment. On June 30, we received written comments from the Director of the Mint, which are reprinted in appendix IV. The Mint Director said that the Mint concurred with the findings and conclusions that apply to the Mint. 
BEP and Secret Service liaisons with GAO provided technical comments on the draft report by e-mail, which we incorporated where appropriate, but they did not provide overall comments on the report. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Banking, Housing, and Urban Affairs, the House Committee on Financial Services, and the House Select Committee on Homeland Security; the Secretary of the Treasury; the Secretary of the Department of Homeland Security; the Directors of the Mint, BEP, and Secret Service; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Major contributors to this report were Bob Homan, John Baldwin, Paul Desaulniers, and John Cooney. If you have any questions, please contact me at (202) 512-2834 or at [email protected]. 
Scope and Methodology 
To review how security is provided at the U.S. Mint and the Bureau of Engraving and Printing (BEP) and how it compares with other organizations, we interviewed Mint and BEP officials about their security practices, responsibilities, and security threats. We collected information about their police forces, including job classifications, number of police, application requirements, starting salaries, retirement benefits, attrition rates, training, and statutory authorities. In addition, we discussed with Mint and BEP officials the feasibility of consolidating certain security-related functions and collected information on the number of personnel who work on security at the two agencies but who are not police officers. We also asked the 12 coin and currency producing organizations in the six other G7 nations (Canada, France, Germany, Italy, Japan, and the United Kingdom); the Federal Reserve; and businesses that handle a large amount of cash, such as banks and casinos, about who provides their security, why, and whether they had experienced thefts of items valued at more than $1,000 per incident during the last 10 years. We chose to contact the coin and currency producing organizations in the other G7 nations because they were in other major industrialized, democratic countries. We received responses from 8 of the 12 foreign coin and currency producing organizations that we contacted. The coin and currency producing organizations in 1 country did not respond to our requests for information. The banks and casinos that we contacted were selected by the American Bankers Association and the American Gaming Association, which represent the banking and casino industries. However, the selected banks and casinos do not represent the views of the two industries. To determine what thefts have occurred at the Mint and BEP and whether steps have been taken to address them, we asked the agencies to provide information about thefts that have occurred at their facilities during the last 10 years. We also reviewed relevant Department of the Treasury Office of Inspector General reports, including investigative reports pertaining to certain incidents of theft. We then discussed those incidents with the Mint and BEP security officials, and we also discussed with the officials the steps that were taken to prevent thefts from recurring and whether the thefts were caused by deficiencies in the police forces. 
Regarding Mint security, we also reviewed a 2000 Sandia National Laboratories report, which contained 42 recommendations to improve security, to determine whether its recommendations pertaining to the Mint's Philadelphia, Pennsylvania, and Ft. Knox, Kentucky, facilities had been implemented. We visited those facilities because more recommendations pertained to them than to other Mint facilities. In addition, we contacted two coin dealers about the circulation of "error coins." These dealers were selected because they specialized in the buying and selling of error coins. Regarding BEP security, we reviewed a 1994 Secret Service report, a 1995 KPMG Peat Marwick report, and a 2000 Kelly, Anderson & Associates report regarding recommendations to improve security at BEP facilities. We determined whether the 19 United States Secret Service recommendations and the 7 KPMG recommendations that Kelly Anderson indicated had not been implemented in 2000 were implemented. We also determined whether a random sample of 20 high-risk recommendations contained in the Secret Service and KPMG reports, which Kelly Anderson reported had been implemented, were actually implemented. We visited BEP's Washington, D.C., facility to check whether recommendations had been implemented because the recommendations in the Secret Service and KPMG reports pertained to that facility. To determine the potential benefits and costs of having the Secret Service provide Mint and BEP security, we asked the Mint, BEP, Secret Service, and Treasury Department for their views on which agency would be most effective regarding various security-related functions. We also compared the information that we collected regarding the Secret Service's Uniformed Division with the data collected regarding the missions, security forces, training costs, retirement benefits, and security incidents at the Mint and BEP. Regarding retirement costs, we asked the Office of Personnel Management (OPM) to estimate how much more it would cost the government if the Mint and BEP police were given the same law enforcement retirement benefits that the Uniformed Division officers receive. To calculate the estimate, OPM asked us to provide data on the number of police at the Mint and BEP who are in the Civil Service Retirement System and the Federal Employees Retirement System and their average salaries. We also asked the Mint, BEP, and Secret Service to provide their views on the advantages and disadvantages of transferring the Mint and BEP police forces to a second tier of the Uniformed Division. The scope of our work did not include examining the advantages and disadvantages of contracting out security services for the Mint and BEP. We did our work in Washington, D.C.; Philadelphia; and Ft. Knox in accordance with generally accepted government auditing standards and investigative standards established by the President's Council on Integrity and Efficiency from July 2002 through June 2003. 
Data Concerning the U.S. Mint and the Bureau of Engraving and Printing Police Forces 
Mint facilities in Denver, Colorado; Ft. Knox, Kentucky; Philadelphia, Pennsylvania; San Francisco, California; West Point, New York; and Washington, D.C. 
BEP facilities in Ft. Worth, Texas; and Washington, D.C. 
Job classification: Police Officer (job classification 0083) at both the Mint and BEP. 
Application requirements (Mint and BEP): one year of specialized experience as a police officer or comparable experience (may be substituted with a 4-year college degree in Police Science or a comparable field). 
Fiscal year 2002 police attrition rate: 14 percent. Fiscal year 2001 police attrition rate: 7 percent. 
Training: 10 weeks of basic training at the Federal Law Enforcement Training Center (FLETC). 
Data Regarding the United States Secret Service's Uniformed Division 
Facilities protected: the White House complex, the Treasury Department headquarters building and annex, the Vice President's residence, and foreign diplomatic missions. The Secret Service protects the people who occupy these facilities. According to Secret Service officials, pursuant to 5 U.S.C. § 5102(c), the Uniformed Division is exempt from the federal job classification system and, therefore, its officers do not have the 0083 job classification that applies to the Mint and BEP police. 
Application requirements: ages 21 to 36 at time of appointment; pass the National Police Officer Selection Test; pass a medical examination, drug screening, and background investigation; possess a high school diploma or equivalent; qualify for a top-secret security clearance; and submit to a polygraph test. 
2003 starting salaries for police stationed in Washington, D.C. 
Training: 10 weeks of basic training at FLETC; 11 weeks of specialized training after FLETC, depending on the unit an officer is assigned to, such as canine or counter-sniper; and 22 hours of annual training. 
Comments from the U.S. Mint 
The U.S. Mint and the Bureau of Engraving and Printing (BEP), which produce the nation's coins and currency, provide their own security and have experienced some problems with theft by employees. Although security is necessary to carry out the agencies' missions, their primary function is producing money. In light of these thefts, a congressional committee asked GAO whether the Mint and BEP should continue to provide their own security or whether the United States Secret Service should provide their security. Among the issues that GAO was asked to address were (1) how the Mint, BEP, and other organizations that produce or handle large amounts of cash provide their security; (2) what thefts have occurred at the Mint and BEP and what steps they have taken to prevent thefts from recurring; and (3) the potential benefits and costs of having the Secret Service provide Mint and BEP security. The Mint said it generally agreed with the findings and conclusions that applied to the Mint. BEP and the Secret Service provided technical comments regarding the report, which GAO incorporated where appropriate, but had no overall comments on the report. The Mint and BEP use their own police forces to provide security. Eight of the 12 coin and currency organizations in the other G7 nations responded to our requests for information. Four organizations reported that they used only their own security forces; 2 organizations said they used their own security forces supplemented with contractor personnel; 1 organization said it used an outside agency to supplement its own security force; and 1 organization said that it used an outside agency to provide its security. Private businesses that handle large amounts of cash, such as banks and casinos, that we contacted said they used either their own security staff or contractor staff. The Mint and BEP have experienced some thefts by employees over the last decade. The Mint, which did not have records of security incidents that occurred more than 5 years ago, reported 74 incidents of theft involving about $93,000 from 1998 through 2002, while BEP reported 11 incidents of theft from 1993 through 2002 involving about $1.8 million. Both the Mint and BEP had threat assessments made of their facilities and processes and took corrective action to enhance security. The Secret Service said that if its Uniformed Division were charged with the responsibility of protecting the Mint and BEP, the two agencies could benefit from the Secret Service's expertise in protection and criminal investigations. However, unlike Secret Service police officers, Mint and BEP security personnel are already familiar with the coin and currency production processes, which is a benefit in identifying security risks in these manufacturing facilities. Further, if the Secret Service protected the Mint and BEP, the government could incur additional costs because the Secret Service requires more training for its officers than the Mint and BEP police. The Secret Service police officers also are provided more costly retirement benefits than the Mint and BEP police.
Background 
Insurance Industry Overview 
Generally, insurers offer several lines, or types, of insurance to consumers and others. Some types of insurance include life and annuity products and P/C. An insurance policy can include coverage for individuals or families ("personal lines") or coverage for businesses ("commercial lines"). Personal lines include home owners, renters, and automobile coverage. Commercial lines may include general liability, commercial property, and product liability insurance. The U.S. life and P/C industries wrote, or sold, an annual average of $601 billion and $472 billion, respectively, in premiums from 2002 through 2011. Figures 1 and 2 illustrate the percentage of premiums written for selected lines of insurance, compared to total premiums written in the life and P/C industries, for that time period. Overall, individual annuities made up the largest portion of business (32 percent) in the life industry, while private passenger auto liability insurance was the largest portion of business (20 percent) in the P/C industry. In the P/C industry, financial and mortgage guaranty insurance represented less than 2 percent of premiums written on average during the period. We included these lines among those we reviewed because they facilitate liquidity in the capital markets. By protecting investors against defaults in the underlying securities, financial and mortgage guaranty insurance can support better market access and greater ease of transaction execution. 
Effects of the Crisis Were Limited Largely to Certain Products and Lines of Insurance 
The financial crisis generally had a limited effect on the insurance industry and policyholders, with the exception of certain annuity products in the life insurance industry and the financial and mortgage guaranty lines of insurance in the P/C industry. Several large insurers—particularly on the life side—experienced capital and liquidity pressure, but capital levels generally rebounded quickly. Historically, the number of insurance company insolvencies has been small and did not increase significantly during the crisis. Also, the effects on life and P/C insurers' investments, underwriting performance, and premium revenues were limited. However, the crisis did affect life insurers that offered variable annuities with optional guaranteed living benefits (GLB), as well as financial and mortgage guaranty insurers—a small subset of the P/C industry. Finally, the crisis had a generally minor effect on policyholders, but some mortgage and financial guaranty policyholders received partial claims or faced decreased availability of coverage. 
The Financial Crisis Had a Limited Effect on Most Insurers' Operations 
Insurers Experienced Some Capital and Liquidity Pressure, but Insolvencies Were Limited 
Many life insurance companies experienced capital deterioration in 2008, reflecting declines in net income and increases in unrealized losses on investment assets. Realized losses of $59.6 billion contributed to steep declines in life insurers' net income that year. The realized losses stemmed from other-than-temporary impairments on long-term bonds (primarily mortgage-backed securities, or MBS) and from the sale of equities whose values had recently declined. A dozen large life insurance groups accounted for 77 percent of the total realized losses in 2008, with AIG alone accounting for 45 percent of the realized losses. 
As illustrated in figure 3, life insurers' net income decreased from 2007 to 2008, from positive income of $31.9 billion to negative income (a loss) of $52.2 billion. However, it rebounded to positive income of $21.4 billion in 2009, largely as a result of decreased underwriting losses and expenses. Income increased further to $27.9 billion in 2010 but fell again—to $14.2 billion—in 2011, reflecting increased underwriting losses and expenses. Total unrealized losses of $63.8 billion in the life insurance industry, combined with the decline in net income, contributed to a modest capital decline of 6 percent, to $253.0 billion, in 2008. As with realized losses, AIG accounted for 47 percent of total unrealized losses, and seven large insurance groups accounted for another 35 percent (see app. II). The majority of the unrealized losses occurred in common stocks and other invested assets (e.g., investments in limited partnerships and joint venture entities). However, the unrealized losses and declines in net income were addressed by a substantial increase in capital infusions from issuance of company stock or debt in the primary market, transfer of existing assets from the holding company, or, notably, from agreements with the U.S. Treasury or Federal Reserve (see paid in capital or surplus in fig. 4). AIG accounted for more than half (55 percent) of the capital infusions in 2008, reflecting an agreement with the U.S. Treasury for the Treasury's purchase of about $40 billion in equity. Some other large life insurance companies—through their holding companies—were also able to raise needed capital through equity or debt issuance, or through the transfer of existing assets from the holding companies. As shown in figure 4, many publicly traded life insurers or their holding companies continued to pay stockholder dividends throughout the crisis. Life insurers' capital increased by 15 percent, to $291.9 billion, from 2008 to 2009, partly as a result of the increase in net income. By 2011, life insurers had net unrealized gains of $20.8 billion, indicating improvements in the value of their investment portfolios. During the crisis, aggregated stock prices of publicly traded life insurers declined substantially. As figure 5 illustrates, aggregate stock prices (based on an index of 21 life insurance companies) began falling in November 2007 and had declined by a total of 79 percent by February 2009. Although prices rose starting in March 2009, they had not rebounded to pre-2008 levels by the end of 2011. In comparison, the New York Stock Exchange (NYSE) Composite Index declined by a total of 55 percent during the same time period. See appendix II for additional analysis of stock prices. P/C insurers also experienced a steep decline in net income during the crisis, with a drop of 94 percent from 2007 to 2008, although the industry's net income remained positive at $3.7 billion (see previous fig. 3). Realized losses of $25.5 billion contributed to the decline in net income. Seven P/C insurance groups, including six large groups and one smaller financial guaranty insurance group, accounted for 47 percent of the realized losses in 2008. The realized losses resulted primarily from other-than-temporary impairments taken on certain bonds and preferred and common stocks. Net underwriting losses of $19.6 billion (compared to net underwriting gains of $21.6 billion in 2007) also affected net income for the P/C industry in 2008, as did declines in net investment income and other factors. 
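The peak-to-trough declines cited above (79 percent for the life insurer stock index and 55 percent for the NYSE Composite) are simple drawdown calculations. The sketch below shows the arithmetic on a short, purely hypothetical price series; the function and the values are illustrative and are not the index data underlying figure 5.

```python
def peak_to_trough_decline(prices):
    """Largest percentage fall from a running peak to a later trough."""
    peak = prices[0]
    worst = 0.0
    for p in prices:
        peak = max(peak, p)
        worst = max(worst, (peak - p) / peak)
    return worst

# Hypothetical monthly index values (not actual data) to illustrate the calculation.
hypothetical_index = [100, 104, 98, 85, 60, 41, 22, 35, 48, 55]
print(f"Peak-to-trough decline: {peak_to_trough_decline(hypothetical_index):.0%}")
```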
Many of the insurers with the greatest declines in net income from 2007 to 2008 were primarily financial and mortgage guaranty companies. P/C insurers’ capital also declined from 2007 to 2008, to $466.6 billion (a 12 percent decline). Although the reduction in net income was a major factor in the capital decline, unrealized losses of $85.6 billion also played a role. The greatest unrealized losses occurred in common stocks and other invested assets. Three large P/C insurance groups accounted for 55 percent of the losses. Capital infusions mitigated the decline in capital, as illustrated in figure 6, and P/C insurers or their holding companies continued to pay stockholder dividends. P/C insurers’ capital increased by 11.6 percent and 8.3 percent from the previous year, respectively, in 2009 and 2010. Aggregated stock prices of publicly traded P/C companies declined less severely than those of life insurance companies during the crisis. As figure 5 demonstrates, P/C companies, like life insurance companies, saw their lowest stock prices in February 2009, representing a 40 percent decline from the highest closing price in December 2007. However, prices had rebounded to 2006 levels by mid-to-late 2009 and remained there through 2011. See appendix II for additional analysis of stock prices. While regulators we interviewed stated that most life and P/C insurers’ strong capital positions just before the crisis helped minimize liquidity challenges during the crisis, many still experienced pressures on capital and liquidity. For example, a representative of the life insurance industry and a regulator noted that it was extremely challenging for most insurers—as well as banks and other financial services companies—to independently raise external capital during this time, which led to some insurers’ participation in federal programs designed to enhance liquidity. In addition, some life insurers were required to hold additional capital because of rating downgrades to some of their investments. Mortgage and financial guaranty insurers with heavy exposure to mortgages and mortgage-related securities experienced liquidity issues later in the crisis, when mortgage defaults resulted in unprecedented levels of claims. In addition to maintaining the ability to pay claims, it is important for insurers to meet minimum capital standards to maintain their credit ratings, which help them attract policyholders and investors. During this period few insurance companies failed—less than 1 percent. The number of life and P/C companies that go into receivership and liquidation tends to vary from year to year with no clear trend (see table 1). While the number of life insurers being placed into receivership peaked in 2009, receiverships and liquidations for P/C companies in 2009 were generally consistent with other years (except 2008, when incidences declined). Specifically, throughout the 10-year review period, life insurance receiverships and liquidations averaged about 6 and 4 per year, respectively. In 2009, there were 12 receiverships and 6 liquidations. P/C receiverships and liquidations averaged about 15 and 13 per year, respectively; in 2009, there were 15 receiverships and 13 liquidations. However, these companies represented a small fraction of active companies in those years. There were more than 1,100 active individual life companies and 3,000 active individual P/C companies from 2007 through 2009. 
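The "less than 1 percent" failure figure can be roughly checked against the counts above. The sketch below divides the 2009 liquidation counts by the active-company counts; this is one reasonable reading of the figures for illustration, not GAO's own methodology.

```python
# Illustrative check of the "less than 1 percent" failure figure, using the 2009
# liquidation counts and the active-company counts cited in the text.
liquidations_2009 = {"Life": 6, "P/C": 13}
active_companies = {"Life": 1_100, "P/C": 3_000}   # the text says "more than" these counts

for segment, failed in liquidations_2009.items():
    share = failed / active_companies[segment]
    print(f"{segment}: {share:.1%} of active companies were liquidated in 2009")
```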
Appendix II provides information on the assets and net equity (assets minus liabilities) of insurers that were liquidated from 2002 through 2011. Some regulators and insurance industry representatives we interviewed stated that receiverships and liquidations that occurred during and immediately after the financial crisis were generally not related directly to the crisis. While one regulator stated that the crisis might have exacerbated insurers’ existing solvency issues, regulators said that most companies that were placed under receivership during that time had been experiencing financial issues for several years. Regulators and industry officials we interviewed noted two exceptions to this statement; both were life insurance companies that had invested heavily in Fannie Mae and Freddie Mac securities and in other troubled debt securities. See appendix III for a profile of one of these companies. As noted above, for most insurers investment income is one of the two primary revenue streams. Insurers’ net investment income declined slightly during the crisis but had rebounded by 2011. In the life and P/C industries in 2008 and 2009, insurers’ net income from investments declined by 7 percent and 15 percent respectively from the previous year (see fig. 7). For life insurers, these declines primarily reflected declines in income on certain common and preferred stock, derivatives, cash and short term investments, and other invested assets. For P/C insurers, the declines primarily reflected declines in income on U.S. government bonds, certain common stock, cash and short-term investments, and other invested assets. Table 2 illustrates the percentages of life and P/C insurers’ gross investment income derived from various types of investments. Bonds were the largest source of investment income in both industries, and they increased as a percentage of gross investment income during the crisis. Life and P/C insurers’ income from other types of investments, such as contract loans, cash, and short-term investments, decreased during the crisis as a percentage of their gross investment income. According to insurance industry representatives and a regulator, going forward, low interest rates are expected to produce lower investment returns than in the past, reducing insurers’ investment income and likely pressuring insurers to increase revenue from their underwriting activities. Although life and P/C companies had some exposure to MBS (including residential and commercial MBS, known respectively as RMBS and CMBS) from 2002 through 2011, as part of insurers’ total bond portfolios, these securities did not present significant challenges. In both industries, investments in derivatives constituted a negligible amount of exposure and investment income and were generally used to hedge other risks the insurers faced. Life and P/C insurers’ underwriting performance declined modestly during the crisis. In the life industry, benefits and losses that life insurers incurred in 2008 and 2009 outweighed the net premiums they wrote (see fig. 8). A few large insurance groups accounted for the majority of the gap between premiums written and benefits and losses incurred during these 2 years. For example, one large life insurance group incurred $61.3 billion more in benefits and losses than it wrote in premiums in 2009. 
Many variable annuities with guarantees purchased before the crisis were "in the money," meaning that the policyholders' account values were significantly less than the promised benefits on their accounts, so the policyholders were being credited with the guaranteed minimum instead of the lower rates actually being earned. Thus, policyholders were more likely to stay in their variable annuities during the crisis because they were able to obtain higher returns than they could obtain on other financial products. From 2007 to 2008, the P/C industry's underwriting losses increased as a percentage of its earned premiums (loss ratio), and the average combined ratio—a measure of insurer underwriting performance—rose from 95 percent to 104 percent, indicating that companies incurred more in claims and expenses than they received from premiums. However, as illustrated in figure 9, the ratios during the crisis were not substantially different from those in the surrounding years. As discussed later in this report, financial and mortgage guaranty insurers' combined ratios were particularly high and contributed to the elevated overall P/C industry combined ratios from 2008 onward. P/C insurance industry representatives we interviewed told us that the P/C market was in the midst of a "soft" period in the insurance cycle leading into the crisis. Such soft periods are generally characterized by insurers charging lower premiums in competition to gain market share. In addition, the timing of certain catastrophic events in the P/C industry overlapped with crisis-related events. For example, one state regulator noted that in the same week in September 2008 that AIG's liquidity issues became publicly known, Hurricane Ike struck the Gulf Coast. According to NAIC analysis, this resulted in significant underwriting losses for many P/C insurers. NAIC determined that Hurricane Ike, as well as two other hurricanes and two tropical storms, contributed to more than half of the P/C industry's estimated $25.2 billion in catastrophic losses in 2008, which represented a threefold increase from the prior year. While the crisis may have exacerbated certain aspects of this cycle, it is difficult to determine the extent to which underwriting losses were a result of the crisis as opposed to the existing soft market or the weather events of 2008. As noted previously, a few industry representatives and a regulator we interviewed stated that decreased investment returns may place more pressure on insurers to increase the profitability of their underwriting operations. As shown in figures 10 and 11, life and P/C insurers' net investment gains have historically outweighed their net underwriting losses. As shown in figure 10, life insurers experienced net underwriting losses during every year of our review period, with the greatest losses occurring in 2008. Effects on premium revenues were primarily confined to individual annuities in a handful of large insurers. In the life industry, net premiums written declined by 19 percent from 2008 to 2009, to $495.6 billion, reflecting decreases in all four of the lines we reviewed—group and individual life insurance and group and individual annuities—with the largest decline in individual annuities (see fig. 12). Individual annuity premium revenues decreased more than those for other life products because these products' attractiveness to consumers is based on the guarantees insurers can provide. 
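To make the guarantee dynamic described above concrete, the following sketch models a single variable annuity with a guaranteed living benefit in highly simplified form. The contract terms, the 5 percent floor, and the dollar amounts are hypothetical and are not drawn from any particular insurer's product.

```python
# Highly simplified, hypothetical illustration of a guaranteed living benefit (GLB):
# the policyholder is credited the greater of the account value earned by the
# underlying investments or the guaranteed minimum. Terms and amounts are invented.

premium_invested = 100_000
guaranteed_minimum = premium_invested * 1.05   # hypothetical guaranteed floor on the amount invested
market_account_value = 62_000                  # account value after an equity market decline

in_the_money = market_account_value < guaranteed_minimum
credited_value = max(market_account_value, guaranteed_minimum)
insurer_shortfall = credited_value - market_account_value   # gap the insurer must cover

print(f"Guarantee in the money: {in_the_money}")
print(f"Value credited to policyholder: ${credited_value:,.0f}")
print(f"Insurer's exposure on this contract: ${insurer_shortfall:,.0f}")
```

In this simplified picture, once the account value falls below the guarantee the contract is in the money, and the gap between the two is what drives the reserve and capital pressure described in this section.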
During the crisis, insurers offered smaller guarantees, because insurers generally base their guarantees on what they can earn on their own investments, and returns on their investments had declined. A small group of large companies contributed heavily to the decreases in this area. For example, one large life insurance group accounted for 6 percent of all individual annuity premiums in 2008 and 65 percent of the decreases in that area from 2008 to 2009. Another seven life insurance groups accounted for an additional 29 percent of individual annuity premiums and 25 percent of decreases in that area from 2008 to 2009. By 2011, net premiums in individual annuities had rebounded beyond their precrisis levels. P/C insurers’ net premiums written declined by a total of 6 percent from 2007 through 2009, primarily reflecting decreases in the commercial lines segment. In the lines we reviewed, auto lines saw a slight decline in net premiums written, but insurers actually wrote an increased amount of homeowners insurance. One insurance industry representative we interviewed stated that the recession caused many consumers to keep their old vehicles or buy used vehicles rather than buying new ones, a development that negatively affected net premiums written for auto insurance. Financial and mortgage guaranty insurers experienced respective declines of 43 percent and 14 percent in net premiums written from 2008 to 2009. As noted, many life insurers that offered variable annuities with GLBs experienced strains on their capital when the equities market declined during the crisis. Specifically, beginning in the early 2000s many life insurers began offering GLBs as optional riders on their variable annuity products. In general, these riders provided a guaranteed minimum benefit based on the amount invested, and variable annuity holders typically focused their investments on equities. From 2002 through 2007, when the stock market was performing well, insurers sold a large volume of variable annuities (for example, as table 3 shows, they sold $184 billion in 2007). As illustrated in table 3, as of 2006 (the earliest point for which data were available), most new variable annuities included GLBs. These insurers had established complex hedging programs to protect themselves from the risks associated with the GLBs. However, according to a life insurance industry representative and regulators we interviewed, when the equities market declined beginning in late 2007, meeting the GLBs’ obligations negatively impacted insurers’ capital levels as life insurers were required to hold additional reserves to ensure they could meet their commitments to policyholders. According to a few regulators and a life insurance industry representative we interviewed, ongoing low interest rates have recently forced some life insurers to raise prices on GLBs or lower the guarantees they will offer on new products. In the P/C industry, the financial and mortgage guaranty lines were severely affected by the collapse of the real estate market. As noted earlier, these lines represented less than 2 percent of the total P/C industry’s average annual written premiums from 2002 through 2011 and are unique in that they carry a high level of exposure to mortgages and mortgage-related securities. Mortgage guaranty insurers primarily insured large volumes of individual mortgages underwritten by banks by promising to pay claims to lenders in the event of a borrower default (private mortgage insurance). 
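As a rough illustration of the private mortgage insurance arrangement just described, the sketch below computes an insurer's payment to a lender after a borrower default. The loan amounts and the 25 percent coverage figure are hypothetical, and actual policies calculate claims in more elaborate ways.

```python
# Hypothetical private mortgage insurance claim (simplified). The insurer reimburses
# the lender's loss on a defaulted loan, capped here at a coverage percentage of the
# claim amount. All numbers, including the 25% coverage, are invented for illustration.

unpaid_principal = 190_000
foreclosure_costs = 15_000
recovery_from_sale = 140_000
coverage_percentage = 0.25

claim_amount = unpaid_principal + foreclosure_costs      # lender's total claim
lender_loss = claim_amount - recovery_from_sale          # loss after selling the property
insurer_payment = min(lender_loss, coverage_percentage * claim_amount)

print(f"Lender claim: ${claim_amount:,}")
print(f"Lender loss after recovery: ${lender_loss:,}")
print(f"Mortgage insurer pays: ${insurer_payment:,.0f}")
```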
Financial guaranty insurers also were involved in insuring asset-backed securities (ABS), which included RMBS. Additionally, these insurers insured collateralized debt obligations (CDO), many of which contained RMBS. These insurers guaranteed continued payment of interest and principal to investors if borrowers did not pay. These credit protection products included credit default swaps. Financial and mortgage guaranty insurers we interviewed stated that prior to the crisis, these two industries operated under common assumptions about the real estate market and its risk characteristics—namely, that housing values would continue to rise, that borrowers would continue to prioritize their mortgage payments before other financial obligations, and that the housing market would not experience a nationwide collapse. As a result of these common assumptions, these insurers underwrote unprecedented levels of risk in the period preceding the crisis. For example, according to a mortgage guaranty industry association annual report, the association’s members wrote $352.2 billion of new business in 2007, up from $265.7 billion in 2006. A financial guaranty industry representative told us that the industry had guaranteed about $30 billion to $40 billion in CDOs backed by ABS. The unforeseen and unprecedented rate of defaults in the residential housing market beginning in 2007 adversely impacted underwriting performance significantly for mortgage and financial guaranty insurers. As shown in table 4, combined ratios—a measure of insurer performance— increased considerably for both industries beginning in 2008, with mortgage guaranty insurers’ combined ratios peaking at 135 percent in both 2010 and 2011. In 2008 and later, several insurers in these two industries had combined ratios exceeding 200 percent. Financial and mortgage guaranty insurers are generally required to store up contingency reserves in order to maintain their ability to pay claims in adverse economic conditions. However, during the crisis, many insurers faced challenges maintaining adequate capital as they increased reserves to pay future claims. This led to ratings downgrades across both the financial and mortgage guaranty insurance industries beginning in early 2008. For example, in January 2008, Fitch Ratings downgraded the financial strength rating of Ambac Financial Group, Inc., a financial guaranty insurer, from AAA to AA, and Standard & Poor’s placed Ambac’s AAA rating on a negative rating watch. Standard & Poor’s downgraded the ratings of AMBAC and MBIA, Inc. (also a financial guaranty insurer) from AAA to AA in June 2008, and Fitch Ratings downgraded MGIC Investment Corp. and PMI Group, Inc.—the two largest mortgage insurers—from AA to A+ in June 2008. These downgrades had a detrimental impact on insurers’ capital standing and ability to write new business. For example, because ratings reflect insurers’ creditworthiness (in other words, their ability to pay claims), the value of an insurer’s guaranty was a function of its credit rating. Thus when an insurer receives a credit rating downgrade, the guaranty it provides is less valuable to potential customers. Additionally, credit ratings downgrades sometimes required insurers to post additional collateral at a time when their ability to raise capital was most constrained. 
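The loss and combined ratios cited in this section follow from a simple relationship among premiums, claims, and expenses. The sketch below uses hypothetical amounts and a simplified formulation (expenses are divided by earned rather than written premiums) to show why a combined ratio above 100 percent indicates an underwriting loss.

```python
# Simplified loss/expense/combined ratio calculation with hypothetical amounts.
# (Statutory ratios typically divide expenses by written premiums; earned premiums
# are used here to keep the example to a single denominator.)

earned_premiums = 100_000_000
incurred_losses = 78_000_000
underwriting_expenses = 26_000_000

loss_ratio = incurred_losses / earned_premiums
expense_ratio = underwriting_expenses / earned_premiums
combined_ratio = loss_ratio + expense_ratio

print(f"Loss ratio:     {loss_ratio:.0%}")
print(f"Expense ratio:  {expense_ratio:.0%}")
print(f"Combined ratio: {combined_ratio:.0%}")   # above 100% means an underwriting loss
```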
According to industry representatives and insurers we interviewed, financial and mortgage guaranty insurers generally had what were believed to be sufficient levels of capital in the period leading into the crisis, but they had varying degrees of success in shoring up their capital in response to the crisis. Industry representatives and insurers also stated that early in the crisis, liquidity was generally not an issue, as insurers were invested in liquid securities and continued to receive cash flows from premium payments. However, as defaults increased and resulted in unprecedented levels of claims in 2008 and 2009, the pace and magnitude of losses over time became too much for some insurers to overcome, regardless of their ability to raise additional capital. As a result, several financial and mortgage guaranty insurers ceased writing new business, and some entered rehabilitation plans under their state regulator. In addition, insurers we interviewed told us that those companies that continued to write new business engaged in fewer deals and used more conservative underwriting standards than before the crisis. The case of one mortgage insurer we reviewed illustrated some of the challenges that financial and mortgage guaranty insurers experienced during the crisis. By mid-2008, the insurer had ceased writing new mortgage guaranty business and was only servicing the business it already had on its books. This insurer is licensed in all states and the District of Columbia. Previously, the insurer provided mortgage default protection to lenders on an individual loan basis and on pools of loans. As a result of continued losses stemming from defaults of mortgage loans— many of which were originated by lenders with reduced or no documentation verifying the borrower’s income, assets, or employment— the state regulator placed the insurer into rehabilitation with a finding of insolvency. See appendix III for a more detailed profile of this distressed mortgage guaranty insurer. The Effect of the Crisis on Policyholders Was Generally Small NAIC and guaranty fund officials told us that life and P/C policyholders were largely unaffected by the crisis, particularly given the low rate of insolvencies. The presence of the state guaranty funds for individual life, fixed annuities and the GLBs on variable annuities, and P/C lines meant that, for the small number of insolvencies that did occur during or shortly after the crisis, policyholders’ claims were paid up to the limits under guaranty fund rules established under state law. However, financial and mortgage guaranty insurers typically are not covered by state guaranty funds and, as described below, some policyholders’ claims were not paid in full. According to industry representatives, the crisis generally did not have a substantial effect on the level of coverage that most life and P/C insurers were able to offer or on premium rates. An insurer and industry representatives told us that due to the limited effect on most insurers’ capital, the industry maintained sufficient capacity to underwrite new insurance. As described earlier, P/C industry representatives told us that the crisis years coincided with a period of high price competition in the P/C insurance industry when rates generally were stable or had decreased slightly (soft insurance market). However, P/C industry representatives indicated that separating the effects of the insurance cycle from the effects of the financial crisis on premium rates is difficult. 
Moreover, insurers and industry representatives for both the life and P/C industries noted that because investment returns had declined, insurers were experiencing pressure to increase underwriting profits, which in some cases could result in increased premium rates. In the annuities line, which was most affected by the crisis in the life insurance industry, effects on policyholders varied. Policyholders who had purchased variable annuities with GLBs before the crisis benefited from guaranteed returns that were higher than those generally available from other similar investments. However, as described previously, a few regulators and a life insurance industry representative told us that the prevailing low interest rates had forced some insurers to either lower the guarantees they offer on GLBs associated with variable annuities or raise prices on these types of products. According to data from LIMRA, the percentage of new variable annuity sales that offered GLB options declined from about 90 percent to 85 percent from 2011 to 2012. As a result, some consumers may have more limited retirement investment options. Financial guaranty and mortgage guaranty policyholders were the most affected among the P/C lines of insurance, although these policyholders were institutions, not individual consumers. While most insurers have continued to pay their claims in full, some insurers have been able to pay only partial claims. In addition, as described earlier, several financial and mortgage guaranty insurers are no longer writing new business. This fact, combined with tightened underwriting standards and practices, may have made it more difficult for some policyholders to obtain coverage. On the other hand, industry officials have told us that the market for financial guarantees has declined because of the absence of a market for the underlying securities on which the guarantees were based; the current low-interest-rate environment; and the lowered ratings of insurers, which have reduced the value of the guarantees. Actions by State Regulators, Federal Programs, and Insurance Business Practices Helped Mitigate Some Effects of the Crisis Multiple regulatory actions and other factors helped mitigate the negative effects of the financial crisis on the insurance industry. State insurance regulators and NAIC took various actions to identify potential risks, and changed the methodology for certain RBC provisions and accounting requirements to help provide capital relief for insurers. In addition, several federal programs were also made available that infused capital into certain insurance companies. Also, industry business practices and existing regulatory restrictions on insurers’ investment and underwriting activities helped to limit the effects of the crisis on the insurance industry. State Regulators Took Actions to Identify and Address Potential Risks During the crisis, state regulators focused their oversight efforts on identifying and addressing emerging risks. Initially, insurers did not know the extent of the problems that would emerge and their effect on the insurance industry and policyholders, according to officials from one rating agency we spoke to. Further, as the financial crisis progressed, the events that unfolded led to a high degree of uncertainty in the financial markets, they said. To identify potential risks, state regulators said they increased the frequency of information sharing among the regulators and used NAIC analysis and information to help focus their inquiries.
For example, an official from one state told us that, during the crisis, state regulators formed an ad hoc advisory working group on financial guaranty insurance. The group consisted of state regulators that had oversight of at least one domestic financial guaranty insurer in their state. The group’s purpose was to keep its members informed about the status of specific insurers and stay abreast of developments in the financial guaranty insurance sector. The official stated that the regulators also shared advice and details of regulatory actions they were implementing for specific financial guaranty insurers. Another state regulator increased its usual oversight activities and increased communications with companies domiciled in the state. In addition to using information from other state regulators, state insurance regulators said they also used information from NAIC to identify potential risks. Three state regulators we interviewed said they used NAIC’s information to identify potential problem assets and insurers with exposure to such assets. For example, one state regulator said it used reports on RMBS and securities lending from NAIC’s Capital Markets Bureau to better focus its inquiries with insurers about their risk management activities. According to state regulators and industry representatives we spoke with, with the exception of mortgage and financial guaranty insurers, they did not identify serious risks at most insurers as a result of the crisis. A risk they did identify, although they said not many insurers were engaged in the practice, was securities lending. Two state regulators told us that to address potential risks, they created new rules covering securities lending operations. For example, one state regulator said that during the crisis it sought voluntary commitments from life insurers to limit their securities lending operations to 10 percent of their legal reserves, thereby limiting any risk associated with securities lending activities. The other state regulator stated that it also enacted legislation extending certain securities lending provisions to all insurers. Both states took these actions after AIG’s securities lending program was identified as one of the major sources of its liquidity problems in 2008. NAIC Also Took Steps to Identify Potential Risks and Share Information with States NAIC officials stated that NAIC increased its research activities to identify potential risks and facilitated information sharing with state regulators. NAIC operates through a system of working groups, task forces, and committees comprised of state regulators and staffed by NAIC officials. These groups work to identify issues, facilitate interstate communication, and propose regulatory improvements. NAIC also provides services to help state regulators—for instance, maintaining a range of databases and coordinating regulatory efforts. NAIC officials said that they identified potential risks and other trends through their regular analyses of statutory financial statement filings, which contain detailed investment data. For example, during the crisis NAIC’s analysis of insurers’ investment data identified companies with exposure to certain European markets that posed potential risks for the companies. NAIC passed this information along confidentially to the relevant state regulators for further monitoring. As discussed above, a state regulator we interviewed said it used NAIC’s in-depth analyses to help monitor its domiciled insurers for potential risks such as RMBS.
To facilitate information sharing about private mortgage insurance, NAIC officials said it formed an informal working group comprised of domestic regulators of private mortgage insurance companies. These regulators, in turn, kept other states informed about the status of private mortgage insurers. NAIC officials said this informal working group was later made permanent as the Mortgage Guaranty Insurance Working Group, which continues to assess regulations for private mortgage insurance companies for potential improvements. NAIC officials said its Financial Analysis Working Group (FAWG), a standing working group comprised of staff from various state insurance departments, identified insurers with adverse trends linked to developing issues during the crisis and helped ensure that state regulators followed through with appropriate oversight activities. The group shares information about potentially troubled insurers and certain insurance groups, market trends, and emerging financial issues. It also works to help ensure that state regulators have taken appropriate follow-up actions. For example, NAIC officials said that FAWG analyzed each insurer’s exposure to subprime mortgage assets, identified those with the most exposure, and then took steps to ensure that domestic state regulators followed up with them. State regulators told us that they had used FAWG information to help identify emerging issues, potentially troubled companies, and best practices, among other things. Also, NAIC officials said that FAWG had informed state regulators about the current status of financial guaranty and private mortgage insurance companies on a regular basis as these sectors experienced more financial distress than the rest of the insurance industry during the crisis. Regulatory officials from one state said that they relied on information collected by FAWG to monitor financial guaranty and private mortgage insurers operating in their state because none of these insurers were domiciled there. They added that the private mortgage insurers doing business in their state had large exposures because of the large housing market in their state. NAIC also expanded its Capital Markets Bureau activities during the crisis to help analyze information on the insurance industry’s investments, such as exposure to potential market volatility, said NAIC officials. According to NAIC’s website, the mission of the bureau is to support state insurance regulators and NAIC on matters related to the regulation of insurers’ investment activities. The bureau monitors developments and trends in the financial markets that affect insurance companies, researches investment issues, and reports on those that can potentially affect insurance company investment portfolios. State regulators said they used these reports during the crisis. For example, one state said that the report on the effects of the European debt crisis on U.S. insurers was useful and another state said the reports on securities lending helped focus their dialogue with domiciled insurers about their risk management practices. As discussed later in this report, the bureau also worked with third parties to model the values of insurers’ portfolios of RMBS and CMBS. To increase transparency regarding insurers’ securities lending reinvestment activities, NAIC made changes to the statutory accounting rule and added disclosure requirements to address risks that were highlighted by AIG’s securities lending program, which was a major source of its liquidity problems in 2008.
According to an NAIC report, AIG’s problems in 2008 highlighted the lack of transparency of securities lending collateral—specifically when the collateral was cash. The report stated that the statutory accounting rule that addresses cash collateral, among other things, was subject to liberal interpretations in the insurance industry and that consequently some companies had not disclosed their cash collateral in their statutory financial statements. To increase transparency, NAIC made changes to the statutory accounting rule in 2010 and subsequently replaced it with the Statement of Statutory Accounting Principles (SSAP) No. 103—Accounting for Transfers and Servicing of Financial Assets and Extinguishments of Liabilities. SSAP No. 103, which took effect on January 1, 2013, increases the details about cash collateral that companies report on statutory financial statements, such as the maturation of investments obtained with it and instances in which counterparties can call it back. NAIC also added a new reporting requirement, Schedule DL, which requires insurance companies to provide more details to support the aggregate information about invested collateral reported on an insurer’s statutory financial statements. Changes to Certain Risk-Based Capital Provisions and an Accounting Requirement Helped Reduce Pressure on Insurers’ Capital NAIC Changed the Method of Calculating Risk-Based Capital Charges for MBS NAIC changed the methodology it used in its guidance to state insurance regulators to determine the amount of risk-based capital (RBC) that state regulators should require insurers to hold for nonagency MBS investments. As discussed earlier, life insurance companies saw a decline of almost 6 percent in capital in 2008. Prior to the change, NAIC’s methodology for calculating RBC charges for nonagency MBS relied on agency ratings. For example, capital charges were lower for RMBS with a relatively high agency rating than for those with a lower rating. During the crisis, the historically high levels of failed mortgages across the nation were followed by rating agency downgrades of nonagency RMBS that required insurers to increase their capital levels. NAIC officials told us that, in hindsight, using agency ratings to help determine the amount of capital an insurer should hold for its nonagency MBS investments was not appropriate because these securities were rated too highly before the crisis and too pessimistically after it. As a result, NAIC moved to a methodology for calculating RBC charges for nonagency MBS that determined an expected recovery value for each security based on a set of economic scenarios. NAIC contracts with BlackRock and PIMCO to conduct these analyses. NAIC reported that this change in methodology not only had eliminated reliance on agency ratings, but also had increased regulatory involvement in determining how RBC charges were calculated for nonagency MBS. NAIC officials saw both of these results as positive. Although this change in methodology did result in a change in RBC charges for more than half of insurers’ RMBS holdings, the change did not significantly affect insurers’ financial statements. Because the new methodology resulted in estimated recovery values that were higher than the amortized values of RMBS shown on financial statements, in 2010 capital requirements for 59 percent of the insurance industry’s nonagency RMBS were reduced. However, almost 88 percent of industrywide CMBS holdings in 2011 were not affected by these changes.
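To make the methodological shift described above more concrete, the sketch below contrasts the two approaches at a very high level: a charge keyed to an agency rating is replaced by a charge keyed to how a security's modeled expected recovery value compares with its carrying (amortized) value. The breakpoints and charge factors are hypothetical and are not NAIC's published factors, and the scenario-based modeling of recovery values performed by NAIC's vendors is not shown; this is only a simplified illustration of the concept.

```python
# Simplified illustration of a recovery-value-based capital charge.
# The breakpoints and factors below are hypothetical, for illustration only.

def rbc_charge(carrying_value, expected_recovery, breakpoints=None):
    """Return a hypothetical RBC charge (in dollars) for one security."""
    if breakpoints is None:
        # (maximum expected loss share, charge factor) pairs -- illustrative only
        breakpoints = [(0.00, 0.004), (0.05, 0.013), (0.15, 0.046),
                       (0.30, 0.100), (0.50, 0.230), (1.00, 0.300)]
    expected_loss = max(0.0, carrying_value - expected_recovery)
    loss_share = expected_loss / carrying_value if carrying_value else 0.0
    for max_loss_share, factor in breakpoints:
        if loss_share <= max_loss_share:
            return carrying_value * factor
    return carrying_value * breakpoints[-1][1]

# A security carried at 80 with a modeled recovery of 78 draws a much smaller
# charge than one carried at 80 with a modeled recovery of 50.
print(rbc_charge(80.0, 78.0), rbc_charge(80.0, 50.0))
```

In this simplified scheme, a security whose modeled recovery value is close to or above its carrying value draws a small charge regardless of how its agency rating has moved, which is consistent with the reported outcome that capital requirements for many nonagency RMBS holdings fell under the new approach.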
Officials from one rating agency said the change was appropriate because the new methodology was actually similar to the one used by the rating agency itself. Officials from another rating agency said that the switch to the new modeling method reduced transparency to insurers because NAIC did not release its modeling results for insurers to use until late in the year. During the financial crisis, some state regulators granted some insurers permission to use prescribed and permitted accounting practices that helped the insurers improve their capital positions. These practices included allowing alternative methods of calculating an insurer’s assets or liabilities that differ from statutory accounting rules and can result in a higher amount of assets admitted to an insurer’s statutory financial statements. Based on data from NAIC, insurers did request modifications to statutory accounting practices. From 2005 to 2007, about 30 such requests were made each year nationwide. In 2008, however, there were over 130 such requests. For each year that an insurer has used a prescribed or permitted practice, statutory accounting rules require it to disclose in its statutory financial statements specific information about each practice it used, including the net effect on its net income and capital. For example, an insurer could request a permitted practice to use a different method of valuing its subsidiary, and a higher valuation would increase the capital reported on its statutory financial statements. Table 5 shows the net effect of prescribed and permitted practices on life and P/C insurers’ net income and capital from 2006 through 2011. In 2009, the life insurance industry’s aggregate net income was about $1 billion less given the effects from prescribed and permitted practices, while P/C insurers’ was about $5 billion more. In terms of capital, both life and P/C insurers experienced a substantial positive impact from prescribed and permitted practices in 2008 compared to 2007; these positive effects remained through 2011. One permitted practice in particular that was sought during the crisis could generally help insurers increase the amount of admitted assets that could be included in their statement of financial position by increasing the percentage of deferred tax assets (DTA) that could be classified as an admitted asset. Admitted assets are those that are available for paying policyholder obligations and are counted as capital for regulatory purposes. Statutory accounting provisions do not allow insurers to include assets in their statutory statements of financial position unless they are admitted assets. Specifically, insurers requested that the limits for determining the percentage of DTAs that could be classified as admitted assets be raised. NAIC officials said that more than half of the 119 permitted practices that states granted to insurers in 2008 were related to increasing the limits, which in turn increased the amount of DTA that insurers could classify as admitted assets. This change enabled some insurers to improve their reported capital positions by increasing the amount of assets that were admitted to their statutory financial statements. Industry stakeholders had mixed views on the effects of state regulators granting permitted practices on a case-by-case basis.
A state regulator and an industry representative said insurance companies complained that granting case-by-case permission created an uneven playing field because some insurers were allowed to use the increased limits while others were not. However, one rating agency official said the effects were insignificant because DTAs represent a very small percentage of admitted assets. Another rating agency official added that while the case-by-case permissions might result in differences across different insurers’ statutory financial statements, the financial effects of the changes were disclosed in the financial statements. Therefore, they could be easily adjusted using the disclosures to facilitate comparison of financial statements across different insurers. In 2009, NAIC issued Income Taxes – Revised, A Temporary Replacement of SSAP No. 10 (SSAP 10R), which generally adopted the increased limits that some states had granted to individual insurers and made them available to all life and P/C insurers that met certain eligibility requirements. SSAP 10R, which superseded SSAP 10, had a sunset provision to expire at the end of 2010 and took effect for statutory financial statements filed for year-end 2009. A new feature of SSAP 10R was its eligibility requirements, which were based on certain RBC thresholds that would trigger regulatory action if they were reached. To be eligible to apply SSAP 10R, insurers were to exceed 250 to 300 percent of these thresholds. As a result, only companies at or above certain minimum capital standards were eligible to include expanded portions of DTAs in their admitted assets. NAIC officials said that insurance companies that had violated the threshold for regulatory action were typically troubled and would not be eligible to include higher portions of their DTAs as admitted assets. However, they added that state insurance regulators have the authority to determine if the financial conditions of a troubled company affect the recoverability of any admitted assets, including admitted DTAs, and may disallow the company from classifying certain ones as admitted assets. On January 1, 2012, the Statement of Statutory Accounting Principles No. 101, Income Taxes, a Replacement of SSAP No. 10R and SSAP No. 10 (SSAP 101) went into effect. It permanently superseded the original principle and generally codified the increased limits of SSAP 10R. However, SSAP 101 has tiered eligibility requirements, which provide a more gradual reduction in the portion of an insurer’s DTA that can be included as an insurer’s admitted assets. NAIC officials said that this more gradual reduction can help prevent a sudden drop in capital at a time when an insurer is already experiencing a decline in capital. That is, rather than suddenly losing the ability to count any DTAs as admitted assets, the tiered eligibility requirements can spread these reductions over time. Based on an actuarial study, among other things, NAIC increased the limits of SSAP 10, which could provide insurers with capital relief. According to this study, one of the major contributing factors to DTAs was the large amounts of write-downs on impaired investments during the crisis. As previously discussed, in 2008, life insurers had $64 billion in unrealized losses, as well as other-than-temporary impairments of $60 billion in realized losses on investments.
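The capital effect of raising the DTA admittance limits can be illustrated with a simple calculation. The sketch below assumes, for illustration only, a base limit of DTAs reversing within one year capped at 10 percent of capital and surplus and an expanded limit of three years capped at 15 percent; the actual SSAP 10 and SSAP 10R tests, components, and eligibility requirements are more involved than this.

```python
# A minimal sketch of how an admitted-DTA cap can affect reported capital.
# The one-year/10-percent and three-year/15-percent parameters are assumptions
# used for illustration, not the full statutory tests.

def admitted_dta(dta_realizable_by_year, capital_and_surplus,
                 years_allowed, pct_cap):
    """Admit the DTA expected to reverse within `years_allowed`,
    capped at `pct_cap` of capital and surplus."""
    realizable = sum(dta_realizable_by_year[:years_allowed])
    return min(realizable, pct_cap * capital_and_surplus)

# Hypothetical insurer: $120M of DTAs reversing over five years, $800M capital.
dta_by_year = [40, 30, 20, 20, 10]          # millions, years 1 through 5
capital = 800

base  = admitted_dta(dta_by_year, capital, years_allowed=1, pct_cap=0.10)  # 40
wider = admitted_dta(dta_by_year, capital, years_allowed=3, pct_cap=0.15)  # 90
print(f"Additional admitted assets under the wider limit: {wider - base}M")
```

Under these assumptions, the hypothetical insurer reports $50 million more in admitted assets, and therefore capital, under the expanded limit even though its underlying tax position is unchanged, which is the concern the consumer advocacy group official raises below.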
To the extent that an insurer’s DTA increased due to impairments that were taken on its investments, expanding the limits on the admittance of DTA would help to increase their capital. From 2006 to 2011, admitted DTA generally rose from over 4 percent to about 9 percent of capital for life insurers while fluctuating from about 3 percent to over 4 percent for P/C insurers (see figs. 13 and 14). The limits of SSAP 10 were intended to be conservative, explained an NAIC official, admitting far fewer years of DTAs than insurers had accumulated over the years. Industry groups we spoke to had mixed views about expanding the limits of SSAP 10. A consumer advocacy group official stated that while expanding the limits could help insurers show greater amounts of admitted assets and capital in their statutory financial statements, in reality, no actual additional funds were made available to protect policyholders because the additional capital came from DTAs, a non- liquid asset. However, one rating agency official said the increased limits have not significantly affected insurer capital because DTAs are generally a relatively small line item on insurers’ financial statements. The rating agency also said the effects of the expanded limits were insignificant and did not affect the agency’s ratings, nor were they enough to make insolvent companies appear solvent. Officials from one rating agency also explained that insurers pursued the expanded DTA limits even though they were relatively small because, during the crisis, companies were not certain how long the financial crisis would last and therefore sought various avenues to help reduce stress on their capital. According to an actuarial association’s report, the limits in SSAP 10R were low and therefore conservative. Federal Programs Have Increased Access to Capital During the crisis, several federal programs were available to insurance companies to ease strain on capital and liquidity. Several insurers— among the largest life companies—benefited from these federal programs. Troubled Asset Relief Program, the Capital Purchase Program. The U.S. Department of the Treasury’s Troubled Asset Relief Program, the Capital Purchase Program, was created in October 2008 to strengthen financial institutions’ capital levels. Qualified financial institutions were eligible to receive an investment of between 1 and 3 percent of their risk-weighted assets, up to a maximum of $25 billion. Eligibility was based on the applicant’s overall financial strength and long-term viability. Institutions that applied were evaluated on factors including their bank examinations ratings and intended use of capital injections. The program was closed to new investments in December 2009. The Hartford Financial Services Group, Inc. and Lincoln National Corporation, holding companies that own large insurers as well as financial institutions that qualified for assistance from the Capital Purchase Program, received $3.4 billion and $950 million, respectively. A few other large insurance companies with qualifying financial institutions also applied for this assistance and were approved but then declined to participate. Both Hartford and Lincoln bought a bank or thrift in order to qualify for the federal assistance. Commercial Paper Funding Facility. The Federal Reserve’s Commercial Paper Funding Facility became operational in October 2008. The facility was intended to provide liquidity to the commercial paper market during the financial crisis. 
The facility purchased 3 month unsecured and asset-backed commercial paper from U.S. issuers of commercial paper that were organized under the laws of the United States or a political subdivision or territory, as well as those with a foreign parent. The facility expired on February 1, 2010. Ten holding companies of insurance companies participated in the facility. In 2008 and 2009, the 10 holding companies issued approximately $68.8 billion in commercial paper through the facility. AIG issued about 84 percent of this total. Of the 9 other insurance companies that participated in the facility, several became ineligible for further participation by mid-2009 because of downgrades to their credit ratings. Term Auction Facility. The Federal Reserve established the Term Auction Facility in December 2007 to meet the demands for term funding. Depository institutions in good standing that were eligible to borrow from the Federal Reserve’s primary credit program were eligible to participate in the Term Auction Facility. The final auction was held in March 2010. By virtue of its role as a bank holding company, MetLife, Inc., the life industry’s largest company in terms of premiums written, accessed $18.9 billion in short-term funding through the Term Auction Facility. Term Asset-Backed Securities Loan Facility. The Federal Reserve created the Term Asset-Backed Securities Loan Facility to support the issuance of asset-backed securities collateralized by assets such as credit card loans and insurance premium finance loans. The facility was closed for all new loan extensions by June 2010. Prudential Financial, Inc., Lincoln National Corporation, the Teachers Insurance and Annuity Association of America (a subsidiary of TIAA-CREF), MBIA Insurance Corp. (a financial guaranty insurer subsidiary of MBIA, Inc.), and two other insurance companies borrowed over $3.6 billion in 2009 through the Term Asset-Backed Securities Loan Facility. These loans were intended to spur the issuance of asset- backed securities to enhance the consumer and commercial credit markets. Federal Reserve Bank of New York’s Revolving Credit Facility and Treasury’s Equity Facility for AIG. The Federal Reserve Bank of New York and Treasury made over $182 billion available to assist AIG between September 2008 and April 2009. The Revolving Credit Facility provided AIG with a revolving loan that AIG and its subsidiaries could use to enhance their liquidity. Some federal assistance was designated for specific purposes, such as a special purpose vehicle to provide liquidity for purchasing assets such as CDOs. Other assistance, such as that available through the Treasury’s Equity Facility, was available to meet the general financial needs of the parent company and its subsidiaries. Approximately $22.5 billion of the assistance was authorized to purchase RMBS from AIG’s life insurance companies. A source of loans that eligible insurers have had access to, even prior to the financial crisis, is the Federal Home Loan Bank System. It can make loans, or advances, to its members, which include certain insurance companies that engaged in housing finance and community development financial institutions. The advances are secured with eligible collateral including government securities and securities backed by real estate- related assets. 
According to a representative of a large life insurance company we interviewed, the borrowing capacity from the Federal Home Loan Bank System was especially helpful because it provided access to capital during the crisis when other avenues to the capital markets were relatively unavailable. In other words, they were able to use their investment assets as collateral to access capital for business growth. The number of insurance company members, as well as the advances they took, increased during the crisis. In 2008, advances to insurance companies peaked at a total of $54.9 billion for 74 companies, from $28.7 billion for 52 companies in 2007. A variety of insurance business practices may have helped limit the effects of the crisis on most insurers’ investments, underwriting performance, and premium revenues. First, insurance industry participants and two regulators we interviewed credited the industry’s investment approach, as well as regulatory restrictions, for protecting most companies from severe losses during the crisis. Typically, insurance companies make investments that match the duration of their liabilities. For example, life insurers’ liabilities are typically long term, so they tend to invest heavily in conservative, long-term securities (30 years). According to a life industry representative, this matching practice helped ensure that life insurers had the funds they needed to pay claims without having to sell a large amount of assets that may have decreased in value during the crisis. A P/C industry representative said P/C insurers, whose liabilities are generally only 6 months to a year, invest in shorter-term, highly liquid instruments and did not experience significant problems paying claims. In addition, P/C insurers’ higher proportion of assets invested in equities (between about 17 to 20 percent from 2002-2011, as opposed to between about 2 to 5 percent for life insurers in the same period) helps explain their greater decline in net investment income during the crisis. Both industries also derived their largest source of investment income from bonds and these increased as a percentage of insurers’ gross investment income during the crisis. Also, state regulations placed restrictions on the types of investments insurers can make. For example, one of NAIC’s model investment acts, which serves as a guide for state regulations, specifies the classes of investments that insurers are authorized to make and sets limits on amounts of various grades of investments that insurers can count towards their admitted assets. Second, industry participants we interviewed noted that the crisis generally did not trigger the types of events that life and P/C companies insure—namely, death and property damage or loss. As a result, most insurers did not experience an increase in claims that might have decreased their capital and increased their liquidity requirements. The exception, as described earlier, was mortgage guaranty and financial guaranty insurers, where defaults in the residential housing market triggered mortgage defaults that, in turn, created claims for those insurers. Finally, low rates of return on investments during the crisis reduced insurers’ investment income, and according to two insurers and two of the state regulators we interviewed, these low yields, combined with uncertainty in the equities markets, moved investors toward fixed annuities with guaranteed rates of return. 
In addition, industry participants and a state regulator we interviewed said that the guarantees on many annuity products provided higher returns than were available in the banking and securities markets, causing existing policyholders to hold onto their guaranteed annuity products—fixed and variable—longer than they might otherwise have done. In 2008 and 2009, the total amount paid by insurers to those surrendering both individual and group annuities declined. One industry representative we interviewed stated that, for similar reasons, policyholders also tended to hold onto life insurance policies that had cash value. Regulators Have Continued Efforts to Strengthen the Regulatory System since the Crisis State regulators’ and NAIC’s efforts to strengthen the regulatory system include an increased focus on insurer risks and group holding company oversight. Industry groups we spoke to identified NAIC’s Solvency Modernization Initiative (SMI) and highlighted the Own Risk and Solvency Assessment (ORSA) and the amended Insurance Holding Company System Regulatory Act as some key efforts within SMI. Although these efforts are underway, full implementation will likely take several years. Solvency Modernization Initiative Places Increased Emphasis on Insurer Risks and Group Holding Company Oversight Since the financial crisis, regulators have continued efforts to strengthen the insurance regulatory system through NAIC’s SMI. NAIC officials told us that the financial crisis had demonstrated the need to comprehensively review the U.S. regulatory system and best practices globally. According to NAIC, SMI is a self-examination of the framework for regulating U.S. insurers’ solvency and includes a review of international developments in insurance supervision, banking supervision, and international accounting standards and their potential use in U.S. insurance regulation. SMI focuses on five areas: capital requirements, governance and risk management, group supervision, statutory accounting and financial reporting, and reinsurance. The officials highlighted some key SMI efforts, such as ORSA and NAIC’s amended Insurance Holding Company System Regulatory Act, which focus on insurer risks and capital sufficiency and group holding company oversight, respectively. Industry officials pointed to NAIC’s SMI as a broad effort to improve the solvency regulation framework for U.S. insurers. NAIC, state regulators, and industry groups identified ORSA as one of the most important modernization efforts because it would help minimize industry risks in the future. ORSA is an internal assessment of the risks associated with an insurer’s current business plan and the sufficiency of capital resources to support those risks under normal and severe stress scenarios. According to NAIC, large- and medium-sized U.S. insurance groups and/or insurers will be required to regularly conduct an ORSA starting in 2015. ORSA will require insurers to analyze all reasonably foreseeable and relevant material risks (i.e., underwriting, credit, market, operational, and liquidity risks) that could have an impact on an insurer’s ability to meet its policyholder obligations.
ORSA has two primary goals: to foster an effective level of enterprise risk management at all insurers, with each insurer identifying and quantifying its material and relevant risks, using techniques that are appropriate to the nature, scale, and complexity of these risks and that support risk and capital decisions; and to provide a group-level perspective on risk and capital. In March 2012, NAIC adopted the Own Risk and Solvency Assessment Guidance Manual, which provides reporting guidance to insurers, and in September 2012 adopted the Risk Management and Own Risk and Solvency Assessment Model Act. Domestic insurers participated in an ORSA pilot in which they reported information on their planned business activities. NAIC officials told us that as part of the pilot, state regulators reviewed the information that insurers reported, made suggestions to improve the reporting, and helped develop next steps. According to the officials, the pilot allowed states to envision how they would use ORSA to monitor insurers. NAIC officials stated that they also received public comments on the ORSA guidance manual and were in the process of updating it to ensure greater consistency between the guidance manual and the ORSA model law. NAIC officials told us that they planned to conduct an additional pilot in the fall of 2013. The officials added that state regulators still needed to develop their regulatory guidance for reviewing ORSA. Another issue that insurance industry participants identified as significant was oversight of noninsurance holding companies with insurance subsidiaries. For instance, industry groups we spoke with identified the need for greater transparency and disclosure of these entities’ activities. One industry association stressed the importance of having all regulatory bodies look across the holding company structure rather than at specific holding company subsidiaries, such as insurance companies. According to NAIC, regulators reviewed lessons learned from the financial crisis—specifically issues involving AIG—and the potential impact of noninsurance operations on insurance companies in the same group. In December 2010, NAIC amended the Insurance Holding Company System Regulatory Act to address the way regulators determined risk at holding companies. As part of this process, between May 2009 and June 2010, NAIC held 16 public conference calls, five public meetings, and one public hearing on the Insurance Holding Company System Regulatory Act. Additionally, NAIC officials told us they also share regulatory and supervisory information with federal regulators such as the Federal Reserve, including information on the amended model act revisions, at the Annual Regulatory Data Workshop. According to NAIC, the U.S. statutory holding company laws apply directly to individual insurers and indirectly to noninsurance holding companies. The revised model act includes changes to (1) communication among regulators; (2) access to and collection of information; (3) supervisory colleges; (4) enforcement measures; (5) group capital assessment; and (6) accreditation.
Some specific changes include: expanded ability for regulators to look at any entity in an insurance holding company system that may not directly affect the holding company system but could pose reputational or financial risk to the insurer through a new Form F-Enterprise Risk Report; enhancements to regulators’ rights to access information (including books and records), especially regarding the examinations of affiliates, to better ascertain the insurer’s financial condition; and introduction of and funding for supervisory colleges to enhance the regulators’ ability to participate in the colleges and provide guidance on how to conduct, effectively contribute to, and learn from them. One state regulator stated that the revised Insurance Holding Company System Regulatory Act was expected to make group-level holding company data more transparent to state insurance regulators. Regulators also told us that the amended model act gave them greater authority to participate in supervisory colleges. U.S. state insurance regulators both participate in and convene supervisory colleges. State insurance commissioners may participate in a supervisory college with other regulators charged with supervision of such insurers or their affiliates, including other state, federal, and international regulatory agencies. For instance, the same state regulator stated that the authority allowed for international travel, with the insurers paying the costs. The act also increases the regulators’ ability to maintain the confidentiality of records that they receive or share with other regulators. According to NAIC officials, as of April 2013, 16 states had adopted the model law revisions. Additionally, some state regulators we spoke to indicated that they were working to introduce the revised Insurance Holding Company System Regulatory Act in their state legislatures. For instance, officials from one state regulator said that the new model act had been introduced in the state legislature in January 2013 and that adopting it would mean rewriting the state’s existing holding company law. As a result, they had decided to ask for the repeal of the existing law and the adoption of the new statute for consistency. Although the Solvency Modernization Initiative is underway, time is needed to allow states to adopt requirements. For instance, NAIC officials said that although they had completed almost all of what they saw as the key SMI initiatives, implementing all SMI activities could take 2 or 3 years. According to the officials, some decisions will be made in 2013, such as how to implement governance activities and changes related to RBC. For instance, the officials stated that they were looking to implement P/C catastrophe risk data analysis later this year and would then consider how to integrate their findings into RBC requirements. As mentioned earlier, ORSA is not expected to be operational until 2015. Also, most states have yet to adopt revisions to the Insurance Holding Company System Regulatory Act. NAIC officials told us that getting changes adopted at the state level was challenging because of the amount of time needed to get all 50 states on board. For instance, the adoption of model laws requires state legislative change and is dependent on the frequency of state legislative meetings. The officials explained that some state legislatures meet only every 2 years, limiting the possibility of immediate legislative change.
As we have previously reported, NAIC operations generally require a consensus among a large number of regulators, and NAIC seeks to obtain and consider the input of industry participants and consumer advocates. Obtaining a wide range of views may create a more thoughtful, balanced regulatory approach, but working through the goals and priorities of a number of entities can result in lengthy processes and long implementation periods for regulatory improvements. As noted in our earlier work, continued progress in a timely manner is critical to improving the efficiency and effectiveness of the insurance regulatory system. Views Varied on Increased Oversight Efforts Industry officials we spoke with had favorable views of NAIC’s and state regulators’ efforts to strengthen the regulatory system. For example, one insurance association stated that NAIC and states had been reevaluating all regulatory tools beyond those that were related to the financial crisis. Another insurance association noted that ORSA would be a good tool to use to identify potentially at-risk companies before they developed problems. A third insurance association stated that coordination between domestic and international regulators was more robust now and that actions taken were better coordinated. The officials also pointed to the work addressing supervisory colleges that involve regulatory actions by other countries that might impact domestic insurers. However, some insurance associations we spoke to voiced concerns about the increased oversight of holding companies, and some insurance associations and insurers also questioned the need for additional regulatory changes. Two insurance associations and a federal entity we spoke to were concerned with potential information gaps related to the increased oversight of holding companies. For instance, one insurance association told us that state insurance regulators do not have jurisdiction over noninsurance affiliates’ activities and, as a result, do not have access to information on these affiliates in order to evaluate if their activities could jeopardize the affiliated insurers. Another insurance association stated that there was a need to address holding company regulation, especially potential gaps between the federal and state regulators in their oversight roles. Some insurers also questioned the need for additional regulations, and a few suggested that the regulators need to allow time for implementing recent financial reforms under the Dodd-Frank Act. One P/C insurer stated that imposing additional requirements on the entire insurance industry is not necessary, especially within the P/C industry. The official explained that there needs to be greater flexibility in reporting and that the P/C industry fared well during the crisis, as evidenced by the lack of widespread insolvencies. The official suggested that NAIC needs to re-evaluate whether the additional requirements are useful. Another financial guaranty insurer told us that no additional changes are needed in the regulatory structure or regulations for the financial guaranty industry. The officials stated that they are now dealing with federal regulators and regulatory changes related to the Dodd-Frank Act. Additionally, one insurance association stated that whether more regulatory coordination activities regarding holding companies are needed is not yet known because federal regulators have not finished implementing the recent Dodd-Frank reforms dealing with holding company oversight.
Some Dodd-Frank Provisions That Address Financial Stability Include Insurance Oversight While many factors likely contributed to the crisis, and the relative role of these factors is subject to debate, gaps and weaknesses in the supervision and regulation of the U.S. financial system, including the insurance industry, generally played an important role. The Dodd-Frank Act provided for a broad range of regulatory reforms intended to address financial stability and created new regulatory entities that have insurance oversight responsibilities or include an insurance expert’s view, among other things. In our previous work, we noted that the act created the Federal Insurance Office and the Financial Stability Oversight Council. The act also seeks to address systemically important financial institutions (SIFIs) and end bailouts of large, complex financial institutions. The Dodd-Frank Act has not yet been fully implemented; thus, its impacts have not fully materialized. Federal Insurance Office. As mentioned earlier, the Dodd-Frank Act created the Federal Insurance Office within Treasury to, in part, monitor issues related to regulation of the insurance industry. The Federal Insurance Office’s responsibilities include, among other things, identifying issues or gaps in the regulation of insurers that could contribute to a systemic crisis in the insurance industry or the U.S. financial system. The Federal Insurance Office was tasked with conducting a study on how to modernize and improve the system of insurance regulation in the United States and to submit a report to Congress no later than January 21, 2012. The report is to consider, among other things, systemic risk regulation with respect to insurance, consumer protection for insurance products and practices, including gaps in state regulation, and the regulation of insurance companies and affiliates on a consolidated basis. Additionally, the Federal Insurance Office is to examine factors such as the costs and benefits of potential federal regulation of insurance across various lines of insurance. As of May 2013, the Federal Insurance Office had not yet issued its report to Congress. Financial Stability Oversight Council. The council was created to identify risks to the stability of the U.S. financial system, including those that might be created by insurance companies. The council includes some representation with insurance expertise. Some authorities given to the Financial Stability Oversight Council allow it to collect information from certain state and federal agencies regulating across the financial system so that regulators will be better prepared to address emerging threats; recommend strict standards for the large, interconnected bank holding companies and nonbank financial companies designated for enhanced supervision; and facilitate information sharing and coordination among the member agencies to eliminate gaps in the regulatory structure. Additionally, the act provides that the Financial Stability Oversight Council have 10 voting and 5 nonvoting members. The 10 voting members provide a federal regulatory perspective, including an independent insurance expert’s view. The 5 nonvoting members offer different insights as state-level representatives from bank, securities, and insurance regulators or as the directors of some new offices within Treasury—Office of Financial Research and the Federal Insurance Office—that were established by the act.
The Dodd-Frank Act requires that the council meet at least once a quarter. One industry association we spoke to stated that Financial Stability Oversight Council members provided benefits—for instance, they were able to discuss activities that could be concerns in future crises and make recommendations to the primary regulators. Bureau of Consumer Financial Protection (known as CFPB). The Dodd-Frank Act established CFPB as an independent bureau within the Federal Reserve System and provided it with rulemaking, enforcement, supervisory, and other powers over many consumer financial products and services and many of the entities that sell them. CFPB does not have authority over most insurance activities or most activities conducted by firms regulated by SEC or CFTC. However, certain consumer financial protection functions from seven existing federal agencies were transferred to CFPB. Consumer financial products and services over which CFPB has primary authority include deposit taking, mortgages, credit cards and other extensions of credit, loan servicing, and debt collection. CFPB is authorized to supervise certain nonbank financial companies and large banks and credit unions with over $10 billion in assets and their affiliates for consumer protection purposes. The financial crisis also revealed weaknesses in the existing regulatory framework for overseeing large, interconnected, and highly leveraged financial institutions and their potential impacts on the financial system and the broader economy in the event of failure. The Dodd-Frank Act requires the Board of Governors of the Federal Reserve System (Reserve Board) to supervise and develop enhanced capital and other prudential standards for these large, interconnected financial institutions, which include bank holding companies with $50 billion or more in consolidated assets and any nonbank financial company that the Financial Stability Oversight Council designates. The act requires the enhanced prudential standards to be more stringent than standards applicable to other bank holding companies and financial firms that do not present similar risks to U.S. financial stability. In April 2013, the Federal Reserve issued a final rule that establishes the requirements for determining when an entity is “predominantly engaged in financial activities.” Among the criteria is whether an institution is primarily engaged in financial activities, which can include insurance underwriting. As of May 2013, the Financial Stability Oversight Council had yet to publicly make any such designations. Agency Comments We provided a draft of this report to NAIC for its review and comment. NAIC provided technical comments, which we have incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Chief Executive Officer of the National Association of Insurance Commissioners. In addition, the report will be made available at no charge on the GAO website at http://www.gao.gov. If you or your staffs have any questions regarding this report, please contact me at (202) 512-8678 or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff that made major contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology This report examines (1) what is known about how the insurance industry and policyholders were affected by the financial crisis, (2) the factors that affected the impact of the crisis on insurers and policyholders, and (3) the types of regulatory actions that have been taken since the crisis to help prevent or mitigate potential negative effects of future economic downturns on insurance companies and their policyholders. To address these objectives, we reviewed relevant laws and regulations on solvency oversight such as the Dodd-Frank Wall Street Reform and Consumer Protection Act and financial institution holding company supervision such as the model Insurance Holding Company System Regulatory Act. We conducted a literature search using ProQuest, EconLit, and PolicyFile and reviewed relevant literature and past reports on the financial crisis and the insurance industry, the general condition of the U.S. economy in 2008, and the events surrounding the federal rescue of American International Group, Inc. (AIG). We interviewed officials from selected state insurance departments, the Federal Insurance Office, the National Association of Insurance Commissioners (NAIC), the National Conference of Insurance Legislators, insurance associations, insurance companies, credit rating agencies, and consumer advocacy groups. We interviewed or received written responses to our questions from insurance regulators in seven states—California, Illinois, Iowa, Maryland, New York, Texas, and Virginia. We used an illustrative sampling strategy to select states based on the states’ geographic diversity, number of domiciled insurers, and premium volumes, which ranged from small (Iowa) to large (California). We interviewed regulators from six of the states and received written responses to our questions from one of the states. We also met with six industry associations representing insurance companies covering life and property/casualty (P/C), including financial guaranty and mortgage insurance; two associations representing agents and brokers; and two national insurance guaranty fund associations. Additionally, we met with six insurers covering life and P/C insurance lines, including mortgage insurance and financial guaranty insurance. The insurers represent different states of domicile and varying market shares in net premiums written. Finally, we met with two credit rating agencies and two consumer advocacy groups to obtain their perspective on how the financial crisis impacted the insurance industry and policyholders. We also reviewed congressional testimony and other documents from industry participants, several of whom we interviewed for this study. Determining the Effect of Crisis on Insurance Companies and Policyholders To address how the financial crisis affected the insurance industry and policyholders, we reviewed academic papers and government reports, and interviewed industry representatives, regulatory officials, and internal stakeholders to identify the key characteristics associated with the financial crisis. 
This resulted in a list of five commonly identified major characteristics of the crisis, which are declines in real estate values, declines in equity values, lower interest rates, increased mortgage default rates, and changes in policyholder behavior. We reviewed industry documents—including NAIC’s annual analyses of the life and P/C industries—to identify commonly used financial measures for insurance companies. These measures help demonstrate insurers’ financial health in a number of areas, including investment performance, underwriting performance, capital adequacy, and profitability. We selected specific lines of insurance within the life and P/C industries for our analyses on net premiums written. In the life industry, we focused on individual annuities, individual life insurance, group annuities, and group life insurance. These lines accounted for 77 percent of average life insurance premiums during our review period of 2002 through 2011, and the policyholders were individual consumers (either independently or through their workplaces). In the P/C industry, we focused on private passenger auto liability, auto physical damage, homeowners multiple peril, commercial multiple peril, other liability (occurrence), other liability (claims-made), financial guaranty, and mortgage guaranty insurance. These lines of insurance accounted for 68 percent of average P/C insurance premiums over our 10-year review period and involved individual and commercial policyholders. We chose to review financial and mortgage guaranty insurance despite their small percentage of premiums (less than 2 percent of average P/C premiums from 2002 through 2011) because we had learned through research and preliminary interviews that they were more heavily affected by the crisis. We obtained input on the data analysis plan from NAIC and a large rating agency and incorporated their suggestions where appropriate. We obtained the financial data from insurers’ annual statutory financial statements, which insurance companies must submit to NAIC after the close of each calendar year. We gathered the data for all life and P/C insurers for calendar years 2002 through 2011 using SNL Financial, a private financial database that contains publicly filed regulatory and financial reports. We chose the 10-year time period in order to obtain context for our findings around the period of 2007 through 2009, which is generally regarded as the duration of the financial crisis. We analyzed data for both operating and acquired or nonoperating companies to help ensure that we captured information on all companies that were operating at some point during the 10-year period. The population of operating and acquired or nonoperating life insurance companies from 2002 through 2011 was 937, while the population of operating and acquired or nonoperating P/C companies from 2002 through 2011 was 1,917. We conducted most of our analyses at the SNL group and unaffiliated companies level, meaning that data for companies that are associated with a larger holding company were aggregated, adjusted to prevent double counting, and presented at the group level. We also ran a few selected analyses (such as our analysis of permitted and prescribed practices) at the individual company level to obtain detail about specific operating companies within a holding company structure.
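To make the group-level roll-up concrete, the following is a minimal sketch, not GAO's actual procedure, of how company-level premium data might be aggregated to the group level while adjusting to prevent double counting. The column names, figures, and intercompany adjustment shown here are illustrative assumptions; SNL Financial applies its own group definitions and adjustments.

```python
# Illustrative sketch only: rolling company-level statutory data up to a
# group level, in the spirit of the SNL group-level analysis described above.
# All column names and figures are hypothetical.
import pandas as pd

companies = pd.DataFrame({
    "group_id":              ["G1", "G1", "G2", None],   # None = unaffiliated
    "company_id":            ["C01", "C02", "C03", "C04"],
    "net_premiums":          [120.0, 80.0, 60.0, 15.0],  # $ millions
    "intercompany_premiums": [10.0, 0.0, 0.0, 0.0],      # double-counted amounts
})

# Treat unaffiliated companies as their own "group."
companies["group_key"] = companies["group_id"].fillna(companies["company_id"])

# Sum company-level premiums within each group, then back out intercompany
# amounts so business written between affiliates is not counted twice.
grouped = companies.groupby("group_key").agg(
    gross=("net_premiums", "sum"),
    intercompany=("intercompany_premiums", "sum"),
)
grouped["group_net_premiums"] = grouped["gross"] - grouped["intercompany"]
print(grouped[["group_net_premiums"]])
```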
To analyze the number and financial condition of insurers that went into receivership during the 10-year review period, we obtained data that NAIC staff compiled from NAIC’s Global Receivership Information Database. The data included conservation, rehabilitation, and liquidation dates, as well as assets, liabilities, and net equity (the difference between assets and liabilities), generally from the time of the receivership action, among other data items. Our analysis of numbers of receiverships and liquidations included 58 life insurers and 152 P/C insurers. The NAIC staff who compiled the data told us that data on assets, liabilities, and net equity were not always available in either of their data systems. To address this problem of missing data, NAIC staff pulled data, when available, from the last financial statement before the company was placed into receivership or from the first available financial statement immediately afterward and used those figures to replace the missing data. This was the case for 5 of 58 life insurance companies and 27 of 152 P/C companies. We believe these asset, liability, and net equity levels would have changed little in the time between liquidation and when the financial statements were prepared, and we determined that the time difference was likely to have little effect on our estimate of the general size and net equity levels of insurers at liquidation. However, the average assets and average net equity we report might be slightly higher or lower than was actually the case for each year. In addition, out of the 40 life insurers and 125 P/C insurers that went into liquidation from 2002 through 2011, NAIC staff could not provide asset data for 7 life insurers and 19 P/C insurers, and they could not provide net equity data for 8 life insurers and 29 P/C insurers. We excluded these companies from our analyses and indicated in tables 10 and 11 (app. II) when data were not available. Our analysis of assets at liquidation included 33 life insurers and 106 P/C insurers, and our analysis of net equity at liquidation included 32 life insurers and 96 P/C insurers. (An illustrative sketch of this backfill-and-averaging approach appears below.) To describe how publicly traded life and P/C insurers’ stock prices changed during the crisis, we obtained daily closing price data for A.M. Best’s U.S. Life Insurance Index (AMBUL) and U.S. Property Casualty Insurance Index (AMBUPC). The indexes include all U.S. insurance industry companies that are publicly traded on major global stock exchanges and that also have an interactive A.M. Best rating, or that have an insurance subsidiary with an interactive A.M. Best rating. The AMBUL index reflects 21 life insurance companies and the AMBUPC index reflects 56 P/C companies. We compared the mean monthly closing price for each index to the closing price for the last day of the month and determined that they were generally similar, so we reported the latter measure. Because 48 of the 77 life and P/C companies in the A.M. Best indexes trade on the New York Stock Exchange (NYSE), we also analyzed closing stock prices from the NYSE Composite Index (NYA), obtained from Yahoo! Finance, to provide context on the overall equities market. NYA reflects all common stocks listed on NYSE (1,867 companies). For all indexes, we analyzed the time period December 2004 through December 2011 because A.M. Best did not have data prior to December 2004. To select the two distressed insurers that we profiled in appendix III, we focused on life and P/C companies that were placed in receivership during the crisis.
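As referenced above, the following is a minimal, hypothetical sketch of the backfill-and-averaging approach applied to the receivership data: missing asset and net equity values are replaced with figures from the nearest available statutory statement when one exists, and companies still lacking data are simply excluded from the yearly averages. The field names and figures are illustrative assumptions, not NAIC's or GAO's actual data or code.

```python
# Illustrative sketch only: backfilling missing liquidation financials from
# the nearest available statutory statement and computing average assets and
# net equity of liquidated insurers by year. All values are hypothetical.
import pandas as pd

liquidations = pd.DataFrame({
    "year":       [2008, 2008, 2009],
    "assets":     [510.0, None, 35.0],   # $ millions; None = missing
    "net_equity": [-32.0, None, 4.0],
})

# Figures from the nearest statement before or after receivership, where one
# exists (hypothetical); rows with no statement stay missing.
nearest_statement = pd.DataFrame({
    "assets":     [None, 72.0, None],
    "net_equity": [None, -5.0, None],
}, index=liquidations.index)

# Fill gaps from the nearest statement; any remaining gaps are excluded from
# the averages, mirroring the exclusions noted in the text.
filled = liquidations.fillna(nearest_statement)
averages_by_year = filled.groupby("year")[["assets", "net_equity"]].mean()
print(averages_by_year)
```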
Based on interviews with regulators and industry officials, we learned that the effects of the financial crisis were limited largely to certain annuity products (provided by life insurers) and the financial and mortgage guaranty lines of insurance. Therefore, through our interviews with industry associations and state regulators, we selected one life insurer and one mortgage guaranty insurer that were directly affected by the crisis to illustrate the effects of the crisis at the company level. We obtained financial data through SNL Financial and publicly available court documents to examine these insurers’ cases. We determined that the financial information used in this report—including statutory financial data from SNL Financial, stock price data from A.M. Best, receivership and permitted practices data from NAIC, and annuity sales and guaranteed living benefit (GLB) data from LIMRA—was sufficiently reliable to assess the effects of the crisis on the insurance industry. To assess reliability, we compared data reported in individual companies’ annual financial statements for a given year to that reported in SNL Financial. We also aggregated the individual company data for net premiums for two SNL groups (one life and one P/C group) to verify that our results matched SNL’s, because intercompany transactions would be rare in this field. In addition, we compared our results for other measures—such as net income, capital, net investment income, and surrender benefits and withdrawals—to NAIC’s annual industry commentaries and found that they were generally similar. We also obtained information from A.M. Best, NAIC, and LIMRA staff about their internal controls and procedures for collecting their respective stock price, receivership, and annuities data. Factors That Affected the Impact of the Crisis on Insurers and Policyholders To address the factors that helped mitigate the effect of the crisis, we reviewed NAIC’s model investment act, industry reports, and credit rating agency reports to identify such factors. We also interviewed state insurance regulators, insurance company associations, insurance companies, and credit rating agencies to obtain their insights on the mitigating effects of industry investment and underwriting practices, regulatory restrictions, and the crisis’s effects on policyholder behavior. We also reviewed our prior work and other sources to identify federal programs that were available to insurance companies to increase access to capital, including the Troubled Asset Relief Program, the Board of Governors of the Federal Reserve System’s and Federal Reserve Banks’ (Federal Reserve) liquidity programs, and the Federal Home Loan Bank System, as well as assistance provided to some of the largest life insurers, such as AIG, during the crisis. Regulatory Actions That Have Been Taken to Protect Insurers and Policyholders To assess the state insurance regulatory response system in protecting insurers and policyholders and the types of insurance regulatory actions taken during and following the crisis, we reviewed and analyzed relevant state guidance. This included NAIC documents such as Capital Markets Bureau reports, statutory accounting rules such as the Statements of Statutory Accounting Principles, and information on securities lending and permitted practices. We also reviewed the Solvency Modernization Initiative, including associated guidance manuals and model laws such as the Insurance Holding Company System Regulatory Act.
In addition, we analyzed SNL Financial data and reviewed reports on deferred tax assets, including actuary association reports, a consumer group’s public comments, and information from state insurance regulator and industry consultant websites. We interviewed officials from state regulators, NAIC, the Federal Insurance Office, industry associations, insurers, and others to obtain their perspectives on state regulatory actions taken in response to the crisis, the impacts on insurers and policyholders, and efforts to help mitigate potential negative effects of future economic downturns. Additionally, we reviewed past reports on the provisions of the Dodd-Frank Act and their impacts on the insurance industry with regard to oversight responsibilities. We conducted this performance audit from June 2012 to June 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Additional Data on Life and P/C Industry Financial Performance, 2002-2011 This appendix provides some additional data on life and P/C insurers’ financial performance, including realized and unrealized losses, financing cash flow, P/C premium revenues, assets and net equity of companies in liquidation, and stock price data. Realized and Unrealized Losses In 2008 and 2009, a small number of large insurance groups generally accounted for the majority of realized and unrealized losses in the life and P/C industries. Tables 6 and 7 list the life insurers with realized or unrealized losses exceeding $1 billion in 2008 and 2009, and tables 8 and 9 list the same data for P/C insurers. All of the insurers listed are either life or P/C “groups” in the SNL Financial database, meaning that they include all of the U.S. insurance companies in either the life or P/C industry within the same corporate hierarchy. Financing cash flow reflects the extent to which insurers are willing or able to access external capital to finance or grow their operations. It represents the net flow of cash from equity issuance, borrowed funds, dividends to stockholders, and other financing activities. With exceptions in 2004 and 2007 for life insurers and 2005 for P/C insurers, both industries had negative financing cash flows in the few years before the crisis began, indicating that insurers were reducing their outstanding debt and equity. These reductions could have resulted from the insurers buying back their stock and not issuing new debt as their existing debt matured. The increasingly negative financing cash flows for both industries starting in 2008 also reflect what we were told about the difficulty of obtaining outside capital during the crisis. Insurers might not have been able to raise money during the crisis even if they had wanted or needed to do so. In the P/C industry as a whole, net premiums written declined from $443.7 billion in 2006 to $417.5 billion in 2009—a total decline of 6 percent during the crisis years. In most of the lines of P/C insurance that we reviewed, declines in premiums during the crisis were modest (see fig. 16).
Financial and mortgage guaranty insurance (which combined represent less than 2 percent of the P/C industry)—as well as other liability (occurrence) (insurance against miscellaneous liability due to negligence or improper conduct)—were the exceptions. For example, financial guaranty insurers’ net premiums written fell from $3.2 billion in 2008 to $1.8 billion in 2009 (a 43 percent decline). By 2011, net financial guaranty premiums written were less than $1 billion, reflecting a total decline of 69 percent since 2008. Mortgage guaranty insurance premiums fell from $5.4 billion to $4.6 billion (a 14 percent decline) from 2008 to 2009 and to $4.2 billion (another 8 percent decline) in 2010. Net premiums written for other liability (occurrence) declined from $25.9 billion to $24.3 billion (a 6 percent decline) in 2008 and to $20.9 billion (a 14 percent decline) in 2009. On the other hand, net premiums written for homeowners’ insurance increased in every year of the 10-year review period, including increases of about 2 percent annually in 2008 and 2009 with net premiums of $56.9 billion in 2009. Net premiums written for all other lines of P/C insurance combined declined from $142.2 billion in 2007 to $129.0 billion in 2009, reflecting annual decreases of less than 1 percent in 2007, 3 percent in 2008, and 7 percent in 2009. Based on the available data that NAIC provided us on companies that were liquidated from 2002 through 2011, average assets and net equity of liquidated life and P/C insurers varied by year. As tables 10 and 11 illustrate, average assets of liquidated companies were significantly above the 10-year average in 2004 for the life industry and in 2003 and 2008 for the P/C industry. This was generally due to one or two large companies being liquidated. For example, in 2004, London Pacific Life and Annuity Company was liquidated with $1.5 billion in assets and negative $126 million in net equity, meaning that its liabilities exceeded its assets by that amount. Similarly, MIIX Insurance Company, a P/C insurer, was liquidated in 2008 with assets of $510 million and negative $32 million in net equity. Average net equity, based on the available data, was positive for liquidated life insurers in 2003, 2007, 2009, and 2010 (see table 10). According to NAIC staff, this is not unusual, as regulators typically try to liquidate distressed insurers before their net equity reaches negative levels. We analyzed the monthly closing stock prices of publicly traded life and P/C insurance companies for the period December 2004 through December 2011. We used two A.M. Best indexes—the A.M. Best U.S. Life Index and the A.M. Best Property Casualty Index—as a proxy for the life and P/C industries. According to A.M. Best, the indexes include all U.S. insurance industry companies that are publicly traded on major global stock exchanges that also have an A.M. Best rating, or that have an insurance subsidiary with an A.M. Best rating. They are based on the aggregation of the prices of the individual publicly traded stocks and weighted for their respective free float market capitalizations. The life index represents 21 life insurance companies and the P/C index represents 56 P/C companies. Since more than 60 percent of the companies on the A.M. 
Best indexes we selected trade on NYSE, we also obtained monthly closing stock prices on the New York Stock Exchange (NYSE) Composite Index, which, as of February 2012, represented 1,867 companies that trade on NYSE, to provide a contextual perspective on the overall stock market during our review period. As figure 17 illustrates, life and P/C insurers’ aggregate stock prices generally moved in tandem with the larger NYSE Composite Index from the end of 2004 through 2011, but life insurers’ aggregate stock prices fell much more steeply in late 2008 and early 2009 than P/C insurers’ and NYSE companies’ aggregate stock prices. We selected several key time periods or events from the financial crisis and identified the largest drops in life and P/C insurers’ aggregate stock prices during those time periods (see fig.18). While many factors can affect the daily movement of stock prices, we observed that changes in life insurers’ aggregate stock prices tended to be more correlated with several of the events that occurred during the crisis than P/C insurers’ aggregate stock prices. Appendix III: Profiles of Distressed Insurers This appendix provides more detail on two distressed insurers—one mortgage guaranty insurer and one life insurer—during the financial crisis. Mortgage Guaranty Insurer We studied a mortgage guaranty insurer operating in a run-off of its existing book of business (that is, it had ceased writing new mortgage guaranty business and was only servicing the business it already had on its books). This insurer is licensed in all states and the District of Columbia. Prior to its run-off, the insurer provided mortgage default protection to lenders on an individual loan basis and on pools of loans. As a result of continued losses stemming from defaults of mortgage loans, the state regulator placed the insurer into rehabilitation with a finding of insolvency in 2012. During the financial crisis, the insurer began experiencing substantial losses due to increasing default rates on insured mortgages, particularly in California, Florida, Arizona, and Nevada. As table 12 shows, in 2007 and 2008, over 30 percent of the insurer’s underwritten risk—the total amount of coverage for which it was at risk under its certificates of insurance—was originated in these distressed markets, which experienced default rates that peaked at more than 35 percent in 2009. In addition, the insurer had significant exposure to Alt-A loans, which are loans that were issued to borrowers based on credit scores but without documentation of the borrowers’ income, assets, or employment. These loans experienced higher default rates than the prime fixed-rate loans in the insurer’s portfolio. This insurer rapidly depleted its capital as it set aside reserves to meet obligations resulting from the overall rising volume of mortgage defaults. Rising defaults combined with unsuccessful attempts to raise additional capital during the crisis adversely affected its statutory risk-to-capital ratio starting in 2008. While state insurance regulations generally require this relationship of insured risk to statutory capital (in this case, the sum of statutory surplus and contingency reserves) to be no greater than 25 to 1, this insurer’s statutory capital declined 85 percent from year-end 2007 to year-end 2008, increasing the risk-to-capital ratio from 21 to 1 to 125 to 1. As a result, in 2008, this insurer entered into an order with its state regulator to cease writing new business and operate in run-off status. 
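To make the cited figures concrete, the following is a back-of-the-envelope sketch of the risk-to-capital arithmetic. The dollar amounts are hypothetical, chosen only to reproduce the reported 21-to-1 starting ratio, and the inference that risk in force declined modestly is ours, not a figure from the insurer's filings.

```python
# Back-of-the-envelope illustration only; dollar figures are hypothetical.
capital_2007 = 4.0                        # statutory capital, $ billions (assumed)
risk_in_force_2007 = 21 * capital_2007    # implies the reported 21-to-1 ratio

# Reported: statutory capital fell 85 percent from year-end 2007 to year-end 2008.
capital_2008 = capital_2007 * (1 - 0.85)

# If risk in force had been unchanged, the ratio would have been about 140 to 1.
print(round(risk_in_force_2007 / capital_2008))              # -> 140

# The reported 125-to-1 therefore implies risk in force also shrank somewhat
# as the book ran off: roughly 11 percent in this stylized example.
implied_risk_2008 = 125 * capital_2008
print(round(1 - implied_risk_2008 / risk_in_force_2007, 2))  # -> 0.11
```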
Due to continued increases in mortgage defaults, the regulator required a capital maintenance plan in 2009 that allowed the insurer to maintain a positive statutory capital position during the run-off and also to pay partial claims. According to court filings, the insurer reported to the state regulator that its liabilities outweighed its assets by more than $800 million for the second quarter of 2012. As a result, the state regulator entered an order with the relevant county circuit court in late 2012 to take the insurer into rehabilitation with a finding of insolvency. At that time, the court named the state insurance regulator as rehabilitator, which means that it gave the regulator authority over the insurer’s property, business, and affairs until the insurer’s problems are corrected. Life Insurer We studied a life insurer that primarily writes life, annuity, and accident and health business. Due to losses sustained from equity investments in Fannie Mae and Freddie Mac in 2008, the state regulator placed the insurer in rehabilitation in early 2009. In late 2011, the regulator approved the insurer’s acquisition by a third-party insurer. This transaction facilitated the insurer’s successful exit from rehabilitation in mid-2012. The insurer was invested in Fannie Mae and Freddie Mac stock. In 2008, the insurer sustained approximately $95 million in investment losses. Approximately $47 million of those investment losses were related to investments in Fannie Mae and Freddie Mac stock. These events adversely affected the insurer’s capital, which declined by over 38 percent from March 31, 2008, to September 30, 2008. As of December 31, 2008, the insurer had capital of $29 million, down from about $126 million as of December 31, 2007. On September 6, 2008, the Federal Housing Finance Agency (FHFA) placed Fannie Mae and Freddie Mac into conservatorship out of concern that their deteriorating financial condition would destabilize the financial system. After the insurer was placed in rehabilitation, the state regulator granted it an accounting exemption that affected the company’s surplus and asset levels. According to testimony, this exemption allowed the insurer to report capital of $400,000 instead of a $259 million deficit as of December 31, 2009. In late 2009, the receiver issued a request for proposal for the sale of the insurer. By mid-2010, the receiver was in negotiations with another life insurance group. In 2011, policyholders and the receiver approved a purchase plan. The plan would recapitalize the insurer to allow it sufficient surplus to meet state minimum requirements to resume writing new business. The plan was executed in mid-2012, which allowed the insurer to exit rehabilitation. Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Patrick A. Ward (Assistant Director), Emily R. Chalmers, William R. Chatlos, Janet Fong, David J. Lin, Angela Pun, Lisa M. Reynolds, Jessica M. Sandler, and Jena Y. Sinkfield made significant contributions to this report.
Insurance plays an important role in ensuring the smooth functioning of the economy. Concerns about the oversight of the $1 trillion life and property/casualty insurance industry arose during the 2007-2009 financial crisis, when one of the largest holding companies, American International Group, Inc. (AIG), suffered severe losses that threatened to affect its insurance subsidiaries. GAO was asked to examine any effects of the financial crisis on the insurance industry. This report addresses (1) what is known about how the financial crisis of 2007-2009 affected the insurance industry and policyholders, (2) the factors that affected the impact of the crisis on insurers and policyholders, and (3) the types of actions that have been taken since the crisis to help prevent or mitigate potential negative effects of future economic downturns on insurance companies and their policyholders. To do this work, GAO analyzed insurance industry financial data from 2002 through 2011 and interviewed a range of industry observers, participants, and regulators. The effects of the financial crisis on insurers and policyholders were generally limited, with a few exceptions. While some insurers experienced capital and liquidity pressures in 2008, their capital levels had recovered by the end of 2009. Net income also dropped but recovered somewhat in 2009. Effects on insurers' investments, underwriting performance, and premium revenues were also limited. However, some life insurers that offered variable annuities with guaranteed living benefits, as well as financial and mortgage guaranty insurers, were more affected by their exposures to the distressed equity and mortgage markets. The crisis had a generally minor effect on policyholders, but some mortgage and financial guaranty policyholders--banks and other commercial entities--received partial claims or faced decreased availability of coverage. Actions by state and federal regulators and the National Association of Insurance Commissioners (NAIC), among other factors, helped limit the effects of the crisis. First, state insurance regulators shared more information with each other to focus their oversight activities. In response to transparency issues highlighted by AIG's securities lending program, NAIC required more detailed reports from insurers. In addition, a change in NAIC's methodology to better reflect the value of certain securities reduced the risk-based capital some insurers had to hold. To further support insurers' capital levels, some states and NAIC also changed reporting requirements for certain assets. These changes affected insurers' capital levels for regulatory purposes, but rating agency officials said they did not have a significant effect on insurers' financial condition. Several federal programs also provided support to qualified insurers. Finally, insurance business practices, regulatory restrictions, and a low interest rate environment helped reduce the effects of the crisis. NAIC and state regulators' efforts since the crisis have included an increased focus on insurers' risks and capital adequacy, and oversight of noninsurance entities in group holding company structures. The Own Risk and Solvency Assessment, an internal assessment of insurers' business plan risks, will apply to most insurers and is expected to take effect in 2015. NAIC also amended its model Insurance Holding Company System Regulatory Act to address the issues of transparency and oversight of holding company entities.
However, most states have yet to adopt the revisions, and implementation could take several years.
Background Intellectual property is an important component of the U.S. economy, and the United States is an acknowledged global leader in the creation of intellectual property. However, industries estimate that annual losses stemming from violations of intellectual property rights overseas are substantial. Further, counterfeiting of products such as pharmaceuticals and food items fuels public health and safety concerns. USTR’s Special 301 reports on the adequacy and effectiveness of intellectual property protection around the world demonstrate that, from a U.S. perspective, intellectual property protection is weak in developed as well as developing countries and that the willingness of countries to address intellectual property issues varies greatly. Eight federal agencies, as well as the Federal Bureau of Investigation (FBI) and the U.S. Patent and Trademark Office (USPTO), undertake the primary U.S. government activities to protect and enforce U.S. intellectual property rights overseas. The agencies are the Departments of Commerce, State, Justice, and Homeland Security; USTR; the Copyright Office; the U.S. Agency for International Development (USAID); and the U.S. International Trade Commission. U.S. Agencies Undertake Three Types of IPR Efforts The efforts of U.S. agencies to protect U.S. intellectual property overseas fall into three general categories—policy initiatives, training and technical assistance, and U.S. law enforcement actions. Policy Initiatives U.S. policy initiatives to increase intellectual property protection around the world are primarily led by USTR, in coordination with the Departments of State and Commerce, USPTO, and the Copyright Office, among other agencies. A centerpiece of policy activities is the annual Special 301 process. “Special 301” refers to certain provisions of the Trade Act of 1974, as amended, that require USTR to annually identify foreign countries that deny adequate and effective protection of intellectual property rights or fair and equitable market access for U.S. persons who rely on intellectual property protection. USTR identifies these countries with substantial assistance from industry and U.S. agencies and publishes the results of its reviews in an annual report. Once a pool of such countries has been determined, the USTR, in coordination with other agencies, is required to decide which, if any, of these countries should be designated as a Priority Foreign Country (PFC). If a trading partner is identified as a PFC, USTR must decide within 30 days whether to initiate an investigation of those acts, policies, and practices that were the basis for identifying the country as a PFC. Such an investigation can lead to actions such as negotiating separate intellectual property understandings or agreements between the United States and the PFC or implementing trade sanctions against the PFC if no satisfactory outcome is reached. Between 1994 and 2005, the U.S. government designated three countries as PFCs—China, Paraguay, and Ukraine—as a result of intellectual property reviews. The U.S. government negotiated separate bilateral intellectual property agreements with China and Paraguay to address IPR problems. These agreements are subject to annual monitoring, with progress cited in each year’s Special 301 report. Ukraine, where optical media piracy was prevalent, was designated a PFC in 2001. The United States and Ukraine found no mutual solution to the IPR problems, and in January 2002, the U.S. 
government imposed trade sanctions in the form of prohibitive tariffs (100 percent) aimed at stopping $75 million worth of certain imports from Ukraine over time. In conjunction with the release of its 2005 Special 301 report, USTR announced the results of a detailed review examining China’s intellectual property regime. This review concluded that infringement levels remain unacceptably high throughout China, despite the country’s efforts to reduce them. The U.S. government identified several actions it intends to take, including working with U.S. industry with an eye toward utilizing World Trade Organization (WTO) procedures to bring China into compliance with its WTO intellectual property obligations (particularly those relating to transparency and criminal enforcement) and securing new, specific commitments concerning actions China will take to improve IPR protection and enforcement. By virtue of membership in the WTO, the United States and other countries commit themselves not to take WTO-inconsistent unilateral action against possible trade violations involving IPR protections covered by the WTO but to instead seek recourse under the WTO’s dispute settlement system and its rules and procedures. This may impact any U.S. government decision regarding whether to retaliate against WTO members unilaterally with sanctions under the Special 301 process when those countries’ IPR problems are viewed as serious. The United States has brought a total of 12 IPR cases to the WTO for resolution, but has not brought any since 2000 (although the United States initiated a WTO dispute panel for one of these cases in 2003). A senior USTR official emphasized that this is due to the effectiveness of tools such as the Special 301 process in encouraging WTO members to bring their laws into compliance with WTO intellectual property rules. Training and Technical Assistance In addition, most of the agencies involved in efforts to promote or protect IPR overseas engage in some training or technical assistance activities. Key activities to develop and promote enhanced IPR protection in foreign countries are undertaken by the Departments of Commerce, Homeland Security, Justice, and State; the FBI; USPTO; the Copyright Office; and USAID. Training events sponsored by U.S. agencies to promote the enforcement of intellectual property rights have included enforcement programs for foreign police and customs officials, workshops on legal reform, and joint government-industry events. According to a State Department official, U.S. government agencies have conducted intellectual property training for a number of countries concerning bilateral and multilateral intellectual property commitments, including enforcement, during the past few years. For example, intellectual property training has been conducted by numerous agencies in Poland, China, Morocco, Italy, Jordan, Turkey, and Mexico. U.S. Law Enforcement Efforts A small number of agencies are involved in enforcing U.S. intellectual property laws, and the nature of these activities differs from other U.S. government actions related to intellectual property protection. Working in an environment where counterterrorism is the central priority, the FBI and the Departments of Justice and Homeland Security take a variety of actions that include engaging in multicountry investigations involving intellectual property violations and seizing goods that violate intellectual property rights at U.S. ports of entry. 
The Department of Justice has an office that directly addresses international IPR problems. Further, Justice has been involved with international investigation and prosecution efforts and, according to a Justice official, has become more aggressive in recent years. For instance, Justice and the FBI coordinated an undercover IPR investigation, with the involvement of several foreign law enforcement agencies. The investigation focused on individuals and organizations, known as “warez” release groups, which specialize in the Internet distribution of pirated materials. In April 2004, these investigations resulted in 120 simultaneous searches worldwide (80 in the United States) by law enforcement entities from 10 foreign countries and the United States in an effort known as “Operation Fastlink.” In addition, in March 2004, the Department of Justice created an intellectual property task force to examine all of Justice’s intellectual property enforcement efforts and explore methods for the department to strengthen its protection of IPR. A report issued by the task force in October 2004 provided recommendations for improvements in criminal enforcement, international cooperation, civil and antitrust enforcement, legislation, and prevention of intellectual property crime. Some of these recommendations have been implemented, while others have not. For example, Justice has implemented a recommendation to create five additional Computer Hacking and Intellectual Property (CHIP) units to prosecute IPR crimes. Additionally, Justice has designated a CHIP coordinator in every U.S. Attorney’s office in the country, thereby implementing a report recommendation that such action be taken. However, an FBI official told us the FBI has not been able to implement recommendations such as posting additional personnel to the U.S. consulate in Hong Kong and the U.S. embassy in Budapest, Hungary for budgetary reasons; Justice has not yet implemented a similar recommendation to deploy federal prosecutors to these same regions and designate them as Intellectual Property Law Enforcement Coordinators. Fully implementing some of the report’s recommendations will require a sustained, long-term effort by Justice. For example, to address a recommendation to develop a national education program to prevent intellectual property crime, Justice held two day-long events in Washington, D.C. and Los Angeles with high school students listening to creative artists, victim representatives, the Attorney General, and a convicted intellectual property offender, among others, about the harm caused by intellectual property piracy. The events were filmed by Court TV and produced into a 30 minute show aired on cable television. Further, to enhance intellectual property training programs for foreign prosecutors and law enforcement officials, as recommended in the report, Justice worked with the Mexican government to provide a three-day seminar for intellectual property prosecutors and customs officials in December 2004. Such actions are initial efforts to address recommendations that can be further implemented over time. The Department of Homeland Security (DHS) tracks seizures of goods that violate IPR and reports seizures that totaled almost $140 million resulting from over 7,200 seizures in fiscal year 2004. In fiscal year 2004, goods from China (including Hong Kong) accounted for almost 70 percent of the value of all IPR seizures, many of which were shipments of cigarettes and apparel. 
Other seized goods were shipped from, among other places, Russia and South Africa. A DHS official pointed out that providing protection against IPR-infringing imported goods for some U.S. companies—particularly entertainment companies—can be difficult, because companies often fail to record their trademarks and copyrights with DHS. DHS and Commerce officials told us that they believe this situation could be ameliorated if, contrary to current practice, companies could simultaneously have their trademarks and copyrights recorded with DHS when they are provided their intellectual property right by USPTO or the Copyright Office. To identify shipments of IPR-infringing merchandise and prevent their entry into the United States, DHS is developing an IPR risk-assessment computer model. The model uses weighted criteria to assign risk scores to individual imports. The methodology is based on both historical risk-based trade data and qualitative rankings. The historical data consist of seizure information and cargo examination results, while qualitative rankings are based on information such as whether a shipment is arriving from a high-risk country identified by USTR’s annual Special 301 report. According to DHS officials, the model has been piloted, and several issues have been identified that must be addressed before it is fully implemented. DHS officials also told us that problems in identifying and seizing IPR-infringing goods frequently arise where the department’s in-bond system is involved. The in-bond system allows cargo to be transported from the original U.S. port of arrival (such as Los Angeles) to another U.S. port (such as Cleveland) for formal entry into U.S. commerce or for export to a foreign country. We previously reported that weak internal controls in this system enable cargo to be illegally diverted from its supposed destination. The tracking of in-bond cargo is hindered by a lack of automation, inconsistencies in targeting and examining cargo, in-bond practices that allow shipments’ destinations to be changed without notifying DHS and that allow extensive time intervals for shipments to reach their final destinations, and inadequate verification of exports to Mexico. DHS inspectors we spoke with during the course of our previous work cited in-bond cargo as a high-risk category of shipment because it is the least inspected and in-bond shipments have been increasing. We made recommendations to DHS regarding ways to improve monitoring of in-bond cargo. USTR’s 2005 Special 301 report identifies customs operations as a growing problem in combating IPR violations in foreign countries such as Ukraine, Canada, Belize, and Thailand. Several Mechanisms Coordinate IPR Efforts, but Their Usefulness Varies Several interagency mechanisms exist to coordinate overseas law enforcement efforts, intellectual property policy initiatives, and development and assistance activities, although these mechanisms’ level of activity and usefulness vary. Formal Interagency Coordination on Trade Policy According to government and industry officials, an interagency trade policy mechanism established by the Congress in 1962 to assist USTR has operated effectively in reviewing IPR issues. The mechanism, which consists of tiers of committees as well as numerous subcommittees, constitutes the principal means for developing and coordinating U.S. government positions on international trade, including IPR.
A specialized subcommittee is central to conducting the Special 301 review and determining the results of the review. This interagency process is rigorous and effective, according to U.S. government and industry officials. A Commerce official told us that the Special 301 review is one of the best tools for interagency coordination in the government, while a Copyright Office official noted that coordination during the review is frequent and effective. A representative for copyright industries also told us that the process works well and is a solid interagency effort. National Intellectual Property Law Enforcement Coordination Council (NIPLECC) NIPLECC, created by the Congress in 1999 to coordinate domestic and international intellectual property law enforcement among U.S. federal and foreign entities, seems to have had little impact. NIPLECC consists of (1) the Under Secretary of Commerce for Intellectual Property and Director of the United States Patent and Trademark Office; (2) the Assistant Attorney General, Criminal Division; (3) the Under Secretary of State for Economic and Agricultural Affairs; (4) the Deputy United States Trade Representative; (5) the Commissioner of Customs; and (6) the Under Secretary of Commerce for International Trade. NIPLECC’s authorizing legislation did not include the FBI as a member of NIPLECC, despite its pivotal role in law enforcement. However, according to representatives of the FBI, USPTO, and Justice, the FBI should be a member. USPTO and Justice cochair NIPLECC, which has no staff of its own. In the council’s several years of existence, its primary output has been three annual reports to the Congress, which are required by statute. (NIPLECC’s 2004 report has been drafted but is not yet available.) According to interviews with industry officials and officials from its member agencies, and as evidenced by its own reports, NIPLECC has struggled to define its purpose and has had little discernable impact. Indeed, officials from more than half of the member agencies offered criticisms of NIPLECC, remarking that it is unfocused, ineffective, and “unwieldy.” In official comments to the council’s 2003 annual report, major IPR industry associations expressed a sense that NIPLECC is not undertaking any independent activities or effecting any impact. One industry association representative stated that law enforcement needs to be made more central to U.S. IPR efforts and said that although he believes the council was created to deal with this issue, it has “totally failed.” The lack of communication regarding enforcement results in part from complications such as concerns regarding the sharing of sensitive law enforcement information and from the different missions of the various agencies involved in intellectual property actions overseas. According to a USTR official, NIPLECC needs to define a clear role in coordinating government policy. A Justice official stressed that, when considering coordination, it is important to avoid creating an additional layer of bureaucracy that may detract from efforts devoted to each agency’s primary mission. According to an official from USPTO, NIPLECC has been hampered primarily by its lack of its own staff and funding. 
In our September 2004 report, we noted that “If the Congress wishes to maintain NIPLECC and take action to increase its effectiveness, the Congress may wish to consider reviewing the council’s authority, operating structure, membership, and mission.” In the Consolidated Appropriations Act, 2005, the Congress provided $2 million for NIPLECC expenses, to remain available through fiscal year 2006. The act addressed international elements of the council and created the position of the Coordinator for International Intellectual Property Enforcement, appointed by the President, to head NIPLECC. This official may not serve in any other position in the federal government, and the NIPLECC co-chairs, representatives from USPTO and Justice, are to report to the Coordinator. The law also gives additional direction regarding NIPLECC’s international mission, providing that NIPLECC shall (1) establish policies, objectives, and priorities concerning international intellectual property protection and intellectual property law enforcement; (2) promulgate a strategy for protecting American intellectual property overseas; and (3) coordinate and oversee implementation of items (1) and (2) by agencies with responsibilities for intellectual property protection and intellectual property law enforcement. The Coordinator, with the advice of NIPLECC members, is to develop a budget proposal for each fiscal year to implement the strategy for protecting American intellectual property overseas and for NIPLECC operations and may select, appoint, employ, and fix compensation of such officers and employees as may be necessary to carry out NIPLECC functions. Personnel from other departments or agencies may be temporarily reassigned to work for NIPLECC. Agency officials told us that, as of June 2005, no Coordinator had been named (although a selection process was underway), the $2 million in NIPLECC funding had not been spent, and NIPLECC continued to accomplish little. Strategy Targeting Organized Piracy (STOP!) In October 2004, USTR and the Departments of Commerce, Justice, and Homeland Security announced STOP! to fight trade in pirated and counterfeit goods. Other STOP! participants are the Department of State and the Department of Health and Human Services’ Food and Drug Administration. STOP! is targeted at cross-border trade in tangible goods and was initiated to strengthen U.S. government and industry enforcement actions. STOP! has five general objectives: 1. Stop pirated and counterfeit goods at the U.S. border. Such efforts are to be achieved through, for example, the implementation of the DHS IPR risk model, mentioned above, to better identify and seize infringing goods at U.S. borders. 2. Dismantle criminal enterprises that steal intellectual property. Justice and DHS are taking measures to maximize their ability to pursue perpetrators of intellectual property crimes through, for example, the addition of the five new Justice CHIP units mentioned above. Justice and DHS are also committed under STOP! to work with the Congress to update IPR legislation. 3. Keep counterfeit and pirated goods out of global supply chains. Commerce is working with industry to develop voluntary guidelines companies can use to ensure that supply and distribution chains are free of counterfeits. 4. Empower U.S. businesses to secure and enforce their rights at home and abroad. For example, Commerce is meeting with small and medium enterprises to inform companies on how to secure and protect their rights in the global marketplace. 5.
Reach out to U.S. trading partners to build an international coalition to block trade in pirated and counterfeit goods. USTR and State are engaging in multilateral forums, such as the Organization for Economic Cooperation and Development (OECD) and the Asia- Pacific Economic Cooperation (APEC), through the introduction of new initiatives to improve the global intellectual property environment. Agency officials told us that STOP! has both furthered ongoing agency activities and facilitated new initiatives. For example, Commerce officials told us that while they had been working on having the OECD conduct a study of the extent and impact of counterfeiting and piracy, STOP! provided additional momentum to succeed in their efforts. They said that the OECD has now agreed to conduct a comprehensive study on the extent and effect of international counterfeiting and piracy in tangible goods, with a study addressing the digital arena to follow. In addition, in March 2005, Justice announced the continuation of work by its intellectual property task force, which had been rolled into STOP!. Regarding new initiatives, USPTO has established a hotline for companies to obtain information on intellectual property rights enforcement and report problems in other countries. According to USPTO, this hotline has received 387 calls since it was activated in October 2004. Commerce has also developed a website to provide information and guidance to IPR holders for registering and protecting their intellectual property rights in other countries. The most visible new effort undertaken as a part of STOP! is a coordinated U.S. government outreach to foreign governments. In April 2005 officials from seven federal agencies traveled to Hong Kong, Japan, Korea, and Singapore and in June, they traveled to Belgium, France, Germany, and the United Kingdom. According to USTR officials, the goals of these trips are to describe U.S. initiatives related to IPR enforcement and to learn from the activities of “like-minded” trading partners with IPR concerns and enforcement capacities similar to the United States. DHS officials reported that their Asian counterparts were interested in the U.S. development of the IPR risk model to target high-risk imports for inspection, while a USTR official emphasized that U.S. participants were impressed by a public awareness campaign implemented in Hong Kong. Officials involved in STOP! told us that one key goal of the initiative is to improve interagency coordination. Agency officials told us that to achieve this goal, staff-level meetings have been held monthly and senior officials have met about every 6 weeks. Agency officials also told us that as an Administration initiative with high-level political support, STOP! has energized agencies’ enforcement efforts and strengthened interagency efforts. A USPTO official explained that STOP! has laid the groundwork for future progress and continued interagency collaboration. Agency officials noted that STOP! goals and membership overlap with those of NIPLECC, and remarked that STOP! could possibly be integrated into NIPLECC at some future date. In May 2005, a NIPLECC meeting was held to address coordination between STOP! and NIPLECC. According to a Justice official, once an International Intellectual Property Enforcement Coordinator is appointed, there may be an opportunity to continue the momentum that STOP! has provided in the context of NIPLECC activities. One private sector representative we met with said that although U.S. 
industry has worked closely with agencies to achieve the goals of STOP!, he is frustrated with the lack of clear progress in many areas. For instance, he said that the administration has neither supported any pending legislation to improve intellectual property rights protection nor proposed such legislation. He added that agencies need to do more to integrate their systems, noting that companies must currently receive a trademark or copyright from USPTO or the Copyright Office and then separately record that right with DHS. Another industry representative noted that STOP! has been announced with great fanfare, but that progress has been sparse. However, he noted that industry supports this administration effort and is working collaboratively with the federal agencies to improve IPR protection. Another industry official cited issues of concern such as insufficient enforcement resources “on the ground” (particularly at DHS). Other Coordination Mechanisms Other coordination mechanisms include the National Intellectual Property Rights Coordination Center (IPR Center) and informal coordination. The IPR Center in Washington, D.C., a joint effort between DHS and the FBI, began limited operations in 2000. According to a DHS official, the potential for coordination between DHS, the FBI, and industry and trade associations makes the IPR Center unique. The IPR Center is intended to serve as a focal point for the collection of intelligence involving copyright and trademark infringement, signal theft, and theft of trade secrets. However, the center is not widely used by industry. For example, an FBI official told us that from January 2004 through May 2005, the FBI had received only 10 referrals to its field offices from the IPR Center. Further, the number of FBI and DHS staff on board at the center has decreased recently and currently stands at 10 employees (down from 20 in July 2004), with no FBI agents working there and fewer DHS agents than authorized. However, IPR Center officials emphasized one recent, important case that was initiated by the center. DHS, in conjunction with the Chinese government and with the assistance of the intellectual property industry, conducted the first-ever joint U.S.-Chinese enforcement action on the Chinese mainland, disrupting a network that distributed counterfeit motion pictures worldwide. More than 210,000 counterfeit motion picture DVDs were seized, and in 2005, four individuals (two Chinese and two Americans) were convicted in China. Policy agency officials noted the importance of informal but regular communication among staff at the various agencies involved in the promotion or protection of intellectual property overseas. Several officials at various policy-oriented agencies, such as USTR and the Department of Commerce, noted that the intellectual property community was small and that all involved were very familiar with the relevant policy officials at other agencies in Washington, D.C. Further, State Department officials at U.S. embassies regularly communicate with agencies in Washington, D.C., regarding IPR matters and U.S. government actions. Agency officials noted that this type of coordination is central to pursuing U.S. intellectual property goals overseas. Although communication between policy and law enforcement agencies can occur through forums such as NIPLECC, these agencies do not systematically share specific information about law enforcement activities.
According to an FBI official, once a criminal investigation begins, case information stays within the law enforcement agencies and is not shared. A Justice official emphasized that criminal law enforcement is fundamentally different from the activities of policy agencies and that restrictions exist on Justice’s ability to share investigative information, even with other U.S. agencies. Enforcement Overseas Remains Weak and Challenges Remain U.S. efforts such as the annual Special 301 review have contributed to strengthened foreign IPR laws, but enforcement overseas remains weak. The impact of U.S. activities is challenged by numerous factors. Industry representatives report that the situation may be worsening overall for some intellectual property sectors. Weak Enforcement Overseas The efforts of U.S. agencies have contributed to the establishment of strengthened intellectual property legislation in many foreign countries, however, the enforcement of intellectual property rights remains weak in many countries, and U.S. government and industry sources note that improving enforcement overseas is now a key priority. A recent USTR Special 301 report states that “although several countries have taken positive steps to improve their IPR regimes, the lack of IPR protection and enforcement continues to be a global problem.” For example, although the Chinese government has improved its statutory IPR regime, USTR remains concerned about enforcement in that country. According to USTR, counterfeiting and piracy remain rampant in China and increasing amounts of counterfeit and pirated products are being exported from China. In addition, although Ukraine has shut down offending domestic optical media production facilities, pirated products continue to pervade Ukraine, and, according to USTR’s 2004 Special 301 Report, Ukraine is also a major trans-shipment point and storage location for illegal optical media produced in Russia and elsewhere as a result of weak border enforcement efforts. Although U.S. law enforcement does undertake international cooperative activities to enforce intellectual property rights overseas, executing these efforts can prove difficult. For example, according to DHS and Justice officials, U.S. efforts to investigate IPR violations overseas are complicated by a lack of jurisdiction as well as by the fact that U.S. officials must convince foreign officials to take action. Further, a DHS official noted that in some cases, activities defined as criminal in the United States are not viewed as an infringement by other countries and that U.S. law enforcement agencies can therefore do nothing. Challenges to U.S. Efforts In addition, U.S. efforts confront numerous challenges. Because intellectual property protection is one of many U.S. government objectives pursued overseas, it is viewed internally in the context of broader U.S. foreign policy objectives that may receive higher priority at certain times in certain countries. Industry officials with whom we met noted, for example, their belief that policy priorities related to national security were limiting the extent to which the United States undertook activities or applied diplomatic pressure related to IPR issues in some countries. Further, the impact of U.S. activities is affected by a country’s own domestic policy objectives and economic interests, which may complement or conflict with U.S. objectives. U.S. 
efforts are more likely to be effective in encouraging government action or achieving impact in a foreign country where support for intellectual property protection exists. It is difficult for the U.S. government to achieve impact in locations where foreign governments lack the “political will” to enact IPR protections. Many economic factors complicate and challenge U.S. and foreign governments’ efforts, even in countries with the political will to protect intellectual property. These factors include low barriers to entering the counterfeiting and piracy business and potentially high profits for producers. In addition, the low prices of counterfeit products are attractive to consumers. The economic incentives can be especially acute in countries where people have limited income. Technological advances allowing for high-quality, inexpensive, and accessible reproduction and distribution in some industries have exacerbated the problem. Moreover, many government and industry officials believe that the chances of getting caught for counterfeiting and piracy, as well as the penalties when caught, are too low. The increasing involvement of organized crime in the production and distribution of pirated products further complicates enforcement efforts. Federal and foreign law enforcement officials have linked intellectual property crime to national and transnational organized criminal operations. Further, like other criminals, terrorists can trade any commodity in an illegal fashion, as evidenced by their reported involvement in trading a variety of counterfeit and other goods. Many of these challenges are evident in the optical media industry, which includes music, movies, software, and games. Even in countries where interests exist to protect domestic industries, such as the domestic music industry in Brazil or the domestic movie industry in China, economic and law enforcement challenges can be difficult to overcome. For example, the cost of reproduction technology and copying digital media is low, making piracy an attractive employment opportunity, especially in a country where formal employment is hard to obtain. The huge price differentials between pirated CDs and legitimate copies also create incentives on the consumer side. For example, when we visited a market in Brazil, we observed that the price for a legitimate DVD was approximately ten times the price for a pirated DVD. Even consumers who are willing to pay extra for a legitimate product may not do so if the price difference is too great. Further, the potentially high profit makes optical media piracy an attractive venture for organized criminal groups. Industry and government officials have noted criminal involvement in optical media piracy and the resulting law enforcement challenges. Recent technological advances have also exacerbated optical media piracy. The mobility of reproduction equipment makes it easy to move operations to another location, further complicating enforcement efforts. Likewise, the Internet provides a means to transmit and sell illegal software or music on a global scale. According to an industry representative, the ability of Internet pirates to hide their identities or operate from remote jurisdictions often makes it difficult for IPR holders to find them and hold them accountable. Industry Concerns Despite improvements such as strengthened foreign IPR legislation, international IPR protection may be worsening overall for some intellectual property sectors.
For example, according to copyright industry estimates, losses due to piracy grew markedly in recent years. The entertainment and business software sectors, for example, which are very supportive of USTR and other agencies, face an environment in which their optical media products are increasingly easy to reproduce, and digitized products can be distributed around the world quickly and easily via the Internet. According to an intellectual property association representative, counterfeiting trademarks has also become more pervasive in recent years. Counterfeiting affects more than just luxury goods; it also affects various industrial goods. An industry representative noted that U.S. manufacturers of all sizes are now being adversely affected by counterfeit imports. An industry representative also added that there is a need for additional enforcement activity by the U.S. government at the border. However, he recognized that limited resources and other significant priorities for DHS heighten the need to use existing resources more effectively to interdict more counterfeit and pirated goods. Conclusions The U.S. government has demonstrated a commitment to addressing IPR issues in foreign countries using multiple agencies. However, law enforcement actions are more restricted than other U.S. activities, owing to factors such as a lack of jurisdiction overseas to enforce U.S. law. Several IPR coordination mechanisms exist, with the interagency coordination that occurs during the Special 301 process standing out as the most significant and active. Efforts under STOP! appear to have strengthened the U.S. government’s focus on addressing IPR enforcement problems in a more coordinated manner. Conversely, NIPLECC, the mechanism for coordinating intellectual property law enforcement, has accomplished little that is concrete and its ineffectiveness continues despite recent congressional action to provide funding, staffing, and clearer guidance regarding its international objectives. In addition, NIPLECC does not include the FBI, a primary law enforcement agency. Members, including NIPLECC leadership, have repeatedly acknowledged that the group continues to struggle to find an appropriate mission. Mr. Chairman, this concludes my prepared statement. I would be pleased to respond to any questions that you or other members of the subcommittee may have at this time. Contacts and Acknowledgments Should you have any questions about this testimony, please contact me by e-mail at [email protected]. I can also be reached at (202) 512-4128. Other major contributors to this testimony were Emil Friberg, Leslie Holen, Jason Bair, Ming Chen, Sharla Draemel, and Reid Lowe. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Although the U.S. government provides broad protection for intellectual property domestically, intellectual property protection in parts of the world is inadequate. As a result, U.S. goods are subject to piracy and counterfeiting in many countries. A number of U.S. agencies are engaged in efforts to improve protection of U.S. intellectual property abroad. This testimony, based on a prior GAO report as well as recent work, describes U.S. agencies' efforts, the mechanisms used to coordinate these efforts, and the impact of these efforts and the challenges they face. U.S. agencies undertake policy initiatives, training and assistance activities, and law enforcement actions in an effort to improve protection of U.S. intellectual property abroad. Policy initiatives include identifying countries with the most significant problems--an annual interagency process known as the "Special 301" review. In addition, many agencies engage in assistance activities, such as providing training for foreign officials. Finally, a small number of agencies carry out law enforcement actions, such as criminal investigations and seizures of counterfeit merchandise. Agencies use several mechanisms to coordinate their efforts, although the mechanisms' usefulness varies. The National Intellectual Property Law Enforcement Coordination Council, established in 1999 to coordinate domestic and international intellectual property law enforcement, has struggled to find a clear mission, has undertaken few activities, and is generally viewed as having little impact despite recent congressional action to strengthen the council. The Congress's action included establishing the role of Coordinator, but the position has not yet been filled (although the selection process is underway). The Administration's October 2004 Strategy Targeting Organized Piracy (STOP!) is intended to strengthen U.S. efforts to combat piracy and counterfeiting. Thus far, the initiative has resulted in some new actions and emphasized other ongoing efforts. U.S. efforts have contributed to strengthened intellectual property legislation overseas, but enforcement in many countries remains weak, and further U.S. efforts face significant challenges. For example, competing U.S. policy objectives such as national security interests take precedence over protecting intellectual property in certain regions. Further, other countries' domestic policy objectives can affect their "political will" to address U.S. concerns. Finally, many economic factors, as well as the involvement of organized crime, hinder U.S. and foreign governments' efforts to protect U.S. intellectual property abroad.
Background The EITC, enacted in 1975, was originally intended to offset the burden of Social Security taxes and provide a work incentive for low-income taxpayers. The credit has been modified several times since its introduction, and three laws have been enacted in recent years aimed at resolving some concerns with EITC rules. Despite modifications, the original goal of the credit remains intact, and the EITC continues to provide a substantial benefit to millions of American families. The EITC is a refundable tax credit, meaning that qualifying working taxpayers may receive a refund greater than the amount of income tax they paid for the year. EITC payments have a (1) phase-in range in which higher incomes yield higher EITC amounts, (2) plateau phase in which EITC amounts remain the same even as income rises, and (3) phase-out range in which higher incomes yield lower EITC amounts. The amount of credit a taxpayer receives is based on several other factors, such as the presence and number of qualifying children. In general, taxpayers with one or more qualifying children receive a higher credit than taxpayers without qualifying children. For tax year 2003, the amount of EITC that could be claimed with two qualifying children ranged from $0 to $4,204 per tax return filed, depending on income and filing status. EITC requirements for tax year 2003 include rules for all taxpayers claiming the credit and additional rules that differ depending on whether or not a taxpayer has qualifying children (see table 1). IRS has periodically measured EITC compliance. For tax year 1999 (the most current data available), IRS estimated the EITC overclaim rate at 27 to 32 percent of EITC dollars claimed, or $8.5 billion to $9.9 billion. IRS has limited data on underclaims, which for tax year 1999 were estimated to be $710 million to $765 million. Because of the persistently high rates of noncompliance, we also have identified the EITC program as a high-risk area for IRS since 1995. In February 2002, the compliance study was released, and the Assistant Secretary of the Treasury for Tax Policy and the IRS Commissioner convened a joint IRS/Treasury task force to identify ways of reducing EITC overclaims while maintaining participation among eligible claimants and minimizing taxpayer and IRS administrative burden. The task force found that the leading causes of errors resulting in EITC overclaims were taxpayers (1) claiming children who were not qualifying children, (2) using an incorrect filing status, and (3) misreporting their income. With this information, the task force designed what ultimately became initial versions of the three tests, as shown in figure 1. As envisioned by the task force, even if the tests are fully implemented, IRS does not plan to apply the test requirements to the entire EITC population because it can use available data to verify the eligibility of certain taxpayers. Because a new analysis of EITC compliance using 2001 tax return information is not expected to be complete until spring 2005, IRS did not know, when developing the EITC tests, whether compliance had changed significantly since 1999. However, IRS officials do not think EITC compliance has improved substantially since then. In October 2004, Congress passed a new law to make the definition of a qualifying child uniform in various tax provisions, but those changes do not take effect until tax years beginning after December 31, 2004.
In general, the revised definition appears to affect other tax situations, such as claiming dependents, more than it affects the EITC. IRS is studying whether the change would affect any testing that may be done in 2006. Having a Complete Evaluation Plan Before Implementation Helps to Ensure Success IRS completed its initial evaluation plans for the three EITC tests in December 2003. In September 2003, we recommended that IRS accelerate the development of its qualifying child evaluation plan to help ensure the success of the test. An evaluation plan ideally should be completed and disseminated for review and feedback before the research activity (or in this case, the test) begins. As we reported, although an evaluation plan need not precisely identify all issues and how they will be evaluated before implementation, the more complete a plan is, the more likely the evaluation will be sufficient and support future decisions. IRS’s Internal Revenue Manual also recognizes the desirability of having an evaluation plan in place before a project is implemented; for example, it requires such plans before reorganizations. IRS Implemented Three Tests on Leading Sources of EITC Noncompliance and Reported Spending Most of the Funding Received on the Tests To implement the joint IRS/Treasury task force recommendations, IRS introduced three new tests—qualifying child certification, filing status, and income misreporting—in 2004. IRS reported spending about $17.5 million on the three EITC tests—about $3.2 million less than planned. Because IRS spent less than planned on the tests, it was able to fund some activities under the five-point initiative that otherwise would have gone unfunded. Qualifying Child Certification Test Required Substantiation of Child Residency The purpose of the qualifying child certification test was to evaluate the impact on the test goals of asking taxpayers to substantiate—when filing their tax return—that their qualifying child lived with them for more than half the tax year, as required by the EITC (see table 1). Under current rules, taxpayers are only required to substantiate that their child satisfied this residency requirement if they are being audited by IRS on this issue. This test involved two random samples of 25,000 taxpayers who claimed one or more qualifying children for tax year 2002: a test sample, whose members were asked to substantiate their qualifying child’s residency, and a control sample, whose members had similar characteristics to the test sample but were not asked for any substantiation. Both samples were designed to include taxpayers (1) most likely to make errors and (2) whose qualifying child eligibility could not be verified from information available to IRS. IRS used prior research results to determine which taxpayers would be most likely to incorrectly claim a qualifying child. The research showed that, among the taxpayers most likely to make errors, the errors often correlated with the taxpayer’s relationship to the child and the taxpayer’s gender and filing status. Taxpayers most frequently making qualifying child errors included fathers, as well as males and females who were not the child’s parents, who filed as single or head of household. IRS also used available data to obtain evidence about taxpayers and whether their qualifying children met residency and relationship requirements.
For example, a child’s residency could be established with some certainty by using the Department of Health and Human Services’ Federal Case Registry, and a child’s relationship to the taxpayer could be established with some certainty using the Social Security Administration’s KIDLINK. When this available evidence supported the taxpayers’ EITC claim that they had a qualifying child, those taxpayers were excluded from the qualifying child test. Prior research showed that taxpayers who comply with the residency requirement also comply in most cases with the relationship requirement. Thus, if a taxpayer’s child met the residency requirement, there was a high probability that the relationship requirement would be met as well. Given this analysis and difficulties IRS encountered in identifying documents that taxpayers could readily obtain to prove their relationship to the child, any taxpayer whose EITC eligibility was not verified from available data became eligible to be selected for the qualifying child test, in which they would be asked to substantiate that the child lived with the taxpayer for more than 6 months during the year. Our September 2003 report contains a more detailed explanation of how the sample was designed. As figure 2 shows, males filing as single or head of household comprised the majority of the test sample. The control group had characteristics similar to the test group. The qualifying child test had three components—a general test and two subtests. Under the general test, taxpayers received test documentation in English only and could provide substantiation in one or any combination of three ways—records, letters, or a Schedule A, also known as the general affidavit. Records that a taxpayer could provide included school, medical, landlord, or child-care provider documentation. Letters were statements from certain individuals, such as a member of the clergy or a community-based organization official, on official letterhead. Affidavits were legal documents in which an individual attested that the taxpayer’s qualifying child resided with the taxpayer for a certain period of time. To be accepted, the document(s) had to contain various data, such as the names of the qualifying children and the dates the child lived with the taxpayer. In response to concerns that taxpayers may have difficulty obtaining certification through the official sources cited on the Schedule A, such as through an attorney or landlord, and that English-only documents might weaken participation among taxpayers with limited English proficiency, IRS also implemented two subtests. The Schedule B subtest, also known as the friends and neighbors affidavit subtest, applied to 1,000 of the 25,000 taxpayers and broadened the definition of the individuals allowed to certify the child’s residency to include those who have personal knowledge of a taxpayer’s circumstances, such as certain family members. The purpose of this subtest was to determine whether such individuals could facilitate an increase in residency certification for eligible taxpayers. The taxpayers in the Spanish subtest, 1,000 of the remaining 24,000, received documents in both English and Spanish. The purpose of this subtest was to determine whether Spanish-language documents would increase the number of taxpayers attempting to certify their child’s residency. Table 2 describes the test and subtests. IRS sent the selected taxpayers five documents in December 2003 informing them about the test, including: 1.
Notice 84-A, a letter informing the taxpayer about the new certification; 2. Form 8836, Qualifying Children Residency Statement, to be completed by the taxpayer and returned to IRS; 3. Schedule A or B (an affidavit) that could be used for certification; 4. Publication 3211M, Earned Income Tax Credit Questions and Answers; and 5. Publication 4134, Free/Nominal Cost Assistance Available for Low Income Taxpayers. Under the test and subtests, once taxpayers received the documents from IRS, they were supposed to obtain supporting documents to prove their qualifying child’s residency and send that documentation back to IRS in the same envelope as their 2003 tax return. IRS would withhold, or “freeze,” the EITC portion of the taxpayers’ refund until acceptable documentation proving a child’s residency was received. Once IRS received the documentation, IRS examiners in Kansas City, Missouri, would review it and send a letter to the taxpayer accepting the claim, asking for additional information, or rejecting the claim. If the taxpayers provided acceptable documents, IRS would release the EITC portion of their refund. If acceptable documentation was not provided or if no response was received following a second notification letter, the taxpayer’s EITC claim would be denied and the taxpayer would be informed of his or her right to appeal to the U.S. Tax Court. This process is depicted in figure 3. Filing Status Test Required Substantiation That the Correct Filing Status Was Used Another cause of EITC errors is taxpayers claiming an incorrect filing status. EITC filing status errors occur when married taxpayers incorrectly file as single or head of household. Married taxpayers who incorrectly file individually as single or head of household could qualify for a larger EITC than they would otherwise be entitled to if they claimed the correct filing status. This is because, pursuant to statute, IRS considers the combined income of married taxpayers who file jointly for purposes of determining the amount of EITC for which the taxpayer(s) qualifies. Using combined income may result in taxpayers exceeding the EITC income ceiling and therefore receiving no credit at all, or qualifying for a smaller credit. For example, in tax year 2003, married taxpayers filing jointly with $17,500 of income each, or a combined earned income of $35,000, and four qualifying children would not be eligible for the EITC. However, if each taxpayer incorrectly filed as head of household, claiming two qualifying children and $17,500 of income, each would receive a credit of $3,405, for a combined total of $6,810. IRS’s databases offer limited ability to independently or systematically identify taxpayers who may be claiming an incorrect filing status. The primary purpose of the filing status test was to evaluate the impact on overclaims of requiring taxpayers whose filing status had changed from married to single or head of household at any time from 1999 through 2002 to substantiate the filing status they claimed on their 2003 tax return. To select the population for the filing status test, IRS started with a computer file of approximately 1.6 million taxpayer returns, or a 10 percent sample of all taxpayers who claimed the EITC with one or more qualifying children on their 2002 return. IRS eliminated the qualifying child and income misreporting test populations and applied other exclusions, such as excluding taxpayers subject to an audit examination or taxpayers with more than one potential EITC-related issue.
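To illustrate the combined-income effect in the filing status example above, the following simplified sketch computes a stylized three-range credit (phase-in, plateau, phase-out). The phase-in rate, phase-out rate, and income breakpoints are hypothetical placeholders chosen only to show the mechanics, not the actual tax year 2003 EITC parameters, so the amounts it produces will not match the figures cited above exactly.

def stylized_eitc(earned_income, phase_in_rate=0.40, max_credit=4204,
                  phase_out_start=14000, phase_out_rate=0.21):
    # Stylized three-range credit: phase-in, plateau, then phase-out to zero.
    # max_credit mirrors the $4,204 maximum cited for two qualifying children;
    # the other parameters are illustrative placeholders, not statutory values.
    credit = min(phase_in_rate * earned_income, max_credit)
    if earned_income > phase_out_start:
        credit -= phase_out_rate * (earned_income - phase_out_start)
    return max(credit, 0.0)

# Married couple earning $17,500 each: a joint return combines the income,
# pushing it past the phase-out range, while two incorrect head of household
# returns each keep a credit.
combined = stylized_eitc(35000)        # 0.0 with these placeholder parameters
separate = 2 * stylized_eitc(17500)    # about 6,900 with these placeholders
print(combined, separate)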
From that population, IRS selected taxpayers whose returns showed a filing status of married at least once in the previous 3 years. This resulted in a sample of 69,000 taxpayers, which IRS sorted by gender, zip code, and filing status. Using a random sampling method, IRS selected from these taxpayers 36,000 who filed as single or head of household on their 2003 tax return. Females filing as single or head of household comprised 96.9 percent of the test sample. The taxpayers in the 36,000 sample who filed a 2003 tax return claiming the EITC received a letter from IRS about 2 weeks after filing their return informing them that the EITC portion of their refund would be delayed until IRS reviewed their return. Within 30 days, IRS sent a second letter asking taxpayers to verify their filing status using the enclosed Form 886-FS, Filing Status Information Request, and send it back to IRS. This form requires taxpayers to provide documentation as to why they did not file as married for tax year 2003. Taxpayers were asked to provide IRS with documentation that they were divorced or legally separated as of December 31, 2003, that they did not live with their spouse for the last 6 months of the year, that the spouse was deceased, or that some other reason existed to warrant a change of filing status. IRS examiners reviewed the form and accompanying documentation and sought clarification or additional proof, if needed. If IRS examiners accepted the documentation, they released the EITC portion of the taxpayer’s refund and closed the case. If a taxpayer did not respond or IRS found the taxpayer’s documentation unacceptable, then IRS sent the taxpayer a notice stating that IRS (1) changed the taxpayer’s filing status from single or head of household to the married filing separate status, (2) disallowed the EITC, and (3) changed the taxpayer’s standard deduction to the appropriate amount. In addition, IRS sent the taxpayers a letter informing them of their right to appeal the changes to the U.S. Tax Court. This process is depicted in figure 4. The filing status test also included a subtest to gather additional information on EITC claimants who used the head of household filing status. The IRS/Treasury task force found that taxpayers using the head of household filing status were more likely to misstate their filing status than taxpayers using a different one. IRS selected 500 taxpayers who filed as head of household on their 2003 tax return. The sample of 500 taxpayers was 99 percent female and 1 percent male. Unlike the test for the 36,000 sample, IRS did not ask taxpayers in the subtest of 500 who filed a 2003 return to provide supplemental documentation to support their filing status until after they had received their EITC refunds. And, unlike the 36,000 sample, for which IRS had some information that the taxpayers had filed as married at least once from 1999 through 2002, IRS did not have such information on the taxpayers in the 500 sample. In fact, IRS could not determine whether these taxpayers were ever married. As a result, IRS asked these taxpayers to confirm their eligibility for the head of household filing status, which they claimed on their 2003 tax return, by either (1) calling IRS on a special toll-free number and stating that they used the correct filing status or (2) completing a stub that was attached to the letter they received, checking yes or no, and mailing or faxing it to IRS.
IRS did not ask these taxpayers to provide substantiation to support the filing status they claimed. This was, in part, because IRS had not identified any documentation that would be available to support a taxpayer’s claim that he or she had never been married. If taxpayers indicated they were not eligible to use the head of household filing status, they could correct their filing status by sending in an amended tax return either by mail or fax. IRS asked taxpayers to provide the information within 45 days from the date on the letter. All taxpayers who did not respond would be subject to an examination before their 2004 EITC refund would be released. In another aspect of the filing status test, IRS planned to determine whether a third-party service that attempted to locate the address of taxpayers could be as reliable as the filing status test in identifying taxpayers who had used an incorrect filing status. The locator service used information from credit bureaus to determine whether taxpayers were living together and possibly married. The information from the locator service had no impact on taxpayers for this year’s filing status test. Income Misreporting Test Used New Screening Process to Find Cases Likely to Yield the Highest Assessments Although some taxpayers could receive a larger EITC by over-reporting their income, misreporting of income for EITC purposes is generally an understatement, according to IRS, resulting in the taxpayer receiving a higher credit than he or she is entitled to. The purpose of the income misreporting test was to evaluate the impact on the test goal of a new screening process for selecting EITC tax returns from taxpayers likely to have the most significant changes in their tax assessments due to underreporting of income. Income misreporting is a component of an existing program known as Automated Underreporter (AUR). Under that program, IRS attempts to match income information as reported by the taxpayer on the tax return to information reported by third-party sources, such as a taxpayer’s employer or bank. In instances where this matching process identifies discrepancies, IRS may assess additional taxes on the taxpayer. The annual AUR matching program identifies far more cases than IRS has staff to work. In determining which cases to work, IRS selects not only cases that it believes will generate the highest probable assessment, but also cases involving taxpayers who underreport different types of income (e.g., wages, interest). In the past, some of those cases—roughly 300,000 per year—involved the EITC. However, the EITC was not one of the categories from which IRS historically had chosen cases. For the income misreporting test, IRS attempted to select, from all the EITC cases for which AUR found an income mismatch on 2002 tax returns, the 300,000 EITC cases expected to provide the highest EITC assessments. IRS employed a computer selection tool that used variables such as the taxpayer’s filing history, filing status, and number of children to rank the cases in terms of the highest probable EITC assessments. Additionally, IRS designed the test to determine whether certain characteristics of the selected cases made them more likely to yield higher assessments.
Thus, IRS placed each of the selected cases in one of four groups: (1) “repeater egregious,” cases that were in the same income category for the third year in a row and were assessed an additional tax for the previous 2 years; (2) “repeater worked,” cases worked at least once during the last 3 years; (3) “repeater not worked,” cases in the income misreporting inventory at least once in the last 3 years, but not worked; and (4) “other criteria,” cases randomly selected from the other three categories and other criteria, such as first-time underreporters. As figure 5 shows, the majority, 62 percent, of the taxpayers selected for the income misreporting test filed their return using the head of household filing status, while 30 percent claimed married filing jointly and 8 percent claimed a single filing status. IRS added the income misreporting test cases back into the general AUR inventory, and examiners in Atlanta, Georgia; Austin, Texas; and Fresno, California, worked the test cases using the same processes as for all other AUR cases. Examiners manually screened all cases for simple math errors or errors that could not be picked up by a computer (e.g., placing an amount on the wrong line). If such an error was found and resolved, the tax return was accepted, and the case was closed. If the examiner could not resolve the discrepancy, the examiner sent a notice to the taxpayer explaining that IRS found a discrepancy on his or her return. The taxpayer was given 30 days to respond to the notice. If no response was received, IRS sent another notice informing the taxpayer that the IRS had determined there was a deficiency in the return and the taxpayer must pay an assessment based on the deficiency or file a petition with the U.S. Tax Court within 90 days. If IRS received a response that took issue with IRS’s assessment, the examiner would then determine whether the response was sufficient to support the taxpayer’s original tax return. If the response was sufficient, the examiner would close the case with no additional tax assessed. If the response was not sufficient or a response was not received, the IRS examiner would assess the taxpayer the additional tax. This process is depicted in figure 6. IRS Reported Spending Less on Tests Than Anticipated IRS reported spending about $17.5 million on the three EITC tests—about $3.2 million less than planned. This funding was part of the Consolidated Appropriations Act of 2004, which provided IRS with $52 million in fiscal year 2004 for a five-point initiative to improve service, fairness, and compliance with the EITC program. IRS announced the new initiative in June 2003. The initiative addresses: reducing the backlog of pending EITC examinations to ensure that eligible taxpayers whose returns are being examined receive their refunds quickly; minimizing burden and enhancing the quality of communications with taxpayers by improving the existing audit process; encouraging eligible taxpayers to claim the EITC by increasing outreach efforts and making the requirements for claiming the credit easier to understand; ensuring fairness by refocusing compliance efforts on taxpayers who claimed the credit but were ineligible because their income was too high (or filing status was incorrect); and piloting a certification effort to substantiate qualifying child residency eligibility for claimants whose returns are associated with a high risk of error. Of the $52 million budgeted, IRS reported spending or obligating $51.8 million in fiscal year 2004.
Of that, IRS officials said they spent about $17.5 million on the tests: $7.4 million on the income misreporting test, $5.6 million on the filing status test, and $4.5 million on the qualifying child test. IRS officials noted that, in some cases, the amounts they reported spending differed from what they budgeted. For example, IRS originally budgeted $7.2 million for the filing status test, but reported spending $5.6 million on direct costs for that test. According to IRS officials, they spent about $3.2 million less than anticipated on the tests primarily because some planned work did not materialize. For example, for the filing status test, IRS originally planned to work more cases, but about 10,000 taxpayers who were originally selected for the filing status test were not included for various reasons, such as not having claimed the EITC. IRS officials said that, as a result, they redirected funding to improvement projects within the five-point initiative that would otherwise have gone unfunded. Tests Implemented Smoothly, and Refinements for the Fiscal Year 2005 Tests Made IRS’s implementation of the tests generally proceeded smoothly because of IRS actions including use of a detailed project plan and management involvement. IRS addressed most of the major issues that arose during implementation and released a status report to Congress in August 2004. Most of IRS’s planned refinements for the 2005 tests are based on the lessons that it learned from the 2004 tests. Tests Were Executed Largely as Planned, Thus Meeting the Original Intent The implementation plans for all three tests generally followed the recommendations of the IRS/Treasury task force, and IRS’s only significant departure from those recommendations was based on an informed decision. The task force recommended that taxpayers claiming the EITC (1) provide IRS with documentation to prove a qualifying child’s residency prior to payment of the credit (the qualifying child test) and (2) submit additional data to establish that they are claiming the correct filing status (the filing status test), and that IRS (3) use a new screening process to select tax returns from an existing program to identify taxpayers likely to have the most significant underreporting of income on their tax return and, therefore, the highest potential EITC overclaim amount (the income misreporting test). In all three tests, IRS gathered information needed to determine whether the task force recommendations have potential for reducing the EITC overclaim rate without undue adverse effects. It was important that IRS followed the task force recommendations; otherwise, the validity of those recommendations would remain unknown. IRS made an informed choice in not implementing one recommendation. The task force had also recommended that taxpayers certify the child’s relationship to the taxpayer. However, IRS determined that this was a lesser compliance problem than residency and that it could be difficult for taxpayers to provide some of the documentation that IRS planned to request for certification of the relationship. In addition, since both residency and relationship requirements had to be met to claim the EITC, if taxpayers failed residency certification, which the compliance study suggests is more likely, there would be no need to test for relationship.
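A minimal sketch of the selection and review order just described, in which taxpayers verified from available data are excluded and residency is tested before relationship, might look like the following; the record fields and return values are hypothetical illustrations, not IRS system fields or procedures.

def review_qualifying_child_claim(claim):
    # claim is a hypothetical dict of flags about one EITC qualifying child claim.
    if claim["residency_verified_by_available_data"]:
        # Supported by third-party data (e.g., the Federal Case Registry), so the
        # taxpayer is excluded from the certification test.
        return "exclude from test - eligibility verified from available data"
    if not claim["residency_substantiated_by_taxpayer"]:
        # Residency fails, so the claim is denied; the relationship requirement
        # never needs to be tested separately.
        return "deny claim - residency not substantiated"
    # Prior research: claims meeting residency usually meet relationship as well.
    return "accept claim - residency substantiated"

example = {"residency_verified_by_available_data": False,
           "residency_substantiated_by_taxpayer": True}
print(review_qualifying_child_claim(example))  # accept claim - residency substantiated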
To implement each test, IRS prepared a detailed project plan with time frames for numerous action items such as developing notices, creating organization charts, hiring staff, developing training materials, working on systems needs, and determining samples. We found that IRS officials used these plans extensively. For example, IRS managers initially checked the plan daily to determine whether the schedule was being followed, and less often as the tests progressed. For a task to be marked as completed, certain information had to be provided to the person in charge of monitoring the plan, including validation from a senior manager that the task had been completed. According to IRS officials, the extensive use of the project plan helped them execute and effectively monitor the implementation of the tests. Through Hiring, Training, and Management Actions, IRS Facilitated a Smooth Implementation Implementation went smoothly, in part because IRS hired sufficient numbers of staff and provided adequate training to them. In total, IRS hired about 410 staff, primarily examiners who processed cases and answered telephones, to implement the three EITC tests. About 260 of the staff were for the qualifying child and filing status tests, while about 150 were for the income misreporting test. The majority of the qualifying child and filing status test staff were new to IRS, were hired on a temporary appointment, and worked in Kansas City, Missouri. The income misreporting staff worked in Atlanta, Georgia; Austin, Texas; and Fresno, California. According to IRS officials, these staffing levels were appropriate to manage the workload, thus contributing to the overall smooth implementation of the tests. IRS provided specific training for the qualifying child and filing status tests. Among other things, the training included a history leading up to the tests, a description of the test processes, the roles and responsibilities of staff, several examples of how to determine whether taxpayer substantiation was acceptable, and information on how to use the Earned Income Credit Proof of Concept (EICPC) database, the database IRS used to manage the qualifying child and filing status tests. Examiners we met with in Kansas City told us the training was sufficient. However, there was a gap between the time examiners first received training and when they actually started doing the work. According to IRS officials, this gap caused the staff to lose some knowledge before they were able to apply it. To remedy this problem, IRS provided the staff refresher training and a staff-developed job aid. Examiners we interviewed said that, as a result, they felt confident in making decisions to accept or reject taxpayer substantiation. IRS did not provide specific training for the income misreporting test, and instead relied upon the AUR program training because the process for working cases remained the same—only how IRS selected the cases changed. In our visits to Atlanta and Fresno, we found consistency in the training that staff received for the income misreporting test, including how the procedures were used when screening and working cases. Management took steps to foster staff members’ understanding of the importance of the tests. Once the current EITC program director was installed in late 2003, management oversight of the tests became more apparent. Senior IRS managers responsible for the EITC were involved in managing many details of the implementation of the tests.
To help garner staff support, when the tests first began, IRS managers held meetings with the examiners and took action based on their expressed concerns, such as making key revisions to the EICPC system. In addition, front-line managers with whom we spoke in Kansas City said the EITC director’s involvement helped marshal staff support at that location. Managers said this was critical for smooth implementation of the tests, since staff were the ones dealing directly with the taxpayers. The examiners we interviewed also said team meetings with managers helped them understand and effectively convey information about the tests to taxpayers. IRS Monitored Undeliverable Mail and Attempted to Resend It to Corrected Addresses to Help Ensure Taxpayers’ Response to Tests IRS tracked undeliverable mail, that is, mail that was sent to taxpayers and returned to IRS by the U.S. Postal Service; this tracking was critical to the success of the tests. If taxpayers did not receive IRS’s correspondence—letters, forms, and notices—they would not have known they needed to respond. And, had there been large volumes of undeliverable mail, the feasibility of the tests could have been undermined. Ensuring that those selected for the tests received the correspondence could have been particularly difficult because research has shown that some EITC claimants are highly mobile. Although IRS used the most current address for test populations—in most cases the address taxpayers provided on their 2002 tax returns—IRS officials anticipated some mail being returned as undeliverable because the taxpayer no longer lived at that address. IRS first learned that it had a taxpayer’s incorrect address when it received the undeliverable mail from the U.S. Postal Service. As it typically does for undeliverable mail, IRS employed a locator service to attempt to find the taxpayer’s new address by using other kinds of information, such as addresses associated with any credit cards. When the locator service found a new address, IRS resent correspondence to the affected taxpayer. IRS Provided Several Means to Help Answer Taxpayers’ Questions and Found Strong Performance Where Data Were Available IRS provided several means for taxpayers selected for any of the three tests to contact the agency for assistance. For example, in the initial contact letter for the qualifying child test, taxpayers were informed that they could get help from a toll-free telephone number where examiners could answer their questions, any local IRS office (commonly known as walk-in sites), and any of the approximately 200 low-income taxpayer clinics (LITCs) in the United States. In addition, the National Taxpayer Advocate was prepared to assist taxpayers selected for all three tests as needed. Determining whether taxpayers received the correct information is an important aspect of implementation. IRS’s performance in terms of the percentage of callers getting through to the agency and the quality of the answers given was strong and comparable to similar IRS operations. IRS received about 100,000 telephone calls from taxpayers about the qualifying child test and about 75,000 calls about the filing status test, as of September 30, 2004. Common questions about the qualifying child certification test included “What documentation is acceptable?” and “When will my refund be released?” According to IRS’s telephone data, about 86 percent of callers got through and received service. Based on historical data, IRS officials considered this level acceptable.
Based on our annual reviews of IRS’s telephone performance during the filing season, we have reported comparable levels of service. For example, in 2003, 85 percent of taxpayers calling IRS’s main toll-free telephone lines got through and received service. IRS’s internal quality reviews showed that, as of September 30, 2004, test examiners provided accurate responses to taxpayers seeking assistance for the EITC tests via the telephone about 96 percent of the time, which was somewhat higher than the quality of IRS’s responses on its toll-free telephone lines. Because outside stakeholders expressed much concern about the tests, the National Taxpayer Advocate decided to assist any taxpayer selected for the tests, even if the assistance did not meet its established criteria. The Advocate expected to assist about 2,600 taxpayers based on a needs assessment, which was rooted in historical data. However, as of September 30, 2004, the Taxpayer Advocate had assisted a total of 837 EITC taxpayers participating in these tests. Most of the assistance involved helping taxpayers receive an expedited refund due to a financial hardship. Internal quality reviews showed that the Advocate met its quality standards 100 percent of the time for the test cases selected as part of those reviews. Steps Taken to Ensure Procedures Used by Examiners Led to Consistent Decisions For each test, IRS took several steps that were designed to ensure consistency among the examiners making decisions about whether to accept taxpayers’ substantiation. The qualifying child and filing status cases were worked in one location—Kansas City, Missouri—to make it easier to ensure consistency among examiners. The income misreporting test cases were worked in three locations—Atlanta, Georgia; Fresno, California; and Austin, Texas. IRS officials said they did not believe there would be a consistency problem in having the income misreporting test conducted across these locations because the test was not a significant departure from the general AUR work that had been done in these locations for the past several years. Other examples of steps IRS took to ensure consistent decision-making by examiners included holding multiple team meetings with staff, sending out notices to staff when errors were noted, having certain groups work only certain kinds of cases, preparing a job aid for examiners, and conducting extensive quality reviews. According to IRS managers and examiners we spoke with, these steps helped examiners make consistent decisions in the cases they were reviewing. Another step IRS took to ensure consistency was to have managers in Kansas City review all those cases where taxpayers provided substantiation for the qualifying child test prior to filing their tax return—a total of about 800 reviews. This review helped IRS identify and correct problems that arose early in the tests. IRS officials stated that the review helped provide for a smooth implementation because it identified problems, which IRS corrected, and enabled IRS to issue supplemental guidance to ensure repeat errors did not occur. For example, for the qualifying child test, taxpayer substantiation did not always clearly indicate the exact dates of a child’s residency with the taxpayer—for example, some may have shown “July through December 2003.” Some examiners interpreted that to mean July 1 through December 31, giving taxpayers the time needed to certify the qualifying child’s residency.
Other examiners interpreted this same information to mean July 1 through December 1, which did not credit taxpayers with enough time to qualify their child. This review identified the inconsistent interpretation of dates, and IRS made a policy decision and issued guidance on how to interpret dates when the dates provided were vague. IRS Internal Reviews Showed Few Significant Implementation Problems Several ongoing internal quality reviews during the tests generally found few significant implementation problems. IRS managers conducted reviews at the test sites, which examined accuracy, timeliness, and staff professionalism. For the qualifying child and filing status tests, these reviews showed generally high performance—case file documentation was correct 87 percent and 93 percent of the time, respectively, as of September 30, 2004. IRS did not capture these data for the income misreporting test; however, IRS data show that, as of September 30, 2004, 95 percent of all AUR cases, which included the income misreporting cases, contained correct documentation. The EITC Program Office also conducted a review that assessed whether policies and procedures for the qualifying child and filing status cases were being timely, accurately, and consistently followed. According to IRS, the review showed good results. For example, for the filing status test, the time between an examiner’s decision to accept taxpayers’ documentation and the issuance of the taxpayers’ refund averaged less than 30 days. IRS Addressed All But One Significant Problem That Arose During Implementation Although several problems surfaced during implementation, particularly in the qualifying child test, IRS officials said that because they were able to take quick actions to address the problems, the problems did not adversely affect the tests or taxpayers selected for them to any great extent. It was not surprising that most of the problems involved the qualifying child test because it was a greater departure from past practice than were the other two tests. For example, although IRS had previously asked taxpayers to provide documents substantiating their qualifying child upon audit, IRS had not previously allowed taxpayers to provide affidavits to prove their claim; therefore, examiners had never reviewed such documents. In contrast, IRS considers the filing status and income misreporting tests expansions of existing IRS programs. Examples of problems and IRS’s actions to address them include: Early in the implementation of the test, some examiners advised taxpayers who had called about letters received from IRS to complete documentation for the qualifying child test even though they were selected for the filing status test. Examiners were instructed via an e-mail alert to use the EICPC database to determine the test for which the taxpayer was selected. Some qualifying child and filing status case files (paper and electronic) had documentation deficiencies, such as not getting a taxpayer’s phone number for the case file or not obtaining complete/required information for cases where the taxpayer agreed with IRS’s proposed changes. Through an e-mail alert, IRS officials reminded examiners of the procedures they must follow to properly document files. Some files were missing for the qualifying child and filing status tests.
IRS established a new procedure: when a file could not be located within 2 weeks after the taxpayer had submitted correspondence, a new file would be created and marked “Possible Duplicate Folder.” In all three cases, IRS officials stated that the ongoing quality reviews helped ensure that examiners followed the correct procedures. Although IRS addressed problems that arose during the implementation of the tests, one significant problem still lingers. Some important information about all three tests, including a key policy decision regarding the filing status test, was either not well documented or not documented at all. Internal control standards state that significant decisions and events should be documented and readily available for examination. When documentation is lacking, it is difficult for management or staff to gain an understanding of the program, refine work processes, or fully monitor the implementation. Further, developing and documenting such information would help ensure that test results are accurately determined and would enable others to review the methodology. IRS developed various management documents to organize, direct, and monitor the test operations. However, while some important decisions about the tests were made after these documents were developed and after test implementation began, IRS did not update the documents to explain those decisions. For example, IRS’s initial plan required that the filing status subtest involving taxpayers who had never filed as married, but had filed as head of household on their 2003 return, include 5,000 taxpayers. Several months later, IRS reduced the sample to 500, but did not document the rationale for this decision until much later and at our request. Also, certain other key information, such as when and how information from a third-party locator service for the filing status test would be used, was not fully developed or sufficiently detailed. The Treasury Inspector General for Tax Administration (TIGTA) found similar deficiencies in IRS’s documentation about the tests occurring during implementation. This lack of documentation hindered not only test monitoring and oversight but also the development of a common understanding of the tests. According to IRS officials, time constraints or other priorities caused some significant decisions about the tests not to be documented at the time those decisions were made. Further, they said that changes to tests are common during implementation and that they focused attention on ensuring the tests were carried out correctly, rather than on documenting the reasons for changes and other decisions as the tests proceeded. However, IRS officials acknowledged that documenting significant events was important. In some correspondence to taxpayers about the tests, IRS referred taxpayers to LITCs or walk-in sites for assistance. However, IRS did not gather information on or measure the level or quality of assistance provided to test participants at LITCs or walk-in sites. IRS officials said they did not collect these data because they thought taxpayer use of this assistance would be minimal and there was no practical or cost-effective way to gather the information. In his response to our draft report, the Commissioner echoed this sentiment, noting that because qualifying child test participants were randomly selected from around the country, efforts to measure services would be extremely difficult.
Further, IRS officials did not view this as an implementation problem, but instead viewed it as a limitation of the tests. Whether it is a problem with implementation or test design, there are some important reasons why it would be useful to know the level and quality of services provided. For example, our prior work found that the quality of service IRS walk-in sites provided taxpayers was unknown. Further, face-to-face assistance is costly, especially when compared to telephone services, which were used extensively in the 2004 tests. Recognizing that options for providing taxpayer assistance and outreach efforts are important, if IRS had data on the level and quality of service provided, it would be in a better position to determine the cost and benefit of providing this assistance. Officials recognize that information on use of these forms of assistance would be useful and indicate that they will collect information in conjunction with a planned 2005 simulation of a nationwide test. The simulation is discussed later in this letter. Lessons Learned from 2004 Tests Prompt Most Refinements for New Round of Tests IRS officials identified lessons learned from the 2004 tests that were implemented to help improve the 2005 tests. For example, IRS officials plan to expand the use of automated telephone responses, which is important because most taxpayers contacted IRS by telephone to obtain information about the tests. Changes to forms and letters based on case reviews and examiner input. IRS officials told us that their modifications to the letters and forms for the qualifying child and filing status tests to be used in 2005 were primarily based on case file reviews and discussions with IRS examiners who interacted with the taxpayers selected for the 2004 tests. In April 2004, for example, EITC program officials reviewed case files and met with examiners to discuss common taxpayer errors and questions about letters and forms for the qualifying child and filing status tests. Examples of taxpayer questions were: “How do I prove I did not live with my spouse?” or “Who can fill out the affidavit?” Examples of taxpayer errors on forms included no signature, incomplete dates to prove residency, and either no Social Security number listed on the form or an incorrect number. As a result of their review, IRS officials revised the forms containing such information (Form 8836, Qualifying Children Residency Statement, and the accompanying affidavit). For example, IRS changed the layout of the affidavit to help reduce taxpayer errors involving dates and the amount of time a child resided with the taxpayer. IRS did not make changes to the letters and forms for the income misreporting test because they were the same ones used under the general AUR program. Improvements to key database based on examiner suggestions. Examiners in Kansas City, the site responsible for the qualifying child and filing status cases, suggested about 30 updates or other improvements to the EICPC system that they said would either reduce errors in the database, help IRS better manage the cases and workload, or improve subsequent data analyses. For example, examiners noted that they lacked computer fields to record certain information, such as the taxpayer’s telephone number, whether the case was worked by the Taxpayer Advocate’s Office, or whether an amended return had been received. As a result, examiners suggested ways to capture these data, which have been incorporated into the EICPC.
IRS is continuing to update and make improvements to the EICPC. Use of automated telephone voice response expanded. For the fiscal year 2004 qualifying child and filing status tests, IRS did not use automated responses to answer routine telephone calls and did not have a mechanism in place to obtain taxpayers' views about telephone services provided. Both options were available for the income misreporting test and are available for users of IRS's other toll-free telephone numbers. Officials recognized that commonly asked questions, such as "Where do I mail the required documentation?" or "Who can sign an affidavit?" could be answered via automated responses, and they plan to provide this option for the fiscal year 2005 tests, leaving only the more difficult questions to be answered by an examiner. IRS also decided to implement a random feedback survey of taxpayers on the quality of service they received for the qualifying child and filing status tests when they called the toll-free number. The survey is a modified version of the one that IRS uses for its regular toll-free telephone operations. Changes made to the qualifying child test encourage early certification and simulate implementation across the country. There are two major changes to the qualifying child test for 2005: (1) taxpayers will be encouraged to certify in advance of filing their return that their child met the EITC residency requirement; and (2) a portion of the taxpayers will be drawn from a single community—Hartford, Connecticut—while the rest will be drawn randomly from across the nation. IRS officials contend that early certification could help reduce delays in releasing EITC refunds because examiners would be able to validate cases before the start of the tax filing season, when workloads reach their peak. Eligible taxpayers who provide acceptable documentation before the start of the tax filing season could get their EITC refund more expeditiously, IRS officials say, because the documentation would already be validated at the time taxpayers file their tax returns. IRS has some evidence that taxpayers are willing to certify in advance of the filing season because about 800 taxpayers did so as part of the 2004 qualifying child test, even though they were only asked to do so when filing their returns. Regarding the targeting of the single Hartford, Connecticut, community, IRS officials told us that they intend to simulate what might happen if an early certification requirement were imposed across the country. This change was the result of a recommendation from a contractor's review of the 2004 test's sampling methodology. As part of this test, IRS plans to mount an outreach campaign that includes partnering with local governmental and community-based entities to provide taxpayers with assistance. Need for refinement prompts reduction in filing status sample size. Based on its experience with the sample selected for the 2004 filing status test, IRS decided to dramatically reduce the sample size for next year's test, while simultaneously trying to improve the criteria for selecting the sample. As this year's test was implemented, IRS officials realized that the test was yielding a high number of taxpayers claiming the correct filing status, suggesting that the selection criteria could be improved and that the burden on taxpayers to prove their filing status was high relative to the benefits gained.
As a result, IRS officials reduced the sample size from 36,000 to 5,000 for the 2005 test to minimize taxpayer burden as IRS works to improve the selection criteria. IRS also is testing two refinements to the sample selection criteria for the 2005 filing status test to determine whether the criteria can be improved. First, IRS plans to apply TIGTA's finding, which IRS officials said they had also identified, that IRS could better use information it possesses to verify the filing status of some taxpayers, such as those whose spouses have died or those who have submitted an amended return. Any such taxpayers whose filing status could be verified using such information would not be included in the sample. Second, IRS also plans to refine the sample selection to exclude taxpayers whose filing status of single or head of household can be corroborated by information from the third-party locator service, which was tested in 2004. Income misreporting changes designed to improve sample selection. IRS has planned minimal changes for the 2005 income misreporting test because it found few issues that needed to be addressed. Changes were made to the selection criteria to help identify cases with a potentially higher assessment amount. For example, IRS will no longer select cases in which the taxpayer's adjusted gross income is over the maximum amount for claiming the EITC but the EITC is claimed anyway, because IRS found that those cases yielded a lower assessment than other cases. IRS's 2004 Evaluation Plans Lacked Sufficient Documented Detail to Allow for Oversight; Evaluation Plans for 2005 Tests Were Not Completed Before Two of the Tests Had Begun IRS's plans for evaluating the three 2004 tests lacked sufficient documented detail to facilitate managerial review and stakeholders' oversight and thereby help ensure that the evaluation of the tests' results would be as sound as possible and that the results would be communicated with full recognition of their strengths and limitations. For many aspects of IRS's evaluation plans, we were able to discern IRS's intentions by piecing together information from multiple sources, including interviews with IRS officials. In essence, an evaluation plan is used to manage the evaluation endeavor. As such, the more completely a plan is developed, the more likely it will be useful to managers in ensuring that the evaluation is well executed. Despite the importance of having detailed plans prior to implementation, IRS had not completed its evaluation plans for the 2005 tests before two of those tests had begun. Evaluation Plans Had Strengths Including Linkage Among Test Goals, Evaluation Objectives, and Outcome Measures Considering the written evaluation plans themselves, interviews with IRS officials, IRS's status report to Congress, and other documents, we found that IRS's plans for assessing the three tests had important strengths. For instance, IRS's evaluation plans: had clear goals for each of the three tests. The primary goal of all three tests was to reduce overclaim rates. There were additional goals for the qualifying child test—maintaining EITC participation for eligible participants and minimizing taxpayer and IRS administrative burden. linked evaluation objectives and outcome measures—which determine the extent to which the goals were met—to the test goals. For example, the income misreporting test had outcome measures that included the percentage of cases where an EITC claim was reduced or disallowed and the average amount of the change.
These measures were clearly linked to the test's goal of reducing EITC overclaim rates. selected samples to provide information that could be generalized to the EITC population being targeted. Both TIGTA and an outside consultant reviewed the samples for the qualifying child test and found that the sample of 23,000 taxpayers for the general test was sufficient—a conclusion with which we also agree. TIGTA also reviewed the sample for the income misreporting test and found that it should provide reliable results. Lack of Detail and Documentation in Evaluation Plans Undermined Their Value IRS's evaluation plans for the 2004 tests lacked sufficient documented detail for us to determine how IRS planned to conduct key aspects of the evaluations. When we were able to determine how key aspects of the evaluations would be conducted, we often did so based on interviews and analyses of various documents. The general lack of detail and documentation undermined the value of the plans by, for example, limiting IRS's and stakeholders' ability to oversee the evaluations, identify and address limitations in the evaluations, and ensure that limitations would be clearly communicated when the results are disseminated. IRS's written evaluation plans for the three tests were essentially outlines that were not comprehensive, meaning that they did not fully document all key aspects of the evaluation. For example, IRS's written plans did not provide information on the sampling methodologies used in all three tests. These were not articulated until IRS issued its August 2004 status report to Congress. In addition to the status report, which also provided additional insights into the types of analyses IRS plans to conduct, we relied on multiple other sources to gain a complete understanding of IRS's planned evaluation activities. We interviewed IRS officials and reviewed other information and documents they provided, such as the contractor's report on the qualifying child test's sampling methodology. According to IRS officials, the lack of comprehensive and detailed written plans was due to other priorities, such as undertaking the numerous steps needed to implement the tests themselves. While we recognize the competing demands on the EITC program office, striking a balance between documenting evaluation plans and implementing and evaluating the tests helps ensure that all parties understand the evaluations and that managers and stakeholders are able to oversee implementation and the evaluations. Well-developed evaluation plans have a number of benefits, perhaps most importantly increasing the likelihood that evaluations will yield methodologically sound results, thereby supporting effective policy decisions. Such plans (1) help ensure that the agency has addressed the principal aspects of the evaluation, including the research design, outcome measures, target and sample populations, data collection activities, analyses, and dissemination of results; (2) help officials monitor changes to tests and assess the impact of those changes on the planned evaluations; and (3) facilitate management and stakeholder review. Having comprehensive and detailed evaluation plans helps ensure that all those working directly on the evaluation have a common understanding of how data will be collected and analyzed and how impacts will be assessed.
Concerns or weaknesses can be identified and corrected, and plans can be updated to reflect any changes during implementation and afterwards, as the evaluation plan could be considered to be a “living document.” Finally, a well-developed plan helps ensure that evaluation results can be communicated with appropriate recognition of the evaluation’s strengths and limitations so stakeholders can better understand how to use the results when making decisions. The following are illustrations of the overall lack of detail and documentation in IRS’s evaluation plans for the 2004 tests. Evaluation objectives were not documented in one place. Although we found that IRS’s evaluation plans had objectives linked to the test goals, the objectives were not identified in any single location for any of the three tests nor specifically identified as objectives. Thus, we pieced together the information from multiple sources, including interviews with IRS officials. For example, we had difficulty identifying the evaluation objectives pertaining to the use of the third-party locator service for the filing status test. Key outcome measures lacked important detail. IRS’s evaluation plans lacked important information for all the key outcome measures, such as their definition and purpose, formula/methodology, data source and collection method. For example, IRS’s evaluation plans for the qualifying child test did not identify the specific data that would be used to produce the outcome measure—the number of taxpayers who claim (or do not claim) the EITC. IRS has provided this type of information about its measures for other programs. For example, for its telephone and other operations, IRS annually prepares a comprehensive document known as a data dictionary, which includes items such as the definition and purpose of the measure and its formula/methodology. IRS officials agreed that providing such information in the evaluation plans could have been valuable in managing the EITC tests. Without knowing details on outcome measures, stakeholders do not have enough information about a measure to know whether it is valid and reliable. Limited information was provided on planned analyses. The evaluation plans also lacked specificity with regard to the key analyses planned and what those analyses were intended to accomplish. For example, IRS conducted a survey to obtain information about a taxpayer’s experience with the qualifying child test. IRS originally planned to survey these taxpayers in April 2004. The survey was not conducted until September 2004, primarily due to delays in selecting a contractor and developing the survey instrument. The 5-month delay may substantially reduce the number of taxpayers who accurately remembered the actions they took and thus affect the quality of the responses (i.e., recall bias). The accuracy of individuals’ survey responses declines the further away those responses are from the date of the actual events. IRS and the contractor are aware that such recall bias could exist and stated that they will consider it when analyzing the survey results, but no detail was available on how they would do so. This is critical because the potential utility of the survey results could be in question. The lack of detail in IRS’s evaluation plans also increased the risk that reports disseminating the results of the tests would not fully disclose the evaluations’ potential limitations. 
In its August status report to Congress, IRS did not make clear that the qualifying child test results could only be generalized to taxpayers IRS had reason to believe were most likely to make an erroneous claim for the EITC when filing for the EITC in 2002. Absent such clarity, stakeholders might incorrectly assume that test results apply to all taxpayers claiming qualifying children for the EITC. Also, IRS did not describe potential limitations of the outcome measures, specifically, how non-respondents would be accounted for in measure calculations. IRS officials recognize that their final 2005 report will need to include information on the evaluation limitations, and expect to provide sufficient detail and explanation of limitations in that report. Evaluation Plans for 2005 Tests Not Completed Before Two of the Tests Began As of early December 2004, IRS had not completed evaluation plans for the 2005 testing, even though the qualifying child and income misreporting tests began in November. According to IRS officials, they had not yet completed an evaluation plan for the 2005 tests because final decisions about the testing were still being deliberated in October. In their view, it was less important to finish an evaluation plan for these tests by the time testing began, because IRS could use the 2004 evaluation plans in the interim. IRS officials acknowledge that evaluation plans are important and have started to develop them for the 2005 tests. IRS can build upon the 2004 evaluation plans for all three tests. However, IRS made substantial changes for the qualifying child and filing status tests, which would need to be taken into account in developing comprehensive and detailed evaluation plans for the 2005 tests. Therefore, while we recognize that there will be similarities with the 2004 evaluation plans, the importance of having evaluation plans in place as testing begins or soon thereafter is heightened because of planned changes to the test. Conclusions The EITC program lifts millions of low-income taxpayers and their families out of poverty. However, its high rates of noncompliance—overclaims for the credit—could potentially undermine the credibility of the program because billions of dollars are annually paid out that should not have been. IRS’s three tests—qualifying child certification, filing status, and income misreporting—are major initiatives to reduce overclaims by addressing the leading errors taxpayers make. Given the importance of the EITC to many low-income households and concerns about high overclaims, these tests are being closely watched by numerous stakeholders. Although IRS has generally implemented each of the tests smoothly, it did not fully document some key management decisions and other significant events. Documentation supports a common understanding among staff about the program they are administering—particularly one as complicated as the EITC—and helps managers monitor whether a program is implemented as planned. Having adequate documentation during the 2005 tests could help foster a better understanding of the tests, ensure results are accurately determined, and facilitate review and oversight. In addition, while IRS told taxpayers selected for the qualifying child test they could visit various physical locations for assistance, including LITCs and IRS walk-in sites, IRS did not collect information from those sites to determine the level and quality of services provided. 
Because officials believe relatively few taxpayers used these sites, collecting information from the sites may not have been practical. However, the single-city simulation of nationwide implementation may offer an opportunity to gather some information on these services. The evaluations that IRS is conducting of each test are likely to yield some useful information and results that will help IRS officials and other stakeholders judge whether and how to proceed with further implementation of the new approaches to reducing EITC overclaims. Nevertheless, the lack of detail and documentation in the evaluation plans impeded officials' ability to manage the evaluations as well as external stakeholders' ability to review and understand the evaluations' strengths and limitations. As of early December 2004, IRS had not completed its 2005 evaluation plans, although testing was underway for the qualifying child and income misreporting tests. A well-developed and timely plan would help IRS to improve on the 2004 evaluation plans and take into account changes in the tests themselves. Recommendations for Executive Action The Commissioner of Internal Revenue should adopt a policy of documenting the rationale for key policy decisions and other significant events as the 2005 tests are implemented; develop a means of gathering information during the 2005 tests on the use of such locations as LITCs and walk-in sites and on the level and quality of service provided by those sites, particularly in light of IRS's plans to draw its sample from a single community for the qualifying child test; ensure that reports disseminating the results of the 2004 and 2005 test evaluations clearly outline aspects of test design and evaluation shortcomings that limit the interpretation and utility of the results; and complete the development of comprehensive and adequately detailed evaluation plans for the 2005 tests. These actions should be taken as soon as possible, with any significant changes to the evaluation plan appropriately documented as the evaluation unfolds. Agency Comments and Our Evaluation In his December 22, 2004, letter, the Commissioner of Internal Revenue agreed with our recommendations. Regarding the issue of documenting significant policy decisions, he noted competing demands that can often affect the quality of documentation, which we acknowledge in our report, and said that IRS has implemented a process to meet this recommendation. The Commissioner noted that providing taxpayers with assistance is a top IRS priority. As such, the Commissioner reported that IRS has plans to identify the level and quality of services provided to taxpayers at LITCs and walk-in sites in the single test community. Regarding dissemination of results, the Commissioner reported that IRS is committed to ensuring that all aspects of the test design and evaluation will be clearly described to stakeholders. Finally, the Commissioner reported that IRS intends to complete the 2005 evaluation plans based, in part, on GAO's recommendations about what a plan should contain. He also noted that IRS may need to assess whether any modifications to the 2004 qualifying child test criteria are appropriate in light of public events and community leadership reaction in the single test community. We are sending copies of this report to the Chairmen and Ranking Minority Members of the Senate Committee on Finance and the House Committee on Ways and Means.
We are also sending copies to the Secretary of the Treasury; the Commissioner of Internal Revenue; the Director, Office of Management and Budget; and other interested parties. We will make copies available to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. This report was prepared under the direction of Joanna Stamatiades, Assistant Director. Other major contributors are acknowledged in appendix IV. If you have any questions about this report, contact Ms. Stamatiades at (404) 679-1900 or me at (202) 512-9110. Scope and Methodology For all three objectives, we reviewed and analyzed documents including Treasury's EITC compliance study of 1999 tax returns; a joint IRS/Treasury task force report; monthly status reports for each of the tests; draft and final letters, forms, and notices for each of the tests; implementation and evaluation plans; our prior reports; status results of the tests reported by IRS and its contractors; and reports and EITC literature by external stakeholders. We also interviewed Department of the Treasury and IRS officials involved in the EITC tests, including the National EITC Director, the National Taxpayer Advocate, the Director of Research, Analysis and Statistics, and other IRS officials involved with implementing the tests. Additionally, we interviewed external stakeholders such as individuals at TIGTA, the Center on Budget and Policy Priorities, and the Urban Institute, and reviewed and analyzed their reports. We took steps to ensure that the data we received from IRS were reliable for the purposes of this report and determined that they were. Some of those steps included interviewing IRS officials knowledgeable about the computer systems from which the data we obtained came and reviewing documentation, such as system manuals and flowcharts. We identified and assessed potential data limitations and compared those results to our data reliability standards, noting no significant weaknesses. In addition, to describe the three tests and determine how IRS was spending the money appropriated to it for the tests, we interviewed managers and budget officials in the EITC Program office, reviewed and analyzed IRS's fiscal year 2004 budget request, and compared IRS's planned EITC spending to its actual spending. Because IRS does not have an adequate cost accounting system, we could not verify the accuracy of the figures IRS provided to describe how funds appropriated for the tests were spent. We identified attributes of sound program implementation based on reviews of the social science literature, our prior work, and interviews conducted with IRS research and program management officials and external stakeholders, such as the Urban Institute. We tailored these attributes to apply them specifically to IRS's tests as shown in table 3. Finally, to assess how well IRS implemented the tests and determine IRS's planned refinements for further testing in fiscal year 2005, we reviewed policies, procedures, and training documents; observed procedures and operations in Kansas City, Missouri; Atlanta, Georgia; and Fresno, California; and interviewed frontline IRS managers and examiners in these locations. We reviewed several case files for each test. Additionally, we analyzed relevant interim reports prepared by IRS and its contractors, identified key results, and discussed them with IRS officials.
To assess whether IRS's plan for evaluating the tests contained sufficient documented detail to facilitate managerial review and stakeholder oversight, we used GAO guidance and the social science evaluation literature to identify key attributes of an evaluation. These attributes included the research design, outcome measures, target and sample populations, data collection activities, analyses, and dissemination of results. We obtained all available documentation on IRS's evaluation plans for each of the tests and reviewed that documentation to determine whether we could understand from the documentation alone how IRS planned to address the key attributes. Where we could not, we interviewed IRS officials to further understand whether and how the officials planned to address those key attributes. Written documentation should be complete, facilitate tracing of events, and be readily available for examination to foster a common understanding of the program and facilitate oversight. To describe the status of IRS's evaluation plan for the fiscal year 2005 tests, we primarily relied on interviews with IRS officials. Updated Results from the Income Misreporting Test as of September 30, 2004 In August 2004, IRS issued a status report to Congress, which was mandated by the Consolidated Appropriations Act of 2004. The report presents an overview of each of the three EITC tests, along with the design, status, and preliminary findings as of June 2004. According to the EITC National Director, the report contained some of the types of information that will be needed to support future decisions about the full implementation of the tests. Additionally, the EITC National Director noted that IRS also used the status report to provide information on such items as the sampling strategy that had been lacking in other documents. IRS had updated results for the income misreporting test as of September 30, 2004. Updated results were not available for the qualifying child or filing status tests. As IRS stated in its status report, which showed data as of June 26, 2004, it is important to note that because the results are interim, no conclusions should be drawn from the information provided, and no analyses about the impact of the tests were included. As table 4 shows, IRS has screened all 300,000 tax returns for the income misreporting test, and slightly more than half have been closed with taxpayer agreement. Comments from the Internal Revenue Service GAO Contacts and Staff Acknowledgments GAO Contacts Staff Acknowledgments In addition to those named above, Tom Beall, Evan Gilman, Veronica Mayhand, Susan Mak, Donna Miller, Libby Mixon, Chris Moriarity, Ed Nannenhorn, Cheryl Peterson, Michael Rose, and Robyn Trotter made key contributions to this report. GAO's Mission The Government Accountability Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO's commitment to good government is reflected in its core values of accountability, integrity, and reliability.
Research has shown that the Earned Income Tax Credit (EITC) has helped lift millions of individuals out of poverty. In recent years, the Internal Revenue Service (IRS) has paid approximately $30 billion annually to about 20 million EITC recipients. However, the program also has experienced a high rate of noncompliance. IRS estimated that EITC overclaim rates for tax year 1999, the most recent data available, were between 27 and 32 percent of dollars claimed or $8.5 billion and $9.9 billion, respectively. We were asked to describe the three tests IRS has begun to reduce overclaims and how the funds appropriated for them were spent; assess how well IRS implemented the tests and describe planned refinements for the 2005 tests; and assess whether IRS's evaluation plans had sufficient documented detail to facilitate managerial review and stakeholder oversight and describe the status of the 2005 evaluation plans. IRS implemented three tests in 2004 to address leading sources of EITC errors: a qualifying child test, where selected taxpayers were asked to document that their child lived with them for more than half the year in 2003; a filing status test, where selected taxpayers were asked to provide documentation to prove the accuracy of their 2003 filing status, and an income misreporting test, where a new screening process was used to select EITC returns that identify taxpayers likely to have the most significant changes in their assessments due to underreporting of income on their tax return. IRS's implementation of the tests proceeded smoothly and largely as planned. However, some information, such as a key change in the filing status test, was not well documented and the level and quality of some services provided to test participants were not measured. This lack of documentation hindered monitoring, oversight, and did not foster a common understanding of the tests. For the 2005 tests, IRS made key changes to the qualifying child test to encourage taxpayers to certify in advance of filing their return and to attempt to simulate what might happen with nationwide implementation. IRS also changed the sample selection criteria for the filing status test to better target noncompliant taxpayers. IRS's plans for evaluating the 2004 tests generally lacked documentation and detail for many key issues, which undermined their value to managers and stakeholders. For example, IRS did not specify how it planned to analyze some qualifying child survey data. In essence, an evaluation plan is the management plan or roadmap for the evaluation endeavor and well-developed plans facilitate test management and oversight. Despite the importance of having evaluation plans prior to implementation, IRS had not completed its plans for the 2005 tests before two of the tests had begun.
GAO-06-538
Background As of January 1, 2000, all federal agencies covered by EEOC regulations were required to establish or make available an ADR program for both the informal and formal complaint stages of the EEO process. On March 9, 2000, at a joint hearing held by the Subcommittee on Civil Service of the House Committee on Government Reform and the Subcommittee on Military Readiness of the House Armed Services Committee, the Navy discussed the results of its experiences under its 18-month pilot program for resolving EEO complaints through the use of ADR, which resulted in resolution in an average of 31 days. The Floyd D. Spence National Defense Authorization Act for Fiscal Year 2001 authorized the Secretary of Defense to carry out at least three pilot programs—one at a military department and two at DOD agencies. The programs were authorized to operate for 3 years. The act exempts the programs from EEOC's procedural requirements or restrictions. In 2004, DOD authorized the following as pilot programs: (1) DLA, which provides worldwide logistics support—munitions and supplies—for the missions of military departments; (2) DeCA, which operates a worldwide chain of commissaries providing groceries to military personnel, retirees, and their families at a discount; and (3) 31 bases of the USAF, accounting for about one-third of USAF bases with federal EEO programs. The pilot programs were authorized by the Secretary for 2 years with an option for an additional (third) year. The legislative objectives for the programs are to reinforce local management and chain of command accountability and to provide the parties involved with an early opportunity for resolution. The legislation also provides that pilot program participants voluntarily participate in the pilot program and that participants maintain their right to appeal final agency decisions to EEOC and file suit in federal district court, as is the case in the federal EEO complaint process. The Office of the Deputy Undersecretary of Defense for Civilian Personnel Policy, the Office of the Deputy Undersecretary for Equal Opportunity, and the Office of Complaint Investigations within the Civilian Personnel Management Service have ongoing responsibility for oversight, monitoring, and evaluation of the overall pilot program. Under EEOC regulations, ADR techniques can be used during the informal, or precomplaint counseling, stage. Counselors are to advise individuals that, when the agency agrees to offer ADR in the particular case, they may choose to participate in either counseling or ADR. If the matter is not resolved by counseling or if ADR is unsuccessful, the counselor is required to inform the employee in writing of his or her right to file a formal discrimination complaint with the agency. ADR can also be used after an agency receives a formal complaint. After a complainant files a formal discrimination complaint, the agency must decide whether to accept or dismiss the complaint and notify the complainant. If the agency dismisses the complaint, the complainant can appeal the dismissal to EEOC. If the agency accepts the complaint, it has 180 days to investigate the accepted complaint and provide the complainant with a copy of the investigative file. Within 30 days of receipt of the copy of the investigative file, the complainant must choose between requesting (1) a hearing and decision from an EEOC administrative judge (AJ) or (2) a final decision from the agency. When a hearing is not requested, the agency issues a final decision.
A complainant may appeal an agency’s final decision to EEOC. In cases where a hearing is requested, the AJ has 180 days to issue a decision and send the decision to the complainant and the agency. If the AJ issues a finding of discrimination, he or she is to order appropriate relief. After the AJ decision is issued, the agency can issue a final order notifying the complainant whether or not the agency will fully implement the decision of the AJ, and the employee can file an appeal with EEOC. If the agency issues an order notifying the complainant that the agency will not fully implement the decision of the AJ, the agency also must file an appeal with EEOC at the same time. See appendix I for more details and associated time frames related to the EEO complaint process. The Three Programs Emphasize ADR Techniques, Share Common Implementation Strategies, and Report Low Case Activity Although features of the three programs vary by agency and focus on different stages of the complaint process, they all emphasize the use of ADR techniques available under the current federal EEO process. They also share common implementation strategies, including outreach to eligible staff to inform them about the programs, staff training, and electronic data collection. In its 9-month evaluation, DOD observed that pilot program activity had been lower than anticipated; DOD did not provide a baseline for its comparison or elaborate on the reason for this occurrence. After 12 months, program officials continue to report low case activity. DOD’s Pilot Program Emphasizes ADR Techniques In developing the overall EEO pilot program, DOD allowed DLA, DeCA, and USAF to determine their individual program design. However, in its memo soliciting pilot program proposals, DOD encouraged potential participants to work with the Office of Complaint Investigations to develop the format and content of their proposals, offering the assistance of the Office’s experienced staff of certified complaint investigators and mediators with success in using ADR techniques. Two of the programs— DLA and DeCA—emphasize the use of ADR in the informal stage, consistent with federal EEO regulations. Program officials said that their programs are attempting to address the legislative objective of providing early opportunity for resolution by focusing on ADR. The third program, in selected bases of the USAF, changes the formal stage of the federal EEO process by combining the investigative and hearing phases after a complainant has filed a formal complaint. This program also emphasizes the use of ADR techniques both during the informal stage as well as at the time a complainant files a formal complaint. DLA’s Pilot Program DLA’s program, Pilot for Expedited Complaint Processing (PECP), began in October 2004 at DLA headquarters in Fort Belvoir, Va. DLA considers several types of cases, such as those that challenge government policy, inappropriate for PECP and screens them out. The PECP process is similar to the informal stage of the current EEO process. DLA officials said the PECP process has three steps. The first step occurs when an employee who believes he or she has been discriminated against makes initial contact with DLA’s EEO office. An EEO Intake Specialist collects specific information about the employee’s concerns and drafts an intake report, which includes a description and basis of the claim. 
The EEO Intake Specialist advises the employee orally and in writing about (1) PECP and how it compares to the federal counseling process and (2) the employee's right to opt out of the pilot program at any time before the filing of a formal complaint. The second step begins when the employee chooses to participate in PECP. At this time, the EEO Intake Specialist discusses and offers the employee ADR. The EEO Intake Specialist also informs the employee that participating in ADR is optional and that ADR can be used at any stage of the complaint process. The EEO Intake Specialist considers two methods of ADR—mediation or facilitation. Mediation is the primary method used by PECP. According to DLA, the method of ADR used is based on the employee's claim and the EEO Intake Specialist's assessment of which method would more likely encourage communication between the employee and management and lead to resolution. Under the third step, ADR takes place. DLA pilot program officials acknowledged that the pilot program's ADR features do not differ from those offered under the current EEO process. DLA has an ADR program called Reach Equitable Solutions Voluntarily and Easily (RESOLVE), which is used when mediation is offered. RESOLVE is managed by DLA's General Counsel. According to DLA officials, RESOLVE mediators cannot mediate precomplaints or complaints involving organizations they may service in another capacity, thus ensuring the neutrality of the mediator. DeCA's Pilot Program DeCA's program, Early Resolution Opportunity (ERO), began in February 2005 and covers 23 stores in three zones (DeCA West Zone 16-San Diego, Calif.; DeCA East Zone 28-Virginia Beach, Va.; and DeCA East Zone 6-San Antonio, Tex.). Using ADR techniques, ERO seeks to provide early resolution opportunities because, according to DeCA, ineffective communication between employees and supervisors or managers often results in perceptions of discrimination. Moreover, DeCA believes that disputes can be resolved before they enter the informal counseling stage if a trained EEO facilitator can intervene to negotiate resolution. Cases that involve alleged violent acts, theft, sexual harassment, or termination, or that may be precedent setting, are ineligible for ERO. ERO is divided into two steps. In the first step, a trained DeCA facilitator attempts to resolve a claim before the start of the informal stage of the current process. Employees at stores participating in ERO can call a toll-free number to discuss their concerns with a trained facilitator. For example, an employee could call about perceived discrimination over schedule changes, and the facilitator may discuss what had occurred and the rationale for the schedule changes (e.g., to cover absences). According to DeCA officials, some employees "self-screen" during the facilitation process, deciding not to pursue an EEO complaint or to pursue another avenue, such as the negotiated grievance process. The second step of ERO, which follows if facilitation is unsuccessful in resolving the employee's concerns, involves calling in a third-party mediator. According to a DeCA official, DeCA uses mediators from DOD's Office of Complaint Investigations because they are trained, experienced ADR professionals and have a greater perception of neutrality, as they do not work for DeCA. If mediation fails, an individual may choose to file a formal complaint. According to a DeCA official, ERO seeks to reduce the processing time of the formal stage.
To help achieve this goal, DeCA reduces processing times for two phases of the formal stage of the complaint process: (1) after a complainant files a formal complaint, DeCA has set a goal in ERO of 14 days to accept, partially accept, or dismiss it; and (2) after the report of investigation is completed, DeCA sends a notice informing the complainant that he or she has 7 days to request either a hearing or a final agency decision, reducing the time from 30 days under EEOC regulations. In addition, to further reduce processing time for ERO cases, paper documents are replaced with electronic files. Finally, according to a DeCA official, officials from DeCA and the Office of Complaint Investigations can download relevant case documents from a secure shared drive for complaints filed under both ERO and the current EEO process. USAF's Pilot Program USAF's program, called Compressed Orderly Rapid Equitable (CORE), focuses on the formal phase of the EEO complaint process. The program began January 1, 2005, at 29 continental U.S. sites and 2 overseas offices that we refer to as test bases. Although the 31 test bases account for less than one-third of all USAF bases with EEO programs, they produce over 80 percent of all USAF EEO complaints. Cases that involve class and mixed-case complaints or cases related to claims already accepted under the current federal EEO complaint process are not eligible to participate in CORE. CORE has a two-step process that begins at the time the complainant files a formal complaint. Until a complaint is filed, USAF officials attempt early resolution of allegations of discrimination in the informal stage using the current federal EEO process. If resolution is not achieved during this stage, the complainant must choose between CORE and the current federal EEO process. The first step of CORE involves mediation. If the complainant declines mediation or mediation is unsuccessful, step two begins, and a CORE Fact-Finding Conference is conducted. USAF defines this conference as a "non-adversarial, impartial fact-gathering procedure." The conference is conducted by a CORE fact-finder, provided by the Office of Complaint Investigations. During the conference, the fact-finder hears testimony from witnesses and receives documentary evidence; a verbatim transcript is taken at this time by a certified court reporter. Following the conference, the fact-finder completes the record of the complaint and recommends a decision in the case to the director of the USAF Civilian Appellate Review Office. The director of the USAF Civilian Appellate Review Office may accept, reject, or modify the fact-finder's recommended decision. The director then prepares a final agency decision for signature by the director of the USAF Review Boards Agency, who issues the final agency decision. Any further action on the complaint, including the rights to appeal to EEOC and file a lawsuit, is governed by current federal EEO complaint procedures. According to USAF, by combining the investigative and hearing phases of the current federal EEO complaint process, USAF aims to issue a final agency decision within 127 days of the filing of the formal complaint; the current process can take up to 360 days, plus another 70 days to provide the complainant and the agency their allotted time for decision making.
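As a rough comparison of the time frames cited above, the short calculation below contrasts USAF's 127-day goal under CORE with the up to 430 days (360 plus 70) possible under the current process. This is only an illustrative back-of-the-envelope check, not an analysis of actual case data.

```python
# Back-of-the-envelope comparison of the maximum elapsed-day figures cited in
# this report for issuing a final agency decision after a formal complaint is
# filed. Illustrative only; no actual case data are used.
CORE_GOAL_DAYS = 127                 # USAF's stated CORE goal
CURRENT_PROCESS_MAX_DAYS = 360 + 70  # current process plus allotted decision time

reduction = CURRENT_PROCESS_MAX_DAYS - CORE_GOAL_DAYS
print(f"Current process (maximum): {CURRENT_PROCESS_MAX_DAYS} days")
print(f"CORE goal:                 {CORE_GOAL_DAYS} days")
print(f"Potential reduction:       {reduction} days "
      f"(about {reduction / CURRENT_PROCESS_MAX_DAYS:.0%} shorter)")
```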
USAF officials also indicated that through the CORE Fact-Finding Conference, each complainant gets their "day in court," whereas under the current EEO process, complainants often wait months to request a hearing and can have their complaint dismissed by an EEOC AJ without a hearing. DOD's EEO Pilot Programs Share Common Implementation Strategies The three programs share common implementation strategies but implement them differently. Our review of the programs and subsequent discussions with DOD and program officials showed that DeCA and USAF conducted some level of outreach to program-eligible employees to inform them about the programs. For example, DeCA officials went to participating stores and handed out brochures describing ERO. According to USAF program officials, outreach on CORE included sending a letter to all participating bases from the Chief of Staff for Personnel as well as a notice to the unions. Additionally, CORE was publicized in the USAF news service and governmentwide media. We also found that the agencies varied in how they trained their EEO employees about the programs. USAF officials used contractors to train some employees in CORE over a 1-week period; in turn, those employees trained others. DLA officials had informal in-house employee training. DeCA sent ERO-trained EEO officials and an attorney from its headquarters to each of its three zones to train EEO managers as well as managers and supervisors at its 23 stores. Finally, all three programs used electronic data collection for tracking and monitoring, with each program developing its own electronic data collection method. For example, USAF uses the EEO-Net system and software to collect program data. USAF also uses USAF-specific software, the Case Management and Tracking System, to manage the EEO process, including CORE, and an electronic case identifier to mark CORE cases to help in monitoring those program cases that reach EEOC on appeal. DeCA currently uses an Access database to track ERO activity, and DLA uses an Excel spreadsheet to track PECP activity. Officials from both the programs and DOD's EEO pilot program oversight entities have indicated their willingness to share information. As we have previously reported, by assessing their relative strengths and limitations through collaboration, agencies can look for opportunities to address resource needs by leveraging each other's resources and obtaining additional benefits that would not be available if they were working separately. While the focus of our earlier work was on coordination between agencies from different departments, the findings would also be applicable to agencies within a department that are engaged in similar activities. DOD's EEO Pilot Programs Report Low Case Activity In its 9-month evaluation report, DOD stated that program activity for all three programs had been lower than anticipated. At the end of the first year, program officials reported continued low program activity. However, in its report, DOD did not provide a baseline for its comparison or elaborate on the reason for this occurrence. Instead, DOD's evaluation plan states that data collected during the pilot program are to be measured against fiscal year 2004 baseline data. Therefore, we are including fiscal year 2004 data as reported to EEOC for each program for comparison purposes. Because many cases are still going through the program process, we report only the number of initial contacts or formal complaints for comparison.
According to DeCA officials, from January 1, 2005, through January 31, 2006, 42 employees contacted DeCA's EEO office; of those, 41 were offered participation in ERO, and all opted for ERO. Of the employees who opted for ERO, 16 completed it with resolution, 9 completed it without resolution, and 14 cases are still in process. Data are not available for the DeCA test stores for fiscal year 2004, the year before ERO was implemented. According to DLA officials, from January 1, 2005, through January 31, 2006, 15 employees contacted DLA's EEO office; of those, 13 were offered participation in PECP, and 12 opted for it; 1 declined participation. Of the 12 who opted for PECP, 10 completed it with resolution, 1 withdrew the precomplaint, and 1 opted out of PECP. For fiscal year 2004, the year before DLA implemented PECP, 26 employees contacted DLA's headquarters EEO office. According to USAF officials, from January 1, 2005, through January 31, 2006, a total of 634 formal complaints were filed USAF-wide. The CORE process was available to 534 of the complainants. Of those complainants offered CORE, 104 opted to process their complaint using CORE. Of these 104, 63 have been closed with resolution, and 28 CORE cases are still in progress. Thirteen complainants opted out of CORE and chose to return to the current EEO process. For fiscal year 2004, the year before USAF implemented CORE, 667 formal complaints were filed USAF-wide; of these, 488 were filed at what are now CORE test sites. DOD's 9-month report stated that case activity was lower than expected. As a result of the low case activity, program officials have said they will seek to extend their respective programs for an additional (third) year. According to the authorizing memo from DOD implementing the pilot program, program officials can request in April 2006 to extend the pilot program for a third year. At the time of this report, DeCA and USAF had asked DOD to extend the operation of their pilot programs for a third year.
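The sketch below illustrates the kind of baseline comparison DOD's evaluation plan calls for, using the counts reported above. It is purely illustrative: the pilot figures cover a 13-month span, no fiscal year 2004 baseline exists for the DeCA test stores, and the comparison logic is not DOD's.

```python
# Illustrative sketch of the baseline comparison described in DOD's evaluation
# plan: pilot-period activity (Jan. 1, 2005, through Jan. 31, 2006, as reported
# above) versus fiscal year 2004 counts. The pilot span is 13 months, so the
# comparison is only indicative; the comparison logic here is not DOD's.
pilot_period = {
    "DLA (initial contacts)":    15,
    "DeCA (initial contacts)":   42,
    "USAF (formal complaints)": 634,
}
fy2004_baseline = {
    "DLA (initial contacts)":    26,
    "DeCA (initial contacts)":  None,   # no fiscal year 2004 data for the test stores
    "USAF (formal complaints)": 667,
}

for program, count in pilot_period.items():
    baseline = fy2004_baseline[program]
    if baseline is None:
        print(f"{program}: {count} during the pilot period; no baseline available")
    else:
        change = (count - baseline) / baseline
        print(f"{program}: {count} during the pilot period vs. {baseline} in FY 2004 ({change:+.0%})")
```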
Strengths of DOD’s Evaluation Plan Considering the evaluation plan itself and interviews with DOD officials, we found that DOD’s plans for assessing the pilot programs had some strengths, including: Forms in the evaluation plan for collecting baseline data (before the pilot programs began) and pilot program data. According to the evaluation plan, baseline data from fiscal year 2004 are recorded on a template (i.e., a modified version of EEOC Form 462) appropriate to the part of the complaint process the pilot program focuses on. Data collected during the programs will be measured against the baseline data collected in the prior year’s EEOC Form 462. Pilot program and nonpilot program data are to be collected by an Individual Data Report form, which is to collect processing-time data, comparative information on early ADR, and early management involvement in cases at each pilot program site. Comparing data from the modified EEOC Form 462 to data from the program as well as to nonprogram cases is expected to help DOD determine whether processing times and redundancy were reduced concerning early resolution and streamlining as a result of the pilot program. Detailed time frames, roles and responsibilities, and report planning in the evaluation plan. The evaluation plan includes a schedule that details tasks, roles and responsibilities, and milestones for completing set tasks in evaluating the pilot program. This schedule provides a framework that is organized and easy to follow. Inclusion of reasonable research design. The evaluation plan includes a reasonable method for assessing pilot program results. Because the pilot program legislation mandates voluntary participation in the program, DOD was restricted from one form of design (i.e., randomly assigning employees alleging or filing complaints of discrimination to participate in the pilot program). As a result, DOD chose to compare prepilot and postpilot program data as well as pilot and nonpilot program cases. In addition, DOD officials said that the plan can be adjusted to the extent feasible to ensure that the data collected are sufficient for evaluating the pilot program. Without Key Evaluation Plan Features, DOD Will Be Limited in Its Ability to Assess Pilot Programs’ Results DOD’s plan for evaluating the effectiveness of the pilot program lacks some key features that are essential to assessing performance. Well-developed evaluation plans, which include key evaluation features, have a number of benefits, perhaps most importantly, increasing the likelihood that evaluations will yield methodologically sound results, thereby supporting effective program and policy decisions. The lack of established key evaluation features in DOD’s plan increases the likelihood of insufficient or unreliable data, limiting confidence in pilot program results. Without confidence in pilot program results, DOD will be limited in its decision making regarding this pilot program, and Congress will be limited in its decision making about the pilot program’s potential broader application. Some key features of a sound evaluation plan include: well-defined, clear, and measurable objectives; measures that are directly linked to the program objectives; criteria for determining pilot program performance; a way to isolate the effects of the pilot programs; a data analysis plan for the evaluation design; and a detailed plan to ensure that data collection, entry, and storage are reliable and error-free. 
DOD's evaluation plan contains the following limitations: The objectives in DOD's evaluation plan are not well defined or clear, which makes measurement problematic. For example, the evaluation plan identifies management accountability as an objective without defining it, specifying to whom it applies, or explaining how it will be measured. Without well-defined, clear, and measurable objectives, the appropriate data may not be collected, thus hindering the assessment of pilot program progress. DOD's data collection efforts are not linked to objectives in the evaluation plan. For example, the evaluation plan contains a variety of surveys that the individual pilot programs can use to measure customer satisfaction, but customer satisfaction is not included as an objective of the plan. Directly linking objectives and measures is a key feature of an evaluation plan. Without such linkage, data collection efforts may not directly inform stated objectives and, in turn, may not inform the evaluation effort. DOD's evaluation plan does not establish standards for evaluating pilot program performance. For example, DOD's plan does not state the amount or type of change required to indicate that a pilot program has succeeded in reducing processing time. Without targets or standards for determining success, it will be difficult to determine if the pilot program was effective. DOD's evaluation plan does not mention controlling for possible outcomes that are attributable to factors other than the effects of the programs. A preferred research method is to use random assignment of program participants to provide greater confidence that results are attributable to a program. As we mentioned, DOD was restricted from randomly assigning employees alleging or filing complaints of discrimination to participate in the pilot program. As a result, other factors, such as the type of complaint, the complainant, or the mediator, may affect pilot program outcomes. Establishing controls for such factors could help isolate the effects attributable to the pilot programs. When an evaluation design involves, for example, a comparison between prepilot and postpilot program conditions, the research design should include controls to ensure that results will be attributable to the pilot program and not to other factors. DOD's evaluation plan does not explain how the data will be analyzed. Although the evaluation plan has templates for collecting data, including pilot program baseline data, individual data reports, and various surveys, it does not state how the data collected will be analyzed. A data analysis plan is a key feature of an evaluation plan, as it sets out how data will be analyzed to determine if program objectives have been met. Without a data analysis plan, it is not clear how the data will be analyzed to inform the objectives of the evaluation and assess the performance of the programs. DOD's plan does not explain how the integrity of the data collected will be ensured. A detailed plan to ensure that data collection, entry, and storage are reliable and error-free is a key feature of an evaluation plan that gives greater confidence to data quality and reliability and to any findings made from these data. Without a detailed plan to ensure that data collection, entry, and storage are reliable and error-free, confidence in pilot program results will be limited.
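To make the missing pieces concrete, the sketch below shows one way an evaluation plan entry might tie an objective to a measure, a data source, a baseline, a success criterion, and an analysis approach. The objective wording comes from the report; everything else (the measure definition, the target, and the field names) is a hypothetical placeholder, not DOD's plan.

```python
# Hypothetical illustration only -- not DOD's evaluation plan. It shows how an
# objective could be linked to a measure, data source, baseline, success
# criterion, and analysis method, which are the elements the plan lacks.
evaluation_plan_entry = {
    "objective": "Provide the parties involved with an early opportunity for resolution",
    "measure": {
        "name": "Average days from initial contact to resolution",
        "formula": "mean(resolution_date - initial_contact_date) over resolved cases",
        "data_source": "Individual Data Report form (hypothetical field: days_to_resolution)",
        "collection_method": "Entered by the EEO office when a case is closed",
    },
    "baseline": "Fiscal year 2004 data from the modified EEOC Form 462",
    "success_criterion": "At least a 25 percent reduction from the baseline (hypothetical target)",
    "analysis": "Compare pilot and nonpilot cases, controlling for complaint type and other factors",
}

for element, detail in evaluation_plan_entry.items():
    print(f"{element}: {detail}")
```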
Conclusions All three programs share a common feature of emphasizing the use of ADR to meet the legislative mandate to improve the efficiency of the EEO complaint process. In addition, although authorized to operate outside of current EEOC regulations, to a large extent, two of the three programs have been designed by DOD to operate within the requirements of current regulations. While sharing common strategies in such areas as electronic data collection, the pilot programs implemented them differently. As the challenges of the 21st century grow, it will become increasingly important for DOD to consider how it can maximize performance and results through the improved collaboration of its organizations. Officials from the programs and DOD’s EEO pilot program oversight entities have indicated their willingness to share information and strategies. To better ensure that it will provide useful results, DOD needs to make changes to its evaluation plan. Although DOD’s evaluation plan had some strengths, the plan’s shortcomings may impede DOD’s ability to produce sound results that can inform both program and policy decisions regarding the overall pilot program. The lack of key evaluation features, such as clear and measurable objectives, measures linked to these objectives, and established criteria for determining pilot program performance may limit confidence in pilot program results. Recommendations for Executive Action To improve the performance and results of the pilot program, we recommend that the Secretary of Defense direct the Deputy Undersecretary of Defense for Civilian Personnel Policy, the Deputy Undersecretary for Equal Opportunity, and the Civilian Personnel Management Service to take the following actions: Establish regular intra-agency exchanges of information on outreach strategies, training, and electronic data collection from which the pilot programs could achieve potential benefits that would not be available if working separately. Develop a sound evaluation plan to accurately and reliably assess the pilot programs’ results, including such key features as well-defined, clear, and measurable objectives; measures that are directly linked to the program objectives; criteria for determining pilot program performance; a way to isolate the effects of the pilot programs; a data analysis plan for the evaluation design; and a detailed plan to ensure that data collection, entry, and storage are reliable and error-free. Agency Comments We provided a draft of this report to the Secretary of Defense for his review and comment. The Principal Deputy Undersecretary of Defense provided written comments, which are included in appendix II. DOD generally agreed with our recommendations. Regarding the establishment of regular intra-agency exchanges of information among the pilot programs to leverage potential benefits, DOD stated that it will hold quarterly meetings with pilot program managers. Concerning the development of an evaluation plan that accurately and reliably assesses the pilot programs’ results, DOD partially concurred with the recommendation and stated that it would consider and incorporate the recommended key features into the evaluation plan as appropriate. However, DOD also stated that the purpose of the plan was to assist pilot program evaluators in their work by specifying those procedures, tools, and objectives that would be unique to the pilot programs. 
In its comments, DOD reasons that because all pilot program officials agreed on a particular objective, which was common to both the pilot and traditional EEO complaint procedures, it was not necessary to link data collection efforts to that objective or incorporate either the objective or the data collection effort in the evaluation plan. Because the plan is the long-term guide for the pilot program evaluation process and because staff changes occur, it is important that DOD include in the plan all objectives and methods it intends to use, making the evaluation process more transparent and providing clearer guidance to pilot program officials on evaluation procedures. In its response, DOD also commented on our observation that to a large extent two of the three pilot programs were designed and are operating within existing EEOC requirements. DOD noted that this was due in large part to a presidential memorandum issued when the legislation was signed. The memorandum, which addressed the implementation of the pilot program, required that a complaining party be allowed to opt out of the pilot program at any time. According to DOD, adhering to this requirement necessitated using a design similar to the current EEO process so that complaining parties who decided to opt out would not be penalized by having to start at the very beginning of the current EEO complaint process. It is not clear to us that ensuring the ability to opt out at any point necessitates returning the complaining party to the very beginning of the current EEO process in all cases. Rather, the complaining party would be returned to the current EEO process at an appropriate point based on what was achieved through the pilot program process. Overall, we see nothing in the presidential memorandum that would limit DOD’s legitimate use of the procedural flexibility granted by Congress through the pilot program authority. We will send copies of this report to other interested congressional parties, the Secretary of Defense, and the Chair of EEOC. We also will make copies available to others upon request. In addition, the report is available on GAO’s home page at http://www.gao.gov. If your staff have questions about this report, please contact me at (202) 512-9490. Key contributors to this report are listed in appendix III. EEO Laws and Regulations Applicable to Federal Employees Title VII of the Civil Rights Act of 1964, as amended, makes it illegal for employers, including federal agencies, to discriminate against their employees or job applicants on the basis of race, color, religion, sex, or national origin. The Equal Pay Act of 1963 protects men and women who perform substantially equal work in the same establishment from sex-based wage discrimination. The Age Discrimination in Employment Act of 1967, as amended, prohibits employment discrimination against individuals who are 40 years of age or older. Sections 501 and 505 of the Rehabilitation Act of 1973, as amended, prohibit discrimination against qualified individuals with disabilities who work or apply to work in the federal government. Federal agencies are required to provide reasonable accommodation to qualified employees or applicants for employment with disabilities, except when such accommodation would cause an undue hardship.
In addition, a person who files a complaint or participates in an investigation of an equal employment opportunity (EEO) complaint or who opposes an employment practice made illegal under any of the antidiscrimination statutes is protected from retaliation. The Equal Employment Opportunity Commission (EEOC) is responsible for enforcing all of these laws. Federal employees or applicants for employment who believe that they have been discriminated against by a federal agency may file a complaint with that agency. The EEOC has established regulations providing for the processing of federal sector employment discrimination complaints. This complaint process consists of two stages: an informal, or precomplaint counseling, stage and a formal stage. Before filing a complaint, the employee must consult an EEO counselor at the agency to try to resolve the matter informally. The employee must contact an EEO counselor within 45 days of the matter alleged to be discriminatory or, in the case of a personnel action, within 45 days of the effective date of the action. Counselors are to advise individuals that, when the agency agrees to offer alternative dispute resolution (ADR) in the particular case, they may choose to participate in either counseling or ADR. Counseling is to be completed within 30 days from the date the employee contacted the EEO office for counseling unless the employee and agency agree to an extension of up to an additional 60 days. If ADR is chosen, the parties have 90 days in which to attempt resolution. If the matter is not resolved within these time frames, the counselor is required to inform the employee in writing of his or her right to file a formal discrimination complaint with the agency. The written notice must inform the employee of the (1) right to file a discrimination complaint within 15 days of receipt of the notice, (2) appropriate agency official with whom to file a complaint, and (3) duty to ensure that the agency is informed immediately if the complainant retains counsel or a representative. After a complainant files a formal discrimination complaint, the agency must decide whether to accept or dismiss the complaint and notify the complainant. If the agency dismisses the complaint, the complainant has 30 days to appeal the dismissal to EEOC. If the agency accepts the complaint, it has 180 days to investigate the accepted complaint and provide the complainant with a copy of the investigative file. Within 30 days of receipt of the copy of the investigative file, the complainant must request either (1) a hearing and decision from an EEOC administrative judge (AJ) or (2) a final decision from the agency. When a hearing is not requested, the agency must issue a final decision within 60 days. A complainant may appeal an agency’s final decision to EEOC within 30 days of receiving the final decision. In cases where a hearing is requested, the AJ has 180 days to issue a decision and send the decision to the complainant and the agency. If the AJ issues a finding of discrimination, he or she is to order appropriate relief. After the AJ decision is issued, the agency has 40 days to issue a final order notifying the complainant whether or not the agency will fully implement the decision of the AJ, and the employee has 30 days to appeal the agency’s final order to EEOC. If the agency issues an order notifying the complainant that the agency will not fully implement the decision of the AJ, the agency also must file an appeal with EEOC at the same time.
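The time frames above lend themselves to simple deadline tracking. The sketch below is purely illustrative and is not an official EEOC or DOD tool; the dates, the function name, and the simplifications (for example, ignoring extensions granted case by case) are assumptions for the example.

```python
# Illustrative only -- not an official EEOC or DOD tool. Dates and the triggering
# event are hypothetical, and case-by-case extensions are ignored.
from datetime import date, timedelta

def precomplaint_deadlines(event_date, counselor_contact_date, use_adr=False):
    """Key informal-stage time frames described above, for a hypothetical case."""
    return {
        # Contact an EEO counselor within 45 days of the alleged matter.
        "contact_counselor_by": event_date + timedelta(days=45),
        # Counseling within 30 days of contact; ADR allows 90 days.
        "counseling_or_adr_ends_by": counselor_contact_date
        + timedelta(days=90 if use_adr else 30),
    }

event = date(2005, 3, 1)     # hypothetical date of the alleged discriminatory action
contact = date(2005, 3, 15)  # hypothetical date the employee contacted the counselor
for label, deadline in precomplaint_deadlines(event, contact, use_adr=True).items():
    print(f"{label}: {deadline.isoformat()}")
# Later stages follow the same pattern: 15 days to file a formal complaint after
# notice, 180 days for the agency investigation, 30 days to elect a hearing or a
# final agency decision, and so on.
```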
Parties have 30 days in which to request reconsideration of an EEOC decision. Figure I illustrates the EEO complaint process. If a complaint is one that can be appealed to the Merit Systems Protection Board (MSPB) such as a removal, reduction in grade or pay, or suspension for more than 14 days, the complaint is a “mixed-case complaint.” EEOC regulations provide that an individual may raise claims of discrimination in a mixed case, either as a mixed-case EEO complaint with the agency or a direct appeal to MSPB, but not both. A complainant may file a civil action in federal district court at various points during and after the administrative process. The filing of a civil action will terminate the administrative processing of the complaint. A complainant may file a civil action within 90 days of receiving the agency’s final decision or order, or EEOC’s final decision. A complainant may also file a civil action after 180 days from filing a complaint with his or her agency, or filing an appeal with EEOC, if no final action or decision has been made. Comments from the Department of Defense GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgements In addition to the individual named above, Belva M. Martin, Assistant Director; Karin K. Fangman; Cindy Gilbert; Emily Hampton-Manley; Anthony Patterson; Rebecca Shea; Linda Sidwell (detailee); and Kiki Theodoropoulos made key contributions to this report.
Delays in processing of equal employment opportunity (EEO) complaints have been a long-standing concern. In 2000, as part of the Department of Defense's (DOD) fiscal year 2001 authorization act, Congress authorized DOD to carry out a 3-year pilot program for improving processes to resolve complaints by civilian DOD employees by testing procedures that would reduce EEO complaint processing times and eliminate redundancy, among other things. The act requires two reports from GAO--90 days after the first and last fiscal years of the pilot program's operation. In December 2005 and January 2006, we provided briefings on our initial review of the pilot program. This report (1) describes key features and status of the three programs and (2) assesses DOD's plan for evaluating the effectiveness of the pilot program. In August 2004, the Secretary of Defense authorized 2-year programs in (1) Defense Logistics Agency (DLA), (2) the Defense Commissary Agency (DeCA), and (3) components of the U.S. Air Force (USAF) which became operational in fiscal year 2005. While the legislation stated that the pilot program is exempt from procedural requirements of current Equal Employment Opportunity Commission (EEOC) regulations, to a large extent two of the three programs were designed and are operating within existing EEOC requirements, with a specific emphasis on alternative dispute resolution (ADR) as encouraged in DOD's memo soliciting pilot program proposals. ADR techniques include, but are not limited to, conciliation, facilitation, mediation, or arbitration and usually involve the intervention or facilitation by a neutral third party. After the first year, program officials reported low case activity and stated that they plan to request approval from the Secretary to continue their respective programs for a third year. To carry out the programs, officials used similar strategies--outreach to inform eligible staff about the pilot programs, staff training, and the use of electronic data collection--but implemented them differently. Our assessment of DOD's evaluation plan for the pilot program found both strengths and limitations. A sound evaluation plan contains such features as criteria for determining program performance and measures that are directly linked to program objectives. Such key features increase the likelihood that the evaluation will yield sound results, thereby supporting effective program and policy decisions. Lacking these key features, DOD is limited in its ability to conduct an accurate and reliable assessment of the program's results, and Congress is limited in its ability to determine whether features of the overall program have governmentwide applicability. Officials from DOD's pilot program oversight entities have acknowledged shortcomings and have indicated a willingness to modify the plan.
GAO_GAO-13-414T
CBP Has Reported Progress in Stemming Illegal Cross-Border Activity, but Could Strengthen Assessment of Its Efforts Border Patrol Has Reported Some Success in Reducing Illegal Migration, but Challenges Remain in Assessing Efforts Since fiscal year 2011, DHS has used changes in the number of apprehensions on the southwest border between POEs as an interim measure for border security, as reported in its annual performance reports. As we reported in December 2012, our data analysis showed that apprehensions across the southwest border decreased 69 percent from fiscal years 2006 through 2011. These data generally mirrored a decrease in estimated known illegal entries in each southwest border sector. As we testified in February 2013, data reported by Border Patrol following the issuance of our December 2012 report show that total apprehensions across the southwest border increased from over 327,000 in fiscal year 2011 to about 357,000 in fiscal year 2012. It is too early to assess whether this increase indicates a change in the trend for Border Patrol apprehensions across the southwest border. Through fiscal year 2011, Border Patrol attributed decreases in apprehensions across sectors in part to changes in the U.S. economy, achievement of strategic objectives, and increased resources for border security. In addition to collecting data on apprehensions, Border Patrol collects other types of data that are used by sector management to help inform assessment of its efforts to secure the border against the threats of illegal migration and smuggling of drugs and other contraband. These data show changes, for example, in the (1) percentage of estimated known illegal entrants who are apprehended, (2) percentage of estimated known illegal entrants who are apprehended more than once (repeat offenders), and (3) number of seizures of drugs and other contraband. Our analysis of these data shows that the percentage of estimated known illegal entrants apprehended from fiscal years 2006 through 2011 varied across southwest border sectors. The percentage of individuals apprehended who repeatedly crossed the border illegally declined by 6 percent from fiscal years 2008 through 2011. Further, the number of seizures of drugs and other contraband across the border increased from 10,321 in fiscal year 2006 to 18,898 in fiscal year 2011. Border Patrol calculates an overall effectiveness rate using a formula in which it adds the number of apprehensions and turn backs in a specific sector and divides this total by the total estimated known illegal entries—determined by adding the number of apprehensions, turn backs, and got aways for the sector. Border Patrol views its border security efforts as increasing in effectiveness if the number of turn backs as a percentage of estimated known illegal entries has increased and the number of got aways as a percentage of estimated known illegal entries has decreased. However, inconsistencies in the turn back data (entrants who were not apprehended because they crossed back into Mexico) and got away data (entrants who illegally crossed the border and continued traveling into the U.S. interior) used to calculate the overall effectiveness rate preclude comparing performance results across sectors. Border Patrol headquarters officials stated that until recently, each Border Patrol sector decided how it would collect and report turn back and got away data, and as a result, practices for collecting and reporting the data varied across sectors and stations based on differences in agent experience and judgment, resources, and terrain.
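The overall effectiveness rate described above reduces to simple arithmetic. The following is a minimal sketch using hypothetical sector counts; it is not Border Patrol's reporting system, and the function name and figures are assumptions for the example.

```python
# Minimal sketch of the overall effectiveness rate arithmetic described above,
# using hypothetical sector counts; this is not Border Patrol's reporting system.
def effectiveness_rate(apprehensions, turn_backs, got_aways):
    """(apprehensions + turn backs) / estimated known illegal entries."""
    estimated_known_entries = apprehensions + turn_backs + got_aways
    return (apprehensions + turn_backs) / estimated_known_entries

# Hypothetical sector: 8,000 apprehensions, 1,500 turn backs, 2,500 got aways.
print(f"Overall effectiveness rate: {effectiveness_rate(8_000, 1_500, 2_500):.1%}")
# -> Overall effectiveness rate: 79.2%
```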
Border Patrol headquarters officials issued guidance in September 2012 to provide a more consistent, standardized approach for the collection and reporting of turn back and got away data by Border Patrol sectors. Each sector is to be individually responsible for monitoring adherence to the guidance. According to Border Patrol officials, it is expected that once the guidance is implemented, data reliability will improve. This new guidance may allow for comparison of sector performance and inform decisions regarding resource deployment for securing the southwest border. Border Patrol is in the process of developing performance goals and measures for assessing the progress of its efforts to secure the border between ports of entry and for informing the identification and allocation of resources needed to secure the border, but has not identified milestones and time frames for developing and implementing them. Since fiscal year 2011, DHS has used the number of apprehensions on the southwest border between ports of entry as an interim performance goal and measure for border security as reported in its annual performance report. Prior to this, DHS used operational control as its goal and outcome measure for border security and to assess resource needs to accomplish this goal. Operational control—also referred to as effective control—was defined as the number of border miles where Border Patrol had the capability to detect, respond to, and interdict cross-border illegal activity. DHS last reported its progress and status in achieving operational control of the borders in fiscal year 2010. At that time, DHS reported achieving operational control for 1,107 (13 percent) of 8,607 miles across U.S. northern, southwest, and coastal borders. Along the southwest border, DHS reported achieving operational control for 873 (44 percent) of the about 2,000 border miles. At the beginning of fiscal year 2011, DHS transitioned from using operational control as its goal and outcome measure for border security. We testified in May 2012 that the interim goal and measure of number of apprehensions on the southwest border between POEs provides information on activity levels but does not inform program results or resource identification and allocation decisions, and therefore until new goals and measures are developed, DHS and Congress could experience reduced oversight and DHS accountability. Further, studies commissioned by CBP have found that the number of apprehensions bears little relationship to effectiveness because agency officials do not compare these numbers with the amount of cross-border illegal activity. Border Patrol officials stated that the agency is in the process of developing performance goals and measures, but has not identified milestones and time frames for developing and implementing them. According to Border Patrol officials, establishing milestones and time frames for the development of performance goals and measures is contingent on the development of key elements of its new strategic plan, such as a risk assessment tool, and the agency’s time frames for implementing these key elements—targeted for fiscal years 2013 and 2014—are subject to change. 
We recommended that CBP establish milestones and time frames for developing a performance goal, or goals, for border security between ports of entry that defines how border security is to be measured, and a performance measure, or measures, for assessing progress made in securing the border between ports of entry and informing resource identification and allocation efforts. DHS concurred with our recommendations and stated that it plans to set milestones and time frames for developing goals and measures by November 2013. CBP Has Taken Action to Strengthen POE Inspection Programs and Officer Training, and Has Additional Actions Planned or Underway As part of its homeland security and legacy customs missions, CBP inspects travelers arriving at POEs to counter threats posed by terrorists and others attempting to enter the country with fraudulent or altered travel documents and to prevent inadmissible aliens, criminals, and inadmissible goods from entering the country. In fiscal year 2012, CBP inspected about 352 million travelers and over 107 million cars, trucks, buses, trains, vessels, and aircraft at over 329 air, sea, and land POEs. We have previously identified vulnerabilities in the traveler inspection program and made recommendations to DHS for addressing these vulnerabilities, and DHS implemented these recommendations. We reported in January 2008 on weaknesses in CBP’s inbound traveler inspection program, including challenges in attaining budgeted staffing levels because of attrition and lack of officer compliance with screening procedures, such as those used to determine citizenship and admissibility of travelers entering the country as required by law and CBP policy. Contributing factors included a lack of focus, complacency, lack of supervisory presence, and lack of training. We recommended that CBP enhance internal controls in the inspection process, implement performance measures for apprehending inadmissible aliens and other violators, and establish measures for training provided to CBP officers and new officer proficiency. DHS concurred with these recommendations and has implemented them. Specifically, in January 2008, CBP reported, among other things, that all land port directors are required to monitor and assess compliance with eight different inspection activities using a self-inspection worksheet that is provided to senior CBP management. At that time, CBP also established performance measures related to the effectiveness of CBP interdiction efforts. Additionally, in June 2011, CBP began conducting additional classroom and on-the-job training, which incorporated ongoing testing and evaluation of officer proficiency. In December 2011, we reported that CBP had revised its training program for newly hired CBP officers in accordance with its own training development standards. Consistent with these standards, CBP convened a team of subject-matter experts to identify and rank the tasks that new CBP officers are expected to perform. As a result, the new curriculum was designed to produce a professional law enforcement officer capable of protecting the homeland from terrorist, criminal, biological, and agricultural threats. We also reported that CBP took some steps to identify and address the training needs of its incumbent CBP officers but could do more to ensure that these officers were fully trained.
For example, we examined CBP’s results of covert tests of document fraud detection at POEs conducted over more than 2 years and found weaknesses in the CBP inspection process at the POEs that were tested. In response to these tests, CBP developed a “Back to Basics” course in March 2010 for incumbent officers, but had no plans to evaluate the effectiveness of the training. Additionally, CBP had not conducted an analysis of all the possible causes or systemic issues that may have contributed to the test results. We recommended in December 2011 that CBP evaluate the “Back to Basics” training course and analyze covert tests, and DHS concurred with these recommendations. In April 2012, CBP officials notified GAO that CBP had completed its evaluation of the “Back to Basics” training course and implemented an updated, subsequent training course. In November 2012, CBP officials stated they had analyzed the results of covert tests prior to and since the implementation of the subsequent course. GAO is currently reviewing CBP’s analysis of the covert test results and other documentation CBP has provided to determine the extent to which CBP has addressed this recommendation. Further, in July 2012 CBP completed a comprehensive analysis of the results of its document fraud covert tests from fiscal years 2009 to 2011. In addition, we reported that CBP had not conducted a needs assessment that would identify any gaps between identified critical skills and incumbent officers’ current skills and competencies. We recommended in December 2011 that CBP conduct a training needs assessment. DHS concurred with this recommendation. In January 2013, CBP notified GAO it had developed a survey of incumbent officers to seek feedback on possible gaps in training. CBP is currently analyzing the survey results and preparing a report, which will recommend a path forward to address training needs. According to CBP, if an additional training need is identified and funding is available, CBP will develop or revise the current training program. In February 2013, CBP officials stated that the agency plans to complete this process by April 15, 2013. DHS Law Enforcement Partners Reported Improved Results for Interagency Coordination and Oversight of Intelligence and Enforcement Operations, but Gaps Remain DOI and USDA Reported Improved DHS Coordination to Secure Federal Borderlands, but Critical Gaps Remained in Sharing Intelligence and Communications for Daily Operations Illegal cross-border activity remains a significant threat to federal lands protected by DOI and USDA law enforcement personnel on the southwest and northern borders and can cause damage to natural, historic, and cultural resources, and put agency personnel and the visiting public at risk. We reported in November 2010 that information sharing and communication among DHS, DOI, and USDA law enforcement officials had increased in recent years. For example, interagency forums were used to exchange information about border issues, and interagency liaisons facilitated exchange of operational statistics. However, gaps remained in implementing interagency agreements to ensure law enforcement officials had access to daily threat information to better ensure officer safety and an efficient law enforcement response to illegal activity.
For example, in Border Patrol’s Spokane sector on the northern border, coordination of intelligence information was particularly important because of sparse law enforcement presence and technical challenges that precluded Border Patrol’s ability to fully assess cross-border threats, such as air smuggling of high-potency marijuana. We recommended DHS, DOI, and USDA provide oversight and accountability as needed to further implement interagency agreements for coordinating information and integrating operations. These agencies agreed with our recommendations, and in January 2011, CBP issued a memorandum to all Border Patrol division chiefs and chief patrol agents emphasizing the importance of USDA and DOI partnerships to address border security threats on federal lands. While this is a positive step, to fully satisfy the intent of our recommendation, DHS would need to take further action to monitor and uphold implementation of the existing interagency agreements to enhance border security on federal lands. Northern Border Partners Reported Interagency Forums Improved Coordination, but DHS Oversight Was Needed to Resolve Interagency Conflict in Roles and Responsibilities DHS has stated that partnerships with other federal, state, local, tribal, and Canadian law enforcement agencies are critical to the success of northern border security efforts. We reported in December 2010 that DHS efforts to coordinate with these partners through interagency forums and joint operations were considered successful, according to a majority of these partners we interviewed. In addition, DHS component officials reported that federal agency coordination to secure the northern border was improved. However, DHS did not provide oversight for the number and location of forums established by its components, and numerous federal, state, local, and Canadian partners cited challenges related to the inability to resource the increasing number of forums, raising concerns that some efforts may be overlapping. In addition, federal law enforcement partners in all four locations we visited as part of our work cited ongoing challenges between Border Patrol and ICE, Border Patrol and Forest Service, and ICE and DOJ’s Drug Enforcement Administration in sharing information and resources that compromised daily border security related to operations and investigations. DHS had established and updated interagency agreements to address ongoing coordination challenges; however, oversight by management at the component and local levels has not ensured consistent compliance with provisions of these agreements. We also reported in December 2010 that while Border Patrol’s border security measures reflected that there was a high reliance on law enforcement support from outside the border zones, the extent of partner law enforcement resources that could be leveraged to fill Border Patrol resource gaps, target coordination efforts, and make more efficient resource decisions was not reflected in Border Patrol’s processes for assessing border security and resource requirements. We recommended that DHS provide guidance and oversight for interagency forums and for component compliance with interagency agreements, and develop policy and guidance necessary to integrate partner resources in border security assessments and resource planning documents. DHS agreed with our recommendations and has reported taking action to address them. 
For example, in June 2012, DHS released a northern border strategy, and in August 2012, DHS notified us of other cross-border law enforcement and security efforts taking place with Canada. However, in order to fully satisfy the intention of our recommendation, DHS would need to develop an implementation plan that specifies the resources and time frames needed to achieve the goals set forth in the strategy. Opportunities Exist to Improve DHS’s Management of Border Security Assets DHS Has Deployed Assets to Secure the Borders, but Needs to Provide More Information on Plans, Metrics, and Costs In November 2005, DHS launched the Secure Border Initiative (SBI), a multiyear, multibillion-dollar program aimed at securing U.S. borders and reducing illegal immigration. Through this initiative, DHS planned to develop a comprehensive border protection system using technology, known as the Secure Border Initiative Network (SBInet), and tactical infrastructure—fencing, roads, and lighting. Under this program, CBP increased the number of southwest border miles with pedestrian and vehicle fencing from 120 miles in fiscal year 2005 to about 650 miles presently. We reported in May 2010 that CBP had not accounted for the impact of its investment in border fencing and infrastructure on border security. Specifically, CBP had reported an increase in control of southwest border miles, but could not account separately for the impact of the border fencing and other infrastructure. In September 2009, we recommended that CBP determine the contribution of border fencing and other infrastructure to border security. DHS concurred with our recommendation and, in response, CBP contracted with the Homeland Security Studies and Analysis Institute to conduct an analysis of the impact of tactical infrastructure on border security. CBP reported in February 2012 that preliminary results from this analysis indicate that an additional 3 to 5 years are needed to ensure a credible assessment. Since the launch of SBI in 2005, we have identified a range of challenges related to schedule delays and performance problems with SBInet. SBInet was conceived as a surveillance technology to create a “virtual fence” along the border, and after spending nearly $1 billion, DHS deployed SBInet systems along 53 miles of Arizona’s border that represent the highest risk for illegal entry. In January 2011, in response to concerns regarding SBInet’s performance, cost, and schedule, DHS canceled future procurements. CBP developed the Arizona Border Surveillance Technology Plan (the Plan) for the remainder of the Arizona border. In November 2011, we reported that CBP does not have the information needed to fully support and implement its Plan in accordance with DHS and Office of Management and Budget (OMB) guidance. In developing the Plan, CBP conducted an analysis of alternatives and outreach to potential vendors. However, CBP did not document the analysis justifying the specific types, quantities, and deployment locations of border surveillance technologies proposed in the Plan. Specifically, according to CBP officials, CBP used a two-step process to develop the Plan. First, CBP engaged the Homeland Security Studies and Analysis Institute to conduct an analysis of alternatives beginning with ones for Arizona. 
Second, following the completion of the analysis of alternatives, the Border Patrol conducted its operational assessment, which included a comparison of alternative border surveillance technologies and an analysis of operational judgments to consider both effectiveness and cost. While the first step in CBP’s process to develop the Plan—the analysis of alternatives—was well documented, the second step—Border Patrol’s operational assessment—was not transparent because of the lack of documentation. As we reported in November 2011, without documentation of the analysis justifying the specific types, quantities, and deployment locations of border surveillance technologies proposed in the Plan, an independent party cannot verify the process followed, identify how the analysis of alternatives was used, assess the validity of the decisions made, or justify the funding requested. We also reported that CBP officials have not yet defined the mission benefits expected from implementing the new Plan, and defining the expected benefit could help improve CBP’s ability to assess the effectiveness of the Plan as it is implemented. In addition, we reported that CBP’s 10-year life cycle cost estimate for the Plan of $1.5 billion was based on an approximate order-of-magnitude analysis, and agency officials were unable to determine a level of confidence in their estimate, as best practices suggest. Specifically, we found that the estimate reflected substantial features of best practices, being both comprehensive and accurate, but it did not sufficiently meet other characteristics of a high-quality cost estimate, such as credibility, because it did not identify a level of confidence or quantify the impact of risks. GAO and OMB guidance emphasize that reliable cost estimates are important for program approval and continued receipt of annual funding. In addition, because CBP was unable to determine a level of confidence in its estimate, we reported that it would be difficult for CBP to determine what levels of contingency funding may be needed to cover risks associated with implementing new technologies along the remaining Arizona border. We recommended in November 2011 that, among other things, CBP document the analysis justifying the technologies proposed in the Plan, determine its mission benefits, and determine a more robust life cycle cost estimate for the Plan. DHS concurred with these recommendations, and has reported taking action to address some of the recommendations. For example, in October 2012, CBP officials reported that, through the operation of two surveillance systems under SBInet’s initial deployment in high-priority regions of the Arizona border, CBP has identified examples of mission benefits that could result from implementing technologies under the Plan. Additionally, CBP initiated action to update its cost estimate for the Plan by providing revised cost estimates in February and March 2012 for the Integrated Fixed Towers and Remote Video Surveillance System, the Plan’s two largest projects. We currently have ongoing work for congressional requesters to assess CBP’s progress in this area and expect to issue a report with our final results in the fall of 2013. 
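To illustrate what identifying a level of confidence in a cost estimate can involve, the sketch below treats each cost element as a range and simulates total cost. The cost elements, dollar figures, and triangular distributions are hypothetical assumptions for the example; this is not CBP's estimate or GAO's cost-estimating guidance, only one common way such a confidence level can be quantified.

```python
# Illustrative sketch of attaching a level of confidence to a life cycle cost
# estimate by treating each cost element as a range and simulating totals. The
# elements, dollar figures, and distributions are hypothetical, not CBP's estimate.
import random

random.seed(0)
# (low, most likely, high) cost, in millions of dollars, for notional elements.
elements = {
    "fixed_towers":       (500, 700, 1_000),
    "video_surveillance": (200, 300, 450),
    "operations_support": (300, 500, 800),
}

totals = sorted(
    sum(random.triangular(low, high, mode) for low, mode, high in elements.values())
    for _ in range(10_000)
)
point_estimate = sum(mode for _, mode, _ in elements.values())
confidence = sum(t <= point_estimate for t in totals) / len(totals)
p80 = totals[int(0.8 * len(totals))]
print(f"Point estimate: ${point_estimate}M; "
      f"simulated chance of staying within it: {confidence:.0%}; "
      f"80th-percentile cost: ${p80:,.0f}M")
```

In a sketch like this, the gap between the point estimate and the 80th-percentile cost is one way to size the contingency funding mentioned above.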
In March 2012, we reported that the CBP Office of Air and Marine (OAM)—which provides aircraft, vessels, and crew at the request of its customers, primarily Border Patrol—had not documented significant events, such as its analyses to support its asset mix and placement across locations, and as a result, lacked a record to help demonstrate that its decisions to allocate resources were the most effective ones in fulfilling customer needs and addressing threats. OAM issued various plans that included strategic goals, mission responsibilities, and threat information. However, we could not identify the underlying analyses used to link these factors to the mix and placement of resources across locations. OAM did not have documentation that clearly linked the deployment decisions in the plan to mission needs or threats. For example, while the southwest border was Border Patrol’s highest priority for resources in fiscal year 2010, it did not receive a higher rate of air support than the northern border. Similarly, OAM did not document analyses supporting the current mix and placement of marine assets across locations. OAM officials said that while they generally documented final decisions affecting the mix and placement of resources, they did not have the resources to document assessments and analyses to support these decisions. However, we reported that such documentation of significant events could help the office improve the transparency of its resource allocation decisions to help demonstrate the effectiveness of these resource decisions in fulfilling its mission needs and addressing threats. We recommended in March 2012 that CBP document analyses, including mission requirements and threats, that support decisions on the mix and placement of OAM’s air and marine resources. DHS concurred with our recommendation and stated that it plans to provide additional documentation of its analyses supporting decisions on the mix and placement of air and marine resources by 2014. DHS US-VISIT Program Technology Provides an Opportunity to Identify Illegal Migration through Overstays DHS took action in 2004 to better monitor and control the entry and exit of foreign visitors to the United States by establishing the U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT) program, which tracks foreign visitors using biometric information (such as fingerprints) and biographic information. DHS has incrementally delivered US-VISIT capabilities to track foreign entries, and a biometrically enabled entry capability has been fully operational at about 300 air, sea, and land POEs since December 2006. Since 2004, however, we have identified a range of DHS management challenges to fully deploying a biometric exit capability intended, in part, to track foreigners who had overstayed their visas and remained illegally in the United States. For example, in November 2009, we reported that DHS had not adopted an integrated approach to scheduling, executing, and tracking the work needed to deliver a comprehensive exit solution. In August 2010, we reported that the DHS pilot programs to track the exit of foreign visitors at air POEs had limitations curtailing the ability to inform a decision for a long-term exit solution at these POEs. In April 2011, we reported that DHS identifies potential overstays by reviewing entry and exit records in its Arrival and Departure Information
System—a database that contains information on aliens’ entry, exit, and change of status—and electronically and manually comparing Arrival and Departure Information System records with information in other databases to find matches that demonstrate that a nonimmigrant may have, for instance, departed the country or filed an application to change status and thus is not an overstay. Additionally, DHS shares overstay information among its components through various mechanisms, such as alerts that can inform a CBP primary inspection officer at a POE of a nonimmigrant’s history as an overstay violator, at which point the officer can refer the nonimmigrant to secondary inspection for a more in-depth review of the alien’s record and admissibility. Furthermore, ICE’s Counterterrorism and Criminal Exploitation Unit uses data provided by US-VISIT and various databases to identify leads for overstay cases, take steps to verify the accuracy of the leads, prioritize leads to focus on those identified as most likely to pose a threat to national security or public safety, and conduct field investigations on priority, high-risk leads. From fiscal years 2006 through 2010, ICE reported devoting a relatively constant percent of its total field office investigative hours to Counterterrorism and Criminal Exploitation Unit overstay investigations, ranging from 3.1 to 3.4 percent. We reported in April 2011 that DHS was creating electronic alerts for certain categories of overstays, such as those who overstay by more than 90 days, but was not creating alerts for those who overstay by less than 90 days to focus efforts on more egregious overstay violators, as identified by CBP. We recommended in April 2011 that DHS assess the costs and benefits of creating additional alerts, and DHS concurred with this recommendation. DHS has since reported that it would begin creating additional alerts, which could improve the chance that these individuals are identified as overstays during subsequent encounters with federal officials. We have additional work ongoing for congressional requesters in this area regarding DHS’s identification of and enforcement actions against overstays and expect to issue a report with our final results in the summer of 2013. This concludes my statement for the record. GAO Contact and Staff Acknowledgments For further information about this statement, please contact Rebecca Gambler at (202) 512-8777 or [email protected]. In addition, contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Individuals making key contributions to this statement included Kathryn Bernet, Lacinda Ayers, and Jeanette Espinola, Assistant Directors; as well as Frances A. Cook, Alana Finley, Barbara Guffy, Lara Miklozek, and Ashley D. Vaughan. Related GAO Products Border Patrol: Goals and Measures Not Yet in Place to Inform Border Security Status and Resource Needs. GAO-13-330T. Washington, D.C.: February 26, 2013. Border Patrol: Key Elements of New Strategic Plan Not Yet in Place to Inform Border Security Status and Resource Needs. GAO-13-25. Washington, D.C.: December 10, 2012. Border Patrol Strategy: Progress and Challenges in Implementation and Assessment Efforts. GAO-12-688T. Washington, D.C.: May 8, 2012. Border Security: Opportunities Exist to Ensure More Effective Use of DHS’s Air and Marine Assets. GAO-12-518. Washington, D.C.: March 30, 2012. Border Security: Additional Steps Needed to Ensure Officers are Fully Trained. GAO-12-269. 
Washington, D.C.: December 22, 2011. Arizona Border Surveillance Technology: More Information on Plans and Costs Is Needed before Proceeding. GAO-12-22. Washington, D.C.: November 4, 2011. Overstay Enforcement: Additional Mechanisms for Collecting, Assessing, and Sharing Data Could Strengthen DHS’s Efforts but Would Have Costs. GAO-11-411. Washington, D.C.: April 15, 2011. Border Security: Preliminary Observations on Border Control Measures for the Southwest Border. GAO-11-374T. Washington, D.C.: February 15, 2011. Border Security: Enhanced DHS Oversight and Assessment of Interagency Coordination is Needed for the Northern Border. GAO-11-97. Washington, D.C.: December 17, 2010. Border Security: Additional Actions Needed to Better Ensure a Coordinated Federal Response to Illegal Activity on Federal Lands. GAO-11-177. Washington, D.C.: November 18, 2010. Homeland Security: US-VISIT Pilot Evaluations Offer Limited Understanding of Air Exit Options. GAO-10-860. Washington, D.C.: August 10, 2010. Secure Border Initiative: DHS Has Faced Challenges Deploying Technology and Fencing Along the Southwest Border. GAO-10-651T. Washington, D.C.: May 4, 2010. Homeland Security: Key US-VISIT Components at Varying Stages of Completion, but Integrated and Reliable Schedule Needed, GAO-10-13. Washington, D.C.: November 19, 2009. Secure Border Initiative: Technology Deployment Delays Persist and the Impact of Border Fencing Has Not Been Assessed. GAO-09-896. Washington, D.C.: September 9, 2009. Border Security: Despite Progress, Weaknesses in Traveler Inspections Exist at Our Nation’s Ports of Entry. GAO-08-329T. Washington, D.C.: January 3, 2008. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
At the end of fiscal year 2004, DHS had about 28,100 personnel assigned to patrol U.S. land borders and inspect travelers at air, land, and sea POEs, at a cost of about $5.9 billion. At the end of fiscal year 2011, DHS had about 41,400 personnel assigned to air, land, and sea POEs and along the border, at a cost of about $11.8 billion. DHS has reported that this stronger enforcement presence was one of several reasons why fewer people were attempting to illegally cross the border. However, challenges remain in securing the border. In recent years, GAO has reported on a variety of DHS border security programs and operations. As requested, this statement addresses some of the key issues and recommendations GAO has made in the following areas: (1) DHS's efforts to secure the border at and between POEs; (2) DHS interagency coordination and oversight of border security information sharing and enforcement efforts; and (3) DHS management of infrastructure, technology, and other assets used to secure the border. This statement is based on prior products GAO issued from January 2008 through February 2013, along with selected updates conducted in February 2013. For the selected updates, GAO reviewed information from DHS on actions it has taken to address prior GAO recommendations. U.S. Customs and Border Protection (CBP), part of the Department of Homeland Security (DHS), has reported progress in stemming illegal cross-border activity, but it could strengthen the assessment of its efforts. For example, since fiscal year 2011, DHS has used the number of apprehensions on the southwest border between ports of entry (POE) as an interim measure for border security. GAO reported in December 2012 that apprehensions decreased across the southwest border from fiscal years 2006 to 2011, which generally mirrored a decrease in estimated known illegal entries in each southwest border sector. CBP attributed this decrease in part to changes in the U.S. economy and increased resources for border security. Data reported by CBP's Office of Border Patrol (Border Patrol) show that total apprehensions across the southwest border increased from over 327,000 in fiscal year 2011 to about 357,000 in fiscal year 2012. It is too early to assess whether this increase indicates a change in the trend. GAO reported in December 2012 that the number of apprehensions provides information on activity levels but does not inform program results or resource allocation decisions. Border Patrol is in the process of developing performance goals and measures for assessing the progress of its efforts to secure the border between POEs, but it has not identified milestones and time frames for developing and implementing them, which GAO recommended that it do. DHS agreed and said that it plans to set a date for establishing such milestones and time frames by November 2013. DHS law enforcement partners reported improvements in interagency coordination and oversight of intelligence and enforcement operations, but gaps remain. GAO reported in November 2010 that information sharing and communication among federal law enforcement officials had increased; however, gaps remained in ensuring law enforcement officials had access to daily threat information. GAO recommended that relevant federal agencies determine if more guidance is needed for federal land closures and that they ensure interagency agreements for coordinating information and integrating operations are further implemented. 
These agencies agreed and in January 2011, CBP issued a memorandum affirming the importance of federal partnerships to address border security threats on federal lands. While this is a positive step, to fully satisfy the intent of GAO's recommendation, DHS needs to take further action to monitor and uphold implementation of the existing interagency agreements. Opportunities exist to improve DHS's management of border security assets. For example, DHS conceived the Secure Border Initiative Network as a surveillance technology and deployed such systems along 53 miles of Arizona's border. In January 2011, in response to performance, cost, and schedule concerns, DHS canceled future procurements, and developed the Arizona Border Surveillance Technology Plan (Plan) for the remainder of the Arizona border. GAO reported in November 2011 that in developing the new Plan, CBP conducted an analysis of alternatives, but it had not documented the analysis justifying the specific types, quantities, and deployment locations of technologies proposed in the Plan, which GAO recommended that it do. DHS concurred with this recommendation. GAO has ongoing work in this area and expects to issue a report in fall 2013.
GAO_GAO-01-67
Background Benefit and loan programs provide cash or in-kind assistance to individuals who meet specified eligibility criteria. Temporary Assistance for Needy Families (TANF), SSI, Food Stamps, housing assistance, and student loans are representative of such programs. Some programs are administered centrally by federal agencies (such as SSI), while others are administered by states and localities (such as TANF). Benefit and loan programs often have difficulty making accurate eligibility and payment amount decisions because applicants and recipients provide much of the information needed to make these decisions, and the programs do not always have effective ways to verify that these individuals are fully disclosing all relevant information. The symposium, entitled “Data Sharing: Initiatives and Challenges Among Benefit and Loan Programs,” was sponsored by GAO and the Chairman and Ranking Minority Member of the Senate Committee on Governmental Affairs. It was an impartial and balanced forum to explore the successes, problems, and possible future directions of data sharing among benefit and loan programs. The symposium consisted of an opening address by Sally Katzen, Deputy Director for Management of OMB, and four panels composed of four to six speakers each. Ms. Katzen’s talk highlighted both the importance of data sharing and the need to protect individual privacy in the course of such sharing. Panel speakers then discussed how data sharing has benefited their programs, how technology offers new data-sharing possibilities, the privacy and security concerns that arise in a data-sharing environment, and how data sharing can be advanced among benefit and loan programs governmentwide. Panel speakers, who came from a variety of federal and state benefit and loan programs and the private sector, included officials from SSA, the Department of Labor, OCSE, the Department of the Treasury, and state human services departments, as well as representatives from the financial services industry and privacy advocates. Appendix I contains the symposium agenda, including the names and complete titles of the speakers. Many of the symposium speakers and audience participants referred to the National Directory of New Hires (NDNH). The Congress mandated that OCSE create this database as part of welfare reform primarily to aid in collection of interstate child support payments. The NDNH is maintained by OCSE and, to a large extent, is derived from reports that private employers and states are required to file containing information on newly hired employees, quarterly wage information, and quarterly unemployment insurance (UI) information. In addition, this database contains information on newly hired federal employees and quarterly wage information on all federal employees. OCSE matches these data against information it has on parents who are involved in child support cases and forwards the matched results to the state child support offices responsible for collecting the payments. The Social Security Act limits access to the NDNH to specific agencies for specific purposes. For example, Treasury (including the Internal Revenue Service) has access to the NDNH to administer federal tax laws and to verify claims for the Earned Income Tax Credit. SSA also has access to help it administer the SSI and Old-Age, Survivors, and Disability Insurance (OASDI) programs.
More recently, the Department of Education was granted access for purposes of obtaining the addresses of individuals who have defaulted on student loans or who owe grant repayments to Education. Data Sharing Has Enhanced the Payment Controls of Programs Sally Katzen, Deputy Director for Management of OMB, kicked off the data-sharing symposium by highlighting the importance of data sharing in achieving one of the top objectives contained in the administration’s 2001 budget proposal: verifying that the right person is getting the right benefit at the right time. This objective is being accomplished in part by data sharing among agencies to identify when improper benefit and loan payments have been made to program recipients. Several symposium participants representing major benefit and other programs reported that shared information is predominantly used in computer matches. That is, an agency compares the information it has on its program recipients against a file from another agency containing similar information to detect discrepancies, such as undisclosed income or assets. Once such discrepancies are detected, the agency investigates to determine if improper payments have been made and, if so, takes action to collect any overpayments and, sometimes, to remove the individual in question from the program. Agencies find such computer matches cost-effective because computers do most of the work. According to one symposium speaker, Pete Monaghan, an SSA official, the cost-benefit ratios of matches range from $20 to $40 of savings for every $1 spent to perform the match. Symposium speakers estimated that substantial savings accrue to programs that use computer matches to detect improper payments. According to Mr. Monaghan, SSA saves about $675 million annually by matching its OASDI and SSI program rolls against data from 10 to 12 federal agencies and 4,000 state and local jails to identify ineligible or overpaid individuals. (See table 1.) Mr. Monaghan also explained that SSA provides data that it maintains on U.S. workers and SSA program recipients to 10 to 12 federal agencies and all states and U.S. territories, and that the use of these data results in annual savings of $1.5 billion. Finally, many states have begun to participate in multistate matches, known as Public Assistance Report Information System (PARIS) matches, to identify welfare recipients who receive simultaneous benefits in more than one state. At the time of the symposium, two PARIS matches had been conducted, and 13 states and the District of Columbia had participated in the most recent one. Although comparable match results among participating states do not exist, Elliot Markovitz, from the Pennsylvania Department of Public Welfare, provided indications of the matches’ effectiveness by reporting the results for the District of Columbia and selected states, including Pennsylvania. Pennsylvania and the District of Columbia determine results by estimating their annual savings for such public assistance programs as TANF and Food Stamps as a result of removing individuals from their rolls because they were found to be receiving benefits in another state. Pennsylvania estimated its annual savings at $2.8 million and removed 566 individuals from the rolls. These individuals accounted for nearly 16 percent of all cases that Pennsylvania county workers investigated as a result of the two matches.
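A computer match of the kind described above is conceptually straightforward. The sketch below is illustrative only; the identifiers, benefit roll, wage file, and dollar amounts are hypothetical, and in practice a flagged record is only a lead for investigation, not proof of an improper payment.

```python
# Illustrative sketch of a computer match like those described above: a benefit
# roll is compared against another agency's file (here keyed by SSN) to flag
# records for investigation. All identifiers and amounts are hypothetical.
benefit_roll = {      # SSN -> monthly income the recipient reported to the program
    "111-11-1111": 0,
    "222-22-2222": 400,
    "333-33-3333": 0,
}
wage_file = {         # SSN -> monthly wages reported by employers to another agency
    "111-11-1111": 1_850,
    "333-33-3333": 0,
}
other_state_rolls = {"222-22-2222"}   # SSNs also receiving benefits in another state

for ssn, reported_income in benefit_roll.items():
    wages = wage_file.get(ssn, 0)
    if wages > reported_income:
        print(f"{ssn}: possible undisclosed income (${wages - reported_income}/month)")
    if ssn in other_state_rolls:
        print(f"{ssn}: possible duplicate enrollment in another state (PARIS-style match)")
```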
The District of Columbia put its annual savings at about $1 million and the number of individuals removed from the rolls as a result of one match at 382. These individuals accounted for about 18 percent of all PARIS cases investigated by the District of Columbia. Another agency that obtains a substantial amount of data from outside sources, OCSE, also made a presentation at the symposium. Although OCSE does not make benefit or loan payments, it is responsible for helping state child support offices collect child support payments from parents who are obligated to make such payments. In some cases, the law requires that these payments be used to offset public assistance benefits that the custodial parents received during periods when their ex-partners owed them child support. OCSE has data from two sources that are instrumental in collecting child support payments: the NDNH and financial account information on individuals from financial institutions. Donna Bonar, Acting Associate Commissioner at OCSE, reported that in Texas, the amount of child support payments collected increased $4 million (32.6 percent) the month after that state automated wage withholding and began using the results from the NDNH match, and that in Virginia, child support collections increased by an estimated $13 million (33 percent) in 1 year as a result of the NDNH match. For the financial institution match, OCSE submits electronic files containing the names of individuals who are delinquent in their child support payments to about 3,000 financial institutions, and these institutions respond to OCSE when such individuals have accounts with them. Over a three-quarter period (July 1999 through March 2000), OCSE received information pertaining to more than 879,000 individuals with accounts totaling approximately $3 billion. Child support offices are able to collect lump-sum payments from delinquent child support obligors on the basis of these accounts. Ms. Bonar reported that the highest lump-sum payment collected was $74,000, of which $34,000 went to the state to reimburse the TANF program and $40,000 went to the custodial parent; lump-sum payments commonly range from $20,000 to $30,000. Technologies Are Expanding Data-Sharing Opportunities Symposium speakers also discussed technologies that are expanding data-sharing opportunities and that offer new possibilities for data security. Three of the data-sharing applications discussed involve computer applications that make direct communication among computer systems possible. All three of these applications offer benefits to the government and the public, including the ability to verify program participant information and thereby detect improper payments sooner, or perhaps prevent them altogether. Integral to these discussions was how access to, and use of, shared information could be appropriately limited to official personnel for authorized reasons related to program administration. Another technological advancement discussed at the symposium was biometric identification systems, which are used to help ensure data security and prevent improper payments. These automated systems scan parts of the human body and, through a comparison with a previous scan, verify a person’s identity. Internet-Based Technology Promotes the Interoperability of Computer Systems Three presentations focused on how technology has enabled government agencies to request information from and transmit it among different types of computer systems via the Internet or other network.
These exchanges are possible because new types of software can facilitate communications between computers, translating information from one system into a format that is understandable by another system and end user, a capability known as interoperability. With interoperability, clusters of related computer systems can be linked, allowing information to be accessed and shared by many programs with similar purposes. In one presentation, Marty Hansen, with SSA, and Ian Macoy, with NACHA—The Electronic Payments Association, focused on how agencies might access financial account information electronically from financial institutions. For benefit programs whose payments are based on need, agencies must know about the assets of applicants and recipients to determine what payment, if any, individuals are entitled to receive. In 1999 alone, according to SSA quality assurance reviews, unreported bank account balances resulted in approximately $240 million in overpayments in the SSI program. Historically, obtaining timely and accurate bank account information from the 20,000 financial institutions in the United States has not been cost-effective for agencies administering needs-based benefit programs; thus, such checks have been done only under certain circumstances. However, automating the process would greatly reduce the burden of requesting and retrieving such information for both the agencies and the financial institutions. A network that provided secure access, delivery, and storage for financial account information could enable benefit programs to prevent hundreds of millions of dollars in overpayments. The speakers proposed two technological alternatives for devising such a system. One possibility would be to “piggyback” on the previously discussed matching being done by financial institutions with OCSE. Another would be to set up a centralized list of beneficiaries and ask financial institutions to match their account holders against the list via network connections. This alternative could be made more attractive to financial institutions in two ways. First, if the information was shared by all the agencies needing account information, the financial institutions could avoid responding repeatedly to similar inquiries communicated through different avenues. Second, if financial institutions could also use the network to exchange information among themselves for commercial purposes, they would be motivated to participate. In presenting these alternatives, the speakers acknowledged that privacy is an issue that must be addressed. A second presentation focused on how the model for DOD’s health care benefit delivery system could be adapted to meet the data-sharing needs of benefit programs. According to William Boggess, an official with the DMDC, the DOD system provides a broad range of information on the 23 million beneficiaries of the military health care system. The system consists of a central computer system containing identifying information on beneficiaries linked to a network of “satellite” computer systems containing databases of other information about the beneficiaries, including medical, dental, immunization, and pharmaceutical records; benefit entitlement; and security clearances, among others. With this network of databases, Mr. Boggess said that DOD is able to respond, on average, within 4 seconds to over a million information requests each day from more than 1,400 locations in 13 countries. Mr. 
Boggess then described how government agencies might improve their payment accuracy and program integrity if they created a nationwide network of benefit programs based on the DOD approach. A central database containing identifying information about the individual could be linked to the computer systems used by such programs as TANF, Food Stamps, SSI, Medicaid, and Medicare. Each agency could access the information it needed from any of the databases in the network, and each agency would have responsibility for maintaining the data in its own database. If agencies shared their data in this manner, individuals applying for or receiving benefits from multiple agencies could provide much of the information that these agencies needed only one time, to one agency. In addition, access to the databases of other agencies would make it possible for an agency to verify information provided by applicants and recipients to help ensure that benefits are provided only to those who are entitled to them. David Temoshok, with GSA’s Office of Governmentwide Policy, explained how GSA is helping the Department of Education pilot a project involving a system of linked databases containing information on postsecondary educational and financial opportunities. These databases contain information on scholarships, loans, and grants; admission; registration; and student financial aid accounts. The pilot project uses interoperability technology to provide a Web-based exchange of the information among many different computer systems. This system is intended to help student and financial aid administrators by presenting useful information in one place. In particular, agencies and lenders should be able to make better decisions because they will be able to access integrated student accounts via this system. Guaranteeing the Security of Data in an Interoperable Environment A number of speakers pointed out that while interoperability technology has improved the ease and efficiency of broad-based data sharing, it has also greatly increased the need for security in data sharing. When information can be accessed or exchanged at numerous locations by many users, it is critical to have security measures in place that can control and track access. Mr. Temoshok described four basic elements that the federal government requires for the secure electronic exchange of information over networks: user identification and validation, secure transmission of data, assurance that the data are not changed in transmission, and assurance that parties to a transaction cannot later repudiate the transaction. To provide these elements, the federal government, under the leadership of OMB, is encouraging federal agencies to incorporate public key infrastructure (PKI) into their computer environments when warranted. Richard Guida, Chairman of the Federal PKI Steering Committee, explained that PKI is a method whereby an individual generates a pair of digital keys, which are very large numbers, about 150 digits in length. One of these keys is called the private key because the individual keeps it to him- or herself. The other key is called the public key, and it is provided to anyone with whom the individual wishes to interact electronically. This latter key is made publicly available in the form of a digital certificate, which is an electronic credential that binds an individual’s identity to the public key.
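Before the discussion of these keys continues, a toy example may help make the mechanics concrete. The sketch below is a minimal illustration using textbook RSA with deliberately tiny numbers; it is not the federal PKI or any production implementation, and real keys are hundreds of digits long and are managed by established cryptographic libraries and certificate authorities.

    # Toy illustration of the public/private key pair described above.
    # The tiny textbook-RSA values are for illustration only.
    import hashlib

    n, e, d = 3233, 17, 2753   # public modulus/exponent and private exponent

    def digest(message: bytes) -> int:
        # Reduce a SHA-256 digest into the toy key's range (a simplification).
        return int.from_bytes(hashlib.sha256(message).digest(), "big") % n

    def sign(message: bytes) -> int:
        # Only the holder of the private exponent d can produce this value.
        return pow(digest(message), d, n)

    def verify(message: bytes, signature: int) -> bool:
        # Anyone holding the public values (n, e) can check the signature.
        return pow(signature, e, n) == digest(message)

    record = b"quarterly wage report for case 12345"
    signature = sign(record)
    print(verify(record, signature))                    # True
    print(verify(b"tampered wage report", signature))   # False (the change is detected)

The private exponent plays the role of the private key described above, and the public modulus and exponent play the role of the public key distributed in a digital certificate.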
Using these public and private keys, it is possible to electronically place and then verify a person’s identity and ensure that electronic files do not get changed before, during, or after electronic transmissions. It is also possible to encrypt the information to ensure its privacy. Biometric identification, which can be used both to prevent unlawful access to government records and to help identify improper benefit and loan payments, was also discussed at the symposium. Biometric identification systems scan unique physical features, such as fingers, eyes, faces, or hands, and convert the information to a digital format that can be stored in a computer or on an identification card. That information can be compared to earlier scans to verify a person’s identity. The symposium speaker on this subject, David Mintie, an automated systems manager with the Connecticut Department of Social Services, said that human services departments around the country have begun using this technology (primarily finger imaging) as it has become affordable and practical to reduce and deter fraud and abuse. Mr. Mintie explained that when the identity of an individual can be readily established and verified, benefit recipients are much less likely to obtain benefits under false or duplicate identities in more than one city or state. Moreover, because the individual’s identity can be verified before benefits are paid out, biometric identification can prevent improper payments from being made, not merely identify instances in which improper payments have already been made. In 3 years of operation, one type of biometric identification, finger imaging, prevented $23 million in improper payments in Connecticut and $297 million in New York. Texas estimates that the Food Stamp program avoided over $5 million in improper payments in that state in fiscal year 1999 as a result of finger imaging, and California estimates having saved $86 million in seven counties in the first 2 years of using finger imaging. At the time of Mr. Mintie’s presentation, 8 states were using biometric identification systems, and 21 others were either planning biometric systems or pursuing legislation to use biometrics. As a “next step,” some of these states are working on developing standards for sharing and matching biometric fingerprint files among states. Such sharing, according to Mr. Mintie, could be a valuable tool for identifying individuals who receive duplicate welfare benefits in more than one state and for enforcing the nationwide 5-year time limit for receipt of welfare benefits. This sharing would enable welfare agencies not only to verify an individual’s identity, but also to check an individual’s welfare history when that person applied for benefits. In the absence of a nationwide system to track receipt of benefits, a welfare recipient nearing the end of the 5-year eligibility could simply relocate to another state and make a new application for benefits. Privacy Is a Concern in a Data-Sharing Environment Perhaps the single most important concern about sharing personal information among government programs is whether it can be done without sacrificing an individual’s right to personal privacy. Although symposium speakers and audience participants who discussed privacy issues agreed that it is important to protect this right, they disagreed about the extent to which data sharing threatens it. Opinions also varied among symposium speakers and audience participants on how the nation’s privacy laws should be changed. 
Data Sharing Can Be a Risk to Personal Privacy According to symposium speakers who discussed risks to privacy, the first risk to individuals is that their personal information may be wrongfully disclosed and perhaps misused. Such disclosure and misuse can occur when agency staff access data obtained from outside sources either without authorization to do so or, if authorized, for purposes unrelated to that authorization. Although this same type of abuse can occur with an agency’s own data, the unease about data sharing is that, as the number of agencies and individuals who have access to personal information increases, so do the chances of wrongful disclosure and misuse of that information. Although privacy advocates acknowledged that technologies exist that make wrongful disclosure and misuse of information somewhat more difficult and less likely, they believed that such tools have not, and cannot, always prevent such abuses. Others believed, however, that existing and new technologies have successfully managed this risk and will continue to do so. They cited such techniques as sending electronic data to other agencies over dedicated, secure computer lines; installing software that authenticates users and gives them access to only data that they are authorized to examine; establishing anomaly detection that notifies officials when a user has accessed something out of the ordinary; and using PKI. The second risk to privacy that symposium speakers and audience participants described is that it is becoming more difficult for the public to know what personal information government agencies are maintaining in databases and how they are using it. The speakers viewed this limited public awareness as important because it inhibits society’s ability to monitor what the government is doing with personal information. It also means that society’s views about how the government is using such information are not being factored into political and public policy decisions. Finally, the speakers characterized the limited public awareness about the wealth of information contained in databases as an increasing problem, given that technology has made it much easier to amass large amounts of information and to share it with others. The NDNH was frequently used to illustrate these concerns during the symposium. Section 453 of the Social Security Act specifies the agencies that may use this database for purposes unrelated to the collection of child support payments and the purposes for which this use is permissible. Privacy advocates were concerned about these “secondary” uses of the NDNH because they saw them as conflicting with a fundamental privacy principle, embodied in the Privacy Act, that data acquired for one purpose should not be used for a different purpose without the consent of the data subject. The Privacy Act provides 12 exceptions to this prohibition against disclosure without written consent, 1 of which benefit and loan agencies use to justify most of their data-sharing activities. This exception is called “routine use.” Under routine use, an agency may not disclose data unless the use of the data is compatible with the purpose for which the data were collected. Privacy advocates said that it is hard to see how using the NDNH data for such secondary purposes as the administration of SSA, IRS, and Education programs is compatible with the original purpose of the NDNH: helping collect child support payments.
Moreover, because the NDNH database is the most comprehensive and centralized information source that exists on the earnings of U.S. workers, privacy advocates fear that it will be sought by many other agencies for uses that the database subjects never contemplated. Other symposium participants also saw the NDNH database as a valuable source of information for benefit and loan programs but did not see sharing this information as a threat to personal privacy. One audience participant mentioned, for example, that this information already exists in each of the states and that collecting it in a single federal file does not necessarily violate an individual’s privacy. Some participants also believe that the public does have an opportunity to learn about, and comment on, new data-sharing initiatives involving NDNH data. For example, the Privacy Act requires that such initiatives be posted in the Federal Register for the purpose of public review and comment. Moreover, the public can learn about proposals for expanded access to NDNH data because such access is controlled to a large extent by legislation. Symposium Participants Suggest That Privacy Laws May Need to Be Revisited Symposium speakers discussed two key privacy laws that govern data sharing among benefit and loan agencies: section 6103 of the Internal Revenue Code and the Privacy Act, which includes the Computer Matching and Privacy Protection Act amendment. These laws were enacted in part to control whether and how tax return and personal information maintained by federal agencies could be shared. The laws describe situations in which an agency may disclose personal data. Section 6103 does this by specifically naming agencies that may have access to certain items of tax return information and specifying the conditions under which such access may be granted. The Privacy Act does this in part through the routine use provision described above. The Privacy Act also requires that agencies enter into written agreements when they share information that is protected by the Privacy Act for the purpose of conducting computer matches. These agreements, referred to as matching agreements, detail the information that will be exchanged, how the exchanges will occur, and how the receiving agency will verify the results of the match and keep the data secure. The Privacy Act and section 6103 were written in the 1970s, when many of today’s advanced data-sharing capabilities did not exist. For example, according to Robert Veeder, a former OMB official who was responsible for overseeing the implementation of the Privacy Act, much of the data that were covered by this act existed on paper; thus, electronically sharing this information was relatively difficult. Mr. Veeder also said that it was much harder for agencies to share information electronically in those few cases in which there were electronic files of data because interoperability among computer systems did not yet exist. Privacy advocates believe that the technological changes that have occurred since the 1970s warrant that we as a society reexamine the type of data that we would like shared among government agencies and the extent to which such sharing should occur. In the absence of such a debate, these individuals believe that data sharing on the scale of the NDNH database will become the norm.
Although other symposium speakers and audience participants also felt that the privacy laws should be changed, their comments focused on amending specific provisions that they felt make data sharing overly cumbersome yet do little to ensure that personal privacy is protected. One frequently cited provision that benefit and loan officials would like to see changed concerns the time limits on computer-matching agreements. Currently, under the Privacy Act, an initial computer-matching agreement between two agencies may remain in effect for only 18 months. After that, an extension must be negotiated between the agencies, and this extension may remain in effect for only 12 months. Once this 12-month period expires, the agencies must negotiate an entirely new agreement. The time limits on computer-matching agreements were intended to cause agencies to periodically reassess the matches they conduct. Although officials believe that having time limits is valuable, they also argue that the limits are too short. Officials believe, for example, that the renegotiations can be time-consuming and burdensome and that the newly negotiated agreements often add no value to the data-sharing efforts because substantive changes are not often made to the computer matches themselves. Mr. Monaghan reported, for example, that most of his staff’s time is spent renegotiating these agreements, but that in reality this work is little more than a paper exercise. He also stated that SSA is drafting proposed legislation that would increase the time limit on new agreements to 5 years with a 3-year extension. We also suggested in a recent report on data sharing that the time limits on computer-matching agreements be extended. We reported that the appropriate time periods for new and renewed agreements are subject to debate, but that they range from 3 to 5 years for new agreements and 2 to 3 years for existing agreements. Participants Made Various Suggestions for Advancing Data Sharing Another topic discussed during the symposium was how data sharing should be advanced among benefit and loan agencies. An integral part of these discussions was the concern that any enhancements to data sharing be weighed against the need to protect personal privacy. Many of those who discussed such enhancements advocated that they include the necessary technological and legal protections to safeguard personal privacy. Some of these discussions focused on methods for facilitating data sharing governmentwide, while others addressed specific data-sharing initiatives. Some Participants Suggested Methods for Facilitating Data Sharing Governmentwide Data sharing is not always an agency priority because program officials feel they do not have enough staff and resources to take on additional data-sharing projects while still handling the work of their programs. Two speakers mentioned, for example, that some state human services departments might not be participating in interstate computer matches designed to detect recipients receiving benefits in more than one state because their current priority is to seek out potentially eligible recipients. Another speaker, Mr. Monaghan of SSA, mentioned that his agency would need additional resources to respond to every outside request for information because it is fully occupied with managing and operating its programs and enhancing its own matching activities.
Given that agencies are not always willing or able to take on data-sharing projects, some symposium speakers felt a need for an oversight body with authority to initiate and manage such projects. Thomas Stack, Director of Human Resources with Maximus Incorporated and until recently the Senior Advisor for Credit and Cash Management at OMB, described his vision of a board or committee composed of officials from various levels of government and the private sector. Such a group could be headed by OMB and include an equal number of members from key federal and state benefit and loan programs. It could develop a working group to support data sharing and establish software and hardware standards for agencies wishing to participate in data exchanges. The board could evaluate data-sharing proposals, addressing issues such as financing, management, timing, work assignments, and privacy implications. The board could also have some authority to decide which agencies should have access to the data of other agencies, and to what extent, and establish the required security controls for agencies wishing to access the data. In discussing the funding of a network that could support such broad-based data sharing, Mr. Stack pointed out that the federal government made an estimated $19 billion in improper payments in fiscal year 1998. Estimating that such a network would cost about $100 million to create, he proposed funding it with a portion of the program dollars that would be saved as a result of the reduced overpayments achieved through data sharing. Estimated program savings from current data sharing reported by symposium speakers amount to more than $2 billion annually (see table 1). A second suggestion for improving data sharing governmentwide was to create incentives for agencies themselves to take on more data-sharing projects. One idea proposed by Mr. Stack and others would be to allow agencies to use some of the program dollars saved through data-sharing efforts to expand such efforts and to pursue cases in which data exchanges have indicated possible overpayments. Other Participants Focused on Specific Data-Sharing Initiatives Several officials from benefit and loan programs mentioned that access to the NDNH database maintained by OCSE would greatly aid in the administration of their programs. Patricia Dalton, the Acting Inspector General for the Department of Labor, gave several examples of how access to this database would help improve payment accuracy and assess the effectiveness of Labor programs. Labor is engaged in a proactive effort to investigate potentially fraudulent cases involving the $32 billion UI program. This program provides partial wage replacement for those who lose their jobs through no fault of their own. Many fraudulent schemes concerning UI payments involve fictitious claimants or claimants with nonexistent employers. In one case investigated by Labor, over $625,000 in fraudulent UI benefits were paid. Ms. Dalton believes that routine and expeditious access to centralized wage databases, such as the NDNH, would enable Labor to more efficiently verify wage data submitted by program applicants and thereby identify potential overpayments before they occur. Symposium participants from other benefit programs, including TANF, Food Stamps, and Medicaid, also mentioned that NDNH data would be useful in controlling payment accuracy.
These programs all depend on knowing the earnings of applicants and recipients to make correct initial and continuing eligibility decisions. In the cases of the Food Stamp, Medicaid, and Labor programs, the Congress would have to pass legislation granting access. The TANF program, however, has legislatively authorized access to the NDNH data, and it was envisioned that OCSE would ask the state agencies administering this program to go through their state child support agencies to get access. However, the state child support agencies often do not respond to TANF requests for information because of staff and resource concerns. According to Donna Bonar, OCSE Acting Associate Commissioner, OCSE intends to remedy this situation by developing a system under which the state TANF programs can obtain the information directly from OCSE. Another enhancement frequently mentioned during the symposium was that, when possible, agencies use the data they obtain from outside sources during the application process. For example, agencies might query outside databases at the time of application to verify that applicants have disclosed their earnings accurately (a simple illustrative sketch of such a check appears below). This access to information could help prevent some overpayments from ever being made, as opposed to the current practice of using computer matches to identify such payments after they have occurred. Agencies could take this initiative without slowing down the application process by using electronic connections to outside databases to obtain the information immediately on-line or within a short period of time through a batched process. Several of the symposium participants believe this should be the future of data sharing. They believe that it would not only help ensure proper payments from the start but also enhance customer service, because the agency would obtain official verifications rather than requiring applicants to provide official documents, as is currently the case. While acknowledging these advantages of querying data sources, other participants think their programs need to evaluate the approach more thoroughly before deciding whether and how to implement it. One concern expressed by officials of various agencies, including OMB, is that querying data sources be done in such a way that individual privacy and data security are protected. Another concern is that the staff who make eligibility decisions are often overextended. Thus, before adding the requirement that they check outside databases, officials want to make sure it is cost-effective for the program as a whole. Direct connections between government agencies do exist and in certain situations are being used to verify applicant-reported information in an effort to ensure that the correct payments are made at the outset. SSA has a network of dedicated, secure lines to most federal agencies and all 50 states. SSA uses this network to electronically transfer data used in computer matches and to receive and respond to queries at periodic intervals. SSA is also using this network for on-line, direct access. SSA plans to have on-line access to OCSE’s NDNH data in January of 2001 and hopes to stop many SSI overpayments stemming from undisclosed wages by requiring its field staff to check the NDNH database for undisclosed wages before issuing the first check to newly eligible SSI recipients. SSA is also providing data on the recipients of its programs’ benefits on-line to seven state human services departments that administer TANF benefits.
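As a purely illustrative sketch of the application-time check described above, an eligibility system might compare reported earnings against an outside wage file before the first payment is issued. The data source, identifiers, and tolerance below are hypothetical assumptions, not a description of SSA's or any state's actual system.

    # Hypothetical sketch of verifying reported wages at application time.
    # The data source, record layout, and tolerance are assumptions.

    WAGE_DATABASE = {"000-11-2222": 4_800}   # stand-in for an on-line wage query

    def verify_at_application(applicant_id, reported_quarterly_wages,
                              tolerance=500):
        """Return (verified, wages_on_file); unverified cases get follow-up
        before the first payment is issued, rather than after."""
        wages_on_file = WAGE_DATABASE.get(applicant_id, 0)
        verified = abs(wages_on_file - reported_quarterly_wages) <= tolerance
        return verified, wages_on_file

    ok, on_file = verify_at_application("000-11-2222", reported_quarterly_wages=0)
    if not ok:
        print(f"Wages on file ({on_file}) differ from the amount reported; "
              "resolve before issuing the first payment.")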
According to an SSA official, some of these states are using SSA’s data at the time of application to prevent overpayments to TANF recipients who failed to disclose that they were also receiving SSA benefits. SSA hopes to eventually expand on-line access to human services departments nationwide. We are sending copies of this report to relevant congressional committees and other interested parties. We will make copies available to others upon request. If you have any questions about this report, please contact me at (202) 512-7215. See appendix II for other GAO contacts and staff acknowledgments.

Appendix I: Symposium Agenda—Data Sharing: Initiatives and Challenges Among Benefit and Loan Programs

Wednesday, June 7, 2000
Panel I—Data Sharing Has Improved Benefit and Loan Programs, but Barriers Remain
Panel II—Technology Offers New Data-Sharing Possibilities

Thursday, June 8, 2000
Panel III—Security and Privacy in a Data-Sharing Environment
Panel IV—Where Do We Go From Here? (Moderator—Sigurd Nilsen, Director, Education, Workforce, and Income Security Issues, GAO)

This final session was a series of discussions led by congressional staff and representatives from the states, the private sector, the General Services Administration, and the Department of Agriculture.

Appendix II: GAO Contacts and Staff Acknowledgments

In addition to those named above, the following individuals made important contributions to this report: Roland Miller III, Jill Yost, Christopher Morehouse, Jeremy Cox, James Lawson, and Inez Azcona.
Data sharing among federal agencies that run federal benefit and loan programs is important for determining the eligibility of applicants and beneficiaries. A GAO symposium on data sharing highlighted various issues facing federal agencies in their efforts to prevent abuse of federal programs. Symposium speakers focused on the number of program dollars saved by interagency data exchanges. Agencies using computer matching have detected undisclosed income and welfare recipients who receive benefits from more than one state. Improved technologies offer agencies the opportunity to expand their data sharing efforts. Such technologies include computer systems that can communicate directly with other systems and computer networks that can obtain information directly from financial institutions. Symposium speakers agreed that applicants' privacy should be protected when personal information is shared among agencies, but they disagreed about the extent to which data sharing threatens it. Privacy laws and security-related technology provide individuals with some protection against the possible misuse of personal information, but symposium participants differed on whether these protections are adequate.
GAO_GAO-06-914
Background At DOD’s request, Congress approved legislative authority in 1997 for privatizing utility systems at military installations. In defining a utility system, the authority included systems for the generation and supply of electric power; the treatment or supply of water; the collection or treatment of wastewater; the generation or supply of steam, hot water, and chilled water; the supply of natural gas; and the transmission of telecommunications. Included in a utility system are the associated equipment, fixtures, structures, and other improvements as well as easements and rights-of-way. The authority stated that the Secretary of a military department may convey a utility system to a municipal, private, regional, district, or cooperative utility company or other entity and the conveyance may consist of all right, title, and interest of the United States in the utility system or such lesser estate as the Secretary considers appropriate to serve the interests of the United States. Among other things, the 1997 authority also included two requirements for utility privatization. First, DOD was required to submit a report to congressional defense committees and wait 21 days before allowing a conveyance. For each conveyance, the report was to include an economic analysis, based on acceptable life-cycle costing procedures, demonstrating that (1) the long-term economic benefit of the conveyance to the United States exceeds the long-term economic cost of the conveyance to the United States, and (2) the conveyance will reduce the long-term costs of the United States for utility services provided by the utility system concerned. Second, the Secretary was required to receive as consideration for a conveyance an amount equal to the fair market value, as determined by the Secretary, of the right, title, or interest of the United States conveyed. The consideration could take the form of a lump sum payment or a reduction in charges for utility services. Before and after approval of the specific authority for privatizing utilities, the services have used other authorities for utility privatization. For example, the Army had privatized some systems after obtaining congressional authority for each specific case. Also, the services have privatized systems by modifications to natural gas services agreements administered by the General Services Administration and by conveyances of some systems on the basis of authorities related to base realignment and closure and the military housing privatization program. DOD’s Office of the Deputy Under Secretary of Defense for Installations and Environment provides overall policy and management oversight for the utility privatization program. However, primary management and implementation responsibility for the program is delegated to the individual services, their major commands, and individual installations. In addition, Defense Logistics Agency’s Defense Energy Support Center is responsible for providing the military services with utility privatization contracting, technical, and program management support. DOD Made Utility Privatization a Department Policy In December 1997, DOD issued Defense Reform Initiative Directive Number 9, which made utility system privatization a DOD policy. The directive instructed the military departments to develop a plan that would result in privatizing all installation electric, natural gas, water, and wastewater utility systems by January 1, 2000, unless exempted for unique security reasons or if privatization would be uneconomical. 
Under the program, privatization normally involves two transactions with the successful contractor—the conveyance of the utility system infrastructure and the acquisition of utility services for upgrades, operations, and maintenance under a long-term contract of up to 50 years. Normally, the conveyances do not include title to the land beneath the utility system infrastructures. A year later, in December 1998, DOD issued another directive to establish program management and oversight responsibilities and provide guidance for performing economic analyses for proposed projects, exempting systems from the program, and using competitive procedures to conduct the program. The directive also stated that the objective was for DOD to get out of the business of owning, managing, and operating utility systems by privatizing them and that exemptions from privatization should be rare. The directive reset the privatization implementation goal to September 30, 2003. Implementation Goals Reset and Program Guidance Revised In October 2002, DOD issued revised program guidance and again reset implementation goals. The guidance noted DOD’s contention that many installation utility systems had become unreliable and in need of major improvements because the installations historically had been unable to upgrade and maintain reliable utility systems due to inadequate funding caused by the competition for funds and DOD’s budget allocation decisions. DOD officials stated that owning, operating, and maintaining utility systems was not a core DOD function and the guidance stated that privatization was the preferred method for improving utility systems and services by allowing military installations to benefit from private sector financing and efficiencies. The revised implementation goals directed the military departments to reach a privatization or exemption decision on all systems available for privatization by September 30, 2005. The October 2002 guidance also reemphasized that utility privatization was contingent upon the services demonstrating through an economic analysis that privatization will reduce the long-term costs to the government for utility services. The guidance included details for conducting the economic analyses, stating that the services’ analyses should compare the long-term estimated costs of proposed privatization contracts with the estimated long-term costs of continued government ownership assuming that the systems would be upgraded, operated, and maintained at accepted industry standards, as would be required under privatization. GAO Report Identified Weaknesses in Program Implementation In May 2005, we issued a report that identified management weaknesses in DOD’s implementation of the utility privatization program. 
The report noted that utility privatization implementation had been slower than expected; the services’ economic analyses supporting utility privatization decisions provided an unrealistic sense of savings to a program that generally increases government utility costs; DOD’s funding obligations would likely increase faster than they would under continued government ownership; DOD did not require that the services’ economic analyses be subjected to an independent review for accuracy and compliance with guidance; implementation of the fair market value requirement in some cases resulted in higher contract costs for utility services; the services had not issued specific contract administration guidance for the program; and DOD’s preferred approach of permanently conveying utility system ownership to contractors may give the contractor an advantage when negotiating service contract changes or renewals. The report made several recommendations for DOD to address these concerns. Program Legislative Authority Modified The National Defense Authorization Act for Fiscal Year 2006, enacted in January 2006, made several modifications to the legislative authority for the utility privatization program. The act did the following:

Reinstated a requirement that the Secretary of Defense must submit to congressional defense committees an economic analysis and wait 21 days after the analysis is received by congressional defense committees, or 14 days if in electronic form, before conveying a utility system. The economic analysis must demonstrate among other things that the conveyance will reduce the long-term costs to the United States of utility services provided by the utility system. The report and wait requirement had been replaced with a requirement for a quarterly report of conveyances by the National Defense Authorization Act for Fiscal Year 2004.

Added a requirement that the economic analyses incorporate margins of error in the estimates, based upon guidance approved by the Secretary of Defense, that minimize any underestimation of the costs resulting from privatization or any overestimation of the costs resulting from continued government ownership (a simplified numeric sketch of how such margins could be applied follows this list).

Eliminated the requirement that DOD must receive as consideration for a conveyance an amount equal to the system’s fair market value.

Limited contract terms to 10 years, unless the Secretary concerned determines that a longer term contract, not to exceed 50 years, will be cost-effective and provides an explanation of the need for the longer term contract, along with a comparison of costs between a 10-year contract and the longer term contract.

Placed a temporary limitation on conveyance authority stating that during each of fiscal years 2006 and 2007, the number of utility systems for which conveyance contracts may be entered into under this authority shall not exceed 25 percent of the total number of utility systems determined to be eligible for privatization under this authority as of January 6, 2006.

Required DOD to submit, not later than April 1, 2006, to congressional defense committees a report describing the use of section 2688 of title 10, United States Code (10 U.S.C. 2688), to convey utility systems. The report was to address several specified aspects of the utility privatization program.
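To make the new margin-of-error requirement concrete, the sketch below shows one way an economic analysis might apply asymmetric margins so that privatization costs are not understated and government-ownership costs are not overstated. The cost streams, discount rate, and margin percentages are hypothetical assumptions and do not come from DOD guidance.

    # Hypothetical life-cycle cost comparison with asymmetric margins of error.
    # All figures, rates, and margins are illustrative assumptions, not DOD values.

    def present_value(annual_costs, discount_rate):
        """Discount a stream of annual costs (years 1..n) to present value."""
        return sum(cost / (1 + discount_rate) ** year
                   for year, cost in enumerate(annual_costs, start=1))

    def apply_margins(privatization_pv, government_pv,
                      privatization_margin=0.10, government_margin=0.10):
        """Adjust estimates so privatization costs are not understated and
        government-ownership costs are not overstated."""
        adjusted_privatization = privatization_pv * (1 + privatization_margin)
        adjusted_government = government_pv * (1 - government_margin)
        return adjusted_privatization, adjusted_government

    # Hypothetical 10-year annual cost streams (in millions of dollars).
    privatization_costs = [4.0] * 10            # contractor service charges
    government_costs = [2.5] * 5 + [5.5] * 5    # deferred upgrades hit later years

    priv_pv = present_value(privatization_costs, discount_rate=0.05)
    govt_pv = present_value(government_costs, discount_rate=0.05)
    adj_priv, adj_govt = apply_margins(priv_pv, govt_pv)

    print(f"Privatization PV (with margin): {adj_priv:.1f}M")
    print(f"Government PV (with margin):    {adj_govt:.1f}M")
    print("Privatize" if adj_priv < adj_govt else "Retain government ownership")

Because the margins push the comparison in a conservative direction, a project must show savings even after its estimated privatization costs are adjusted upward and the government-ownership estimate is adjusted downward.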
DOD’s Response to GAO’s Report and Modifications to the Program’s Authority Although DOD initially disagreed with our May 2005 report, after further review of the report, it subsequently reported to Congress that the report had brought some significant issues to light and that the department had decided to issue new guidance to address the key issues in the report in order to improve program management. On November 2, 2005, DOD issued the new guidance, which among other things required the services to complete the remaining evaluations of utility system potential for privatization in a timely and efficient manner, perform an independent review of the economic analyses supporting proposed projects, consider and plan for increased costs for utility services resulting from potential privatization projects, and take steps to improve the administration and oversight of awarded privatization projects. DOD issued additional supplemental guidance on March 20, 2006, to implement the modifications to the legislative authority made by the Fiscal Year 2006 National Defense Authorization Act; and on March 31, 2006, DOD submitted to congressional defense committees the utility privatization report required by the act. Even before DOD issued new guidance to improve the program in November 2005, the services had implemented several program improvements, including the requirement for independent reviews of project economic analyses. Utility Privatization Milestones Have Slipped and Implementation Costs Continue to Climb DOD’s progress in implementing the utility privatization program has been slower than expected and implementation costs have continued to climb. None of the services met DOD’s September 2005 implementation goal and the program’s estimated completion date has now slipped to September 2011. In addition to increasing implementation costs, program delays have also resulted in the cancellation of privatization solicitations because of concern that conditions had changed or might change before a decision would be made whether to privatize. Services Did Not Meet Program Implementation Milestone None of the services met DOD’s goal of making a privatization or exemption decision on all systems available for privatization by September 30, 2005. Since the program began, DOD officials have attributed delays in program implementation to privatization evaluation, solicitation, and contracting processes that were more complex and time consuming than originally anticipated. Service officials stated that additional delays occurred because the services decided to suspend the program between October 2005 and March 2006. According to the officials, the suspension was provided to allow DOD and the services time to review concerns noted in our May 2005 report, develop and issue supplemental guidance for the program, and implement program changes necessitated by modifications in the program’s legislative authority made by the National Defense Authorization Act for Fiscal Year 2006. The services now estimate that their program completion dates—the date when a privatization or exemption decision has been made on all available systems—are October 2007 for the Navy and Marine Corps, December 2008 for the Air Force, and September 2011 for the Army. Among other things, the Army attributed the extension in its completion date to the privatization process being more complicated than envisioned and a recognition that the Army’s past estimates for completing the program were unrealistic. 
Table 1 shows progress as of March 31, 2006, compared to DOD’s goal, as well as the current estimated program completion dates. Services Have Awarded Contracts for a Fraction of the Total Systems Available for Privatization After spending about $268 million on program implementation costs through fiscal year 2005, the services had awarded contracts for a fraction of the 1,496 utility systems available for privatization. Between May 31, 2005, and September 30, 2005, the services privatized 14 utility systems using 10 U.S.C. 2688 authority bringing the total number of awarded projects to 81. However, the services have awarded no projects under this authority since DOD issued supplemental program guidance in November 2005. In addition to the projects awarded under 10 U.S.C. 2688 authority, DOD privatized 36 systems under other programs, such as DOD’s housing privatization program. The services also have exempted 147 additional systems, bringing the total systems exempted from privatization to 458. Table 2 shows program status as of March 31, 2006. Program Delays Have Resulted in Increased Implementation Costs With program delays, the services’ estimated program implementation costs have increased from about $268 million through fiscal year 2005 to about $285 million through fiscal year 2006. Additional implementation funds will be needed before the services complete their programs between October 2007 and September 2011. According to service officials, the funds used to implement the program primarily paid for consultants hired to help the services in conducting an inventory of their utility systems, assessing the systems’ condition, preparing economic analyses, and soliciting and contracting for proposed projects. Program implementation costs did not include funds used to pay the costs of awarded privatization contracts. Table 3 shows program implementation costs by service and the Office of the Secretary of Defense. Program delays also caused the Defense Energy Support Center to cancel solicitations to privatize 42 Army utility systems in May 2006. These solicitations had been closed from 1 to 4 years with no award decision and there were concerns that conditions, such as the accuracy of the inventory and needed improvements, had changed or might change before an award decision would be made. The Army plans to resolicit these systems over the next few years. Further, Defense Energy Support Center officials stated that program delays and the resulting decrease in assistance requested by the services have made it difficult to retain qualified staff to support the utility privatization program. Consequently, the center will need to train new staff once the program’s pace begins to increase again. Services Have Estimated the Number and Cost of Potential Privatization Contracts In addition to revising their program completion dates since our previous report, the services also estimated the additional number of systems that might be privatized by the completion of their programs and the funds needed to pay the costs of these anticipated contracts. The Army estimated that 41 additional systems might be privatized with the associated contract costs totaling about $212 million; the Navy and the Marine Corps estimated that 40 additional systems might be privatized with the associated contract costs totaling about $139 million; and the Air Force estimated that 210 additional systems might be privatized with the associated contract costs totaling about $602 million (see table 4). 
Air Force officials stated that its estimated 210 additional systems was a “worst case” estimate used to determine the maximum funding needed for possible additional privatization contracts. The officials stated that the more likely number of systems that might be privatized was about 105 systems. However, the officials did not provide an estimate of the contract costs associated with the smaller number of systems. DOD’s Changes to Improve Utility Privatization Implementation Have Addressed Many Areas but Have Not Eliminated All Program Concerns DOD has made many changes to improve the management and oversight of the utility privatization program since our May 2005 report. To improve the reliability of the economic analyses supporting privatization decisions, DOD now requires that the analyses undergo an independent review to assess the inputs and assumptions, ensure that cost estimates for the government-owned and privatization options are treated in a consistent manner, and verify that all relevant guidance has been met. Also, in supplemental program guidance issued in November 2005, DOD reminded the services to consider and plan for increased costs for utility services contracts resulting from potential privatization projects and prepare operation and maintenance budgets based upon the expected costs under privatization. The guidance also emphasized the importance of contract oversight and directed a number of actions designed to ensure adequate contract administration and oversight. Among other things, the guidance directed the Defense Energy Support Center to develop specific preaward and postaward procurement procedures for the effective management of utilities services contracts, directed contracting agencies to adequately train and prepare personnel involved in the utility privatization contracts, noted that DOD components are responsible for ensuring that the acquisition plan adequately addresses cost growth control, and stated that DOD components are responsible for ensuring that resources required to properly administer the contracts have been identified and provided. In March 2006, DOD also issued guidance implementing modifications in the program’s legislative authority made by the Fiscal Year 2006 National Defense Authorization Act, which among other things addresses our concern that some utility privatization contracts had allowed contractors to recover more than they paid as the fair market value for system conveyances. If fully implemented, the changes should result in more reliable economic analyses supporting proposed privatization projects, improved budgetary consideration of increased utility costs from privatization, enhanced oversight of privatization contracts, and reduced instances where contractors recover more than the amounts they paid as the fair market value for system conveyances. Although DOD has made many changes to improve implementation of the utility privatization program, the changes have addressed some concerns but have not eliminated all concerns noted in our prior report, such as ensuring the reliability of project economic analyses and ensuring effective contract oversight. 
We found that changes to address some issues have not been effectively implemented, some changes were not sufficient to totally eliminate the concerns, and DOD did not make changes to address some concerns causing continued questions about the reliability of the economic analyses, the availability of funds to pay for the remaining projects that might be privatized, the adequacy of contract oversight in projects awarded prior to DOD’s changes, and the control of long-term cost growth in utility privatization contracts. We also have concerns that the program may continue to provide an unrealistic sense of savings and decision makers may have incomplete information on the financial effect of privatization decisions. DOD Has Taken Steps to Improve the Reliability of Project Economic Analyses but Implementation Is a Concern Although DOD has made changes to improve the reliability of the analyses supporting proposed utility privatization projects, we found issues with the services’ implementation of the changes. In November 2005, DOD issued supplemental program guidance requiring DOD components to ensure that independent reviews were conducted for all economic analyses supporting a proposed conveyance. The guidance stated that the independent review should verify that all relevant guidance has been met and that privatization is in the best interest of the government. In March 2006, DOD reported to Congress that the independent review included procedures to review the general inputs and assumptions, verify that the inventory in the economic analysis is identical to the inventory in the solicitation, and ensure that the government and the contractor treat the renewal and replacement cost estimates in a consistent manner. Even before DOD issued the guidance requiring independent reviews, Army and Air Force officials stated that they had implemented such reviews to help ensure reliability of their project analyses. The officials stated that independent reviews were performed on the analyses supporting 12 utility privatization projects that were awarded in September 2005—after our previous report—but before DOD’s issuance of the guidance requiring independent reviews. As an additional step to help ensure reliable economic analyses, DOD’s March 2006 report to Congress stated that the services must conduct postconveyance reviews that compare actual project costs with the estimated costs included in the projects’ economic analyses. DOD stated that the postconveyance reviews are conducted 2 to 3 years after contract award, or 1 year after the first periodic price adjustment, whichever is later, and that the results of these reviews will be compiled until such time as the analysis of all conveyances is complete. DOD stated that the reviews are to include an analysis of the system’s inventory, changes in requirements and contract costs, and a comparison of actual contract costs with estimates from the economic analyses. Although DOD’s changes are key steps in the right direction to improve the reliability of the economic analyses, we found issues with the implementation of the changes. First, we reviewed the analyses associated with 10 Army and Air Force projects awarded in September 2005. Although these analyses were prepared prior to the issuance of DOD’s supplemental guidance, the services had already implemented an independent review process and these analyses underwent an independent review. 
Service officials noted that the independent reviews had just begun and expected that the thoroughness of the reviews would improve as experience was gained and DOD’s supplemental guidance was implemented. We found that the reviews did identify some questionable items and that some changes were made to improve the reliability of the economic analyses. Yet, we also found questionable items in each analysis that were not identified during the independent review. For example: The economic analysis for the natural gas system privatization at Minot Air Force Base did not treat estimates of renewal and replacement costs for the government-owned and privatization options in a consistent manner. The analysis estimated that the Air Force would spend $7.1 million on renewals and replacements during the first year of continued government ownership. Under the first year of privatization, the analysis estimated that the contractor would spend about $0.2 million on renewals and replacements. When we asked about this difference, Air Force officials stated that the contractor is not required to perform the same renewals and replacements identified in the government estimate and that the government found the contractor’s proposal to be acceptable. Because the analysis was not based on performing the same work, the cost estimates were not consistently developed and resulted in favoring the privatization option. This issue was not identified in the independent review. The economic analyses for the water and wastewater privatization projects at Andrews Air Force Base were based on the systems’ inventory (i.e., the wells, pumps, water treatment equipment, valves, fire hydrants, water distribution mains, meters, storage tanks, reservoirs, and other components that constitute the systems) and condition 2 years prior to contract award. The Air Force stated that adjustments to the contract could be made after contract award, if needed, to reflect changes in the inventory. However, because the analyses were not updated to reflect inventory changes before contract award, the reliability of the analyses is less certain. This issue was not noted in the independent review. The economic analyses for privatization of the electric distribution system at Fort Leavenworth and the water and wastewater systems at three Army installations in the Tidewater Virginia area incorrectly included financing costs under the government option. Although this favored the privatization option, the amount was not enough to change the outcome of the analyses. This issue was not identified in the independent review. However, Army officials told us that they would ensure that this did not occur in future analyses. Second, although DOD noted in its March 2006 report to Congress the importance of postconveyance reviews as an additional measure to help ensure reliable economic analyses, DOD has not issued guidance that requires the services to perform the reviews. Service officials stated that they had performed only a limited number of postconveyance reviews and do not have plans to perform the reviews in the manner or frequency described in DOD’s report to Congress. Also, DOD’s report cited seven Army Audit Agency postconveyance reviews, four additional Army postconveyance reviews, and one Air Force postconveyance review. However, only three of the Army Audit Agency reviews included a comparison of actual contract costs with estimates from the economic analyses. 
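The consistency issues described above, such as the Minot renewal-and-replacement estimates, turn on whether the two ownership options price the same scope of work. The short Python sketch below shows one way an independent review could automate that check before comparing discounted cost streams; the dollar figures, the scope sets, the 5 percent discount rate, and the function names are illustrative assumptions, not DOD's actual model or data.

```python
# Illustrative only: a simplified consistency check of the kind an independent
# review might apply. All figures and scope items are hypothetical.

def net_present_value(annual_costs, discount_rate):
    """Discount a stream of annual costs (year 1, 2, ...) to today's dollars."""
    return sum(cost / (1 + discount_rate) ** year
               for year, cost in enumerate(annual_costs, start=1))

def compare_options(gov_costs, contractor_costs, gov_rr_scope, contractor_rr_scope,
                    discount_rate=0.05):
    """Compare ownership options, flagging the comparison if the renewal-and-
    replacement (R&R) scope priced under each option is not the same."""
    if gov_rr_scope != contractor_rr_scope:
        print("WARNING: R&R scope differs between options; "
              "cost estimates are not directly comparable.")
    gov_npv = net_present_value(gov_costs, discount_rate)
    contractor_npv = net_present_value(contractor_costs, discount_rate)
    print(f"Government ownership NPV ($M):  {gov_npv:,.1f}")
    print(f"Privatization NPV ($M):         {contractor_npv:,.1f}")
    print(f"Apparent cost avoidance ($M):   {gov_npv - contractor_npv:,.1f}")

# Hypothetical first-year figures echoing the Minot example ($ millions): the
# government estimate assumes $7.1M of first-year R&R work, while the contractor
# proposal assumes only $0.2M, so the two streams do not price the same work.
compare_options(
    gov_costs=[7.1 + 1.0, 2.0, 2.0],          # R&R plus O&M under government ownership
    contractor_costs=[0.2 + 1.2, 1.4, 1.4],   # contractor charge under privatization
    gov_rr_scope={"replace mains", "upgrade regulators"},
    contractor_rr_scope={"upgrade regulators"},
)
```

In this sketch the warning, rather than the cost difference itself, is the useful output: it signals that the apparent cost avoidance rests on estimates built from different assumptions.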
DOD Has Taken Steps to Address Some Funding Issues but Concerns Remain Although DOD has taken steps to help ensure that the services adequately consider the increased costs from utility privatization projects during budget preparation, questions remain over the availability of the additional funds needed to complete the program. The services estimate that they potentially will need $453 million more than is currently programmed for continuing government utility operations to pay implementation and contract costs associated with the remaining number of utility systems that might be privatized through 2010 for the Air Force, the Navy, and Marine Corps, and through 2011 for the Army. As a result, in view of competing needs and budget priorities, the Deputy Assistant Secretary of the Air Force (Installations) stated in an April 2006 memorandum that the Air Force could not afford to award further utility privatization contracts unless additional resources are provided. Utility Costs Increase with Privatization Our May 2005 report noted that installation utility costs under privatization typically increase significantly above historical levels because the systems are being upgraded and the contractors recoup their investment costs through the utility services contracts. Essentially, under the privatization program, the services leverage private sector capital to achieve utility system improvements that otherwise would not be feasible in the short term because of limited funding caused by the competition for funds and budget allocation decisions. The services pay for the improvements over time through the utility services contracts, which are “must pay” bills. As a result, if an installation’s funds were not increased sufficiently, then funds provided for other installation functions where there was more discretion in spending might be used to pay the higher utility bills. This, in turn, could negatively affect those other functions, such as the maintenance of installation facilities. We recommended that DOD provide program guidance emphasizing the need to consider increased utility costs under privatization as the military services prepare their operation and maintenance budget requests and that DOD direct the service Secretaries to ensure that installation operations and maintenance budgets are adjusted as necessary to reflect increased costs from utility privatization projects. In November 2005, DOD issued supplemental program guidance that reminded DOD components to consider the increase in utility costs from privatization. Specifically, the guidance directed the components to consider and plan for increased costs for utility services contracts resulting from potential privatization projects and system conveyance and prepare operation and maintenance budgets based upon the expected costs under privatization. Funds Not Programmed for All Potential Utility Privatization Projects DOD’s guidance addresses the recommendations from our May 2005 report and, if implemented, should result in the increased costs from utility privatization projects being adequately considered during budget preparation. However, in view of competing needs and budget priorities, questions remain over availability of the additional funds needed to complete the program. To illustrate, DOD’s November 2005 supplemental guidance also directed DOD components to advise the Deputy Under Secretary of Defense (Installations and Environment) if significant shortfalls are anticipated that will affect utilities privatization efforts. 
In response to that direction, each service estimated the remaining number of utility systems that might be privatized, calculated the associated implementation and contract costs, compared these costs with the funds already programmed for continued government operation of the systems that might be privatized, and determined whether any potential funding shortfalls existed. The Army’s estimate was through fiscal year 2011 and the other services’ estimates were through fiscal year 2010. As a result of this review, each service determined that funding shortfalls existed to pay for potential future privatization contracts (see table 5). Air Force officials stated that the increased costs from potential future utility privatization contracts had reached a critical point. The officials stated that because funds are limited and funding needs for some Air Force programs are greater than the funding needs for utility upgrades, the Air Force has concluded that it will not solicit new utility privatization contracts until additional resources are identified to specifically cover any potential increase in future costs. Air Force officials further explained that privatization results in improving utility systems to an industry standard level by creating “must pay” contracts. However, without additional resources, funding these contracts must come from other base operating support funds, which would result in diverting critical resources from remaining facilities and infrastructure. Also, the officials noted that the utility privatization program drives system recapitalization to an industry standard level that may be questionable when compared to historical Air Force requirements and, furthermore, reflects a funding level that is not affordable in light of current fiscal constraints and differing Air Force modernization priorities. When we questioned a cognizant DOD official in June 2006 about the potential funding shortfall, the official stated that each service has competing priorities and the cost of awarding contracts to privatize utility infrastructure is just one of many. However, the official also stated that the funding issue and alternatives were under discussion but conclusions had not yet been reached. DOD Directed Actions to Improve Utility Privatization Contract Oversight but Some Concerns Remain DOD has made a number of changes designed to improve utility privatization contract administration and oversight since our May 2005 report. However, it may take some time for the improvements to be fully implemented as the changes are applied to new privatization contract awards and efforts may be needed to ensure that the changes are applied, where needed, to previously awarded contracts. DOD Has Taken Steps to Address Oversight Concerns To address privatization contract oversight concerns, DOD issued supplemental program guidance in November 2005 that emphasized to the services the importance of contract oversight and directed a number of actions designed to ensure adequate contract administration and oversight. 
Among other things, the guidance directed the Defense Energy Support Center to develop specific preaward and postaward procurement procedures for the effective management of utilities services contracts resulting from a utility conveyance, and coordinate with the Defense Acquisition University to develop a training program for all contracting officers and DOD components involved in utilities privatization efforts; directed contracting agencies to adequately train and prepare personnel involved in the administration of the utilities services contracts resulting from a utilities conveyance; stated that contracting officers must be able to use guidance for postaward contract management and contract provisions to ensure that the government’s interests are protected in the long-term utility service contracts and associated real estate documents; stated that prior to awarding a services contract resulting from a utility conveyance, DOD components are responsible for ensuring, among other things, that resources required to properly administer the contract have been identified; and directed that transfers of contract administration responsibilities from the procuring contract office to the contracting administration office should include an on-site transfer briefing with government and contractor personnel that includes, among other things, a clear assignment of responsibilities. During our visit to the Defense Energy Support Center in April 2006, officials stated that in accordance with the guidance, the center had already issued the preaward and postaward procurement procedures that would help ensure the effective management of utilities services contracts. The officials stated that they had also begun developing a training program for all contracting officers and other DOD personnel involved in utilities privatization efforts and had developed procedures for transferring contract responsibilities that should help ensure effective contract oversight. During our visits to the services, officials stated that, in addition to working with the Defense Energy Support Center, further efforts were underway to ensure that postaward management is effective. For example, Air Force officials stated that they had developed their own postaward plan, which defines the responsibilities and standards by which the government could ensure that utility services are provided in accordance with requirements. Navy officials stated that the Navy plans to prepare a quality assurance plan for each utility privatization contract awarded. Some Contract Oversight Concerns Identified at the Four Installations We Visited Although the steps taken by DOD, the Defense Energy Support Center, and the services are significant improvements, implementation will be the key to ensuring effective oversight of all utility privatization contracts, and it may take some time to fully implement improvements as new privatization contracts are awarded. From the time DOD’s supplemental guidance was issued and other improvement measures were put into place through the time of our review in June 2006, the services awarded no new utility privatization contracts. Thus, to assess contract oversight, we were unable to visit installations with utility privatization contracts awarded after DOD’s changes were implemented. Instead, we assessed contract oversight at four installations with five utility privatization projects that were awarded prior to our May 2005 report. 
We found continuing concerns about the adequacy of oversight because no additional resources were provided to oversee the contracts at all four installations and mandatory written plans for overseeing contractor performance were not prepared at two installations. For example, officials at each of the four installations we visited noted that no additional resources were provided at the installation level to perform contract oversight once their utility systems were privatized. The contract officials stated that the extra work associated with the contracts was added to their workload of overseeing other contracts. Some officials stated that they did not have sufficient personnel to perform the level of detailed monitoring of contractor performance that they believed was needed. According to Fort Eustis officials, when the electric system was privatized, they requested three additional people to oversee the contract based on the magnitude of the workload associated with this contract. Yet, no additional people were provided and the extra workload was added to the workload of the staff responsible for overseeing other contracts. Also, our review of the electric distribution system privatization projects at Fort Eustis and the Army’s Military Ocean Terminal Sunny Point found that neither installation had a quality assurance surveillance plan in place for overseeing contractor performance. Such plans are required by the Federal Acquisition Regulation. Officials at both installations stated that although a formal surveillance plan had not been prepared, they were performing oversight to ensure that the contractors met contract requirements. Nevertheless, formal contractor performance monitoring plans are an important tool for ensuring adequate contract oversight. Containing Utility Privatization Contract Cost Growth May Be a Challenge Because contractors own installation utility systems after privatization and, therefore, may have an advantage when negotiating contract changes and renewals, containing utility privatization contract cost growth may become a challenge as contracts go through periodic price adjustments and installations negotiate prices for additional needed capital improvement projects and other changes. In March 2006, DOD stated that although it recognizes that privatization may limit the government’s options during contract negotiations, the department continues to prefer privatization with permanent conveyance and believes that safeguards are in place to adequately protect the government’s interests. Although it is too early in the program’s implementation to know to what extent DOD’s efforts will be successful in ensuring equitable contract price adjustments and limiting long-term cost growth in the utility privatization program, our review found indications that containing cost growth may become a concern. DOD Continues to Prefer Permanent Conveyance but Has Taken Steps to Control Costs In our prior report, we noted that, according to DOD consultant reports, DOD’s approach to utility privatization differs from typical private sector practices in that private sector companies may outsource system operations and maintenance but normally retain system ownership. As a result, the consultant reports note that DOD’s preferred approach of permanently conveying utility system ownership to contractors may give the contractor an advantage when negotiating service contract changes or renewals. 
This occurs because DOD must deal with the contractor or pay significant amounts to construct a new utility distribution system to replace the one conveyed to the contractor, attempt to purchase the system back from the contractor, or institute legal action to reacquire the system through condemnation proceedings. Because of concern that contractors may have an advantage when it comes time to negotiate contract changes and renewals, we recommended that DOD reassess whether permanent conveyance of utility systems should be DOD’s preferred approach to obtaining improved utility services. DOD stated that it has reassessed its position and continues to believe that owning, operating, and maintaining utility systems is not a core mission of the department and that permanent conveyance of systems under utilities privatization enables the military installations to benefit from private sector innovations, economies of scale, and financing. Although DOD contends that private industry can normally provide more efficient utility service than can the government, DOD has not provided any studies or other documentation to support its contention. Given that the private sector faces higher interest costs than the government and strives to make a profit whereas the government does not, it is not certain that utility services provided by the private sector would be less costly than utility services provided by the government through the use of up-front appropriations. Although DOD continues to prefer privatization with permanent conveyance of the utility systems, DOD has recognized that privatization may limit the government options during contract renegotiations and has taken steps to help control contract cost growth. First, DOD stated in its March 2006 report to Congress that a contractor also may have limited options under privatization because the contractor typically cannot use the installation’s utility system to service other customers. DOD reported that privatization creates a one-to-one relationship between the installation and the contractor. In this relationship, DOD stated that both parties must work together to execute fair and equitable contract changes, both parties have significant vested interests in successful negotiations, and both parties retain substantial negotiation leverage. Second, DOD noted that service contracts awarded as part of a privatization transaction are contracts subject to the Federal Acquisition Regulation and applicable statutes. Because it is recognized that privatization will as a practical matter limit future opportunities to recompete this service, DOD stated that all contracts will include appropriate provisions to protect the government’s interest while allowing the contractor reasonable compensation for the services provided. DOD’s report further stated that fixed price contracts with prospective price adjustment provisions have been determined to be the most appropriate contract in most situations and that this type of a contract will mitigate cost risk and hopefully result in a satisfactory long-term relationship for both the contractor and the government. Third, DOD noted that utility services contracts resulting from a utility conveyance may include a contract clause that provides an option for the government to purchase the system at the end of the contract period. 
According to Defense Energy Support Center officials, the center has developed language for future Army and Air Force contracts that would provide an option for the government to buy back a system at the end of the contract period. Center officials stated that this clause may help the government in negotiations at the end of the contract term. Navy officials stated that the Navy does not plan to include a buy back clause in its future utility contracts because a system could be taken back, if necessary, through condemnation procedures. Fourth, in its November 2005 supplemental guidance, DOD emphasized the importance of controlling contract cost growth. Specifically, the guidance noted that prior to awarding a services contract resulting from a utility conveyance, DOD components are responsible for ensuring that the acquisition plan adequately addresses cost growth control, which includes specifying the appropriate price adjustment methodology and postaward contract administration. Cost Growth in Utility Privatization Contracts May Become a Concern Although DOD has policies, guidance, and procedures to help control contract costs and ensure that price adjustments are equitable, cost growth may still become a concern as utility privatization contracts go through periodic price adjustments and, in some cases, installations negotiate changes for additional capital improvement projects or other needs. According to DOD, most utility privatization contracts include provisions for periodic price adjustments. The price adjustment process allows contract price changes based on changes in market prices, generally to cover inflation, and changes to the service requirement from system additions or modifications resulting from capital upgrades. Under this process, the contractor is required to submit sufficient data to support the accuracy and reliability of the basis for service charge adjustments. If the contractor’s data is determined to be fair and reasonable, the contracting officer negotiates a service charge adjustment. Utility privatization contracts normally provide for price adjustments after an initial 2-year period and every 3 years thereafter. In addition to cost increases from service charge adjustments, contract costs can also increase as a result of contract modifications to pay for additional capital improvement projects not included in the initial contract. According to the services, utility privatization contracts for 22 systems are currently undergoing, or will be subject to, their first periodic price adjustment before the end of calendar year 2007. Although it is too early to know the extent of cost changes that might occur in these contracts, our review of six contracts—one that completed a periodic price adjustment, one that was undergoing periodic price adjustment, and four that had not yet undergone a periodic price adjustment—found conditions that indicate that cost growth in utility privatization contracts may become a concern. Changes in contract costs could result in privatization costs increasing above the levels estimated in the economic analyses. To illustrate: The Fort Rucker natural gas distribution system privatization contract was issued on April 24, 2003. The contract provided for a price adjustment after the initial 2 years of the contract and then every 3 years thereafter. 
In February 2005, the contractor submitted a proposal for a price adjustment and requested an increase in the price paid to the contractor for operations and maintenance, associated overhead, and renewals and replacements. According to a government memorandum that summarized the results of the price adjustment process, the requested increases were based on the contractor’s actual labor hours and material costs and additional overhead costs which resulted from a change in the way the contractor calculated overhead costs. The change in overhead calculations included costs that were not included in the original proposal submission or in the contract. When queried, the contractor responded that the costs were not originally submitted but should have been. After review, the government team responsible for the price adjustment process determined that the requested increases were allowable and reasonable and approved the price increase. The change increased the government’s annual utility service charge costs from about $87,000 to about $124,000, an increase of about $36,000, or 41 percent. In approving the increase, the government team noted that although the estimated cost avoidance from privatization would be reduced, the contract was still economical compared to the estimated costs of government ownership. The Sunny Point electric distribution system privatization contract was issued on September 30, 2003. In January 2006, the contractor submitted a proposal for a price adjustment and requested an increase in the utility service charge based on the contractor’s actual labor hours and material costs associated with operating and maintaining the system, including the installation’s emergency generators. According to installation officials, the costs to operate and maintain the system were significantly higher than originally anticipated by the contractor because of errors in the system’s inventory used to develop the solicitation, such as not including all of the installation’s emergency generators. When queried about the requested price increase, the contractor responded that the initial contract bid would have been higher if the true inventory of the system had been known. Although the price adjustment process was not final at the time of our visit in June 2006, installation officials stated that the government team responsible for the process had determined that the requested increases were allowable and reasonable and had approved the price increase. As a result of the price adjustment, the government’s annual utility service costs are expected to increase from about $415,000 to $798,000 in the third year of the contract, an increase of about $383,000, or 92 percent. The Fort Eustis electric distribution system privatization contract was issued on June 24, 2004. While this contract is not scheduled for a periodic price adjustment until December 2006, the contract costs have increased by about $431,000, or 26 percent, since the contract was signed. The increase is the result of two factors. First, the annual service charge was increased by about $73,000 as the result of correcting errors to the system’s inventory described in the privatization solicitation. Second, the contract’s cost was increased by about $358,000 to pay for capital improvement projects that were added to the original contract. 
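As a rough cross-check of the figures just cited, the short sketch below recomputes the adjustments from the rounded dollar amounts in this report. Because the amounts in the text are rounded to the nearest thousand, the recomputed percentages can differ by a point or two from the percentages in the text, which were derived from unrounded contract values; the case labels and variable names are ours, added for illustration.

```python
# Illustrative only: recomputing the approximate increases cited above from the
# rounded dollar figures in this report.

def adjustment(old_annual, new_annual):
    increase = new_annual - old_annual
    return increase, 100 * increase / old_annual

cases = {
    "Fort Rucker (natural gas, annual service charge)": (87_000, 124_000),
    "Sunny Point (electric, annual service charge)":    (415_000, 798_000),
}

for name, (old, new) in cases.items():
    delta, pct = adjustment(old, new)
    # Note: the report's 41 percent figure for Fort Rucker reflects unrounded
    # contract values, so this recomputation differs slightly.
    print(f"{name}: +${delta:,.0f} (~{pct:.0f} percent)")

# Fort Eustis (electric): two separate changes rather than a periodic adjustment.
inventory_correction = 73_000   # annual service charge increase from inventory errors
capital_projects = 358_000      # capital improvement projects added to the contract
print(f"Fort Eustis total contract cost growth: ${inventory_correction + capital_projects:,.0f}")
```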
Fort Eustis officials stated that funding for the capital improvement projects added to the contract did not have to compete for funding against other needed installation improvement projects because project costs were added to the privatization contract. The officials stated that it was unclear whether these projects would have been approved for funding had the privatization contract not been in place. The remaining three contracts we reviewed—the water and wastewater privatization contracts at Bolling Air Force Base and the electric distribution system privatization contract at Dobbins Air Reserve Base—were not yet eligible for, or not subject to, a periodic price adjustment. At the time of our visits in May 2006, actual contract costs in these cases approximated the estimates in the projects’ economic analyses. DOD Has Not Made Changes to Provide More Realistic Savings Estimates from Utility Privatization Because DOD has not changed the guidance for performing the economic analyses or taken any other steps to change the perception that the utility privatization program results in reduced costs to the government, the program may continue to provide an unrealistic sense of savings for a program that generally increases annual government utility costs in order to pay contractors for enhanced utility services and capital improvements. The concern was caused by the methodology DOD uses to determine whether a proposed privatization contract would meet the statutory requirement for reduced long-term costs. In our previous report, we noted that DOD’s guidance directs the services to compare the estimated long-term costs of the contract with the estimated long-term “should costs” of continued government ownership assuming that the service would upgrade, operate, and maintain the system in accordance with accepted industry standards as called for in the proposed contract. This estimating method would be appropriate if, in the event the system is not privatized, the service proceeded to upgrade, operate, and maintain the system as called for in the estimate. However, this generally is not the case. According to DOD and service officials, if a system is not privatized, then the anticipated system improvements would probably be delayed because of DOD’s budget allocation decisions, which have limited funds for utility improvements. Because of the time value of money, a future expense of a given amount is equivalent to a smaller amount in today’s dollars. Thus, if reduced costs to the government are expected to be a key factor in utility privatization decision making, then it would appear more appropriate for the services to compare the cost of a proposed privatization contract with the cost of continued government ownership on the basis of the actual planned expenditures and timing of these expenditures. Since May 2005, DOD has not changed the guidance for performing the economic analyses nor has DOD taken other steps, such as showing current utility system costs in the economic analyses, to change the perception that the utility privatization program results in reduced costs to the government. DOD’s November 2005 supplemental program guidance directed the services to continue to prepare economic analyses based on the “should costs,” which is defined as an independent government estimate of the costs required to bring the system up to and maintain it at current industry standards. 
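The time-value-of-money point above can be made concrete with a small numerical sketch. The example below is purely hypothetical: the dollar amounts, the 10-year horizon, and the 5 percent discount rate are our own assumptions, not figures from any DOD economic analysis. It simply shows that the same privatization contract can appear to reduce costs when compared against an immediate-upgrade “should cost” baseline yet increase costs when compared against the spending actually planned if the system is not privatized.

```python
# Illustrative only: how the timing of expenditures changes apparent cost
# avoidance. All amounts, the horizon, and the discount rate are hypothetical.

def present_value(payments, rate=0.05):
    """Discount a {year: amount} payment schedule to today's dollars."""
    return sum(amount / (1 + rate) ** year for year, amount in payments.items())

years = range(1, 11)

# "Should cost": upgrade the system to industry standards in year 1, then
# operate and maintain it at that standard.
should_cost = {y: (12_000_000 if y == 1 else 2_000_000) for y in years}

# Actual planned spending if the system is not privatized: minimal upkeep, with
# the upgrade deferred to year 8 because other priorities compete for funds.
planned_cost = {y: 1_000_000 for y in range(1, 8)}
planned_cost.update({8: 11_000_000, 9: 2_000_000, 10: 2_000_000})

# Privatization: a level annual service charge that recoups the upgrade.
contract_cost = {y: 3_000_000 for y in years}

pv_should, pv_planned, pv_contract = (present_value(p) for p in
                                      (should_cost, planned_cost, contract_cost))
print(f"Avoidance vs. 'should cost' baseline:    ${pv_should - pv_contract:,.0f}")
print(f"Avoidance vs. planned-spending baseline: ${pv_planned - pv_contract:,.0f}")
```

Under these assumed numbers the first comparison shows roughly $1.8 million of avoided cost while the second shows roughly $7.4 million of added cost, which is the distinction our recommendation is intended to make visible to decision makers.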
Further, DOD’s March 2006 report to Congress stated that the “should cost” estimate is the government’s best tool for predicting the future requirement for individual systems and is the most realistic methodology. Yet, the report also acknowledged that the department had done an inadequate job of defining industry standards and then subsequently programming, budgeting, and executing to that requirement. Because DOD has not programmed funds to do the work described in the “should cost” estimate if the system is not privatized, DOD’s estimates of the reduced costs to the government that would result from privatization are not based on realistic cost differences. Information that DOD reported to Congress in March 2006 illustrates our concern. DOD’s report stated that the department’s total cost avoidance from utility conveyances is expected to exceed $1 billion in today’s dollars and, as shown in table 6, the report included information showing that the 81 contracts awarded under 10 U.S.C. 2688 will result in about $650 million in reduced costs to the government in today’s dollars compared to DOD’s “should cost” estimate. DOD’s reported cost avoidance amounts provide an unrealistic sense of savings for several reasons: First, as previously stated, the estimated costs under government ownership are not based on the actual expected costs if the system is not privatized but rather on a higher “should cost” amount. As a result, estimated costs under government ownership are overstated and, therefore, DOD’s estimated cost avoidance is overstated, at least in the short term. Second, the government’s costs for utility services increase with privatization. Army officials estimated that the average annual cost increase for each privatized Army system was $1.3 million. Also, the services estimate that they will need $453 million more than is currently programmed for continuing government ownership to pay for the contract and other costs associated with the remaining number of utility systems that might be privatized through 2010 for the Air Force and the Navy and Marine Corps, and through 2011 for the Army. Third, DOD’s reported cost avoidance does not consider the program’s one-time implementation costs. Through fiscal year 2005, about $268 million was spent to implement the program. Fourth, the economic analyses used to estimate the cost avoidance between the government-owned and privatization options for several of the 81 projects included in DOD’s report to Congress are unreliable. As noted in our previous report, our review of seven project analyses identified inaccuracies, unsupported cost estimates, and noncompliance with guidance for performing the analyses. The cost estimates in the analyses generally favored the privatization option by understating long-term privatization costs or overstating long-term government ownership costs. When we made adjustments to address the issues in these analyses, the estimated cost avoidance with privatization was reduced or eliminated. Also, as discussed in another section of this report, although DOD has taken steps to improve reliability, we found questionable items in 10 economic analyses supporting projects awarded after our May 2005 report. Fifth, cost growth in privatization contracts might reduce or eliminate the amount of the estimated cost avoidance from privatization. We reviewed the analysis supporting the Navy’s one privatization project under 10 U.S.C. 
2688, awarded in 1999, and compared actual contract costs to the estimated contract costs included in the analysis. The analysis showed that if contract costs continue to increase at the same rate experienced since the contract was awarded, then the project’s estimated cost avoidance would be reduced from about $92.7 million to about $18 million. This analysis also did not include consideration of privatization contract oversight costs. Consideration of these costs would further reduce the estimated cost avoidance to about $4 million. As discussed in another section of this report, we found contract cost growth concerns in 3 of 6 additional utility privatization projects we reviewed, which will reduce the estimated cost avoidance for those projects. In addition to providing an unrealistic sense of savings by providing only the “should cost” estimates, the economic analyses do not include other information that would provide decision makers with a clearer picture of the financial effect of privatization decisions. If the analyses included information showing the amount that the government currently spends on operating, maintaining, and upgrading the utility systems being evaluated for privatization, decision makers could better consider the increase in costs that will result from privatization as they assess the merits of proposed projects. However, DOD’s guidance does not require that the services’ economic analyses include current utility system cost information. The National Defense Authorization Act for Fiscal Year 2006 modified the program’s legislative authority by requiring that project economic analyses incorporate margins of error in the estimates that minimize any underestimation of the costs resulting from privatization of the utility system or any overestimation of the costs resulting from continued government ownership and management of the utility system. This step could help improve the reliability of the cost differences between the government-owned and privatization options. The modified authority stated that incorporating margins of error in the estimates was to be based upon guidance approved by the Secretary of Defense. However, as of June 2006, DOD had only issued general guidance in this area with no details on how the services were to comply with the new requirement. Specifically, on March 20, 2006, DOD issued guidance directing the services to include in the economic analysis an explanation as to how margin of error considerations were addressed in developing the independent government cost estimate and carried forward in the price analysis report and cost realism report. Although the guidance referenced Office of Management and Budget Circular A-94, dated October 29, 1992; DOD Instruction 7041.3, dated November 7, 1995; and Deputy Secretary of Defense memorandum and guidance dated October 9, 2002; none of these documents provide details on how margins of error should be incorporated into the economic analyses. At the time of our review in June 2006, Army and Navy officials stated that they were evaluating how to include margins of error into future economic analyses. Air Force officials stated that their economic analyses already included margins of error calculations but that no formal rules existed on how to use the results of the calculations. Without detailed DOD guidance, there is little assurance that the services will include margins of error considerations in an appropriate and consistent manner in future project economic analyses. 
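Because DOD has not yet issued detailed guidance, how margins of error should be incorporated into the analyses remains open. The sketch below shows one possible, purely illustrative interpretation consistent with the direction of the statute, in which the government-ownership estimate is shaded down and the privatization estimate shaded up by an assumed 10 percent; the dollar figures, the percentage, and the function name are hypothetical and the method is not drawn from DOD or service guidance.

```python
# Illustrative only: one simple way margins of error could be applied so that
# privatization costs are not underestimated and government-ownership costs are
# not overestimated. Percentages and costs are hypothetical assumptions.

def conservative_comparison(gov_estimate, privatization_estimate,
                            gov_margin=0.10, privatization_margin=0.10):
    """Apply margins asymmetrically: shade the government-ownership estimate
    down and the privatization estimate up, then compare."""
    gov_low = gov_estimate * (1 - gov_margin)
    privatization_high = privatization_estimate * (1 + privatization_margin)
    return gov_low - privatization_high   # positive => cost avoidance still holds

# Hypothetical 50-year life-cycle estimates in today's dollars ($ millions).
point_estimate_avoidance = 100.0 - 92.0                        # looks like $8M avoided
conservative_avoidance = conservative_comparison(100.0, 92.0)  # -$11.2M: avoidance disappears

print(f"Point-estimate cost avoidance:   {point_estimate_avoidance:,.1f} million")
print(f"With 10 percent margins applied: {conservative_avoidance:,.1f} million")
```

As the example suggests, the value of the statutory requirement depends heavily on how the margins are defined and applied, which is why detailed and consistent guidance from the Secretary of Defense matters.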
Changes in Legislative Authority and DOD’s Implementation of the Change Address Fair Market Value Concerns DOD’s changes to implement a modification to the legislative authority for the utility privatization program have addressed the fair market value concerns discussed in our May 2005 report. Our report noted that in some cases implementation of a previous legislative requirement that the government receive fair market value for systems conveyed to privatization contractors had resulted in higher contract costs for utility services. To address this concern, we recommended that DOD place greater scrutiny on the implementation of the fair market value requirement in proposed contracts to minimize cases where contractors recover more than the amounts they paid for system conveyances. Subsequent to our report, in January 2006, the National Defense Authorization Act for Fiscal Year 2006 was enacted. The act changed the legislative language from stating that fair market value from a conveyance must be received to stating that fair market value from a conveyance may be received. In March 2006, DOD issued guidance to implement modifications in the legislative authority made by the act. With regard to fair market value, DOD’s guidance to the services noted that military departments are no longer required to obtain fair market value exclusively through cash payments or rate credits. The military departments now have the flexibility to seek consideration in a manner other than a payment of the fair market value when the economic analysis demonstrates it is in the best interest of the government. The guidance also stated that the military departments may not dispose of the government’s property without receiving an appropriate return, but the amount and nature of that return may be determined and represented in a number of ways, depending on the negotiated deal. The change in legislative authority and the additional guidance issued by DOD address our concern with receipt of fair market value for system conveyances. Our review of 10 economic analyses for projects awarded after our May 2005 report showed that the fair market value paid by the contractor and the amount recovered were the same. Thus, according to these analyses, the receipt of the fair market value for the conveyances in these cases did not result in any increased costs to the government. Conclusions DOD has made many changes to improve the management and oversight of the utility privatization program since our previous report. If fully implemented, the changes should result in more reliable economic analyses supporting proposed privatization projects, improved budgetary consideration of increased utility costs from privatization, enhanced oversight of privatization contracts, and reduced instances where contractors recover more than the amounts they paid as the fair market value for system conveyances. However, a number of program concerns remain because DOD’s changes to address some issues noted in our previous report have not been effectively implemented, some changes were not sufficient to totally eliminate the concerns, and DOD did not make changes to address some concerns. Specifically, implementation of DOD’s changes to improve the reliability of the economic analyses, such as requiring independent reviews and noting the importance of postconveyance reviews to compare actual contract costs with estimates from the analyses, could be improved. 
The reliability of the analyses could continue to be questionable until DOD requires independent reviewers to report to decision makers on the thoroughness of the economic analyses and any significant anomalies between the ownership options, estimated costs, inventories, and assumptions and also issues guidance requiring the services to perform the postconveyance reviews as noted in its March 2006 report to Congress. An additional concern is the services’ estimated shortfall in the funds needed to pay contract costs associated with the remaining number of utility systems that might be privatized by the end of their programs. Unless DOD addresses the potential funding shortfall in view of all DOD and service funding and priority needs, questions will remain over the availability of the additional funds needed to complete the program. Also, although DOD’s changes designed to improve utility privatization contract administration and oversight are key steps in the right direction, it may take some time to fully implement improvements as new privatization contracts are awarded and oversight of older contracts is assessed. Until DOD ensures that the contracts awarded prior to the program changes have adequate resources and contractor performance surveillance plans, the adequacy of contract oversight will remain a concern. Further, because contractors own installation utility systems after privatization, they may have an advantage when negotiating contract changes and renewals. Unless DOD places additional emphasis on monitoring contract cost growth as utility privatization contracts undergo periodic price adjustments and other changes are negotiated, concern will continue that containing utility privatization contract cost growth may become a challenge. Because DOD did not change guidance to require that project economic analyses show the actual costs of continued government ownership if the system is not privatized, or take any other steps to change the perception that the utility privatization program results in reduced costs to the government, DOD continues to provide an unrealistic sense of savings to a program that generally increases government utility costs in order to pay contractors for enhanced utility services and capital improvements. Until DOD requires that each economic analysis includes information on the system’s current costs and the actual expected costs if the system is not privatized, decision makers will have incomplete information on the financial effect of privatization decisions. In addition, unless the Secretary of Defense issues detailed guidance explaining how the services should incorporate margins of error in the economic analyses, as required by the National Defense Authorization Act for Fiscal Year 2006, there is little assurance that the full benefit from this requirement will be achieved. 
Recommendations for Executive Action We recommend that the Secretary of Defense direct the Deputy Under Secretary of Defense (Installations and Environment) to take the following seven actions: require independent reviewers to report to decision makers on the thoroughness of each economic analysis and any significant anomalies in the assumptions used and estimated costs for each ownership option; issue guidance requiring the services to perform the postconveyance reviews as noted in DOD’s March 2006 report to Congress; address the utility privatization program potential funding shortfall in view of all DOD and service funding and priority needs; ensure that utility privatization contracts awarded prior to the November 2005 supplemental guidance have adequate resources and contractor performance surveillance plans; place additional emphasis on monitoring contract cost growth as utility privatization contracts undergo periodic price adjustments and other changes are negotiated; require, in addition to the “should cost” estimate, that each project economic analysis include the system’s current annual costs and the actual expected annual costs if the system is not privatized; and issue detailed guidance explaining how the services should incorporate margins of error in the economic analyses. Agency Comments and Our Evaluation In comments on a draft of this report, the Deputy Under Secretary of Defense (Installations and Environment) generally agreed with six of our seven recommendations and outlined a plan of action to address each recommendation. The Deputy Under Secretary noted that the utility privatization systems evaluated in our report were approved prior to DOD’s November 2005 program guidance and that the guidance will be fully implemented prior to awarding additional contracts. We recognize that issues identified in this report pertain to contracts awarded before supplemental program guidance was issued in November 2005. Nevertheless, we believe the issues identified in this report highlight areas that merit increased attention as the program continues—and this is reflected in the department’s response to each recommendation. The Deputy Under Secretary indicated disagreement with our recommendation to require, in addition to the “should cost” estimate, that each project economic analysis include the system’s current annual costs and the actual expected annual costs if the system is not privatized, and also stated that full implementation of DOD’s November 2005 guidance will provide further reassurance that every conveyance will reduce the long-term costs of the department compared to the costs of continued ownership. However, as noted in our May 2005 report and again in this report, we believe that in the short term it is clear that the utility privatization program increases annual costs to the government where contractors make system improvements and recoup their costs from the department through their service contracts. DOD’s sole use of “should costs” as a basis for comparing its long-term costs with those contained in contractor proposals provides a less clear picture of savings to the government since, as our reports have shown, the government’s “should costs” do not provide a realistic portrayal of the planned government expenditures. Accordingly, we believe our recommendation continues to have merit. DOD’s comments and our detailed response to specific statements in those comments are presented in appendix II. 
We are sending copies of this report to other interested congressional committees; the Secretaries of Defense, the Army, the Navy, and the Air Force; and the Director, Office of Management and Budget. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (202) 512-5581 or e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. The GAO staff members who made key contributions to this report are listed in appendix III. Appendix I: Scope and Methodology To update the status of the Department of Defense’s (DOD) utility privatization program, we summarized program implementation status and costs and compared the status to DOD’s past and current goals and milestones. We discussed with DOD and service officials issues affecting implementation of the program, such as the services’ suspension of the program between October 2005 and March 2006, and inquired about the effects of implementation delays on program completion plans. Using data from the services’ quarterly program status reports to DOD, we summarized the program implementation status by service and compared the status to program status reported in our prior report. We confirmed the quarterly reports’ status data on five privatization projects at the four installations we visited but did not otherwise test the reliability of the data. We also reviewed and summarized the services’ estimates of the additional number of systems that might be privatized by the completion of their programs and the funds needed to pay the costs associated with these anticipated projects. To assess the effect of DOD’s changes on the program management and oversight concerns noted in our May 2005 report, we documented the changes made by interviewing DOD and service officials and reviewing pertinent policies, guidance, memorandums, and reports, discussed with DOD and service officials the intended objective for each of the changes, and compared the changes with the concerns identified in our prior report. To assess the effect of DOD’s changes on the reliability of the economic analyses supporting privatization decisions, we reviewed the economic analyses supporting 10 privatization projects that were awarded after our May 2005 report and that had been subjected to the services’ new independent review processes. The analyses were judgmentally selected to obtain examples from both the Army and the Air Force. For each analysis, we evaluated the basis for the estimates and assumptions used and assessed consistency and compliance with DOD guidance. We did not otherwise attempt to independently determine estimates of long-term costs for the projects. We shared the results of our analyses with service officials and incorporated their comments as appropriate. To assess the effect of DOD’s changes on consideration of increased costs from utility privatization, we summarized the services’ estimates of the additional funds that would be needed to pay costs associated with the remaining number of utility systems that might be privatized and inquired about DOD’s plans for dealing with a potential program funding shortfall. 
To assess the effect of DOD’s changes on the administration and oversight of utility privatization projects, we visited four installations with five utility privatization projects awarded prior to our May 2005 report: Fort Eustis, Virginia; the Army’s Military Ocean Terminal Sunny Point, North Carolina; Bolling Air Force Base, Maryland; and Dobbins Air Reserve Base, Georgia. These installations were judgmentally selected because they represented a cross section of typical utility privatization projects, as corroborated with service officials. At each installation, we discussed resources available for contract oversight and plans for contractor performance monitoring. Also, to assess the effect of DOD’s changes on controlling cost growth in utility privatization contracts, we reviewed cost changes in the five utility privatization contracts at the installations we visited, discussed the reasons for the changes with local officials, and compared the actual contract costs with estimates from the projects’ economic analyses. We also reviewed cost changes in the Fort Rucker natural gas privatization contract because, according to the services, it was the only contract awarded under the legislative authority specifically provided for utility privatization that had completed a periodic price adjustment. To assess the effect of DOD’s changes on cost avoidance estimates from privatization, we reviewed the estimates DOD reported to Congress to determine whether the estimates reflected the actual changes expected in the government’s utility costs. We conducted our review from March through July 2006 in accordance with generally accepted government auditing standards. Appendix II: Comments from the Department of Defense The following is our detailed response to the Department of Defense’s (DOD) comments provided on August 21, 2006. GAO’s Response to the Department of Defense’s Comments Our responses to DOD’s comments are numbered below to correspond with the department’s various points. 1. As noted in this report, we identified concerns with the independent review performed on each of the 10 economic analyses we reviewed. We did not attempt in this report to prove that the questionable items we identified with each analysis would have changed the proposed outcomes but noted that improvements are needed in the thoroughness of the independent reviews that will be performed on future projects. Until DOD requires independent reviewers to report to decision makers on the thoroughness of the economic analyses and any significant anomalies, we continue to believe the reliability of the analyses could be questioned. As outlined in our May 2005 report and this report, to ensure a valid comparison of costs we continue to believe that the government’s “should cost” estimate should be closely based on performing the same work that the contractor would perform. 2. Our report does not suggest that postconveyance reviews should be conducted prematurely as indicated by DOD in its comments. The fact is that the utility privatization contracts under 10 U.S.C. 2688 authority began to be awarded in 1999, about 7 years ago, and postconveyance reviews do not appear to have been performed on many ongoing utility privatization contracts since that time. Although DOD noted in its March 2006 report the importance of postconveyance reviews as an additional measure to help ensure reliable economic analyses, it has not issued guidance to require the services to perform such reviews. 3. 
Our report clearly shows that Air Force officials, not GAO, stated that without additional resources, funding for utility privatization contracts must come from other base operating support funds, which would result in diverting critical resources from remaining facilities and infrastructure. Furthermore, DOD’s comment that utility sustainment funds have been used for other base support operations in the past only reinforces the need to address the utility privatization program potential funding shortfall. We have completed a number of reviews in which we have identified examples where the shifting of operation and maintenance funds from one account to other accounts to fund must-pay bills and other priorities contributes to management problems and funding shortfalls. For example, in February 2003, we reported that the services withheld facilities sustainment funding to pay must-pay bills, such as civilian pay, emergent needs, and other nonsustainment programs, throughout the year and transferred other funds back into facilities sustainment at fiscal year’s end. Still, the amounts of funds spent on facilities sustainment were not sufficient to reverse the trend in deterioration. In June 2005, we reported that hundreds of millions of dollars originally designated for facilities sustainment and base operations support had been redesignated by the services to pay for the Global War on Terrorism. While installations received additional funds at the end of the fiscal year to help offset shortfalls endured during the year, the timing made it difficult for the installations to maintain facilities and provide base support services efficiently and effectively. Similarly, unless the potential funding shortfall in the utility privatization program is addressed, funding will likely have to be redesignated to fund the utility privatization program rather than be used for its intended purpose. 4. Our report raises concerns about the adequacy of the services’ oversight of several privatization contracts that were awarded prior to DOD’s November 2005 supplemental guidance. Given that the Office of the Deputy Under Secretary of Defense (Installations and Environment) has overall policy and management oversight responsibilities for the utility privatization program, we continue to believe that this office is the appropriate level for providing direction and assurance that utility privatization contracts awarded prior to the supplemental guidance have adequate resources and contractor performance surveillance plans, as we recommend. 5. Our report highlights the importance of monitoring cost growth because contractors have ownership of the utility systems after privatization and, therefore, may have an advantage when negotiating contract changes and renewals. In addition, controlling the potential growth in the cost of ongoing utilities privatization contracts is important to the services in their planning for the adequate funding of the program. We did not review the effect of contract cost growth on the government estimate because the government estimate is not a relevant factor in controlling costs once a system has been privatized. Although a comparison of actual costs of a privatization project with the estimates included in the project’s economic analysis is a useful tool to help improve the reliability of analyses of future privatization projects, it is unlikely that such comparisons would assist in controlling cost growth. 
Furthermore, DOD’s comment refers to a “savings delta.” As noted in our May 2005 report and again in this report, in the short term it is clear that the utility privatization program increases annual costs to the department where contractors make system improvements and recoup their costs through the service contracts. Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Barry W. Holman, (202) 512-5581 or ([email protected]) Acknowledgments In addition to the person named above, Susan C. Ditto, Harry A. Knobler, Katherine Lenane, Mark A. Little, Gary W. Phillips, Sharon L. Reid, and John C. Wren also made major contributions to this report.
Department of Defense (DOD) installations have about 2,600 electric, water, wastewater, and natural gas utility systems valued at about $50 billion. In 1997, DOD decided that privatization was the preferred method for improving utility systems, and Congress approved legislative authority for privatizing DOD's utility systems with Public Law No. 105-85. DOD estimates that some utility privatization contracts will cost over $100 million. In a May 2005 report, GAO identified several management weaknesses in DOD's implementation of the program. The Fiscal Year 2006 National Defense Authorization Act required GAO to evaluate and report on changes to the utility privatization program since May 2005. Accordingly, this report updates the status of the program and discusses the effect of DOD's changes on the concerns noted last year. To conduct this review, GAO summarized program status and costs, assessed DOD's changes to program guidance and in other areas, and reviewed the services' implementation of the changes. DOD's progress in implementing the utility privatization program has been slower than expected and the estimated completion date has slipped from the department's target of September 2005 to September 2011. DOD attributed the delays to the complexity of the program and to the services' decision to suspend and reassess the management of the program between October 2005 and March 2006. Since May 2005, the services privatized 14 utility systems under the legislative authority for the program, bringing the total number of awarded projects to 81. However, the services have awarded no projects since DOD issued new program guidance in November 2005. Meanwhile, the services' total estimated program implementation costs through fiscal year 2006 have increased to $285 million, and more funds will be required before the program is completed in 2011. Since GAO's May 2005 report, DOD has issued new guidance and required changes in procedures. If fully implemented, these changes should result in more reliable economic analyses, improved budgetary consideration of increased utility costs, enhanced oversight of privatization contracts, and reduced instances where contractors recover more than the fair market value paid for system conveyances. However, a number of concerns from the May 2005 report remain. For example, although DOD made changes to improve the reliability of project economic analyses by requiring independent reviews, GAO reviewed 10 economic analyses and found reliability issues that had not been identified during the independent reviews. DOD directed the services to adequately consider in their budgets the increased costs resulting from utility privatization. However, questions remain over the availability of the funds needed to complete the program because the services estimate that they will need $453 million more than is currently programmed to pay costs associated with remaining utility systems that might be privatized. Although DOD made many changes to improve contract administration and oversight, it may take some time to fully implement the changes as new privatization contracts are awarded. GAO's review of five projects awarded prior to DOD's changes found continuing questions about the adequacy of resources provided to perform oversight and the lack of required plans for overseeing contractor performance. 
It is too early in the program's implementation to know to what extent DOD's efforts will be successful in ensuring equitable periodic contract price adjustments and limiting long-term cost growth in the utility privatization program. However, GAO found indications that cost growth may become a challenge. DOD did not change its guidance to require that project economic analyses depict the actual expected costs of continued government ownership if the systems are not privatized. Therefore, DOD's reported $650 million in long-term cost reductions is unrealistic.
Background Information technology should enable government to better serve the American people. However, OMB stated in 2010 that the federal government had achieved little of the productivity improvements that private industry had realized from IT, despite spending more than $600 billion on IT over the past decade. Too often, federal IT projects run over budget, behind schedule, or fail to deliver promised functionality. Both OMB and federal agencies have key roles and responsibilities for overseeing IT investment management. OMB is responsible for working with agencies to ensure investments are appropriately planned and justified. Federal agencies are responsible for managing their IT investment portfolio, including the risks from their major information system initiatives, in order to maximize the value of these investments to the agency. Additionally, each year, OMB and federal agencies work together to determine how much the government plans to spend on IT projects and how these funds are to be allocated. For fiscal year 2014, federal agencies plan to spend about $82 billion. GAO Has Previously Reported on Opportunities to Reduce Duplication and Achieve Cost Savings in Critical IT-Related Areas We have previously reported on the challenges associated with agencies’ efforts to identify duplicative IT investments. For example, in September 2011 we reported that there were hundreds of investments providing similar functions across the federal government, including 1,536 information and technology management investments, 781 supply chain management investments, and 661 human resource management investments. Further, we found that OMB guidance to agencies on how to report their IT investments did not ensure complete reporting or facilitate the identification of duplicative investments. Specifically, agencies differed on what investments they included as an IT investment, and OMB’s guidance requires each investment to be mapped to a single functional category. As a result, agencies’ annual IT investments were likely greater than the $79 billion reported in fiscal year 2011, and OMB’s ability to identify duplicative investments was limited. Further, we found that several agencies did not routinely assess operational systems to determine if they were duplicative. We recommended, among other things, that OMB clarify its guidance to help agencies better identify and categorize their IT investments and require agencies to report the steps they take to ensure that their IT investments are not duplicative. OMB agreed with these recommendations. More recently, we reported on efforts at the Departments of Defense, Energy, and Homeland Security to identify duplicative IT investments. More specifically, we noted that although Defense, Energy, and Homeland Security use various investment review processes to identify duplicative investments, 37 of our sample of 810 investments were potentially duplicative at Defense and Energy. These investments accounted for about $1.2 billion in spending for fiscal years 2007 through 2012. We also noted that investments were, in certain cases, misclassified by function, further complicating agencies’ ability to identify and eliminate duplicative investments. We recommended that Defense and Energy utilize transparency mechanisms, such as the IT Dashboard, to report on the results of their efforts to identify and eliminate potentially duplicative investments. The agencies generally agreed with this recommendation.
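Grouping reported investments by agency and functional category is one simple way to surface candidates for the kind of duplication review described above. The Python sketch below is purely illustrative; the investment records and category names are hypothetical, and it is not drawn from OMB's or GAO's actual analysis tools.

```python
from collections import defaultdict

# Hypothetical investment records; a real review would draw on agency
# submissions such as the data behind the IT Dashboard.
investments = [
    {"agency": "Agency A", "function": "human resource management", "name": "HR System 1", "cost_millions": 4.2},
    {"agency": "Agency A", "function": "human resource management", "name": "HR System 2", "cost_millions": 3.1},
    {"agency": "Agency A", "function": "supply chain management", "name": "Logistics Hub", "cost_millions": 7.8},
    {"agency": "Agency B", "function": "human resource management", "name": "Personnel Portal", "cost_millions": 2.5},
]

# Group investments by agency and functional category.
by_category = defaultdict(list)
for inv in investments:
    by_category[(inv["agency"], inv["function"])].append(inv)

# More than one investment in the same category at the same agency marks a
# candidate for further review, not proof of duplication.
for (agency, function), items in sorted(by_category.items()):
    if len(items) > 1:
        total = sum(i["cost_millions"] for i in items)
        names = ", ".join(i["name"] for i in items)
        print(f"{agency} / {function}: {len(items)} investments (${total:.1f}M): {names}")
```

A match on functional category only flags investments for closer examination, since systems in the same category may still serve distinct business needs.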
We have also reported on the value of portfolio management in helping to identify duplication and overlap and opportunities to leverage shared services. For example, we have reported extensively on various agencies’ IT investment management capabilities by using GAO’s IT Investment Management Framework, in which stage 3 identifies best practices for portfolio management, including (1) creating a portfolio which involves, among other things, grouping investments and proposals into predefined logical categories so they can be compared to one another within and across the portfolio categories, and the best overall portfolio can then be selected for funding, and (2) evaluating the portfolio by monitoring and controlling it to ensure it provides the maximum benefits at a desired cost and an acceptable level of risk. OMB Established PortfolioStat to Help Agencies Reduce Duplication and Achieve Cost Savings Recognizing the proliferation of duplicative and low-priority IT investments within the federal government and the need to drive efficiency, OMB launched the PortfolioStat initiative in March 2012, which requires 26 agencies to conduct an annual agency-wide IT portfolio review to, among other things, reduce commodity IT spending and demonstrate how their IT investments align with the agency’s mission and business functions. Toward this end, OMB defined 13 types of commodity IT investments in three broad categories: (1) Enterprise IT systems, which include e-mail; identity and access management; IT security; web hosting, infrastructure, and content; and collaboration tools. (2) IT infrastructure, which includes desktop systems, mainframes and servers, mobile devices, and telecommunications. (3) Business systems, which include financial management, grants-related federal financial assistance, grants-related transfer to state and local governments, and human resources management systems. PortfolioStat is designed to assist agencies in assessing the current maturity of their IT investment management process, making decisions on eliminating duplicative investments, and moving to shared solutions (such as cloud computing) in order to maximize the return on IT investments across the portfolio. It is also intended to assist agencies in meeting the targets and requirements under other OMB initiatives aimed at eliminating waste and duplication and promoting shared services, such as the Federal Data Center Consolidation Initiative, the Cloud Computing Initiative, and the IT Shared Services Strategy.
PortfolioStat is structured around five phases: (1) baseline data gathering in which agencies are required to complete a high-level survey of their IT portfolio status and establish a commodity IT baseline; (2) analysis and proposed action plan in which agencies are to use the data gathered in phase 1 and other available agency data to develop a proposed action plan for consolidating commodity IT; (3) PortfolioStat session in which agencies are required to hold a face-to-face, evidence-based review of their IT portfolio with the Federal Chief Information Officer (CIO) and key stakeholders from the agency to discuss the agency’s portfolio data and proposed action plan, and agree on concrete next steps to rationalize the agency’s IT portfolio that would result in a final plan; (4) final action plan implementation, in which agencies are to, among other things, migrate at least two commodity IT investments; and (5) lessons learned, in which agencies are required to document lessons learned, successes, and challenges. Each of these phases is associated with more specific requirements and deadlines. OMB has reported that the PortfolioStat effort has the potential to save the government $2.5 billion through fiscal year 2015 by consolidating and eliminating duplicative systems. Agencies Addressed PortfolioStat Requirements, but Baselines and Consolidation Plans Were Not All Complete In its memo on implementing PortfolioStat, OMB established several key requirements for agencies: (1) designating a lead official with responsibility for implementing the process; (2) completing a high-level survey of their IT portfolio; (3) developing a baseline of the number, types, and costs of their commodity IT investments; (4) holding a face-to-face PortfolioStat session with key stakeholders to agree on actions to address duplication and inefficiencies in their commodity IT investments; (5) developing final action plans to document these actions; (6) migrating two commodity IT areas to shared services; and (7) documenting lessons learned. In addition, in guidance supporting the memo, agencies were asked to report estimated savings and cost avoidance associated with their consolidation and shared service initiatives through fiscal year 2015. All 26 agencies that were required to implement the PortfolioStat process took actions to address OMB’s requirements. However, there were shortcomings in their implementation of selected requirements, such as addressing all required elements of the final action plan and migrating two commodity areas to a shared service by the end of 2012. Table 1 summarizes the agencies’ implementation of the requirements in the memo, which are discussed in more detail below. In the memo for implementing the PortfolioStat initiative, OMB required each agency’s chief operating officer (COO) to designate and communicate within 10 days of the issuance of the memo an individual with direct reporting authority to the COO to lead the agency’s PortfolioStat implementation efforts. Consistent with a recent OMB memo requiring chief information officers (CIO) to take responsibility for commodity IT, 19 of the 26 agencies designated the CIO or chief technology officer to lead their PortfolioStat efforts. 
The remaining 7 agencies designated the Assistant Attorney General for Administration (Department of Justice), the deputy CIO (Department of Transportation), the Assistant Secretary for Management (Department of the Treasury), the Office of Information and Technology Chief Financial Officer (Department of Veterans Affairs), the Director, Office of Information Resources Management, Chief Human Capital Officer (National Science Foundation), the Senior Advisor to the Deputy Commissioner/Chief Operating Officer (Social Security Administration), and the Senior Deputy Assistant Administrator (U.S. Agency for International Development). Portfolio Survey Provided Status of CIO Authority and Other Issues As part of the baseline data-gathering phase, OMB required agencies to complete a high-level survey of the status of their IT portfolio. The survey asked agencies to provide information related to implementing OMB guidance, including information on the CIO’s explicit authority to review and approve the entire IT portfolio, the percentage of IT investments that are reflected in the agency’s enterprise architecture (EA) (required in OMB Circular A-130), and the percentage of agency IT investments (major and non-major) that have gone through the TechStat process, both agency-led and OMB-led (required in OMB M-11-29). While all 26 agencies completed the survey, the survey responses highlighted that agencies varied in the maturity of their IT portfolio management practices. In particular, 6 agencies reported varying levels of CIO authority, 5 agencies reported that less than 100 percent of investments were reflected in the agency’s EA, and most agencies noted that less than 50 percent of their major and non-major investments had gone through the TechStat process. Following are highlights of their responses: CIO authority: Twenty of the 26 agencies stated that they either had a formal memorandum or policy in place explicitly noting the CIO’s authority to review and approve the entire agency IT portfolio or that the CIO collaborated with others (such as members of an investment review board) to exercise this authority. However, the remaining 6 agencies either reported that the CIO did not have this authority or that there were limitations to the CIO’s authority: The Department of Energy reported that while its CIO worked with IT governance groups, by law, the department CIO has no direct authority over IT investments in two semi-autonomous agencies (the National Nuclear Security Administration and the Energy Information Administration). Although the Department of Health and Human Services reported having a formal memo in place outlining the CIO’s authority and ability to review the entire IT portfolio, it also noted that the CIO had limited influence and ability to recommend changes to it. The Department of State reported that its CIO currently has authority over just 40 percent of IT investments within the department. The National Aeronautics and Space Administration reported that its CIO does not have authority to review and approve the entire agency IT portfolio. The Office of Personnel Management reported that the CIO advises the Director, who approves the IT portfolio, but this role is not explicitly defined. The U.S. Agency for International Development reported that the CIO’s authority is limited to the portfolio that is executed within the office of the CIO.
It is important to note that OMB’s survey did not specifically require agencies to disclose limitations their CIOs might have in their ability to exercise the authorities and responsibilities provided by law and OMB guidance. Thus it is not clear whether all those who have such limitations reported them or whether those who reported limitations disclosed all of them. We recently reported that while federal law provides CIOs with adequate authority to manage IT for their agencies, limitations exist that impede their ability to exercise this authority. We noted that OMB’s memo on CIO authorities was a positive step in reaffirming the importance of the role of CIOs in improving agency IT management, but did not require agencies to measure and report the progress of their CIOs in carrying out these responsibilities. Consequently, we recommended that the Director of OMB establish deadlines and metrics that require agencies to demonstrate the extent to which their CIOs are exercising the authorities and responsibilities provided by law and OMB’s guidance. In response, OMB stated that it would ask agencies to report on the implementation of the memo. The high-level survey responses regarding CIO authority at agencies indicate that several CIOs still do not exercise the authority needed to review and approve the entire IT portfolio, consistent with OMB guidance. Although OMB has issued guidance and required agencies to report on actions taken to implement it, this has not been sufficient to ensure that agency COOs address the issue of CIO authority at their respective agencies. As a result, agencies are hindered in addressing certain responsibilities set out in the Clinger-Cohen Act of 1996, which established the position of CIO to advise and assist agency heads in managing IT investments. Until the Director of OMB and the Federal CIO require agencies to fully disclose limitations their CIOs may have in exercising the authorities and responsibilities provided by law and OMB’s guidance, OMB may lack crucial information needed to understand and address the factors that could prevent agencies from successfully implementing the PortfolioStat initiative. Investments reflected in agencies’ enterprise architecture: Twenty-one of the 26 agencies reported that 100 percent of their IT investments are reflected in their agency’s EA, while the remaining 5 agencies reported less than 100 percent: Commerce (90 percent), Justice (97 percent), State (40 percent), National Aeronautics and Space Administration (17 percent), and U.S. Agency for International Development (75 percent). According to OMB guidance, agencies must support an architecture with a complete inventory of agency information resources, including stakeholders and customers, equipment, systems, services, and funds devoted to information resources management and IT, at an appropriate level of detail. Until these agencies’ enterprise architectures reflect 100 percent of their IT investments, they will be limited in their ability to use this tool as a mechanism to identify low-value, duplicative, or wasteful investments. TechStat process: Twenty-one of the 26 agencies reported that less than 50 percent of major and non-major investments had gone through the TechStat process and 1 reported that more than 50 percent of its investments had gone through the process.
As we have previously reported, TechStat accountability sessions have the value of focusing management attention on troubled projects and establishing clear action items to turn the projects around or terminate them. In addition, the TechStat model is consistent with government and industry best practices for overseeing IT investments, including our own guidance on IT investment management processes. Consistent with these survey responses, in June 2013 we reported that the number of TechStat sessions held to date was relatively small compared to the current number of medium- and high-risk IT investments at federal agencies. Accordingly, we recommended that OMB require agencies to conduct TechStat sessions on certain IT investments, depending on their risk level. Holding TechStat sessions will help strengthen overall IT governance and oversight and will help agencies to better manage their IT portfolio and reduce waste. OMB generally concurred with our recommendation and stated that it was taking steps to address it. Commodity IT Baselines Were Not All Complete As part of the baseline data-gathering phase, each of the 26 agencies was also required to develop a comprehensive commodity IT baseline including information on each of the 13 types of commodity IT. Among other things, they were to include the fiscal year 2011 obligations incurred for commodity IT services and the number of systems providing these services. The 26 agencies reported that they obligated approximately $13.5 billion in fiscal year 2011 for commodity IT, with the majority of these obligations (about $8.1 billion) for investments related to IT Infrastructure. Agencies also classified approximately 71.2 percent of the commodity IT systems identified (about 1,937 of the 2,721 reported) as enterprise IT systems. Further, as illustrated in figure 1, of the total systems reported, most were related to IT security, whereas the fewest systems were related to grants-related transfer to state and local governments. When collecting data, it is important to have assurance that the data are accurate. We have previously reported on the need for agencies, when providing information to OMB, to explain the procedures used to verify their data. Specifically, agencies should ensure that reported data are sufficiently complete, accurate, and consistent, and also identify any significant data limitations. Explaining the limitations of information can provide a context for understanding and assessing the challenges agencies face in gathering, processing, and analyzing needed data. We have also reiterated the importance of providing OMB with complete and accurate data and the possible negative impact of that data being missing or incomplete. While all 26 agencies developed commodity IT baselines, these baselines were not all complete. Specifically, 12 agencies (the Departments of Agriculture, Commerce, Defense, Housing and Urban Development, Labor, and the Interior; the Environmental Protection Agency, Nuclear Regulatory Commission, Office of Personnel Management, Small Business Administration, Social Security Administration, and U.S. Agency for International Development) could not ensure the completeness of their commodity IT baseline, either because they did not identify a process for this or faced challenges in collecting complete information.
These agencies reported they were unable to ensure the completeness of their information for a range of reasons, including that they do not typically capture the required data at the level of detail required by OMB, that they used service contracts which do not allow visibility into specifics on the commodity IT inventory, that they lacked visibility into bureaus’ commodity IT information, and that OMB’s time frames did not allow for verification of information collected from lower-level units of the organization. Until agencies develop a complete commodity IT baseline, they may not have sufficient information to identify further consolidation opportunities. While it is important that reported data are sufficiently complete, accurate, and consistent, OMB did not require agencies to verify their data or disclose any limitations on the data provided and does not plan to collect this information as agencies provide updated information in quarterly reporting. Until OMB requires agencies to verify their data and disclose any limitations in integrated data collection quarterly reporting, it may lack information it needs to more effectively oversee agencies’ investment in commodity IT and identify PortfolioStat cost savings. Key Stakeholders Generally Attended PortfolioStat Sessions All 26 agencies held a PortfolioStat session in 2012, consistent with OMB’s requirement. In addition, the agencies noted that the agency CIO, Chief Administrative Officer, Chief Financial Officer, and COO—the key stakeholders identified in OMB memorandum 12-10—in many instances attended this session. In the instances where key stakeholders did not attend, authorized representatives of those stakeholders generally attended in their place, according to agency officials. Involvement from key stakeholders in agencies’ PortfolioStat sessions is critical to ensuring agencies are maximizing their efforts to successfully implement PortfolioStat. Agencies’ Action Plans Did Not Always Address All Required Elements Agencies were required by OMB to complete a final action plan that addressed eight specific elements: (1) describe plans to consolidate authority over commodity IT spending under the agency CIO; (2) establish specific targets and deadlines for commodity IT spending reductions; (3) outline plans to migrate at least two commodity IT areas to shared services by December 31, 2012; (4) target duplicative systems or contracts that support common business functions for consolidation; (5) illustrate how investments within the IT portfolio align with the agency’s mission and business functions; (6) establish criteria for identifying wasteful, “low-value,” or duplicative investments; (7) establish a process to identify these potential investments and a schedule for eliminating them from the portfolio; and (8) improve governance and program management using best practices and, where possible, benchmarks. All 26 agencies completed an action plan as required by OMB, but the extent to which they addressed the required items varied. Specifically, 18 agencies fully addressed at least six of the eight required elements—with Commerce, Education, General Services Administration, and Social Security Administration addressing all of them—and the remaining 8 agencies fully addressed five elements or fewer and either partially addressed or did not address others.
The consolidation of commodity IT spending under the agency CIO and establishment of criteria for identifying low-value, wasteful, and duplicative investments were the elements that were addressed the least (12 and 9 agencies respectively); and the alignment of investments to the agency’s mission and improvement of governance and program management were addressed by all agencies. Table 2 shows the extent to which the 26 agencies addressed the required elements in their action plan. Until agencies address the items that were required in the PortfolioStat action plan in future OMB reporting, they will not be in a position to fully realize the intended benefits of the PortfolioStat initiative. Memorandum 12-10 required the 26 agencies to complete the migration of the two commodity IT areas mentioned in their action plan to shared services by December 31, 2012 (see app. II for the list of migration efforts by agency). However, 13 of the 26 agencies (the Departments of Housing and Urban Development, the Interior, Labor, State, Transportation and Veterans Affairs; the Environmental Protection Agency, General Services Administration, National Aeronautics and Space Administration, Office of Personnel Management, Social Security Administration, U.S. Agency for International Development, and the U.S. Army Corps of Engineers) reported that they still had not completed the migration of these areas as of August 2013. These agencies reported several reasons for this, including delays in establishing contracts with vendors due to the current budget situation, and delays due to technical challenges. While OMB has stated that this initial requirement to migrate two systems was to initiate consolidation activities at the agencies, and not necessarily an action which it was intending to track for compliance, tracking the progress of such efforts would help to ensure accountability for agencies’ results and the continued progress of PortfolioStat. OMB’s 2013 PortfolioStat memo includes a requirement for agencies to report quarterly on the status of consolidation efforts and the actual and planned cost savings and/or avoidances achieved or expected, but the guidance does not specify that agencies should report on the status of the two migration efforts initiated in 2012. Until agencies report on the progress in consolidating the two commodity IT areas to shared services and OMB requires them to report on the status of these two efforts in the integrated data collection quarterly reporting, agencies will be held less accountable for the results of all their PortfolioStat efforts. Lessons Learned Highlight Importance of CIO Authority and Value Engineering Memorandum 12-10 required agencies to document and catalogue successes, challenges, and lessons learned from the PortfolioStat process into a document which was to be submitted to OMB by February 1, 2013. Of the 26 agencies required to implement the PortfolioStat process, 23 agencies submitted lessons learned documentation. The 3 agencies that did not submit lessons learned in the format requested by OMB indicated that they did not submit this documentation because lessons learned had already been included in their final action plans. Several agencies identified lessons learned related to the CIO’s authority and the use of an IT valuation model (12 and 15, respectively). 
More specifically, 8 agencies noted that OMB’s requirements for a plan to consolidate commodity IT spending under the agency CIO and to identify the extent to which the CIO possesses explicit agency authority to review and approve the entire agency IT portfolio had enabled them to improve the management of their commodity IT and portfolio. Further, 4 agencies stated that the requirements regarding CIO authority would help them identify opportunities to achieve efficiencies and reduce duplication or migrate areas to a shared service. In addition, 1 agency encouraged OMB to continue to provide guidance and issue directives related to CIO authority and empowerment. With respect to the agencies’ use of an IT valuation model, 8 agencies generally recognized the value of using such a model; however, they identified challenges in determining the appropriate model and the need to continue to refine processes and analyze the supporting cost data. Two agencies also stated that OMB should assist in facilitating and sharing IT valuation model best practices and other benchmarks among federal agencies. More specifically, 1 agency stated that OMB should assist in the development of a federal IT valuation model, and another agency suggested that best practices regarding IT valuation models should include those from private sector institutions. As part of the 2013 OMB memorandum on PortfolioStat, OMB generally identified the same broad themes from the lessons learned documents that agencies reported. OMB has also established a page related to the 2013 PortfolioStat implementation. Opportunities and Estimated Cost Savings Are Underreported In separate guidance supporting the PortfolioStat initiative, OMB asked agencies to report planned cost savings and avoidance associated with their consolidation and shared service initiatives through fiscal year 2015. While agencies included consolidation efforts for which they had cost savings numbers, six agencies also reported planned migration or consolidation efforts for which they had incomplete information on cost savings and avoidance. According to OMB, agencies reported a total of 98 consolidation opportunities and $2.53 billion in planned cost savings and avoidance for fiscal years 2013 through 2015. However, OMB’s overall estimate of the number of opportunities and cost savings is underreported. Among other things, it does not include the Departments of Defense and Justice because these agencies did not report their plans in the template OMB was using to compile its overall estimate. While OMB acknowledged that the $2.53 billion in planned cost savings and avoidance was underreported when it issued the estimate, it did not qualify the figure quoted. Identifying any limitations or qualifications to reported figures is important in order to provide a more complete understanding of the information presented. Until OMB discloses any limitations or qualifications to the data it reports on agencies’ consolidation efforts and associated savings and avoidance, the public and other stakeholders may lack crucial information needed to understand the current status of PortfolioStat and agency progress in meeting the goals of the initiative. Our analysis of data collected from the 26 agencies shows that they are reporting 204 opportunities and at least $5.8 billion in savings through fiscal year 2015, at least $3.3 billion more than the amount initially reported by OMB.
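The gap between the agency-reported totals and OMB's published figure is simple arithmetic; the short Python sketch below, using the rounded figures cited above, is included only to make the comparison explicit and is not GAO's actual analysis.

```python
# Figures cited above, rounded (savings in billions of dollars).
omb_reported = {"opportunities": 98, "savings_billions": 2.53}
agency_reported = {"opportunities": 204, "savings_billions": 5.8}

savings_gap = agency_reported["savings_billions"] - omb_reported["savings_billions"]
opportunity_gap = agency_reported["opportunities"] - omb_reported["opportunities"]

# Prints roughly $3.3 billion and 106 opportunities not reflected in OMB's estimate.
print(f"Savings not reflected in OMB's estimate: at least ${savings_gap:.1f} billion")
print(f"Additional consolidation opportunities: {opportunity_gap}")
```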
See table 3 for an overview of the number of opportunities and reported cost savings and avoidance by agency. See appendix III for a detailed list of opportunities and associated savings by agency. Selected Departments’ Plans Identified Billions in Potential Cost Savings Using Various Processes, but Support for These Savings Is Uneven In their portfolio improvement plans, the five agencies selected for our review—the Departments of Agriculture, Defense, the Interior, the Treasury, and Veterans Affairs—identified a total of 52 initiatives expected to achieve at least $3.7 billion in potential cost savings or avoidance through fiscal year 2015, as well as several improvements of processes for managing their IT portfolios. To identify these opportunities, the agencies used several processes and tools, including, to varying degrees, their EA and valuation model, as recommended by OMB in its PortfolioStat guidance. More consistently using the processes recommended by OMB could assist agencies in identifying further opportunities for consolidation and shared services. In addition, four agencies did not always provide support for their estimated savings or show how the support they did provide linked to those estimates. Better support for the estimated savings would increase the likelihood that these savings will be achieved. Department of Agriculture The Department of Agriculture (Agriculture) identified two contract consolidations—the Cellular Phone Contract Consolidation and the Enterprise Contracts for Standardized Security Products and Services—as the commodity IT investments it planned to consolidate by December 2012. In addition to these two efforts, the department identified three efforts that it reported to OMB would yield cost savings or avoidance between fiscal years 2013 and 2015 (IT Infrastructure Consolidation/Enterprise Data Center Consolidation, Enterprise IT Systems: Tier 1 Helpdesk Consolidation, and Enterprise IT Systems: Geospatial Consolidation Initiative). Agriculture also identified several other opportunities for which it had yet to identify associated cost savings or avoidance. According to officials from the Office of the CIO, several of the consolidation opportunities were identified prior to the PortfolioStat initiative being launched, as part of the Secretary’s initiative to streamline administrative processes. The department also identified several process improvement efforts which, while not all specific to commodity IT, would help better manage these types of investments. Examples of the process improvement efforts include (1) recommitting to internal TechStats as a tool for evaluating all IT investments, (2) acquiring a portfolio management tool, and (3) implementing a department-wide portfolio management program that reviews major and non-major investments on a continual basis. Agriculture officials stated that they used their EA process to identify consolidation and shared service opportunities, and that the department checks for architectural compliance throughout its governance process. For example, Agriculture’s Executive IT Investment Review Board is to ensure that the department integrates information systems investment decisions with its EA and that the department’s decisions comply with the EA. In addition, Agriculture’s Information Priorities and Investment Council is responsible for reviews of architectural compliance and for using the EA as a framework for investment decision making.
These officials also stated that while the department determines the value of its IT investments through evaluation, analyses, prioritization, and scoring, it does not have a formal, documented valuation model for doing so. Having such a model would enhance the department’s ability to identify additional opportunities to consolidate or eliminate low-value, duplicative, or wasteful investments. The department also uses other processes to help manage its IT investments. For example, Agriculture has an Executive IT Investment Review Board that is to use a scoring process to ensure the alignment of investments with strategic goals and objectives. Further, the department noted the establishment of several governance boards and processes, such as the EA, IT acquisition approval request, and capital planning and investment control processes, to ensure such alignment. Agriculture anticipates that its efforts will generate about $221 million in cost savings or avoidance for fiscal years 2012 through 2015 and provided varying degrees of support for these estimates. Specifically, for two of the four initiatives for which we requested support (Cellular Phone Contract Consolidation and the IT Infrastructure Consolidation/Enterprise Data Center Consolidation), it provided supporting calculations for its cost savings and avoidance estimates. However, these estimates did not match those provided to OMB for the 2012 PortfolioStat process. For the third initiative, Geospatial Consolidation, Agriculture did not provide support for the estimate reported to OMB as part of the 2012 PortfolioStat process; however, it noted that its current estimate is $58.76 million less than originally reported to OMB. For the fourth, a department official from the office of the Chief Information Officer said no savings were being anticipated. Documentation received from the department noted that this effort was not a cost savings initiative but a way to meet several programmatic needs: to streamline the work required for agencies procuring security services, to improve the quality and repeatability of the security products across the agencies, and to establish a process flow that ensured that the department’s security requirements were included in any delivered products. An Agriculture official noted challenges with calculating cost savings or avoidance but did not identify any plans to improve its cost estimating processes. A lack of support for its current estimates may make it difficult for Agriculture to realize these savings and for OMB and other stakeholders to accurately gauge its performance. Department of Defense The Department of Defense (Defense) identified its Unclassified Information Sharing Service/All Partner Network and the General Fund Enterprise Business System as the two commodity opportunities that would be consolidated by December 2012. In addition to these 2 efforts, Defense identified 24 other efforts that would be undertaken from 2012 to 2015 to consolidate commodity IT services. These consolidation efforts were mostly in the areas of Enterprise IT and IT infrastructure, though the department also identified a significant effort to move its major components to enterprise-wide business systems. Defense also identified several process improvements, including restructuring its IT governance boards, establishing a department IT Commodity Council, and optimizing IT services purchasing.
Defense began its effort to consolidate and improve IT services in 2010 at the request of the Secretary, prior to the launch of PortfolioStat. The Defense CIO developed a 10-Point Plan for IT Modernization focused on consolidating infrastructure and streamlining processes in several commodity IT areas, including consolidating enterprise networks, delivering a department cloud environment, standardizing IT platforms, and taking an enterprise approach for procurement of common IT hardware and software. Each of the component CIOs, in coordination with the Defense CIO, was tasked with developing plans to achieve these efforts within its own component. As part of this process, Defense utilized its EA and valuation model to determine the list of IT improvements because, according to officials from the Office of the CIO, these processes were incorporated into its existing requirements, acquisition, and planning, programming, budget, and execution processes. In particular, Defense has taken a federated approach for developing and managing its EA that is based on enterprise-level guidance, capability areas, and component architectures and is currently in the process of drafting a new EA program management plan for improving effectiveness and interoperability across missions and infrastructure. In addition, according to a Defense official, the department has done extensive work related to implementing a valuation model, and its value engineering process for IT investments has been integrated into the department’s acquisition process. Defense also has a department website devoted to providing guidance on its valuation model. Using the EA and valuation model increases the likelihood that the department will identify a comprehensive list of opportunities for consolidation. Defense’s CIO estimates that the consolidation efforts will save between $3.2 billion and $5.2 billion through fiscal year 2015, and result in efficiencies between $1.3 billion and $2.2 billion per year beginning in fiscal year 2016. Defense provided its most recent estimates for the four initiatives for which we requested support (Unclassified Information Sharing Service/All Partner Access Network, data center consolidation, enterprise software purchasing, and General Fund Enterprise Business System) but was unable to show how these estimates were calculated. For the first initiative, the issue paper showing the calculations of estimated savings was reportedly classified and we therefore decided not to obtain a copy. For the other three initiatives, an official from the Office of the CIO stated that there was no support available at the department level. Each component reportedly used its existing planning, programming, budget and execution process, and associated systems to determine an overall budget and then identified estimated cost savings or avoidance related to the commodity initiatives, which were then aggregated by the department. The official also reported that, because the department’s accounting systems do not collect information at the level of granularity required for reporting on the PortfolioStat initiative (e.g., by commodity IT type), it is difficult to show how numbers were calculated or how they changed over time. In addition, because component-level systems do not collect commodity IT data, it had generally been a challenge for the department to determine cost savings for commodity IT as OMB required.
While we recognize the challenges the department faces in obtaining the support for consolidation opportunities identified by its components, obtaining it is critical to ensuring that planned savings and cost avoidance are realized. This is important considering the size of Defense’s projected savings. Department of the Interior The Department of the Interior (Interior) identified two commodity IT investments in its action plan and other supporting documentation—Financial and Business Management System (Deployments 7&8) and Enterprise Forms System—that it planned to consolidate by December 2012. For fiscal years 2013 to 2015, Interior identified four additional consolidation opportunities—cloud e-mail and collaboration services, enterprise eArchive system, circuit consolidation, and the Networx telecommunications contract. Interior also identified its “IT Transformation” initiative as a source of additional savings beyond 2015. This initiative is one of the management priorities which, according to officials, Interior has been focusing on to drive efficiency, reduce costs, and improve services. It is intended to streamline processes within the department, including a single e-mail system for the department, telecommunications, hosting services, and an enterprise service desk (help desk). Interior has also identified efforts to improve processes for managing its portfolio. Specifically, it is working to fully implement its EA and to align the IT portfolio more closely with the department’s business priorities and performance goals. In addition, in fiscal year 2010, Interior centralized authority for the agency’s IT—which had previously been delegated to its offices and bureaus—under the CIO. This consolidation gave the CIO responsibilities for improving the operating efficiencies of the organizational sub-components and Interior as a whole. Interior is also establishing several new IT investment governance boards to make recommendations to the CIO for review and approval. To identify its consolidation opportunities, Interior officials from the Office of the CIO stated they used their EA. Specifically, the department established an EA team and a performance-driven prioritization framework to measure its IT Transformation efforts. The EA team takes a “ruthless prioritization” approach to align the department’s priorities with the IT Transformation goals. The priorities are evaluated by IT Transformation goals and expected outcomes, and supported by successive versions of architectures, plans, and solutions. In addition to using the EA, officials from the Office of the CIO stated that the department leveraged a set of investment processes to identify wasteful, duplicative, and low-value investments, which includes the use of road maps it has developed for different functional areas. Collectively, these processes are integrated into the department’s capital planning and investment control process in order to ensure that the portfolio of IT investments delivers the desired value to the organization. Interior officials from the Office of the CIO also reported using the department’s IT investment valuation process, which it has been maturing while also balancing changes to its IT governance process. More specifically, the department uses the Value Measuring Methodology, recommended by the federal CIO Council, to score its bureaus’ budget requests. Based on these assessments, a risk-adjusted value score is assigned to each major investment.
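This report does not detail how the Value Measuring Methodology computes its scores; the sketch below is a generic, hypothetical illustration of one way a risk-adjusted value score could be derived from weighted value criteria and an estimated probability of success, with all criteria, weights, and scores invented for the example.

```python
# Hypothetical criteria scores (0-100) and weights for one major investment.
value_scores = {
    "mission_alignment": 85,
    "customer_benefit": 70,
    "operational_efficiency": 60,
}
weights = {
    "mission_alignment": 0.5,
    "customer_benefit": 0.3,
    "operational_efficiency": 0.2,
}

# Weighted value score, then adjusted by an estimated probability of success,
# which is one way the "risk" view described above could enter the calculation.
raw_value = sum(value_scores[criterion] * weights[criterion] for criterion in value_scores)
probability_of_success = 0.8
risk_adjusted_value = raw_value * probability_of_success

print(f"Raw value score: {raw_value:.1f}")                      # 75.5
print(f"Risk-adjusted value score: {risk_adjusted_value:.1f}")  # 60.4
```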
These scores are used to identify funding levels across Interior’s IT portfolio, with risk being viewed from the standpoint of the “probability of success” for the investment. By making use of the EA and investment valuation process as recommended by OMB, Interior has enhanced its ability to identify opportunities to consolidate or eliminate duplicative, low-value, and wasteful investments. Interior anticipates its PortfolioStat efforts will generate approximately $61.9 million in savings and cost avoidance through fiscal year 2015 and provided adequate support for these estimates. Specifically, for the Financial and Business Management System, Interior provided calculations for the savings for each year from fiscal year 2012 to fiscal year 2016. For the other three initiatives—Electronic Forms System, Networx Telecommunications, and Cloud E-mail and Collaboration Services—Interior provided estimated savings for fiscal year 2013, the first year in which savings are anticipated, which were based on the difference between the fiscal year 2012 baseline and lower costs that had been achieved through the department’s strategic sourcing initiative, and explained that these savings were expected to be realized each year after—through fiscal year 2015. Having well-supported estimates increases the likelihood that Interior will realize its planned savings and provides OMB and other stakeholders with greater visibility into the department’s performance. Department of the Treasury The Department of the Treasury (Treasury) identified two new shared service opportunities—the Invoice Processing Platform and the DoNotPay Business Center—as the two commodity IT investments it planned to consolidate by December 2012; Treasury also reported to OMB that these efforts would yield cost savings and avoidance for fiscal years 2013 through 2015. In addition, Treasury identified six consolidation opportunities it anticipated would generate savings between fiscal years 2012 and 2014, and two others which did not have associated cost savings. These consolidation opportunities include those classified as Business Systems, IT Infrastructure, and Enterprise IT. Treasury also described several process improvement efforts which, while not specific to commodity IT, will help better manage these types of investments. Examples of the process improvement efforts include establishing criteria for robust reviews of investments, refining the department’s valuation and risk models and incorporating these models into the business case template at Treasury’s departmental offices, and launching an IT cost model working group to refine Treasury’s IT cost model. Treasury has also proposed additional steps in its Departmental Offices’ IT governance process and investment life cycle to evaluate the alignment of investments with its strategic goals and objectives. With respect to EA, in July 2013, Treasury established a Treasury Departmental Technical Advisory Working Group. According to its charter, the working group will be responsible for, among other things, ensuring the alignment of programs and projects with Treasury’s existing technologies or EA. More specifically, all new and existing investments are expected to be reviewed and approved by the working group to ensure such compliance. Treasury officials from the Office of the CIO stated they had not used the EA or a valuation model to identify their consolidation opportunities. 
In addition, Treasury has yet to develop a valuation model for assessing the value of its IT investments. According to officials, Treasury’s efforts to develop a valuation model are 30 to 40 percent complete. Further, while it has efforts underway within its Departmental Offices to develop models for assessing value, cost, and risk, Treasury has not documented its value engineering process and associated models. According to the officials, the department’s consolidation opportunities were identified through innovative ideas from the bureaus that were driven by current budget constraints. While the identification of these opportunities is not centrally managed or controlled, Treasury reported that it is currently developing a systematic process for promoting innovative ideas from its bureaus. According to Treasury, it uses other processes to help manage IT investments, including a process for evaluating the alignment of investments with its strategic goals and objectives via its investment review boards at both the department-wide and departmental office levels. Further, Treasury has noted that investments’ alignment with the mission is considered during the annual planning cycle (for existing and new investments), and during individual investment/project reviews (for every major investment). While Treasury identified consolidation and shared service opportunities through innovative ideas from its bureaus, utilizing the EA and valuation model could assist Treasury in identifying additional opportunities for cost savings. Treasury anticipates it will generate $56.49 million in savings from fiscal years 2012 through 2014 and provided varying degrees of support for these estimates. Specifically, of the three initiatives for which we reviewed supporting documentation, one (the DoNotPay Business Center) had supporting assumptions and calculations; however, these calculations support earlier estimates Treasury reported for this initiative, and not its more recent estimates. Treasury did not provide documentation to support the cost estimates for the two remaining efforts (Fiscal IT Data Center Consolidation and Business Process Management Status). Without support for its estimates, Treasury may be challenged in realizing planned savings, and OMB and other stakeholders will be hindered in evaluating its progress. Department of Veterans Affairs The Department of Veterans Affairs (VA) identified its VA Server Virtualization and Elimination of Dedicated Fax Servers as the two commodity IT investments it planned to consolidate by December 2012. In its PortfolioStat submission to OMB, VA identified five additional consolidation opportunities it anticipated would generate savings between fiscal years 2013 and 2015 (enterprise license agreement, standardization of spend planning and consolidation of contract, voice over internet protocol, vista data feeds, and one CPU policy). VA also described several process improvement efforts in its action plan that, while not specific to commodity IT, are intended to help better manage these types of investments. These improvement efforts include updating its EA process and establishing a Project Management Accountability System that supports project planning and management control and responsibilities for IT investments. VA officials from the Office of the CIO stated that they did not use their EA (which the department is still maturing) or their valuation model to identify their consolidation opportunities.
Instead, they stated that VA uses its Ruthless Reduction Task Force as the main mechanism for identifying IT commodity consolidation opportunities. The task force’s function is to ensure redundant functionality is reduced or eliminated and to recommend the reallocation of funds from low-value projects to higher priorities. Through its operational analysis process, it looks for excessive expenditures to determine whether there are redundancies and therefore opportunities to consolidate into a larger contract or service. While the task force is the main mechanism used to identify consolidation opportunities, VA officials from the Office of the CIO stated that the department uses other OMB-recommended processes to help it identify and prioritize other IT investments. For example, VA has documented processes for evaluating the alignment of investments with its strategic goals and objectives via its multiyear planning process and its senior investment review boards. Specifically, the department’s multiyear planning process provides a framework for identifying near- and long-term priorities and opportunities for divestiture, reduction, re-investments, and expansion of IT priorities and capabilities, along with associated timetables. To support this and other planning processes, VA has established several IT investment governance boards that are intended to provide a framework for investment decision making and accountability to ensure IT initiatives meet the department’s strategic and business objectives in an effective manner. While VA has identified many opportunities to consolidate commodity IT investments and move to shared services through its Ruthless Reduction Task Force and other processes, making use of its EA and valuation model could help identify additional opportunities. VA estimates that the consolidation opportunities it reported to OMB will generate about $196 million in savings from fiscal years 2013 through 2015. However, we could not verify the support for some of the estimates. In particular, for two of the four initiatives for which we requested support (Server Virtualization and Eliminate Dedicated Fax Servers Consolidation), VA provided supporting calculations for its cost savings and avoidance estimates. However, these estimates did not match those provided to OMB for the 2012 PortfolioStat process. For the third initiative, Renegotiate Microsoft Enterprise License Agreement, VA did not provide detailed support but instead provided a written explanation for an overall cost avoidance figure of $161 million that was agreed to by VA’s Deputy Chief Information Officer for Architecture, Strategy and Design and VA’s Deputy Assistant Secretary for Information Technology Management and Chief Financial Officer for the Office of Information Technology. For the fourth initiative (one CPU policy), VA stated that the initiative was no longer a stand-alone project but had been subsumed by the Field Office Mobile Workers and Telework Support Agreement and that the economic justification for this consolidation effort had not yet been completed. Officials reported that in general the lack of a strong cost estimation process is the main challenge the department faced in estimating cost savings, even though VA’s Ruthless Reduction Task Force does have a process in place for performing cost estimates for the initiatives that the task force reviews.
VA officials stated that they plan to raise the issue of improving the department's IT cost estimation process with VA's executive leadership team but did not provide a time frame for doing so. For the near term, VA recently hired an operations research analyst to assist IT staff who lack experience with cost and savings estimation activities and plans to hire two more analysts. Without support for its estimates, VA will have less assurance that it can realize planned cost savings and avoidance, and OMB and stakeholders will be hindered in evaluating its progress.

OMB's Plans Outline PortfolioStat Improvements but Do Not Address All Issues with Agencies' Efforts

OMB has outlined several planned improvements to the PortfolioStat process in a memo issued in March 2013 that should help strengthen federal IT portfolio management and address key issues we have identified with agencies' efforts to implement the initiative. In particular, OMB has changed its reporting requirements, requiring agencies to report on progress made on a quarterly basis. In addition, agencies will also be held accountable for their portfolio management as part of annual PortfolioStat sessions. However, certain OMB efforts could be strengthened to improve the PortfolioStat process and ensure that agencies achieve identified cost savings, including addressing issues related to existing CIO authority at federal agencies and publicly reporting on agency-provided data.

OMB's plans identify a number of improvements that should help strengthen IT portfolio management and address key issues we have identified:

Agency reporting on PortfolioStat progress: OMB's memorandum has consolidated previously collected IT plans, reports, and data calls into three primary collection channels: an information resources management strategic plan, an enterprise road map, and an integrated data collection channel. As part of this reporting requirement, agencies will be required to provide updates on their progress in meeting key OMB requirements related to portfolio management best practices, which address issues identified in this report. Agencies must describe how their investment review boards coordinate among investment decisions, portfolio management, EA, procurement, and software development methodologies to ensure that IT solutions meet business requirements, as well as identify areas of waste and duplication wherever consolidation is possible. Agencies are to describe the valuation methodology used in their governance process to comparatively evaluate investments, including what criteria and areas are assessed, to ensure greater consistency and rigor in the process of selecting, controlling, and evaluating investments an agency decides to fund, de-fund, or terminate. Agencies must report their actual and planned cost savings and avoidances, as well as other metrics, achieved or expected through the implementation of efforts such as agency migration to shared services and cloud solutions, the consolidation of commodity IT, and savings achieved through data center consolidation. In addition, agencies are to describe their plans to re-invest savings resulting from consolidations of commodity IT resources (including data centers). Agencies will also now be required to report the status of their progress in implementing PortfolioStat on a quarterly basis.
Agency integrated data collections were first required to be submitted in May 2013 and will be updated quarterly beginning in August 2013, with subsequent updates due on the last day of November and February of each fiscal year. Requiring agencies to provide consolidated reports on their progress in meeting key initiatives should help OMB to better manage these initiatives.

Holding agencies accountable for portfolio management in PortfolioStat sessions: Moving forward, the PortfolioStat sessions held with agency stakeholders and OMB officials are intended to involve discussions of agency efforts related to several ongoing initiatives and their plans to implement key OMB guidance, such as guidance on CIO authorities, in order to help agencies mature their management of IT resources. Specifically, OMB plans to use the documentation and data submitted by the agencies in May 2013 to determine the state of each agency's IT portfolio management, such as the use of an EA and valuation methodology, and to identify the areas it considers the most appropriate opportunities for agencies to innovate, optimize, and protect systems and data. Based on the session, OMB and the agency are expected to identify and agree on actionable next steps and specific time frames for the actions to be taken, which OMB intends to formalize and transmit in a memorandum to the agency within 2 weeks of the completed session, and no later than August 31, 2013. Upon receipt of the action item memorandum, agency PortfolioStat leads are to work with OMB to establish follow-up discussions as appropriate to track progress against action items identified. Deviation from the committed schedule will trigger a requirement for follow-up briefings by the agency to the Federal CIO no less frequently than quarterly, until corrective actions have been implemented or the action item is back on schedule. OMB's efforts to follow up with agencies on a regular basis are critical to ensuring the success of these efforts. We have previously reported that OMB-led TechStat sessions have enabled the government to improve or terminate IT investments that are experiencing performance problems by focusing management attention on troubled projects and establishing clear action items to turn the projects around or terminate them. By having similar sessions focusing on agency IT portfolios, OMB can hold agencies accountable for their ongoing initiatives to consolidate or eliminate duplicative investments and achieve significant cost savings.

Improving analytical capabilities: OMB expects to collect information from agencies as part of PortfolioStat and use a variety of analytical resources to evaluate the data provided, track agency progress each quarter, and determine whether there are any areas for improvement to the process. In addition, OMB plans to provide this information to Congress as part of the quarterly report it is required to submit to the Senate and House Appropriations Committees on savings achieved by OMB's government-wide IT reform efforts. Analyzing and reporting data on agencies' efforts to implement the PortfolioStat initiative will help OMB to provide more oversight of these efforts and hold agencies accountable for information reported in the quarterly reports.
Although OMB’s planned improvements should help strengthen the PortfolioStat initiative going forward, they do not address some of the shortcomings with efforts to implement the initiative identified in this report: Addressing issues with CIO authority: While OMB’s memorandum has indicated that agencies must now report on how their policies, procedures, and CIO authorities are consistent with OMB Memorandum 11-29, “Chief Information Officer Authorities,” as noted earlier, OMB’s prior guidance and reporting requirements have not been sufficient to address the implementation of CIO authority at all agencies. In addition, OMB’s 2013 PortfolioStat guidance does not establish deadlines or metrics for agencies to demonstrate the extent to which CIOs are exercising the authorities and responsibilities provided by the Clinger- Cohen Act and OMB guidance, which, as we have previously recommended, are needed to ensure accountability for acting on this issue, nor does it require them to disclose any limitations CIOs might have in their ability to exercise their authority. Until CIOs are able to exercise their full authority, they will be limited in their ability to implement PortfolioStat and other initiatives to improve IT management. Reporting on action plan items that were not addressed: In OMB’s 2013 memorandum, agencies are no longer required to submit separate commodity IT consolidation plans as in 2012 but are to identify the progress made in implementing portfolio improvements as part of the broader agency reporting requirement mentioned above. While OMB’s shift to requiring agencies to report on progress now is reasonable given the goals of PortfolioStat, it was based on the assumption that agencies would develop robust action plans as a foundation last year. However, as noted earlier, the submitted agency final action plans were incomplete in that they did not always address all the required elements. Going forward, it will be important for agencies to address the plan items required. In addition, until OMB requires agencies to report on the status of these items, it may not have assurance that these agencies’ plans for making portfolio improvements fully realize the benefits of the PortfolioStat initiative. Ensuring agencies’ commodity IT baselines are complete, and reporting on the status of 2012 migration efforts: OMB’s 2013 guidance does not require agencies to document how they verified their commodity IT baseline data or disclose any limitations of these data or to report on the completion of their two 2012 migration efforts. Without such requirements, it will be more difficult for OMB to hold agencies accountable for identifying and achieving potential cost savings. Publically reporting agency PortfolioStat data: Finally, we have previously reported that the public display of agencies’ data allows OMB, other oversight bodies, and the general public to hold the agencies accountable for results and progress. While OMB officials have stated that they intend to make agency-reported data and the best practices identified for the PortfolioStat effort publicly available, they have not yet decided specifically which information they will report. Until OMB publicly reports data agencies submit on their commodity IT consolidation efforts, including planned and actual cost savings, it will be more difficult for stakeholders, including Congress and the public, to monitor agencies’ progress and hold them accountable for reducing duplication and achieving cost savings. 
Conclusions OMB’s PortfolioStat initiative offers opportunities to save billions of dollars and improve the way in which agencies manage their portfolios. While agencies implemented key PortfolioStat requirements, including establishing a commodity IT baseline and documenting a final action plan to consolidate commodity IT, shortcomings in their implementation of these requirements could undermine the savings the PortfolioStat effort is expected to achieve. First, reported limitations in CIOs exercising authority over the IT portfolios at six of the agencies suggests that more needs to be done to empower CIOs to improve management and oversight of agency IT resources. Second, not including all IT investments in their EA or developing complete commodity IT baselines limits agencies’ ability to identify further opportunities for reducing wasteful, duplicative, or low-value investments. Third, not addressing key elements in action plans for implementing the PortfolioStat initiative increases the likelihood that agencies will not achieve all the intended benefits. Finally, following through on commitments to migrate or consolidate investments is critical to ensuring accountability for results. Regarding estimated savings and cost avoidance, the significant understatement—by at least $2.8 billion—of OMB’s reported figures highlights the importance of ensuring the accuracy of data and disclosing any limitations or qualifications on reported savings. The identification by five agencies—the Departments of Agriculture, Defense, the Interior, the Treasury, and Veterans Affairs—of 52 initiatives and more than $3.7 billion in potential cost savings or avoidance through fiscal year 2015 demonstrates the significant potential of portfolio improvements to yield ongoing benefits. Making greater use of their EA and valuation model to identify consolidation opportunities, as recommended by OMB, could assist agencies in identifying additional opportunities. In addition, better support for the estimates of cost savings associated with the opportunities identified would increase the likelihood that these savings will be achieved. OMB’s planned improvements to the PortfolioStat process outlined in its March 2013 guidance—such as streamlining agency reporting on progress in implementing the process and holding agencies accountable for these efforts in PortfolioStat sessions—should help the office provide better oversight and management of government-wide efforts to consolidate commodity IT. However, OMB’s plans do not address key issues identified in this report, which could strengthen the PortfolioStat process. In particular, addressing issues of CIO authority by working directly with agency leadership to establish time lines and metrics for implementing existing guidance, requiring agencies to report on the reliability of their commodity baseline data and the progress of all their consolidation efforts, and making data on agencies’ progress in consolidating commodity IT and achieving cost savings publicly available will be essential to PortfolioStat’s success in reducing duplication and maximizing the return on investment in federal IT. Recommendations for Executive Action To help ensure the success of PortfolioStat, we are making six recommendations to OMB. 
We recommend that the Director of the Office of Management and Budget and the Federal Chief Information Officer require agencies to fully disclose limitations their CIOs might have in exercising the authorities and responsibilities provided by law and OMB's guidance. Particular attention should be paid to the Departments of Health and Human Services and State; the National Aeronautics and Space Administration; the Office of Personnel Management; and the U.S. Agency for International Development, which reported specific limitations on their CIOs' authority.

In addition, we recommend that the Director of the Office of Management and Budget direct the Federal Chief Information Officer to
require that agencies (1) state what actions have been taken to ensure the completeness of their commodity IT baseline information and (2) identify any limitations with this information as part of integrated data collection quarterly reporting;
require agencies to report on the progress of their two consolidation efforts that were to be completed by December 2012 as part of the integrated data collection quarterly reporting;
disclose the limitations of any data reported (or disclose the parameters and assumptions of these data) on the agencies' consolidation efforts and associated savings and cost avoidance;
require that agencies report on efforts to address action plan items as part of future PortfolioStat reporting; and
improve transparency of and accountability for PortfolioStat by publicly disclosing planned and actual consolidation efforts and related cost savings by agency.

We are also making 58 recommendations to 24 of the 26 departments and agencies in our review to improve their implementation of PortfolioStat requirements. Appendix IV contains these recommendations.

Agency Comments and Our Evaluation

We provided a draft of this report to OMB and the 26 executive agencies in our review for comment and received responses from all 27. Of the 27, 12 agreed with our recommendations directed to them, 5 disagreed or partially disagreed with our recommendations directed to them, 4 provided additional clarifying information, and 6 (the Departments of Education, Labor, Transportation, and Treasury; the Small Business Administration; and the U.S. Agency for International Development) stated that they had no comments. Several agencies also provided technical comments, which we incorporated as appropriate. The agencies' comments and our responses are summarized below.

In e-mail comments from the Federal Chief Information Officer, OMB generally agreed with three of our recommendations and disagreed with three. Specifically, OMB agreed with the recommendation to require agencies to disclose limitations their CIOs might have in exercising the authorities and responsibilities provided by law and OMB guidance but stated that it had already addressed this issue as part of its fiscal year 2013 PortfolioStat process. According to OMB, its fiscal year 2013 PortfolioStat guidance required agencies to describe how their policies, procedures, and authorities implement CIO authorities, consistent with OMB Memorandum 11-29, as part of either the information resources management plan or enterprise roadmap they were instructed to submit. OMB stated that it reviewed and analyzed agencies' responses and discussed limitations to CIOs' authorities directly with agencies during the PortfolioStat sessions in cases where it determined that such limitations existed.
However, OMB did not provide documentation supporting its reviews or discussions with agencies. In addition, as we note in our report, requiring agencies to fully disclose limitations their CIOs may have in exercising the authorities and responsibilities provided by law and OMB guidance should provide OMB information crucial to understanding and addressing the factors that could prevent agencies from successfully implementing the PortfolioStat initiative. For these reasons, we are maintaining our recommendation.

OMB stated that it agreed with our recommendation to require that agencies (1) state what actions have been taken to ensure the completeness of their commodity IT baseline information and (2) identify any limitations with this information as part of the integrated data collection quarterly reporting. It acknowledged the value in ensuring the completeness and in understanding the limitations of agency-produced artifacts and stated that it would continue to dedicate resources to validating agency savings associated with federal IT reform efforts prior to presenting these savings to Congress. OMB also stated that it would modify its analytical process to cite these limitations when producing PortfolioStat reports in the future.

OMB generally agreed with the recommendation to require agencies to report on the progress of the two consolidation efforts they were to complete by December 2012 and stated that, to the extent feasible, it would dedicate resources to analyzing this information.

OMB disagreed with our recommendation to disclose the limitations of any data reported on the agencies' consolidation efforts and associated cost savings and avoidance, stating that it had disclosed limitations on data reported and citing three instances of these efforts. While we acknowledge that OMB reported limitations of data regarding consolidation efforts in these cases, the information reported did not provide stakeholders and the public with a complete understanding of the information presented. For example, OMB did not disclose that information from the Departments of Defense and Justice was not included in the consolidation estimates reported, which, considering the scope of Defense's efforts in this area (at least $3.2 billion), is a major gap. As noted in our report, OMB's disclosure of limitations of or qualifications to the data it reports would provide the public and other stakeholders with crucial information needed to understand the status of PortfolioStat and agency progress in meeting the goals of the initiative. Therefore, we stand by our recommendation.

OMB also disagreed with our recommendation to require agencies to report on efforts to address action plan elements as part of future OMB reporting, stating that it had found that 24 of 26 agencies had completed their plans. OMB further stated that it continuously follows up on the consolidation efforts identified in the plans and, where savings have been identified, reports this progress to Congress on a quarterly basis. However, our review of the 26 agency action plans found 26 instances where a required element (e.g., consolidation of commodity IT spending under the CIO) was not addressed and 26 instances where a required element was only partially addressed, an assessment with which agencies agreed.
As noted in our report, addressing all the required elements would better position agencies to fully realize the intended benefits of the PortfolioStat initiative, and they should therefore be held accountable for reporting on them as required in OMB memo M-12-10. Accordingly, we stand by our recommendation.

Finally, OMB disagreed with our recommendation to improve transparency and accountability for PortfolioStat by disclosing consolidation efforts and related cost savings by agency. Specifically, OMB stated that this recommendation does not adequately account for the work it currently performs to ensure accountability for and transparency of the process through its quarterly reporting of identified savings to Congress. It further stated that some details are deliberative or procurement sensitive and it would therefore not be appropriate to disclose them. However, while OMB currently reports realized savings by agency on a quarterly basis, these savings are not measured against planned savings. Doing so would greatly enhance Congress's insight into agencies' progress, would help hold them accountable for reducing duplication and achieving planned cost savings, and would not require reporting deliberative or procurement-sensitive information. Therefore, we stand by our recommendation.

In written comments, the U.S. Department of Agriculture concurred with the content of our report. The department's comments are reprinted in appendix V.

In written comments, the Department of Commerce concurred with our recommendations but disagreed with our statement that the CIO has explicit authority over only major IT investments. Commerce cited a June 21, 2012, departmental memo on IT portfolio management that it believes provides the CIO with explicit authority to review any IT investment, whether major or non-major. Our statement regarding the limitations on the CIO's authority was based on information reported by the department to OMB in May 2012 and confirmed with officials from the Commerce Office of the CIO during the course of our review. However, we agree that the June 2012 memo provides the CIO with explicit authority to review all IT investments. Accordingly, we have removed the original statement noting limitations from the report and also removed Commerce from the list of departments OMB should require to disclose CIO limitations. The Department of Commerce's comments are reprinted in appendix VI.

In its written response, the Department of Defense provided comments for both the department and the Army Corps of Engineers. It concurred with one of the three recommendations made to Defense, partially concurred with another, and disagreed with the third. Specifically, the department concurred with our recommendation to obtain support from the relevant component agencies for the estimated savings for fiscal years 2013 to 2015 for the data center consolidation, enterprise software purchasing, and General Fund Enterprise Business System initiatives. It partially concurred with our recommendation to develop a complete commodity IT baseline, stating that the department has efforts under way to further refine the baseline. Since these efforts have not yet been completed, we are maintaining our recommendation. The department did not concur with our recommendation to fully describe the consolidation of commodity IT spending under the CIO in future OMB reporting.
The department stated that it did not intend to follow OMB's guidance to consolidate commodity IT spending under the CIO because this approach would not work within the department's federated management process. However, our recommendation was not to implement OMB's guidance, but rather to address the element in the plan as required, by either describing the steps it will take to implement it or explaining why it will not or cannot implement it. DOD did neither of these and instead was silent on the subject. We therefore stand by our recommendation. The department concurred with both of the recommendations we made to the Army Corps of Engineers. The department's comments are reprinted in appendix VII.

In written comments, the Department of Energy concurred with our recommendation to fully describe PortfolioStat action plan elements in future OMB reporting and stated that the department was committed to increasing the CIO's oversight and authority for federal commodity IT investments. The department also noted that our statement that the "department has no direct authority over IT investments in two semi-autonomous agencies (the National Nuclear Security Administration and the Energy Information Administration)" should be clarified to say that it is the department CIO who does not have this authority. We found support for this clarification in documentation we had already received and therefore made it as requested. The department's comments are reprinted in appendix VIII.

In written comments, the Environmental Protection Agency generally agreed with two of the three recommendations we made and generally disagreed with the third. Specifically, the Environmental Protection Agency generally agreed with our recommendations to (1) fully describe three PortfolioStat action plan elements and (2) report on the agency's progress in consolidating the managed print services and strategic sourcing of end user computing to shared services as part of the OMB integrated data collection quarterly reporting until completed. The agency disagreed with our recommendation to develop a complete commodity IT baseline, stating that it had provided a complete baseline to OMB on August 31, 2012, and had also reported to us during our review that the information was current and complete at the time of submission. During our review, we found that the Environmental Protection Agency did not have a process in place to ensure the completeness of the information in the baseline. Without appropriate controls and processes in place to confirm this, the Environmental Protection Agency cannot be assured that its data are complete. We therefore stand by our recommendation. The Environmental Protection Agency's comments are reprinted in appendix IX.

In written comments, the General Services Administration agreed with our findings and recommendations and stated it would take action as appropriate. The agency's comments are reprinted in appendix X.

In written comments, the Department of Homeland Security disagreed with our recommendation to fully describe its efforts related to consolidating commodity IT spending under the CIO in future OMB reporting, stating that the department had already addressed this recommendation. Specifically, the department stated that it had included updated information on this topic in its fiscal year 2013 Information Resources Management Plan that was submitted to OMB in May 2013. We reviewed the Information Resources Management Plan and agree that it addresses our recommendation.
We therefore removed the recommendation from the report. The department’s comments are reprinted in appendix XI. In written comments, the Department of Housing and Urban Development concurred with our recommendations and stated it would provide more definitive information with timelines once the final report had been issued. The department’s comments are reprinted in appendix XII. In e-mail comments, the Department of the Interior’s GAO Audit Liaison stated that the department generally concurred with our findings and recommendations. However, the department recommended revising the criteria we used to assess whether agencies met the requirement to develop a commodity IT baseline (depicted in table 1) to reflect whether or not an agency had developed a baseline instead of whether that baseline was complete. The department stated that a validation was not being performed on how all agencies responded to the question and agencies that answered truthfully were being penalized for responding honestly. We recognize that agencies were not required to report on the completeness of the commodity IT baseline information they submitted to OMB; for this reason, we have recommended that OMB require agencies to state what actions have been taken to ensure the completeness of their commodity IT baseline information and identify any limitations with this information as part of the integrated data collection quarterly reporting. In e-mail comments, an official from the Department of Justice’s Audit Liaison Group stated that all references to the department were factually correct. In written comments, the National Aeronautics and Space Administration concurred with our recommendations and noted the agency will take actions to address them. The agency’s comments are reprinted in appendix XIII. In written comments, the National Archives and Records Administration concurred with our recommendation and stated that it would include updated or new descriptions of the elements of the PortfolioStat action plan in future OMB reporting. The agency’s comments are reprinted in appendix XIV. In written comments, the National Science Foundation stated that it generally agreed with our characterization of the agency’s PortfolioStat status and would update its PortfolioStat action plan as appropriate to more fully describe the two elements that we noted were not fully addressed. Regarding our recommendation to complete the consolidation of e-mail services to shared services, the agency stated that this effort was completed in August 2013. After reviewing additional documentation provided, we agree that the agency has met the requirement. We modified the report as appropriate, and removed the recommendation. The National Science Foundation’s comments are reprinted in appendix XV. In e-mail comments, the U.S. Nuclear Regulatory Commission’s GAO Audit Liaison stated that the agency was generally in agreement with our report. In written comments, the Office of Personnel Management concurred with our recommendations and noted that the agency will provide updated information on efforts to address them to OMB on November 30, 2013. The agency’s comments are reprinted in appendix XVI. In written comments, the Social Security Administration agreed with one recommendation and disagreed with the other. The agency disagreed with our recommendation to develop a complete commodity IT baseline, stating that it believed its commodity baseline data to be complete and accurate. 
However, our review found that the Social Security Administration did not have a process in place to ensure the completeness of the information in the baseline. Without appropriate controls and processes in place to confirm the completeness of data, the Social Security Administration cannot be assured that its data are complete. The agency also acknowledged that it needed to document a process for demonstrating the completeness of its baseline data. Consequently, we stand by our recommendation. The Social Security Administration's comments are reprinted in appendix XVII.

In written comments, the Department of State stated that it concurred with our report and would develop specific responses to each of the three recommendations we made once the report is published. However, related to our recommendation to complete the consolidation of the Foreign Affairs Network and content publishing and delivery services, the department stated that it has already consolidated more than two commodity IT areas per OMB Memorandum M-11-29. While we acknowledge that it has made efforts in this area, during our review the department changed several times what it considered to be the two commodity IT areas to be consolidated by December 2012 before stating that the two efforts were the Foreign Affairs Network and content publishing and delivery services. Based on this determination, we assessed the status of these two efforts and confirmed that neither had been completed as of August 2013. In addition, the department did not provide any documentation to support that it had consolidated more than two commodity IT areas. We therefore stand by our recommendation. The Department of State's comments are reprinted in appendix XVIII.

In written comments, the Department of Veterans Affairs concurred with our four recommendations, stating that the department is taking steps to manage its investment portfolio more effectively and has developed an action plan to address each recommendation. The department's comments are reprinted in appendix XIX.

We are sending copies of this report to interested congressional committees, the Director of the Office of Management and Budget, the secretaries and agency heads of the departments and agencies addressed in this report, and other interested parties. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staffs have any questions on the matters discussed in this report, please contact me at (202) 512-9286 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix XX.

Appendix I: Objectives, Scope, and Methodology

Our objectives were to (1) determine the status of efforts to implement key required PortfolioStat actions, (2) evaluate selected agencies' plans for making portfolio improvements and achieving associated cost savings, and (3) evaluate the Office of Management and Budget's (OMB) plans to improve the PortfolioStat process. To determine the status of agency efforts to implement key PortfolioStat actions, we obtained and analyzed policies, action plans, PortfolioStat briefing slides, status reports, agency communications to OMB, and other documentation relative to the key requirements of the PortfolioStat initiative outlined in OMB's 2012 memorandum from each of the 26 federal agencies in our review.
These requirements included (1) designating a lead for the initiative; (2) completing a high-level IT portfolio survey; (3) establishing a commodity IT baseline; (4) holding a PortfolioStat session; (5) submitting a final plan to consolidate commodity IT; (6) migrating at least two duplicative commodity IT services by December 31, 2012; and (7) documenting lessons learned. For the final plan to consolidate commodity IT, we reviewed agency plans to determine whether each element required in the plan was fully addressed. A "partially" rating was given if the plan addressed a portion but not all of the information required in the element. In addition, we obtained a briefing book that OMB provided to the agencies and that, among other things, summarized the agencies' commodity IT baseline data. We assessed the reliability of OMB's reporting of these data through interviews with OMB officials regarding their processes for compiling the briefing books and used the briefing books to describe the federal investment in commodity IT at the time of the 2012 PortfolioStat. We also assessed the reliability of agencies' commodity IT baseline data by reviewing the processes agencies described they had in place to ensure that all investments were captured in the baseline. We identified issues with the reliability of the agencies' commodity IT baseline data and have highlighted these issues throughout this report, as appropriate.

For objective two, we selected five agencies with (1) high fiscal year IT expenditure levels (based on information reported on OMB's IT Dashboard); (2) a mix of varying IT and CIO organizational structures (centralized vs. decentralized); and (3) a range of investment management maturity levels based on knowledge gathered from prior work and reported results of PortfolioStat sessions. In addition, to the extent possible, we avoided selecting projects that were the subject of another engagement underway. The agencies selected are the Departments of Agriculture, Defense, the Interior, the Treasury, and Veterans Affairs. To evaluate the selected agencies' plans for making portfolio improvements and achieving associated cost savings, we obtained and analyzed agencies' action plans to consolidate commodity IT and other relevant documentation and interviewed relevant agency officials to compile a list of planned portfolio improvements and determine the processes agencies used to identify these portfolio improvements. We determined the extent to which these processes included using (1) the agency enterprise architecture and (2) a valuation model, which OMB recommended in its guidance to assist in analyzing portfolio information and developing action plans. In addition, we assessed the reliability of the cost savings and avoidance estimates by obtaining and analyzing the support for the estimates for the two efforts that were to be migrated by December 2012 and the two efforts with the highest anticipated savings between fiscal years 2013 and 2015. Based on the results of our analysis, we found the data to be sufficiently reliable given the way they are reported herein.

To evaluate OMB's plans for making PortfolioStat improvements, we reviewed PortfolioStat guidance for fiscal year 2013 and interviewed OMB officials to compile a list of planned improvements. In addition, we analyzed the information obtained from our sources and the results of our analyses for our first two objectives to determine whether OMB's plans for improving PortfolioStat addressed the issues we identified.
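As a minimal illustration of the reliability assessment described above for the cost savings and avoidance estimates (the notation below is ours and does not appear in the agencies' documentation or in OMB's cost target templates), a reported estimate is considered supported when the figure an agency submitted to OMB can be traced to documented year-by-year amounts, that is, when

\[
\text{Estimate}_{\mathrm{reported}} \;=\; \sum_{y=2013}^{2015} \left( S_y + A_y \right),
\]

where \(S_y\) denotes documented cost savings and \(A_y\) denotes documented cost avoidance for fiscal year \(y\). The shortfalls described earlier for Treasury and VA are cases in which the documented terms on the right-hand side either were missing or did not correspond to the figures reported to OMB.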
We conducted this performance audit from October 2012 to November 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Agencies' Commodity IT Migration Efforts

The table below lists the commodity IT efforts for migration to shared services that agencies identified in their action plans:
Moving website and census data to shared service provider
Moving e-mail to shared services
Electronic Capital Planning and Investment Control (eCPIC) Portfolio Management tool (FESCOM Program Participant)
Electronic Capital Planning and Investment Control (eCPIC)

Appendix III: Agencies' Portfolio Initiatives

The table below lists the commodity IT initiatives that agencies identified in the cost target templates provided to OMB in September 2012, together with estimated savings or cost avoidance for fiscal years 2013, 2014, and 2015 and in total. Dollars in millions (rounded).

Geo Spatial Consolidation

Commerce
Desktop/Laptop Management
Several Data Center Consolidation Activities
Reduce total number of computers, use Commerce PC purchase contract to get discount
National Oceanic and Atmospheric Administration National Service Desk
Enterprise-Wide IT Security Assessment and Authorization
National Institute of Standards and Technology Cloud Initiatives
Voice over Internet Protocol

Defense
Branch Services Consolidation of Commodity IT Components and Applications (n.d.)
Multi-level Security Domain Thin Client Solutions (n.d.)
Consolidation Procurement of Commodity IT Hardware Purchases (n.d.)
Unclassified Information Sharing Service / All Partner Access Network (n.d.)
Implement Cross Domain Solution as Enterprise Service (n.d.)
Video Over Internet Protocol Enterprise Service (n.d.)
Defense Red Switch Network Rationalization (n.d.)
Computing Infrastructure and Services Optimization (n.d.)
Enterprise Messaging and Collaboration Services (n.d.)
Identity and Access Management Services (n.d.)
Enterprise Services – Identity and Access Management (n.d.)
Defense Interoperability with Mission Partners (n.d.)
General Fund Enterprise Business System (n.d.)
Asia Pacific Economic Cooperation Websites Initiative
Office of the Chief Financial Officer Grants Information Award Database Internet Site
Education Web

Energy
Commodity IT Contract Consolidation
Enhanced Connectivity for Telework and Travel
Public Key Infrastructure Migration to Shared Service Provider
Collaboration Tools Consolidation (Microsoft SharePoint)
Migration of on-premise Exchange Services into Cloud 365 offering
Rocky Mountain Oilfield Testing Center - Commodity IT Full Time Equivalent Reduction
eCPIC Migration to General Services Administration Cloud Environment
Implement CISCO Unified Communication & Collaboration
ITSM Replacement of Office of the Chief Information Officer Remedy systems with ServiceNow

Environmental Protection Agency
Email (Software as a Service)
Collaboration Tools (Software as a Service)
Identity Credentials and Access Management (.16)

Enterprise eArchive System part of eMail Enterprise Records and Document Management System
Financial and Business Management System deployment 7&8
Enterprise Forms System

Justice
Consolidation of Classified Processing Services (n.d.)
Web Time and Attendance Cloud Solution (n.d.)
Justice Management Division Mobility-Virtual Private Network (n.d.)
Consolidation of Justice Land Mobile Radio Systems (n.d.)
Monitoring at two security operations centers (n.d.)
Bureau of Alcohol, Tobacco, Firearms and Explosives Unified Communications (n.d.)
Strategic sourcing (contract escalations) (n.d.)
Network Delivery Order for CISCO services

Labor
DOLNet Network Infrastructure consolidation (n.d.)
Learning Management System aka Integrated Talent Management System = LMS (Learning Management System) + PM (Performance Management) $265k
WFA aka Workforce Analytics or Workforce Planning $535k
eCPIC Portfolio Management Tool (FESCOM Program Participant)
Telecommunications and Computer Operations Center

U.S. Army Corps of Engineers
eCPIC

Veterans Affairs
Server Virtualization
Eliminate Dedicated Fax Servers Consolidation
Standardize Spend Planning and Consolidation Contracts

n.d.—no data. Numbers may not add up due to rounding.

Appendix IV: Recommendations to Departments and Agencies

Agriculture

To improve the department's implementation of PortfolioStat, we recommend that the Secretary of Agriculture direct the CIO to take the following four actions:
Develop a complete commodity IT baseline.
In future reporting to OMB, fully describe the following PortfolioStat action plan elements: (1) consolidate commodity IT spending under the agency CIO and (2) establish criteria for wasteful, low-value, or duplicative investments.
As the department finalizes and matures its valuation methodology, utilize this process to identify whether there are additional opportunities to reduce duplicative, low-value, or wasteful investments.
Develop support for the estimated savings for fiscal years 2013 through 2015 for the Cellular Phone Contract Consolidation, IT Infrastructure Consolidation/Enterprise Data Center Consolidation, and Geospatial Consolidation initiatives.

Commerce

To improve the department's implementation of PortfolioStat, we recommend that the Secretary of Commerce direct the CIO to take the following two actions:
Reflect 100 percent of information technology investments in the department's enterprise architecture.
Develop a complete commodity IT baseline.

Defense

To improve the department's implementation of PortfolioStat, we recommend that the Secretary of Defense direct the CIO to take the following three actions:
Develop a complete commodity IT baseline.
In future reporting to OMB, fully describe the following PortfolioStat action plan element: consolidate commodity IT spending under the agency CIO.
Obtain support from the relevant component agencies for the estimated savings for fiscal years 2013 to 2015 for the data center consolidation, enterprise software purchasing, and General Fund Enterprise Business System initiatives.

In addition, to improve the U.S. Army Corps of Engineers' implementation of PortfolioStat, we recommend that the Secretary of Defense direct the Secretary of the Army to take the following two actions:
In future reporting to OMB, fully describe the following PortfolioStat action plan elements: (1) consolidate commodity IT spending under the agency CIO; (2) target duplicative systems or contracts that support common business functions for consolidation; (3) establish criteria for identifying wasteful, low-value, or duplicative investments; and (4) establish a process to identify these potential investments and a schedule for eliminating them from the portfolio.
Report on the agency's progress in consolidating eCPIC to a shared service as part of the OMB integrated data collection quarterly reporting until completed.

Energy

To improve the department's implementation of PortfolioStat, we recommend that the Secretary of Energy direct the CIO to take the following action:
In future reporting to OMB, fully describe the following PortfolioStat action plan elements: (1) consolidate commodity IT spending under the agency CIO and (2) establish criteria for identifying wasteful, low-value, or duplicative investments.

Environmental Protection Agency

To improve the agency's implementation of PortfolioStat, we recommend that the Administrator of the Environmental Protection Agency direct the CIO to take the following three actions:
Develop a complete commodity IT baseline.
In future reporting to OMB, fully describe the following PortfolioStat action plan elements: (1) consolidate commodity IT spending under the agency CIO; (2) establish targets for commodity IT spending reductions and deadlines for meeting those targets; and (3) establish criteria for identifying wasteful, low-value, or duplicative investments.
Report on the agency's progress in consolidating the managed print services and strategic sourcing of end user computing to shared services as part of the OMB integrated data collection quarterly reporting until completed.
General Services Administration

To improve the agency's implementation of PortfolioStat, we recommend that the Administrator of the General Services Administration direct the CIO to take the following action:
Report on the agency's progress in consolidating the contract writing module to a shared service as part of the OMB integrated data collection quarterly reporting until completed.

Health and Human Services

To improve the department's implementation of PortfolioStat, we recommend that the Secretary of Health and Human Services direct the CIO to take the following action:
In future OMB reporting, fully describe the following PortfolioStat action plan element: consolidate commodity IT spending under the agency CIO.

Housing and Urban Development

To improve the department's implementation of PortfolioStat, we recommend that the Secretary of Housing and Urban Development direct the CIO to take the following three actions:
Develop a complete commodity IT baseline.
In future reporting to OMB, fully describe the following PortfolioStat action plan element: establish criteria for identifying wasteful, low-value, or duplicative investments.
Report on the department's progress in consolidating the HR End-to-End Performance Management Module to a shared service as part of the OMB integrated data collection quarterly reporting until completed.

Interior

To improve the department's implementation of PortfolioStat, we recommend that the Secretary of the Interior direct the CIO to take the following three actions:
Develop a complete commodity IT baseline.
In future reporting to OMB, fully describe the following PortfolioStat action plan element: establish criteria for identifying wasteful, low-value, or duplicative investments.
Report on the department's progress in consolidating the Electronic Forms System component of the eMail Enterprise Records & Document Management System deployment 8 to a shared service as part of the OMB integrated data collection quarterly reporting until completed.

Justice

To improve the department's implementation of PortfolioStat, we recommend that the Attorney General direct the CIO to take the following two actions:
Reflect 100 percent of information technology investments in the department's enterprise architecture.
In future reporting to OMB, fully describe the following PortfolioStat action plan element: establish targets for commodity IT spending reductions and deadlines for meeting those targets.

Labor

To improve the department's implementation of PortfolioStat, we recommend that the Secretary of Labor direct the CIO to take the following three actions:
Develop a complete commodity IT baseline.
In future reporting to OMB, fully describe the following PortfolioStat action plan elements: (1) consolidate commodity IT spending under the agency CIO and (2) establish targets for commodity IT spending reductions and deadlines for meeting those targets.
Report on the department's progress in consolidating the cloud e-mail services to a shared service as part of the OMB integrated data collection quarterly reporting until completed.

National Aeronautics and Space Administration

To improve the agency's implementation of PortfolioStat, we recommend that the Administrator of the National Aeronautics and Space Administration direct the CIO to take the following three actions:
Reflect 100 percent of information technology investments in the agency's enterprise architecture.
In future reporting to OMB, fully describe the following PortfolioStat action plan elements: (1) consolidate commodity IT spending under the agency CIO; (2) target duplicative systems or contracts that support common business functions for consolidation; (3) establish criteria for identifying wasteful, low-value, or duplicative investments; and (4) establish a process to identify these potential investments and a schedule for eliminating them from the portfolio.
Report on the agency's progress in consolidating the NASA Integrated Communications Services Consolidated Configuration Management System to a shared service as part of the OMB integrated data collection quarterly reporting until completed.

National Archives and Records Administration

To improve the agency's implementation of PortfolioStat, we recommend that the Archivist of the United States direct the CIO to take the following action:
In future reporting to OMB, fully describe the following PortfolioStat action plan elements: (1) consolidate commodity IT spending under the agency CIO; (2) target duplicative systems or contracts that support common business functions for consolidation; (3) establish criteria for identifying wasteful, low-value, or duplicative investments; and (4) establish a process to identify these potential investments and a schedule for eliminating them from the portfolio.

National Science Foundation

To improve the agency's implementation of PortfolioStat, we recommend that the Director of the National Science Foundation direct the CIO to take the following action:
In future reporting to OMB, fully describe the following PortfolioStat action plan elements: (1) consolidate commodity IT spending under the agency CIO and (2) establish criteria for identifying wasteful, low-value, or duplicative investments.

Nuclear Regulatory Commission

To improve the agency's implementation of PortfolioStat, we recommend that the Chairman of the U.S. Nuclear Regulatory Commission direct the CIO to take the following two actions:
Develop a complete commodity IT baseline.
In future reporting to OMB, fully describe the following PortfolioStat action plan elements: (1) consolidate commodity IT spending under the agency CIO; (2) establish targets for commodity IT spending reductions and deadlines for meeting those targets; (3) target duplicative systems or contracts that support common business functions for consolidation; and (4) establish a process to identify these potential investments and a schedule for eliminating them from the portfolio.

Office of Personnel Management

To improve the agency's implementation of PortfolioStat, we recommend that the Director of the Office of Personnel Management direct the CIO to take the following three actions:
Develop a complete commodity IT baseline.
In future reporting to OMB, fully describe the following PortfolioStat action plan elements: (1) move at least two commodity IT areas to shared services and (2) target duplicative systems or contracts that support common business functions for consolidation.
Report on the agency's progress in consolidating the help desk consolidation and IT asset inventory to shared services as part of the OMB integrated data collection quarterly reporting until completed.

Small Business Administration

To improve the agency's implementation of PortfolioStat, we recommend that the Administrator of the Small Business Administration direct the CIO to take the following two actions:
Develop a complete commodity IT baseline.
In future reporting to OMB, fully describe the following PortfolioStat action plan elements: (1) consolidate commodity IT spending under the agency CIO; (2) establish targets for commodity IT spending reductions and deadlines for meeting those targets; (3) target duplicative systems or contracts that support common business functions for consolidation; and (4) establish a process to identify those potential investments and a schedule for eliminating them from the portfolio.

Social Security Administration

To improve the agency's implementation of PortfolioStat, we recommend that the Commissioner of the Social Security Administration direct the CIO to take the following two actions:
Develop a complete commodity IT baseline.
Report on the agency's progress in consolidating the geospatial architecture to a shared service as part of the OMB integrated data collection quarterly reporting until completed.

State

To improve the department's implementation of PortfolioStat, we recommend that the Secretary of State direct the CIO to take the following three actions:
Reflect 100 percent of information technology investments in the department's enterprise architecture.
In future reporting to OMB, fully describe the following PortfolioStat action plan elements: (1) consolidate commodity IT spending under the agency CIO; (2) establish targets for commodity IT spending reductions and deadlines for meeting those targets; (3) move at least two commodity IT areas to shared services; (4) target duplicative systems or contracts that support common business functions for consolidation; and (5) establish a process to identify those potential investments and a schedule for eliminating them from the portfolio.
Report on the department's progress in consolidating the Foreign Affairs Network and content publishing and delivery services to shared services as part of the OMB integrated data collection quarterly reporting until completed.

Transportation

To improve the department's implementation of PortfolioStat, we recommend that the Secretary of Transportation direct the CIO to take the following two actions:
In future reporting to OMB, fully describe the following PortfolioStat action plan elements: (1) consolidate commodity IT spending under the agency CIO; (2) establish targets for commodity IT spending reductions and deadlines for meeting those targets; (3) target duplicative systems or contracts that support common business functions for consolidation; and (4) establish a process to identify those potential investments and a schedule for eliminating them from the portfolio.
Report on the department's progress in consolidating the Enterprise Messaging to shared services as part of the OMB integrated data collection quarterly reporting until completed.

Treasury

To improve the department's implementation of PortfolioStat, we recommend that the Secretary of the Treasury direct the CIO to take the following three actions:
In future reporting to OMB, fully describe the following PortfolioStat action plan elements: (1) consolidate commodity IT spending under the agency CIO and (2) establish criteria for identifying wasteful, low-value, or duplicative investments.
As the department finalizes and matures its enterprise architecture and valuation methodology, utilize these processes to identify whether there are additional opportunities to reduce duplicative, low-value, or wasteful investments.
Develop support for the estimated savings for fiscal years 2013 to 2015 for the DoNotPay Business Center, Fiscal IT Data Center Consolidation, and Business Process Management Status initiatives.

U.S. Agency for International Development

To improve the agency’s implementation of PortfolioStat, we recommend that the Administrator of the U.S. Agency for International Development direct the CIO to take the following four actions: Reflect 100 percent of information technology investments in the agency’s enterprise architecture. Develop a complete commodity IT baseline. In future reporting to OMB, fully describe the following PortfolioStat action plan elements: (1) target duplicative systems or contracts that support common business functions for consolidation and (2) establish a process to identify those potential investments and a schedule for eliminating them from the portfolio. Report on the agency’s progress in consolidating the e-mail and Telecommunication and Operations Center to shared services as part of the OMB integrated data collection quarterly reporting until completed.

Veterans Affairs

To improve the department’s implementation of PortfolioStat, we recommend that the Secretary of Veterans Affairs direct the CIO to take the following four actions: In future reporting to OMB, fully describe the following PortfolioStat action plan element: target duplicative systems or contracts that support common business functions for consolidation. Report on the department’s progress in consolidating the dedicated fax servers to a shared service as part of the OMB integrated data collection quarterly reporting until completed. As the department matures its enterprise architecture process, make use of it, as well as the valuation model, to identify whether there are additional opportunities to reduce duplicative, low-value, or wasteful investments. Develop detailed support for the estimated savings for fiscal years 2013 to 2015 for the Server Virtualization, Eliminate Dedicated Fax Servers Consolidation, Renegotiate Microsoft Enterprise License Agreement, and one CPU policy initiatives.

Appendix V: Comments from the U.S. Department of Agriculture
Appendix VI: Comments from the Department of Commerce
Appendix VII: Comments from the Department of Defense
Appendix VIII: Comments from the Department of Energy
Appendix IX: Comments from the Environmental Protection Agency
Appendix X: Comments from the General Services Administration
Appendix XI: Comments from the Department of Homeland Security
Appendix XII: Comments from the Department of Housing and Urban Development
Appendix XIII: Comments from the National Aeronautics and Space Administration
Appendix XIV: Comments from the National Archives and Records Administration
Appendix XV: Comments from the National Science Foundation
Appendix XVI: Comments from the Office of Personnel Management
Appendix XVII: Comments from the Social Security Administration
Appendix XVIII: Comments from the Department of State
Appendix XIX: Comments from the Department of Veterans Affairs
Appendix XX: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, individuals making contributions to this report included Sabine Paul (Assistant Director), Valerie Hopkins, Lee McCracken, Tomas Ramirez, and Bradley Roach.
Federal agencies plan to spend at least $82 billion on IT in fiscal year 2014, and GAO has previously reported on challenges in identifying and reducing duplicative IT investments. In 2012, OMB launched its PortfolioStat initiative—a process where agencies gather information on their IT investments and develop plans for consolidation and increased use of shared-service delivery models. GAO was asked to review the implementation of PortfolioStat. GAO’s objectives were to (1) determine whether agencies completed key required PortfolioStat actions, (2) evaluate selected agencies’ plans for making portfolio improvements and achieving associated cost savings, and (3) evaluate OMB’s plans to improve the PortfolioStat process. To do this, GAO analyzed plans, status reports, and other documentation from agencies and interviewed agency and OMB officials. GAO also interviewed officials and reviewed documentation from five agencies selected based on their IT expenditures and management structures.

The 26 major federal agencies that were required to participate in the PortfolioStat initiative fully addressed four of seven key requirements established by the Office of Management and Budget (OMB). However, only 1 of the 26 agencies addressed all the requirements. For example, agencies did not develop action plans that addressed all elements, such as criteria for identifying wasteful, low-value, or duplicative information technology (IT) investments, or migrate two commodity IT areas—such as enterprise IT systems and IT infrastructure—to a shared service by the end of 2012. Further, OMB’s estimate of about 100 consolidation opportunities and a potential $2.5 billion in savings from the PortfolioStat initiative is understated because, among other things, it did not include estimates from the Departments of Defense and Justice. GAO’s analysis, which includes these estimates, shows that, collectively, the 26 agencies are reporting about 200 opportunities and at least $5.8 billion in potential savings through fiscal year 2015.

Five selected agencies—the Departments of Agriculture, Defense, the Interior, the Treasury, and Veterans Affairs—identified 52 consolidation initiatives, along with other IT management improvements, and estimated at least $3.7 billion in potential cost savings through fiscal year 2015. However, four agencies did not always provide sufficient support for all of their estimates, and they varied in their use of processes—such as an enterprise architecture and a method for assessing the value of investments—recommended by OMB to identify consolidation opportunities. More consistently using these tools may reveal further opportunities for consolidation, and better support for estimated savings may increase the chances that they will be achieved.

OMB’s fiscal year 2013 PortfolioStat guidance identifies a number of planned improvements but does not fully address certain weaknesses in the implementation of the initiative, such as limitations in CIOs’ authority, weaknesses in agencies’ commodity IT baselines, accountability for migrating selected commodity IT areas, or the information on agencies’ progress that OMB intends to make public.
GAO/AIMD-98-40
Objective, Scope, and Methodology As you requested, the objective of this report is to provide a general description of three short-term DOD technology initiatives, which affect the current payment process, and four long-term initiatives, which are expected to change the way DOD currently does business. Although some of the initiatives include planned improvements to both contract and vendor payment processes, we focused on the contract payment process in this report. This report is limited to descriptive information on each of the initiatives and therefore does not address specific implementation or execution issues. We reviewed the Joint Financial Management Improvement Program’s Framework for Federal Financial Management Systems and the Office of Management and Budget’s (OMB) Circular A-127 to determine federal financial systems requirements. To determine the current DOD contract payment process, we reviewed DOD documents that discussed how DOD is organized, identified the various activities involved in the process, and collected data on the number and dollar value of contracts. We also reviewed DOD reports that addressed problems inherent in the contract payment process. To accumulate information on the seven initiatives, we reviewed DOD documents that (1) identified how the planned initiatives would streamline and improve DOD’s payment processes and (2) discussed the status and interrelationship of the initiatives. Given the overall assignment objectives and the descriptive nature of our report, we did not verify the data in the DOD reports. In addition, we interviewed headquarters and field office officials, including the program managers for each of these initiatives, to determine the current DOD contract payment process and obtain updated information on each initiative. We performed our work at DOD headquarters, Pentagon, Virginia; Defense Finance and Accounting Service (DFAS) headquarters, Arlington, Virginia; DFAS centers, Columbus, Ohio, and Indianapolis, Indiana; and Defense Logistics Agency (DLA) headquarters, Fort Belvoir, Virginia. We performed our work from October 1996 through December 1997 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Secretary of Defense or his designee. On January 20, 1998, DOD officials provided oral comments, which are discussed in the “Agency Comments and Our Evaluation” section of this report. Additional technical comments have been addressed as appropriate throughout the report. Background The following sections describe DOD’s current contract payment process, the controls necessary to ensure accurate contract payment, and DOD’s long-standing problems with matching disbursements to corresponding obligations. DOD’s Current Contract Payment Process Over time, DOD’s contract payment and accounting procedures have evolved into the complex, inefficient processes that exist today. These processes span numerous DOD and contractor organizations operating incompatible procurement, logistics, and accounting systems. Currently, much of the data are shared using hard copy documents that must be manually entered into numerous systems or electronic data that still must be manually verified and entered into the system. Figure 1 illustrates the current paper flow for the contract payment process. DOD has numerous nonintegrated automated and manual systems that contain contract data. 
There are 150 accounting systems, 76 procurement writing systems, numerous logistics systems, and 1 contract administration and payment system—Mechanization of Contract Administration Services (MOCAS). Although only 5 percent of DOD’s contractor and vendor invoices are processed and paid through MOCAS, these payments represent approximately 44 percent of the dollars paid by the department to contractors and vendors for fiscal year 1997. The remaining 95 percent of the invoices are primarily for vendor payments that are made by other disbursing offices. Because of DOD’s numerous nonintegrated computer systems, much of the data generated by procurement, logistics, and accounting systems cannot be electronically transferred among these systems, and therefore must be read, interpreted, and manually entered from hard copy documents. This duplicative manual entry of accounting data into the various systems is prone to keypunch errors, errors caused when data entry personnel are required to interpret sometimes illegible contracts, and inconsistencies among data in the systems. In January 1991, DOD established the Defense Finance and Accounting Service to assume responsibility for DOD finance and accounting. DFAS’ center in Columbus, Ohio, pays contracts administered by the Defense Contract Management Command (DCMC) of the Defense Logistics Agency. DCMC has post-award contract responsibility, which includes overseeing contractor progress, inspecting and accepting items, receiving and entering contractor delivery data, administering progress payments, negotiating contractor indirect costs, administering contract modifications, and negotiating final settlement proposals. DFAS-Columbus uses MOCAS to compute contractor payments, while DCMC uses this system to maintain contract administration and payment data on its contracts. DFAS-Columbus makes two basic types of contract payments—delivery payments and financing payments. About two-thirds of all payments are delivery payments for goods and services; the balance are financing payments. Delivery payments are made upon receipt of products or services. Financing payments, such as progress payments, are made as contractors incur costs and submit billings. The numerous parties involved in DOD’s contract payment process may increase the opportunity to introduce errors. DOD has 1,400 separate buying activities, up to 64,000 receiving locations, over 25,000 contractors, and 44 accounting offices, all funneling information to MOCAS. In addition, although MOCAS provides the accounting data used to control obligations and payments on these contracts, it does not maintain the official accounting records for the contracts. Instead, the official accounting records are kept at the 44 accounting offices located throughout the country. MOCAS records may differ from accounting office records because contract information, such as modifications, may not have been sent to or properly processed by both locations. To help alleviate this problem, DOD recently completed implementation of a direct input initiative started in October 1994. As of June 30, 1997, administrative contracting officers at all DCMC sites were able to input contract information, including modifications, directly to MOCAS from their remote locations. However, direct input of contract modification information by the administrative contracting officers is intended to be a temporary initiative and is expected to eventually be replaced by some of the other technological initiatives discussed in this report. 
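The record mismatches described above are, at bottom, a reconciliation problem: the same contract and obligation information is held both in MOCAS and in the accounting offices’ official records, and the two copies can drift apart when modifications reach one location but not the other. The sketch below is a simplified, hypothetical illustration of such a comparison; the field names, contract numbers, and data structures are ours, not DOD’s.

```python
# Illustrative only: compare obligation amounts recorded in two systems
# (e.g., a payment system and an accounting office) and flag mismatches.
# Field names and records are hypothetical, not taken from MOCAS or DFAS.

def find_mismatches(payment_records, accounting_records):
    """Return contract numbers whose obligation amounts differ between systems."""
    mismatches = []
    for contract_no, pay_amount in payment_records.items():
        acct_amount = accounting_records.get(contract_no)
        if acct_amount is None:
            mismatches.append((contract_no, "missing from accounting records"))
        elif acct_amount != pay_amount:
            mismatches.append((contract_no, f"payment system {pay_amount} vs accounting {acct_amount}"))
    return mismatches

# Hypothetical records: a modification raised the obligation in one system only.
payment_system = {"N00019-97-C-0001": 1_500_000, "N00019-97-C-0002": 250_000}
accounting_office = {"N00019-97-C-0001": 1_200_000, "N00019-97-C-0002": 250_000}

for contract, reason in find_mismatches(payment_system, accounting_office):
    print(contract, "-", reason)
```

In practice such a comparison would run against far larger record sets and many more fields, but the basic step of flagging records that disagree is the same.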
Before making payments, DFAS-Columbus requires the matching of a number of key documents (such as the contract, the receiving report, and the invoice for delivery payments). However, problems often arise after payments have been made when the accounting offices, which maintain the official accounting records, cannot reconcile their obligation records to the payment information generated by MOCAS. DOD has instituted a prevalidation policy, which requires that before making a payment, DFAS-Columbus validate that sufficient funds are available from the appropriate obligation at the accounting offices. Prevalidation of payments made by MOCAS is required for expenditures of $3 million or more for contracts dated prior to fiscal year 1997, and $2,500 or more for contracts dated 1997 and later. Concept of Contract Payment Financial Control The financial management systems policy stated in OMB Circular A-127 requires that each agency establish and maintain a single, integrated financial management system. Having a single, integrated financial management system does not mean having only one software application covering all financial management system needs. Rather, a single, integrated financial management system is a unified set of financial systems and financial portions of mixed systems encompassing the software, hardware, personnel, processes (manual and automated), procedures, controls, and data necessary to carry out financial management functions, manage financial operations of the agency, and report on the agency’s financial status to central agencies, Congress, and the public. Unified means that the systems are planned for and managed together, operated in an integrated fashion, and linked together electronically in an efficient and effective manner to provide the agencywide financial system support necessary to carry out the agency’s mission and support the agency’s financial management needs. Figure 2 illustrates how, in the ideal environment, DOD could use integrated systems that share data among the procurement, logistics, and accounting functions to ensure financial control over the money spent for goods and services. For example, when a military service’s requiring activity determines that goods or services are needed, such as a tank or aircraft, the system would make that information available to both the military service’s buying activity and the DFAS accounting office. When the contract is issued, the military service buying activity, in turn, would provide that information through the system to the DLA receiving activity to expect the item as well as to the DFAS accounting office to obligate funds in the accounting systems. Throughout the procurement process, DLA contract administration personnel would monitor the contractor’s progress to ensure that the contractor is meeting all cost and delivery requirements. As construction of the tank or aircraft progresses, or at final delivery, the contractor would submit bills to the contractor payment personnel. All contract-related data would be available to authorized users in an integrated financial management system. DOD’s Long-standing Disbursement Problems DOD has acknowledged its long-standing problems with properly matching its disbursements to specific obligations. As of September 30, 1997, DOD had at least $22.7 billion in problem disbursements. Its disbursement problems fall into three basic categories. 
Unmatched disbursements - Disbursements and collections that the accounting office has received and attempted to match to an obligation but could not match because a corresponding obligation was not identified in the accounting system.

Negative unliquidated obligations (NULOs) - Disbursements that have been received and posted to specific obligations by the accounting office but for which recorded disbursements exceed recorded obligations—more funds have been paid out than were recorded as obligated.

Aged in-transits - Disbursements and collections that have been reported to the Treasury but either have not been received by the accounting station or have been received but not processed or posted by the accounting office. DFAS considers in-transits to be aged if they have not been processed within 120 days, depending on the source of the transaction and the service processing the transaction.

As noted previously, much of the inefficiency and errors associated with DOD’s contract payment process can be attributed to the lack of integrated computer systems that electronically link procurement, logistics, and accounting. Because the process is highly dependent on manual data entry, the information needed to make contract payments is plagued with timeliness and accuracy problems. The reliance on paper documents, which must be mailed to the proper location and stored for future reference, also adds to DOD’s payment difficulties. For example, we previously reported that DFAS-Columbus files about 25,000 loose contract documents per week. DOD has hundreds of efforts under way to help resolve disbursement and accounting problems, including the seven technology initiatives discussed in this report. However, as we have previously reported, DOD has not performed the in-depth analysis necessary to fully determine the underlying causes of its disbursement and accounting problems and therefore identify the most effective solutions and rank specific reforms.

Short-term Initiatives

The three short-term technology initiatives—electronic document management, electronic document access, and electronic data interchange—are intended to move DOD’s contract payment processes toward a paperless environment and reduce dependence on manual data entry. Although none of the initiatives significantly change the existing contract payment process, all are directed at providing more accurate and timely information or improving the processing of data at DFAS-Columbus. The lack of timely information and DFAS-Columbus’ reliance on cumbersome paper processes have been cited as contributing factors to problem disbursements.

Electronic Document Management

Electronic Document Management (EDM) is a technology initiative intended to convert paper copies of DOD contract payment documents to electronic images. The paper documents are received by DFAS-Columbus from DOD’s logistics and procurement communities and contractors. EDM’s objectives are to reduce DFAS-Columbus’ reliance on paper, increase its processing efficiency, and, as a result, reduce its operating costs. As we stated in our April 1997 report, DFAS-Columbus’ paper-dependent workflow has frequently led to misrouted and misplaced paper documents. This condition delays payments and further increases processing costs. As shown in figure 3, contracts, modifications, receiving reports, and invoices, which are received as paper documents, are scanned by DFAS-Columbus employees and stored as electronic images for DFAS-Columbus use.
EDM has three basic components: document imaging, electronic foldering, and workflow processing. All paper documents received by DFAS-Columbus, such as contracts, invoices, and receiving reports, will be scanned and converted to electronic images—similar to photographs—and stored in an EDM database at DFAS-Columbus. Once stored, these images can be retrieved and viewed by DFAS-Columbus personnel. Because this initiative was implemented for DFAS-Columbus as a way of relieving its dependence on paper documents, the technology needed to view the scanned documents is only available at DFAS-Columbus for its EDM database. Since these electronic images are essentially “pictures” of the original paper documents, the data entry personnel can only view these documents, the same way they would look at a piece of paper. Thus, the data entry personnel must still view these “images” on their computers to obtain the data needed to process the payment and then manually enter those data into MOCAS. The electronic foldering component allows contract, invoice, and other related documents to be associated together for quick electronic retrieval. For example, the system would be able to associate all documents for a particular contract by a unique contract number and then retrieve all documents relating to that contract, eliminating the need for multiple manual searches. The workflow processing component helps to manage workload distribution by (1) automatically directing the electronic document to the appropriate processing technician and (2) tracking the progress of each document through the contract administration and payment process. This component, along with the foldering component, is expected to significantly reduce the time spent on manual voucher processing, which is necessary when MOCAS is unable to complete an automated verification of certain payment data. In 1997, approximately 45 percent of all invoices had to be processed manually. As we reported in April 1997, manual processing can cost up to seven times more than an automated payment. This increased cost is due to the time spent by DFAS-Columbus employees manually retrieving, verifying, and matching payment data to various records (invoices, purchase orders, receiving documents, DFAS-Columbus obligation records, or accounting office records) for these payments. Reductions in time are expected to result from employees being able to locate the needed documents more readily. These documents, once entered into the EDM database, will always be available for viewing, thus mitigating the problems associated with lost and misplaced documents within DFAS-Columbus. In addition, EDM also allows multiple employees at DFAS-Columbus to concurrently view a single electronic document. DOD’s 1996 Chief Financial Officer Financial Management Status Report and Five Year Plan states that the objective of EDM is to reduce operating costs. This affects DFAS-Columbus in a number of ways, such as, reducing the volume of paper; eliminating the need for paper storage; reducing document handling, copying, and manual retrieval; and reducing personnel requirements. Electronic Data Systems is developing EDM under a 5-year contract awarded in September 1994. EDM is being implemented at DFAS-Columbus for contract pay in one of its 11 operating divisions. Initial operational testing is expected to be completed in March 1998. 
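The foldering component described above amounts to indexing every stored image by its contract number so that all related documents can be retrieved with a single lookup. The following is a minimal sketch of that idea, assuming hypothetical document records rather than the actual EDM database design.

```python
# Illustrative sketch of "electronic foldering": index scanned-document
# references by contract number so related items can be retrieved together.
# The records and fields are hypothetical, not the actual EDM schema.
from collections import defaultdict

documents = [
    {"contract_no": "DAAB07-97-C-1234", "doc_type": "contract", "image_id": "IMG-0001"},
    {"contract_no": "DAAB07-97-C-1234", "doc_type": "invoice", "image_id": "IMG-0002"},
    {"contract_no": "DAAB07-97-C-1234", "doc_type": "receiving report", "image_id": "IMG-0003"},
    {"contract_no": "DAAB07-97-C-5678", "doc_type": "contract", "image_id": "IMG-0004"},
]

# Build the electronic "folders": one keyed entry per contract number.
folders = defaultdict(list)
for doc in documents:
    folders[doc["contract_no"]].append(doc)

# A single lookup now returns every document for a contract,
# replacing multiple manual searches through paper files.
for doc in folders["DAAB07-97-C-1234"]:
    print(doc["doc_type"], "->", doc["image_id"])
```

The workflow component could be thought of as routing logic layered on top of such an index, directing each newly scanned document to the responsible technician, though the actual design may differ.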
According to the DFAS program manager for EDM, as of September 30, 1997, the program development and implementation is expected to be completed by the end of fiscal year 1999 at a cost of approximately $115 million. The $115 million reportedly includes EDM development and deployment costs for both contract and vendor pay processes, as well as for garnishment of wages. According to an EDM official, the DFAS-Columbus contract pay portion is expected to cost $33 million.

Electronic Document Access

In contrast to EDM, which begins with paper documents that are then captured as electronic images, the Electronic Document Access (EDA) initiative is designed to eliminate the original paper documents and capture these documents as electronic images from the beginning. Documents, such as contracts and contract modifications, are originally captured as print files, similar to saving a word processing file on disk, and then converted to electronic images for storage in the EDA database. These documents can then be accessed and viewed by authorized accounting, procurement, and logistics personnel on DOD’s computer network, the Non-Classified Internet Protocol Router Network (NIPRNET). This accessibility contrasts with documents scanned using EDM technology, which are only available to DFAS-Columbus personnel. EDA is being developed under the EDM program as an alternative to having DOD activities produce paper documents. EDM officials indicated that the expanded use of EDA by DOD will eventually reduce the need for the imaging component of EDM. EDA is expected to significantly reduce the amount of time spent mailing and distributing paper contracts and contract modifications. It is also expected to eliminate both document loss and delays that can result from mailing and the need to store paper documents. However, an EDM official stated that some imaging capability will always be needed since only contracts, contract modifications, government bills of lading, and payment vouchers are being captured and stored in the EDA database. For example, invoices and correspondence are not being captured and stored in the EDA database. As illustrated in figure 4, the electronic images of contract documents available via EDA can be retrieved and viewed by DFAS-Columbus personnel and other users of DOD’s NIPRNET, but the needed data on the documents still must be manually entered into the appropriate systems. EDA is currently being used on a limited basis by DOD to view contracts, modifications, and other documents via NIPRNET and is beginning to reduce the number of paper documents being exchanged. DFAS-Columbus is currently working with the military services and DLA to expand the EDA database by putting more of their documents on the system. We were told that over 100,000 contracts, representing all Services and DLA, had been loaded into the EDA database as of December 1997. DFAS has reached an agreement with each of the Services and with DLA to use EDA exclusively for contracts issued by some of the largest contract writing systems. Thus, as shown in figure 4, EDA will eliminate the need to mail paper contracts to many DOD locations. By accessing contracts through EDA, these locations will avoid the potential for mail delays and losses, and the contract they see via EDA will be a clear original rather than a photocopy, which can be difficult or impossible to read.
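As noted above, even with EDA the needed data must still be keyed into the appropriate systems by hand, because an image is only a picture of a document, not machine-readable data. The toy comparison below, using invented field names rather than any actual DOD record layout, illustrates why structured data, the subject of the EDI initiative discussed next, is what ultimately eliminates manual entry.

```python
# Illustrative contrast between an imaged document and structured data.
# Field names are hypothetical; this is not an actual DOD record layout.

# An EDA/EDM-style image record: retrievable and viewable, but the invoice
# amount exists only inside the picture, so it must be read and keyed by hand.
image_record = {
    "contract_no": "SPO100-97-C-0042",
    "doc_type": "invoice",
    "image_file": "invoice_0042.tif",   # pixels only; no machine-readable fields
}

# A structured record (the kind EDI conveys): every field is machine-readable,
# so a payment system could validate and post it without manual data entry.
structured_record = {
    "contract_no": "SPO100-97-C-0042",
    "doc_type": "invoice",
    "invoice_no": "INV-1138",
    "amount": 48_750.00,
}

def can_post_automatically(record):
    """A record is eligible for automated posting only if the needed fields are data, not pixels."""
    return "amount" in record and "invoice_no" in record

print(can_post_automatically(image_record))       # False: a clerk must key the data
print(can_post_automatically(structured_record))  # True: eligible for automated processing
```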
According to the DFAS program manager for EDA, as of September 30, 1997, EDA is expected to cost about $2.7 million and is scheduled for completion in December 1998. EDA began in April 1996 and is funded as part of EDM.

Electronic Data Interchange

Electronic Data Interchange (EDI) is the computer-to-computer exchange of routine business information using standardized data formats. For nearly three decades, EDI has been popular among large companies because it saves money that otherwise would be spent processing paper and rekeying data. DOD, realizing that EDI technology could save the department millions of dollars annually, initiated the EDI program in May 1988 to create paperless business processes for exchanging information between DOD activities and industry. As part of this DOD initiative, DFAS began its EDI program in October 1994 and established an electronic commerce program office in March 1995 to support its procurement and payment processes. Various DOD activities are working together to ensure that the EDI initiative will work on their individual systems. However, the development and implementation of EDI is made more difficult because of the hundreds of nonstandard, nonintegrated computer systems involved. To implement EDI, each system’s data must be individually converted to a standard format to be transmitted. In addition, the transmitted data must then be converted from the standard format to the format used by the receiving system. For example, DOD is using EDI to support its procurement processes. DOD has approximately 76 nonstandard procurement systems generating contractual documents and has begun working with 9 of the largest systems to convert their data for electronic transmission. Once converted, this information will be sent, via EDI, to one or more of the approximately 150 accounting systems, MOCAS, and various other contractor and DOD systems, where the transmitted information must then be converted into a form usable by them. As illustrated in figure 5, the existing contractor payment processes related to the dissemination of contract and payment data between the procurement, accounting, and payment systems will become largely automated for those systems that will use EDI technology to transmit and receive data. However, because DOD’s systems were not developed with the EDI standard format, the use of EDI will require a conversion process for DOD’s numerous nonstandard systems. Using conversion and EDI technology, invoices and receiving reports—traditionally conveyed in paper form—can be transmitted electronically between computers without human intervention. As systems increasingly implement the EDI standard formats, the extensive conversion process required for today’s many nonstandard systems will be reduced and efficiency improved. Where EDI is used, it will eliminate duplicative manual input—the source of many of the errors in the current process of getting information into the procurement, accounting, and contract payment systems. As of September 30, 1997, approximately 80 DOD contractors were approved to use EDI to transmit invoices to MOCAS; however, only about 50 contractors were actually transmitting invoices using EDI at that time. In addition to invoices, some contract data are also being transmitted through EDI. Two of the nine largest procurement systems are currently electronically transmitting contract data to MOCAS. Six of the remaining seven systems are scheduled to be using EDI to transmit contract data to MOCAS by the end of 1998.
The remaining program is scheduled to transmit contract data using EDI in 1999. These nine procurement systems account for approximately 90 percent of all contract actions. Even when the procurement systems are on-line, not all data can be converted and transmitted using EDI. Presently, about 15 to 20 percent of all DOD contracts contain one or more nonstandard clauses that cannot be transmitted using EDI. For example, a nonstandard contract clause could say that contractor employees will only be paid local mileage for trips that exceed 50 miles. Until the nonstandard clause issue is resolved, DFAS-Columbus personnel will need to review a paper copy of the entire contract or view the contract via EDA and/or EDM. DOD is currently working to standardize the nonstandard clauses, and expects to have this issue resolved by June 30, 1998. In addition, EDI is not yet being used to transmit receiving report information. Traditionally, a contractor prepares the receiving report and submits it to a DOD official for verification of the receipt of the items purchased. The receiving report is then sent to MOCAS. DFAS is currently working with some contractors to convert receiving report data to the EDI standard. This capability is expected to be fully operational by the middle of 1998. According to the EDI program manager, as of September 30, 1997, DFAS plans to spend $47.1 million to develop and implement EDI for its centers and accounting offices over the 5-year period beginning in fiscal year 1995 and ending in fiscal year 1999. Schedule and Costs of Short-term Initiatives As shown in figure 6, DOD plans to spend about $80 million from fiscal years 1995 through 1999 developing and implementing the three short-term initiatives. This estimate includes contractor, personnel, and training costs. Long-term Initiatives The four long-term initiatives—Standard Procurement System, Defense Procurement Payment System, Shared Data Warehouse, and DFAS Corporate Database—are aimed at moving DOD’s contract payment process toward an integrated system using standard data, where a single copy of the official records is available to all users. The short-term initiatives discussed previously will all play a role, to some degree, in DOD’s long-term contract payment strategy. The scanning and image storage features of EDM will be used as part of the future payment system. However, because they are still in the process of analyzing contractors’ proposals, DOD officials are uncertain if they will use EDM’s foldering and workflow processing features. As currently envisioned, EDA will be used in conjunction with DOD’s planned Standard Procurement System to produce a “picture” of the contract for all authorized users to view as necessary. Finally, EDI will be used by all the long-term initiatives as the vehicle to transmit and transfer data among the systems and databases. As illustrated in figure 7, DOD plans to significantly change its contract payment process to conform to its vision of the future system. The lines in the figure indicate the numerous paper documents whose data must be electronically transmitted for the contract payment process to be paper free, such as contracts, contract modifications, receiving reports, and invoices. 
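To give a sense of what converting these documents to a standard format involves, the sketch below maps a hypothetical system’s native invoice record into a common intermediate layout that a receiving system then translates into its own format. The field names and layouts are invented for illustration; actual EDI transaction sets are considerably more detailed.

```python
# Illustrative sketch of format conversion for electronic data interchange:
# each system maps its native record to a shared standard layout, and the
# receiver maps the standard layout back into its own format.
# All field names and layouts here are invented for illustration.

def to_standard(native_invoice):
    """Map one system's native invoice fields to a shared standard layout."""
    return {
        "contract_number": native_invoice["ctr_no"],
        "invoice_number": native_invoice["inv"],
        "amount_usd": round(float(native_invoice["amt"]), 2),
    }

def from_standard(standard_invoice):
    """Map the shared layout into the receiving system's native fields."""
    return {
        "CONTRACT": standard_invoice["contract_number"],
        "INVOICE": standard_invoice["invoice_number"],
        "DOLLARS": standard_invoice["amount_usd"],
    }

# A sending system's native record, converted once to the standard layout
# and then translated by the receiver, with no rekeying in between.
native = {"ctr_no": "DAAE20-97-C-0099", "inv": "0007", "amt": "125000.00"}
standard = to_standard(native)
print(from_standard(standard))
```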
While these initiatives move DOD closer to a paper-free environment, they will not allow DOD to meet the Secretary of Defense’s recently established goal of a paper-free contracting process by the year 2000, since three of the four long-term initiatives are not scheduled for completion until the end of fiscal year 2001.

Standard Procurement System

The objective of the Standard Procurement System (SPS) is to establish a fully functional automated procurement information system, which will be used to prepare procurement contracts and be used by contracting officials for contract administration. SPS is planned to replace DOD’s manual procurement systems and about 76 unique automated procurement systems that are used to prepare contracts. These systems had been developed over the years to meet individual mission needs using nonstandard processes and data and could not communicate well with each other or with MOCAS. Although some of these systems were able to transmit limited contract information to MOCAS electronically, hard copy paper contracts still had to be mailed to DFAS-Columbus before contract data, which are necessary to make a payment, could be entered into the system. As described in the background section of this report, the reliance on paper documents, such as contracts and contract modifications, and repetitive manual data input are major causes of disbursing problems. SPS will also be the system used by contracting officials to monitor and administer contracts. Currently, contracting officials have to rely on MOCAS to provide them with the information they need to accurately account for the contracts. However, MOCAS is usually not provided with information that identifies the cost of the work accomplished with a specific funding source. Therefore, DOD is unable to ensure that payments are being made from the appropriate funding source. Accurate payments can only be made if accurate and complete data are available—regardless of which system is used. DOD officials stated that this technology initiative is expected to standardize procurement business practices and data elements throughout the department and provide benefits to the procurement and accounting communities by providing timely, accurate, and integrated contract information. The goal is that, using SPS, required contract and contract payment data will be entered only once, at the source of the information, and stored in the Shared Data Warehouse (another initiative described later in this report) for use by the entire procurement community. This is intended to result in more efficient management of contracts, standard contract business practices and processes, and less data entry and paper handling—a key factor in contract payment errors. SPS is planned to improve the procurement community’s ability to manage contracts from pre-award through contract closeout. The SPS program started in January 1994. The procurement software is a version of American Management Systems, Inc. (AMS) commercial software that is being tailored for DOD. The AMS contract was awarded in April 1997. SPS began deployment (installation, training, and deployment assistance) in May 1997. As of September 30, 1997, SPS was available to 2,535 users out of a planned total of 43,826. According to the program manager for SPS, as of September 30, 1997, the program development and implementation is expected to be completed by September 30, 2001, at a cost of about $295 million, including $20 million for the Shared Data Warehouse.
Defense Procurement Payment System The Defense Procurement Payment System (DPPS) is intended to be the single standard DOD system for calculating contractor payments and generating accounting records. The system, as designed, will replace the contract payment functions currently in MOCAS. It is expected to standardize and improve contract payment processes by computing timely and accurate payments and making the disbursement data available to DOD entities responsible for procurement, logistics, and accounting. DPPS is expected to improve payment process efficiencies by (1) providing a single system that DFAS can use to validate funds availability, (2) reducing DFAS’ reliance on hard copy documents, and (3) eliminating manual reconciliations. DPPS will operate in an on-line, real-time environment—providing up-to-date contract and payment information. To calculate and schedule payments, DPPS will rely on the DFAS Corporate Database (another initiative described later in this report) for the needed contract and receiving report information. Contract payment information generated by DPPS will also be stored in the DFAS Corporate Database. Contract payment information needed by contracting officers to administer the contracts will be duplicated from the DFAS Corporate Database into the Shared Data Warehouse for their use. The DPPS program started in September 1995. DFAS plans to award a contract for DPPS by April 1998. DFAS also plans to procure a commercial off-the-shelf software package to compute entitlements and support the DPPS accounting functions. According to the DFAS program manager for DPPS, as of September 30, 1997, full DPPS deployment is expected by August 31, 2001, and the total program cost is reported to be $46 million. In commenting on a draft of this report, a DFAS official said that the program costs have been recalculated and as of December 31, 1997, they were estimated to be $114 million, and the system is expected to be completed by April 15, 2002. Shared Data Warehouse The Shared Data Warehouse (SDW) is a DOD initiative that is intended to be the single database containing the official procurement records needed to support contract placement and contract administration functions in SPS. Although DOD officials agree that the information produced in the accounting and procurement communities must be shared with each other, they have not finalized their plans on how or to what extent they will do this. SDW is designed to support the complete contract cycle, from initial concept and contract award through contract closeout. All contract information and contract modification information created in SPS will be stored in the SDW database. Original receiving report information will also be electronically entered into SDW from the existing logistics systems. Contract payment information generated by DPPS, which is originally stored in the DFAS Corporate Database, will be duplicated in SDW for use by the contracting officers to administer contracts. In addition, some procurement information, such as contracts, contract modifications, and receiving reports, which are stored in SDW and are needed by DFAS to compute contract payments, will also be duplicated and stored in the DFAS Corporate Database. Although SDW and the DFAS Corporate Database will contain duplicate information, it is likely that some information will not be shared. 
For example, some nonfinancial information, such as special shipping instructions and dates, would not be needed for contract payment and may be stored only in SDW. The SDW database is intended to significantly improve efficiency, reduce accounting errors, and support the payment process. SDW is expected to provide improved data integrity and accuracy and allow for a single point of data entry and for storage of procurement data, thereby reducing the need for manual re-keying of procurement data into multiple, nonstandard systems. According to the deputy program manager for SPS, as of September 30, 1997, SDW is to be completed by the end of fiscal year 2001 at a cost of $20 million, and is funded and developed as part of SPS. The SDW contract was awarded to Boeing Information Services and DLA’s System Development Center. They are currently developing the prototype database and, as of October 1997, had completed initial testing by loading a limited amount of contract data from MOCAS. An SDW official said that the test had been successful and that, as new information is loaded into MOCAS, these data are duplicated into SDW for future use. The SDW program manager expects the procurement community to begin using the SDW information for decision-making by early fall of 1998. In addition to loading the information from MOCAS, they are also working to accept contracts and modifications, receiving reports, and other data directly from the procurement and logistics systems.

DFAS Corporate Database

The DFAS Corporate Database, as conceptualized, will be a single DFAS database that will be used by all DFAS systems. This shared database will contain all DOD financial information required by DFAS systems and will be the central point for all shared data within DFAS. This database will also contain all the data needed for DPPS to calculate contractor payments. For example, the database will include contracts, contract modifications, and receiving report information duplicated from SDW. Contract payment information created by DPPS will also be stored in the DFAS Corporate Database for use by other DFAS payment and accounting systems. The DFAS Corporate Database will be used as a principal source of contract and contract payment information for all of DFAS. While SDW is intended to serve the needs of DLA’s procurement community, the DFAS Corporate Database will be used by authorized DFAS system users to support the contract payment and accounting process. The DFAS Corporate Database is intended to significantly improve efficiency, reduce accounting errors, and support the payment and accounting process. It is expected to improve data integrity and accuracy and serve as a single point of DFAS data entry and storage for procurement data, thereby reducing the need for manual re-keying of data into multiple, nonstandard systems. Although DFAS has not established a firm date, DFAS officials stated that this database will eventually serve as the official accounting records, shifting this responsibility from the accounting offices. The DFAS Corporate Database program office was established in June 1997. Preliminary planning, design, and prototyping activities are currently taking place. The target implementation date for those aspects of the DFAS Corporate Database needed to support DPPS is May 1998.
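As a rough illustration of the selective duplication described above, the sketch below copies only the payment-relevant fields of a procurement record from an SDW-like store into a corporate-database-like store, leaving nonfinancial details such as shipping instructions behind. The field names and the particular split between payment-relevant and nonfinancial fields are our own simplification, not the actual database designs.

```python
# Illustrative sketch of selective replication between two shared databases:
# payment-relevant fields are duplicated into the payment-side database,
# while nonfinancial details stay only in the procurement-side database.
# Field names and the chosen split are a simplification for illustration.

PAYMENT_RELEVANT_FIELDS = {"contract_no", "modification_no", "obligated_amount", "receiving_report"}

def replicate_for_payment(procurement_record):
    """Copy only the fields a payment system needs from a procurement record."""
    return {k: v for k, v in procurement_record.items() if k in PAYMENT_RELEVANT_FIELDS}

sdw_record = {
    "contract_no": "F33657-97-C-2010",
    "modification_no": "P00003",
    "obligated_amount": 2_400_000,
    "receiving_report": "accepted 1997-11-04",
    "shipping_instructions": "deliver to Bldg 12, loading dock B",  # stays in the procurement store only
}

corporate_db_record = replicate_for_payment(sdw_record)
print(sorted(corporate_db_record))  # the nonfinancial shipping detail is not replicated
```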
According to the DFAS Corporate Database program manager, the initial cost to establish the program office and design, prototype, and test the shared database structure is estimated to be about $300,000 as of September 30, 1997. The costs of accessing the corporate database will be paid by the users. Schedule and Costs of Long-term Initiatives As shown in figure 8, DOD expects to spend about $341 million developing and deploying the four long-term initiatives during fiscal years 1994 through 2001. In commenting on a draft of this report, a DFAS official said that the DPPS program costs have been recalculated, and as of December 31, 1997, they were estimated to be $114 million, and the program is expected to be completed by April 15, 2002. With this change, DOD now estimates the total program costs for these four initiatives to be $409 million. Agency Comments and Our Evaluation In commenting on a draft of this letter, Department of Defense officials generally concurred with our description of how these seven DOD initiatives affect the contract payment process. They provided us with some suggested technical changes, which we incorporated throughout the report as appropriate. However, DOD officials were concerned that the report seemed to contrast the short-term initiatives with the long-term initiatives. DOD stated that the short-term and the long-term initiatives are designed to work in tandem. DOD officials added that the short-term initiatives support the department’s achievement of the Secretary of Defense’s goal of achieving a paper-free contracting process by the year 2000. DOD officials also stated that the long-term initiatives will bring the department greater benefits over the long haul, but that the long-term initiatives will take longer to implement and their schedule and cost definitely carry an element of greater risk and more uncertainty. Our report describes the short-term and long-term initiatives separately and is not intended to contrast these efforts. Also, at the time of our review, DOD had not yet fully defined how these independently managed initiatives will work in tandem. To the extent that relationships between initiatives were identified by DOD during our review, those relationships are incorporated in this report. In commenting on the draft, DOD provided no further clarification or documentation of those relationships. As discussed in the report, regarding the capability of the short-term initiatives to achieve the Secretary’s broader goal of having all aspects of the contracting process for major weapons systems paper free, EDM is only accessible to DFAS-Columbus; EDA does not capture all documents, such as the invoice; and the EDI schedule is only for implementation at DFAS centers and accounting offices. We agree with DOD that the long-term initiatives will take longer to implement and carry a greater risk and uncertainty. It is, therefore, important that as DOD continues its efforts to improve technology it understands and documents the problems in all aspects of the contracting process for major weapons systems and addresses the needs of procurement, logistics, and accounting functions. 
We are sending copies of this letter to the Chairman of the Senate Committee on Governmental Affairs; the Chairmen and Ranking Minority Members of the Senate Committee on Armed Services, the House Committee on National Security, the House Committee on Government Reform and Oversight and its Subcommittee on Government Management, Information and Technology; and the Director of the Office of Management and Budget. We are also sending copies of this report to the Secretary of Defense, the Acting Director, Defense Finance and Accounting Service, and the Director, Defense Logistics Agency. Copies will be made available to others upon request. Please contact me at (202) 512-9095 if you or your staff have any questions about this letter. Janett P. Smith, Roger Corrado, William Bricking, and Jean Lee were major contributors to this report.

Lisa G. Jacobson
Director, Defense Audits
Pursuant to a congressional request, GAO reported on seven technological initiatives that the Department of Defense (DOD) identified as key elements of its efforts to improve the contract payment process, focusing on DOD's goal of adopting the best business practices of the private sector. GAO noted that: (1) the descriptive information presented in GAO's letter highlights several areas of concern that could prevent DOD from meeting its goal of a paperless contracting process by the year 2000; (2) for example, even if DOD successfully meets the current schedule, as of September 30, 1997, three of the four long-term initiatives are not scheduled for completion until the end of fiscal year 2001; (3) in addition, the initiatives may not eliminate the weaknesses in the contract payment processing; (4) as GAO previously reported, although DOD has numerous initiatives under way to help resolve its disbursement and accounting problems, it has not performed the in-depth analysis necessary to fully determine the underlying causes of these problems and therefore identify the most effective solutions and rank specific reforms; (5) as a result, as with its other initiatives, the extent to which these seven technology initiatives discussed in GAO's letter will resolve DOD's long-standing disbursement problems is unclear; (6) even in a highly automated and paperless environment, proper payments can only be made by ensuring that accurate and complete data are available in the systems; and (7) for example, for some types of payments, unless provided with information that identifies the cost of work accomplished with the appropriate funding source, the new systems and databases will not have the information necessary to ensure that proper payments are made.
GAO-12-806
Background NNSA oversees programs to carry out its missions for nuclear weapons, nonproliferation, and naval nuclear propulsion, for which the President’s budget submission to Congress requested more than $11.5 billion for fiscal year 2013—about 42 percent of DOE’s total budget. NNSA has primary mission responsibilities in three areas. First, it is responsible for providing the United States with safe, secure, and reliable nuclear weapons in the absence of underground nuclear testing and maintains core competencies in nuclear weapons science, technology, and engineering. Second, NNSA implements key U.S. government nuclear security, nonproliferation, and arms control activities, including securing vulnerable nuclear and radiological material at facilities throughout the world; removing plutonium and highly enriched uranium from partner countries; eliminating U.S. nuclear material declared surplus to defense needs; negotiating and providing the technical capability to verify arms control treaties and agreements; strengthening other countries’ capacities to implement nonproliferation obligations; and enhancing other nations’ capabilities to deter and detect illicit movement of nuclear and radiological materials. Third, NNSA provides the research, development, design, and operational support for militarily effective naval nuclear propulsion plants, as well as enriched uranium for fabrication into fuel for the Navy’s propulsion reactors. NNSA receives four congressional appropriations to fund its activities, three of which align with its primary missions— Weapons Activities (for Defense Programs), Defense Nuclear Nonproliferation, Naval Reactors—and one that funds its management activities—Office of the Administrator. Since the Manhattan Project produced the first atomic bomb during World War II, NNSA, DOE, and predecessor agencies have depended on the expertise of private firms, universities, and others to carry out research and development work and efficiently operate the government-owned, contractor-operated facilities necessary for the nation’s nuclear defense. NNSA conducts its activities at research and development laboratories, production plants, and other facilities (collectively referred to as the nuclear security enterprise). Specifically, NNSA operates three national laboratories that design and ensure the reliability of nuclear weapons— Lawrence Livermore National Laboratory, California; Los Alamos National Laboratory, New Mexico; and the Sandia National Laboratories, New Mexico and California; and four nuclear weapons production sites—the Pantex Plant, Texas; the Y-12 National Security Complex, Tennessee; the Kansas City Plant, Missouri; and the Savannah River Site, South Carolina; as well as the Nevada National Security Site. NNSA’s relationship with its contractors has been formally established over the years through its M&O contracts—contracting strategies that give these contractors responsibility to carry out major portions of NNSA’s missions and apply their scientific, technical, and management expertise. M&O contractors at NNSA sites operate under NNSA’s direction and oversight but largely independently of one another. Various headquarters organizations within NNSA develop policies and NNSA site offices, colocated with NNSA’s sites, conduct day-to-day oversight of the M&O contractors, and evaluate the M&O contractors’ performance in carrying out the sites’ missions. 
NNSA Has Established a Formal PPBE Process That Includes Four Defined Phases

According to NNSA’s policy, the NNSA PPBE process is composed of four phases—planning, programming, budgeting, and evaluation—and their associated activities. The different phases of PPBE appear sequential, but because of the amount of time required to develop and review resource requirements, the process is continuous and concurrent, with at least two phases ongoing at any given time, including phases for different fiscal years. Figure 1 shows the four phases and the months during which each phase is scheduled to occur, according to NNSA policies and guidance.

Planning. According to NNSA policy, in this phase, scheduled to begin annually in November, NNSA is to identify the goals it needs to achieve over the next 5 years and the program activities needed to meet these goals. According to NNSA officials, these goals are defined in a variety of documents, including presidential directives, policy statements, and DOE and NNSA strategic plans. This phase begins with the issuance of NNSA’s annual Strategic Planning Guidance, which provides any updates to the strategic plans and identifies any emerging issues. The NNSA program offices use this guidance to conduct their own internal planning processes and update their multiyear plans, including revising or adding program activities needed to meet the agency’s goals.

Programming. According to NNSA policy, in this phase, scheduled to begin annually in February, NNSA is to determine which program activities and funding levels it will include in its budget proposal to DOE for the fiscal year beginning in October of the following calendar year. This determination is based on analysis of the activities’ estimated costs, as well as the need to meet the NNSA goals defined in the planning process. To determine these activities, NNSA program offices are to work with their contractors to obtain estimates for the cost of the program activities identified in the planning phase and determine how to accomplish these activities within anticipated funding levels, which are defined in annual NNSA Program and Fiscal Guidance. NNSA program offices are then to rank these activities in order of priority for meeting program goals and document these decisions in integrated priority lists. These lists can include proposed program activities above the anticipated funding levels specified in NNSA guidance—these proposed activities are known as unfunded requirements. Using these lists, as well as other briefing materials, a group of senior NNSA officials that includes the heads of all program offices—the Program Review Council—is then to meet with the Principal Deputy Administrator to discuss and defend each program’s proposed program activities. After reviewing the deliberations of the Program Review Council and the associated documents provided by the program offices, the NNSA Administrator is to decide on resource trade-offs that result in the combination of program activities that best meet NNSA’s goals over the 5-year period covered by the Future Years Nuclear Security Program plan. The Administrator is responsible for issuing the Administrator’s Final Recommendations (AFR), scheduled to be completed in May at the conclusion of the programming phase, to document NNSA’s justification for its priorities and to serve as the basis for the agency’s participation in DOE’s program review process, the Strategic Resources Review.

Budgeting.
According to NNSA policy, this phase is to integrate NNSA planning and programming priorities and budget estimates into DOE’s departmental budget process and consists of the following three parts: Budget formulation. During formulation, which is scheduled to begin annually in July for the fiscal year beginning in October of the following calendar year, NNSA submits its proposed budget to DOE and participates in the Strategic Resources Review. If DOE’s budget deliberations result in changes to NNSA’s proposed budget, NNSA may have to rebalance its work activities. In September each year, DOE submits its proposed budget to the Office of Management and Budget (OMB) for review. Depending on OMB revisions, NNSA may need to again revise its work activities. These revisions are incorporated into the President’s final budget request for DOE, which is submitted to Congress in February. Budget validation. According to NNSA guidance, the agency uses its budget validation review process to ensure that its budget request is consistent with NNSA priorities and that its budget estimating processes are reasonable. NNSA policy calls for NNSA’s Office of PPBE to manage a three-phase process of validating approximately 20 percent of NNSA’s programs each year, so that 100 percent of its budget is validated every 5 years. Programs to undergo validation are to be selected based on a combination of factors, including program managers’ requests, Administrator direction, and significant external interest or high program visibility. During Phase I of the process, scheduled for completion in June, before the beginning of the fiscal year in October, program officials determine whether their activities conform with strategic guidance and program plans and review their methods for formulating budgets. In Phase II, scheduled annually for July to September, NNSA contractors or program offices, whichever developed the budget estimates, conduct a self-assessment of their budget planning, formulation, and cost-estimating processes. Phase II self-assessments are to be reviewed by a team—known as a validation review team—that comprises NNSA headquarters and site office staff. During Phase III, scheduled to occur from July through August, these validation review teams also review the cost-estimating practices used by the NNSA contractors and program offices. Importantly, NNSA’s validation guidance emphasizes that reviews should focus on the processes used to formulate budget plans and derive budget estimates rather than on the accuracy of the resulting estimates. According to this guidance, validation review teams are to issue a report on their findings on Phases II and III in September to inform NNSA, DOE, and OMB decisions for the following year’s budget cycle. Budget execution. According to NNSA policy, during this process, DOE and NNSA are to allocate, distribute, and control funds to achieve the priorities established in the programming phase and to maintain the fiscal limits set up in the budgeting phase, which are subject to appropriation of funds by Congress. Execution coincides with the fiscal year and commences once funds become available, whether through an annual appropriation or a continuing resolution, at the beginning of the fiscal year each October. Evaluation. According to NNSA policy, NNSA is to employ an ongoing cycle of evaluations to review program performance. 
Evaluations are to include annual and quarterly NNSA performance reviews, performance reviews conducted as part of the Government Performance and Results Act, reviews conducted by OMB, and DOE oversight activities. NNSA Does Not Thoroughly Review Budget Estimates When Developing Its Annual Budget NNSA does not thoroughly review budget estimates before it incorporates them into its annual budget request. Instead, it relies on undocumented, informal reviews of these estimates by site and headquarters program office officials and the formal budget validation reviews, which conclude after the submission of the President’s budget to Congress. Neither of these processes meets DOE provisions for ensuring the credibility and reliability of agency budgets, as defined in DOE Order 130.1. According to senior NNSA officials, NNSA does not comply with DOE Order 130.1 because it believes the order expired in 2003 and therefore no longer applies to NNSA budget activities. Furthermore, they stated that the need for a formal review of budget estimates is minimized, in part, because of the inherent trust between NNSA and its M&O contractors. Additionally, we identified three key problems in NNSA’s formal budget validation review process: it occurs too late to affect budget decisions, is not sufficiently thorough, and includes other weaknesses that limit its effectiveness. NNSA’s Process for Reviewing Budget Estimates Is Not Thorough or Documented NNSA does not have a thorough, documented process for assessing the validity of its budget estimates prior to their inclusion in the President’s budget submission to Congress, thereby limiting the reliability and credibility of the budget submission. Specifically, according to NNSA officials from NNSA’s Offices of Management and Budget, Defense Programs; Defense Nuclear Nonproliferation; and the site offices for Los Alamos, Sandia, and Y-12, during the programming phase of PPBE, site and headquarters program office officials conduct informal, undocumented reviews of the budget estimates that M&O contractors submitted to determine their reasonableness, though some officials noted that the level of review may vary across site and headquarters program offices. According to these officials, this informal review is often conducted by comparing current budget estimates with those from previous years because the work is largely the same from year to year. If the estimates are similar, and no major programmatic change has taken place, site office and headquarters program office officials said that they generally view these budget estimates as reasonable for inclusion in NNSA’s budget estimate. However, site office officials told us that their ability to thoroughly review budget estimates is limited. For example, according to NNSA officials at the Los Alamos Site Office, they do not have the personnel needed or the time, because of other laboratory management responsibilities, to oversee the laboratory’s budget estimation practices. They told us that only one dedicated budget analyst is employed at the site office and, because of insufficient personnel resources in the office, a majority of this analyst’s time is spent conducting work that is not directly related to budget oversight. NNSA officials from the Y-12 Site Office also told us that they informally review budget estimates when they initially submit them to headquarters program offices. 
However, they also stated that they become more involved in reviewing budget estimates when the agency is formulating its final budget submission, and the M&O contractors are asked to develop multiple iterations of budget estimates based on various hypothetical funding scenarios. These officials noted, however, that their reviews are not documented. NNSA officials from Defense Programs’ Office of Analysis and Evaluation told us that the presence of certified cost engineers—individuals with professional certification in the field of cost assessment and program management—at the NNSA site offices could enhance NNSA’s ability to understand how M&O contractors and programs develop budget estimates and assess those estimates. The practices the site and headquarters program offices follow do not align with the criteria for thoroughness or documentation established in DOE Order 130.1. Specifically, DOE Order 130.1 states that contractor-developed budget estimates should be thoroughly reviewed and deemed reasonable prior to their inclusion in agency budgets and that these reviews should be documented. Senior officials from NNSA’s Office of Management and Budget told us that the agency does not strictly adhere to DOE Order 130.1 because it believes that the order has expired and no longer applies to NNSA budget activities. According to these officials, this order expired in 2003, and they are unaware of any other DOE or federal government requirement to conduct budget validation reviews. They further stated that NNSA is conducting budget validation reviews only because it considers them to be a good business practice and that NNSA will work with DOE on updating the order if DOE initiates that process. NNSA officials stated that, if DOE updated and reissued DOE Order 130.1, it would comply to the extent that it had the resources to do so. However, DOE Order 130.1 remains listed on DOE’s “All Current Directives” website, and a senior DOE budget official told us that DOE Order 130.1 remains an active order. Additionally, this official stated that a key principle of DOE Order 130.1—federal oversight of contractors’ practices for budget formulation—is appropriate and valid. This official noted, however, that the order is outdated in terms of the terminology it uses to describe DOE—it was issued in 1995, predating the 2000 establishment of NNSA—and should be updated to reflect the department’s current organizational structure. Furthermore, in March 2009, we issued a cost-estimating guide—a compilation of cost-estimating best practices drawn from across industry and government—in which we reported that validation is considered a best practice to ensure that cost data are credible and reliable for use in justifying estimates to agency management. As a result, NNSA’s site and headquarters program office reviews of budget estimates are neither thorough nor documented. According to the Principal Deputy Administrator, NNSA continues to face challenges in moving away from its historical process of developing budgets based solely on the unreviewed estimates produced by NNSA M&O contractors, and the agency’s practices for understanding its program activity costs are not as sufficient as they need to be. In contrast, NNSA’s Office of Naval Reactors is jointly staffed and funded by both NNSA and the Navy and is therefore subject to Navy and DOD standards, as well as NNSA standards, for reviewing contractor-developed budget estimates. 
The Office of Naval Reactors conducts a semiannual process—known as budget confirmation—to review all contractor- developed budget estimates. This review is conducted and documented by NNSA technical experts and approved by the Director of the Office of Naval Reactors; this director manages both NNSA and the Navy’s activities within the office and has final budgetary decision authority. Officials in NNSA’s Office of Management and Budget told us that the Office of Naval Reactors’ process is much more rigorous than that used by other NNSA program offices we reviewed. Furthermore, NNSA has exempted the Office of Naval Reactors from NNSA’s formal budget validation review process because of management’s confidence in the quality of the office’s budget confirmation process. Senior officials in NNSA’s Office of Management and Budget told us that NNSA does not have the financial and personnel resources needed to conduct budget estimate reviews with the same rigor as the Navy and DOD. Furthermore, these officials said, the need for a formal review of M&O contractor-developed budget estimates is minimized within NNSA because site office officials have historical knowledge of work with NNSA’s M&O contractors that allows them to assess the reasonableness of M&O contractor-developed budget estimates without conducting a formal review and because of the “inherent trust” between NNSA and its M&O contractors that results from its contracting strategy with them. Specifically, one of these officials stated that, to a large extent, only the M&O contractors are in a position to know the scientific and engineering details of nuclear weapons and the associated work scope and funding necessary to ensure their safety and reliability. However, for the last 10 years, we have reported that NNSA has significant weaknesses in its ability to control costs and effectively manage its M&O contractors. We are therefore concerned that NNSA management continues to deny the need for NNSA to improve its processes for developing credible and reliable budget estimates. NNSA’s Formal Budget Validation Review Process Occurs Too Late to Affect Budget Decisions and Is Not Sufficiently Thorough We identified three key problems in NNSA’s annual budget validation review process—its formal process for assessing M&O contractor- and program-developed budget estimates. First, NNSA’s annual budget validation review process occurs too late in the budget cycle to inform NNSA, DOE, OMB, and congressional budget development or appropriations decisions. DOE Order 130.1, which is referenced in NNSA’s policy for its budget validation review process, states that agencies should thoroughly review budget estimates before using these estimates to develop budgets. However, NNSA’s Phase II and Phase III budget validation reviews are scheduled to begin 5 months after the President submits his budget to Congress. Additionally, during each of the past four budget validation cycles, NNSA did not complete its budget validation reports for at least 12 months following the President’s budget submission to Congress and at least 4 months after the beginning of the fiscal year for which NNSA reviewed the budget estimates. Therefore, Congress considered the budget request for NNSA and appropriated funds to it, and NNSA executed these funds to M&O contractors, before NNSA had published the results of the budget validation reviews. 
Because of their timing, NNSA’s budget validation reviews cannot inform NNSA’s budget development, DOE or OMB reviews, or Congress’ appropriation processes. According to NNSA policy, the timing of NNSA’s budget validation review process is designed to inform the NNSA, DOE, and OMB budgeting processes for the fiscal year following that for which the budget validation reviews were conducted. However, the timing of the publication of the budget validation review reports for each of the last 4 years precluded even such delayed consideration because they were issued following the OMB budget formulation process for the following fiscal year. Second, NNSA’s budget validation review process is not sufficiently thorough to ensure the credibility and reliability of NNSA’s budget. DOE Order 130.1 states that budgets should be based on budget estimates that have been thoroughly reviewed by site and headquarters program offices. However, NNSA’s budget validation review process is limited to assessing the processes M&O contractors and programs used to develop budget estimates rather than the accuracy of the resulting budget estimates. NNSA’s 2010 budget validation review guidance states that the agency lacks the resources and expertise needed to thoroughly evaluate the accuracy of budget estimates on its own and therefore relies on assessments of the reasonableness of the processes used by M&O contractors to develop budget estimates. NNSA officials from the Los Alamos and Y-12 Site Offices told us that they believe the budget validation review process would benefit NNSA more if it more thoroughly assessed the budgetary processes M&O contractors used to develop their budget estimates. Furthermore, NNSA policy and budget validation review guidance stipulate that 20 percent of the agency’s programs should be reviewed annually to help ensure the validity of the agency’s budget, but NNSA’s formal validation process actually results in a significantly smaller portion of its budget being reviewed. For example, in 2011, NNSA’s annual budget validation guidance identified four programs subject to budget validation review—the Engineering Campaign, Nuclear Counterterrorism Incident Response, Global Threat Reduction Initiative, and Fissile Materials Disposition—each of which is conducted at multiple NNSA sites. However, NNSA conducted validation reviews at only one site for each of these programs, resulting in formal validation reviews of approximately 12, 21, 15, and 4 percent of those programs’ total budgets, respectively; together, these reviews covered about 1.5 percent of NNSA’s budget request for fiscal year 2012. Third, other weaknesses in NNSA’s budget validation review process limit its effectiveness as a resource to assess the validity of its budget estimates. In particular, NNSA workgroups that reviewed the 2007 and 2008 budget validation review cycles recommended that NNSA formally evaluate the status of recommendations made during previous budget validation reviews. However, NNSA has not incorporated a formal mechanism for such an evaluation into its budget validation review process. NNSA officials at the Los Alamos and Y-12 site offices also told us that not having such an evaluative mechanism was a weakness in NNSA’s budget validation process. 
Without a formal mechanism, NNSA is limited in its ability to measure (1) any progress M&O contractors or programs have made in their processes for estimating budgets in response to recommendations from previous budget validation reviews and (2) the effectiveness of NNSA’s budget validation review process. For example, a 2010 budget validation review of the Readiness Campaign recommended that the program more formally document its budget processes, guidance, and estimating assumptions. Furthermore, a 2009 budget validation review of the Elimination of Weapons Grade Plutonium Production program found that the program could not provide documentation of its internal budget processes. However, in both instances, NNSA did not follow up to determine whether the programs had addressed these concerns during subsequent budget validation reviews. Additionally, budget validation reviews do not always include recommendations to improve M&O contractor or program processes for estimating budgets when they identify potentially serious weaknesses in those M&O contractors’ or programs’ ability to develop cost estimates. For example, according to a 2010 budget validation review of budget estimation activities for the Nonproliferation and Verification Research and Development program at Sandia National Laboratories, six of the eight projects reviewed lacked sufficient documentation to support their cost estimates, including two that lacked any supporting documentation. The report noted the importance of credible cost estimates, but it did not formally recommend any remedial improvements and rated the overall processes used to develop those cost estimates as satisfactory. Additionally, NNSA officials in the Defense Programs’ Office of Analysis and Evaluation told us that the cost information used to support budget validation review reports is often flawed or nonexistent. NNSA Has Implemented Some Tools to Support Decision Making on Resource Trade-offs but Has Stopped Using or Developing Other Capabilities During the programming phase of PPBE, NNSA uses a variety of management tools, such as integrated priority lists and requirements and resources assessments, to support this phase and assist senior managers in making decisions on resource trade-offs. However, it has stopped using or developing other capabilities. NNSA Has Developed and Implemented Some Tools to Assist Management in Deciding on Resource Trade-offs NNSA uses the following management tools to decide on resource trade-offs during the programming phase of its PPBE process: Integrated priority lists. NNSA’s policy for the programming phase stipulates that each of NNSA’s nine program offices is to annually develop an integrated priority list that ranks program activities according to their importance for meeting mission requirements. These lists provide senior NNSA and DOE managers with an understanding of how various funding scenarios would affect program activities. Specifically, these lists rank the priority of program activities that are within anticipated appropriation levels—which are of the highest priority—as well as those that NNSA would fund if the appropriation levels were sufficiently high to do so. For example, the program activity listed last on an integrated priority list would be the first to forgo funding if appropriation levels are lower than anticipated. Conversely, these lists define program activities—unfunded requirements—that would be funded if appropriation levels are higher than anticipated. 
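To make the cut-line mechanics of an integrated priority list concrete, the following sketch shows how a ranked list partitions activities into funded items and unfunded requirements at a given funding level. This is an illustrative model only: the activity names, costs, and funding level are hypothetical and are not drawn from NNSA documents, and NNSA's actual lists are prepared and weighed by program offices and senior managers rather than generated by software.

```python
# Illustrative sketch only: the activities, costs (in millions of dollars),
# and funding level below are hypothetical, not NNSA data.

def split_priority_list(ranked_activities, funding_level):
    """Partition a ranked list of (name, cost) pairs at the funding cut line.

    Activities are assumed to be listed in priority order, highest first.
    Once the cumulative cost exceeds the funding level, the remaining,
    lower-priority activities are treated as unfunded requirements.
    """
    funded, unfunded = [], []
    cumulative_cost = 0.0
    cut_reached = False
    for name, cost in ranked_activities:
        if not cut_reached and cumulative_cost + cost <= funding_level:
            funded.append(name)
            cumulative_cost += cost
        else:
            cut_reached = True
            unfunded.append(name)
    return funded, unfunded

# Hypothetical ranked activities for a single program office.
ranked_activities = [
    ("Activity A", 40.0),
    ("Activity B", 25.0),
    ("Activity C", 20.0),
    ("Activity D", 15.0),
]

funded, unfunded = split_priority_list(ranked_activities, funding_level=90.0)
print("Funded activities:", funded)        # ['Activity A', 'Activity B', 'Activity C']
print("Unfunded requirements:", unfunded)  # ['Activity D']
```

Lowering the hypothetical funding level to, say, 80.0 would push Activity C below the cut line as well, which is the kind of effect the lists are intended to make visible to senior managers.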
NNSA instructions for the programming phase stipulate that the agency is to combine the nine program office-developed integrated priority lists into four that correspond to the four congressional appropriations NNSA receives. Three of the integrated priority lists—those of the Offices of the Administrator, Defense Nuclear Nonproliferation, and Naval Reactors—correspond directly to specific appropriations, but NNSA does not combine the six others that represent activities funded by the Weapons Activities appropriation into a single integrated priority list. NNSA officials stated that these six others represent separate and distinct mission areas and only the Administrator can decide on the resource trade-offs among them. Of the six program offices funded by the Weapons Activities appropriation, Defense Programs accounts for a large majority—approximately 82 percent—of the funding requested in NNSA’s fiscal year 2013 budget submission to Congress. According to officials in NNSA’s Office of Management and Budget, the Administrator is responsible for deciding on how to allocate resources across program offices. However, these decisions are not documented or reflected in a single integrated priority list for program activities funded by the Weapons Activities appropriation. By not combining these lists, NNSA is limiting the formal documentation available to inform DOE about how various Weapons Activities appropriation funding scenarios would affect the program activities conducted by these six program offices. Specifically, because these six lists are not integrated, it is unclear which program activities would be affected by changes to appropriation levels or which programs across the six lists are of the highest priority. Requirements and Resources Assessments. During the 2010 and 2011 programming cycles, NNSA used its Requirements and Resources Assessment process to independently assess the need to conduct new program activities and unfunded requirements. According to the NNSA handbook for this process, officials in NNSA’s Office of Management and Budget review program offices’ budget submissions for activities that are either above anticipated funding levels or are for new activities. For these assessments, officials are to analyze specific factors related to these activities, such as their need for meeting agency priorities and the reasonableness of the assumptions used to produce their budget estimates. The objective of this process is to ensure that new program activities and unfunded requirements are needed to meet NNSA priorities. For example, according to officials in NNSA’s Office of Management and Budget, the use of the Requirements and Resources Assessment process was a contributing factor in reducing the amount of unfunded program activities included in NNSA’s budget from approximately $1 billion for fiscal year 2012 to approximately $80 million for fiscal year 2013. Furthermore, draft NNSA guidance states that the process has identified inconsistencies in the quality of estimates and the level of insight and understanding program managers have regarding the fidelity of the estimates supporting their budgets. According to officials in NNSA’s Office of Management and Budget, this process is a simple and effective tool for providing management with additional information on the need to conduct proposed new program activities or unfunded requirements. 
However, these officials also stated that this process is time-consuming and would not be practical or efficient to apply to the entirety of NNSA program activities because it was designed to assess program components rather than entire programs; they added that other types of program reviews or validations would be better suited for conducting program needs analysis on an enterprise-wide basis. Additionally, because the NNSA Office of Integration and Assessments, which was responsible for conducting these assessments, was dissolved in 2010, officials in NNSA’s Office of Management and Budget told us that they may discontinue the use of the Requirements and Resources Assessment process in future programming cycles. Furthermore, in the current austere budget environment, they do not foresee any programs proposing activities that are either new or above anticipated funding levels. Therefore, the continued use of this process in future programming cycles is uncertain. However, we believe that NNSA has demonstrated that this process can be an important tool for assessing the need to fund certain activities to meet its mission requirements. Enterprise Portfolio Analysis Tool. NNSA’s Office of Defense Programs is implementing a data system—the Enterprise Portfolio Analysis Tool—designed to provide a consistent framework for managing the PPBE process within Defense Programs, which accounts for 54 percent, or $6.2 billion, of the President’s $11.5 billion fiscal year 2013 budget request for NNSA. As we testified in February 2012, a tool such as this could help NNSA obtain the basic data it needs to make informed management decisions, determine return on investment, and identify opportunities for cost saving. Currently, this tool includes a mechanism to identify when decisions on resource trade-offs must be made if, for example, M&O contractor-developed budget estimates for program requirements exceed the budget targets NNSA provided for those programs. Additionally, the tool is to incorporate Defense Programs’ common work activity structure—known as its work breakdown structure—to facilitate an analysis of consistent budget data from across the NNSA enterprise. Specifically, the tool may allow Defense Programs managers to compare the budget estimates for analogous activities across the nuclear security enterprise regardless of which M&O contractor or program is conducting them. Furthermore, Defense Programs officials stated that they eventually plan to use this tool to compare budget estimates of program activities with the amounts the programs ultimately expended, but they said that the introduction of this capability is not imminent. According to Defense Programs and M&O contractor officials, the implementation of this tool is placing an additional labor burden on NNSA M&O contractors because of the quantity of historical budget data that need to be entered into it. However, according to Defense Programs officials, once these initial historical data are entered, the M&O contractors will need to update the system annually with the most recent year’s data. NNSA Does Not Use Capabilities That Were Previously Implemented or Partially Developed to Support Its Programming Activities NNSA no longer has an independent analytical capability to perform such functions as reviewing proposals for program activities and verifying cost estimates. 
In addition, since 2009, the NNSA Administrator has not formally documented his decisions on resource trade-offs at the close of the programming phase in the AFR. Furthermore, NNSA has not completed cost-estimating guidance to assist NNSA program managers in identifying reliable M&O contractor practices for estimating costs for operations and sustainment activities. By not using these capabilities, NNSA has reduced its ability to decide on resource trade-offs because it has not enhanced, made formal, or implemented capabilities that it had already, to varying degrees, developed or used. The DOE Inspector General and GAO recommended in 2003 and 2007, respectively, that NNSA establish an independent analysis unit to perform such functions as reviewing proposals for program activities and verifying cost estimates. NNSA agreed with these recommendations and, in 2009, instituted the Office of Integration and Assessments to identify, analyze, assess, and present to senior NNSA management options for managing its programs and making decisions on resource trade-offs. The specific responsibilities of this office included analyzing program performance, evaluating programming and funding alternatives, and assessing the implementation and effectiveness of process improvement initiatives. Furthermore, this office managed the Requirements and Resources Assessment process during the 2010 programming cycle. However, NNSA disbanded the office in 2010, 18 months after it was formally created. NNSA officials also told us that the office was never properly staffed, which limited its effectiveness. In the memorandum establishing the Office of Integration and Assessments, NNSA stated that it expected the office, in conjunction with DOE’s Office of Cost Analysis, to provide DOD-like analytical resources across NNSA. Since then, however, DOE has also eliminated its Office of Cost Analysis. With both of these offices now gone, neither NNSA nor DOE has independent cost assessment or program evaluation capabilities analogous to those of DOD. In contrast to the rest of NNSA, the Office of Naval Reactors is organized as a separate entity within NNSA reporting to both NNSA and the U.S. Navy and is therefore subject to the Navy’s independent analytical resources, such as assessments by the Naval Sea Systems Command’s Office of Cost Engineering and Industrial Analysis and the Naval Center for Cost Analysis, which conduct independent reviews and analyses of program cost estimates. Furthermore, DOD has an Office of Cost Assessment and Program Evaluation, which has a similar function but with a purview that extends across DOD, including the Navy. These layers of independent review approximate NNSA’s vision for independent analysis, as described in the memorandum establishing the Office of Integration and Assessments. Following the dissolution of the Office of Integration and Assessments in 2010, NNSA’s Office of Defense Programs created the Office of Analysis and Evaluation to conduct similar program review functions. However, the capabilities of this office are limited by several factors. For example, because the office is positioned within Defense Programs, it does not have purview to conduct analysis on any of NNSA’s other programs, which, in total, constitute nearly half of the agency’s budget request for fiscal year 2013. 
Additionally, according to Defense Programs officials, this office does not have the capability to self-initiate reviews of programs but rather is instructed by Defense Programs’ management on what activities to assess, thereby limiting the office’s independence. Furthermore, NNSA officials from this office stated that properly staffing the office remains a challenge because many qualified individuals left DOE and NNSA when the Offices of Cost Analysis and Integration and Assessments, respectively, were eliminated. Even though NNSA has had difficulty in maintaining an agencywide independent analytical capability, NNSA’s Principal Deputy Administrator told us that NNSA remains supportive of the concept of an independent analytical unit to conduct assessments of programs agencywide. However, senior NNSA officials told us that creating and developing the capabilities of such an office would be difficult in the current budget environment and that NNSA therefore has no current plans to institute such a capability. The NNSA Administrator has not formally documented his decisions on resource trade-offs at the close of the programming phase in the AFR since 2009, which is inconsistent with NNSA policy and instructions. When issued, this document articulated the Administrator’s rationale and methodology for deciding on resource trade-offs during the programming phase of the PPBE process—which one senior official in NNSA’s Office of Management and Budget described as an important component of the PPBE process—to support his budget proposal to DOE and to better facilitate NNSA’s participation in DOE’s Strategic Resources Review. According to senior NNSA officials, the Administrator considered the AFR to be a useful management tool but decided to discontinue issuing it because of concerns that its contents, which are predecisional Executive Branch deliberative material and embargoed from public release by OMB Circular A-11, could be leaked and thereby reduce the flexibility of DOE and OMB in making final decisions regarding the President’s Budget. Instead of the AFR, the Administrator now develops an internal document called “Administrator’s Preliminary Decisions,” which is not required in NNSA policy, guidance, or instructions; contains more generalized information; and does not have the rationales, methodologies, and justifications for decision making on resource trade-offs that were previously incorporated into the AFR. NNSA developed a draft guide—the Program Managers’ Guide to Understanding and Reviewing Cost Estimates for Operations and Sustainment Activities—in 2010 to assist NNSA program managers in identifying reliable M&O contractor practices for estimating costs for operations and sustainment activities—activities not related to construction; according to this guide, these activities constitute approximately 80 percent of NNSA’s annual budget. This guide was also created to supplement the information provided in NNSA’s Business Operating Procedure 50.005, Establishment of an Independent Cost Estimate Policy, and interim Cost Estimating Guide 50.005, which identifies best practices for preparing cost estimates. The Program Managers’ Guide to Understanding and Reviewing Cost Estimates for Operations and Sustainment Activities was largely completed but never finalized before NNSA dissolved the Office of Integration and Assessments, which had drafted the guide, and NNSA officials said the agency has no plans to complete or issue it. 
According to officials in NNSA’s Office of Management and Budget, NNSA drafted this guide because it recognized that supplemental information focused on operations and sustainment activities cost estimates—the development of which, according to this guide, is not governed by any specific NNSA guidance or processes—could enhance the tools available to program managers in evaluating cost estimates and how they are translated into budget estimates. The objective of the guide was to provide an instructive document to facilitate program managers’ ability to understand what constitutes a rigorous process for ensuring quality operations and sustainment cost estimates on an ongoing basis and to evaluate the reasonableness of those estimates. This guide also defined key components of cost estimating to clarify the responsibilities and expectations of NNSA program managers and included instructions for how NNSA program managers can assess the quality of budget estimates submitted by M&O contractors. NNSA officials with Defense Programs’ Office of Analysis and Evaluation told us that additional guidance on how to assess the costs of operations and sustainment activities could enhance program managers’ ability to assess the reliability and credibility of cost estimates. Conclusions NNSA has established a formal four-phase PPBE process that uses short- and long-term planning to define program priorities and match them to available budgetary resources. However, DOE and NNSA have not taken adequate steps to make this process as effective and efficient as possible. In particular, DOE Order 130.1, which defines DOE’s provisions for budget activities, references outdated terminology and organizations that no longer exist within the department, leading to confusion regarding the order’s applicability and whether its implementation is required. As a result, NNSA believes that the order has expired and that it is not required to adhere to its provisions. By not adhering to these provisions, NNSA is reducing the credibility of its budget proposals. Moreover, NNSA continues to rely heavily on its M&O contractors to develop budget estimates without an effective, thorough review of the validity of those estimates. Without thorough reviews of budget estimates by site and headquarters program offices, NNSA cannot have a high level of confidence in its budget estimates or in its ability to make informed decisions on resource trade-offs and to enhance the credibility and reliability of its budget. Furthermore, NNSA’s formal budget validation review process does not sufficiently ensure the credibility and reliability of NNSA’s budget, primarily because of deficiencies in the timing of these reviews. Also, without a formal mechanism to evaluate the status of recommendations made during previous budget validation reviews, NNSA is limited in its ability to measure any progress M&O contractors or programs have made in their budget estimating processes. NNSA has reduced its ability to decide on resource trade-offs because it has not enhanced, made formal, or implemented capabilities that it has already, to varying degrees, developed or used. In particular, NNSA does not follow its instructions for preparing an integrated priority list for each congressional appropriation, as it does not combine the six priority lists that represent activities funded by the Weapons Activities appropriation into a single integrated list. 
By not combining these lists into a single integrated priority list, NNSA is limiting the formal documentation available to inform DOE about which program activities would be affected by changes to this appropriation. Moreover, NNSA instituted and then disbanded an independent analytical capability that would have provided it with independent assessments of the reasonableness and affordability of various programs and projects proposed by NNSA offices. By disbanding its independent analytical capability, NNSA is losing its ability to improve its cost-estimating capabilities and better ensure that its project cost estimates are credible and reliable. Because of the fiscal constraints in the current budget environment, it is all the more critical that NNSA have the capability to conduct independent cost analyses to enhance its ability to make the most effective and efficient decisions on resource trade-offs. Despite previous recommendations that DOE’s Inspector General made in 2003, and that we made in 2007, to institute an independent analytical capability to assess programs throughout all of NNSA, NNSA continues to lack such a function. Not having this capability could preclude NNSA from making the best decisions about what activities to fund and whether they are affordable. In addition, NNSA may cease using its Requirements and Resources Assessment process—which is intended to provide some independent analysis of new program activities and unfunded requirements—in future PPBE budget cycles because it does not anticipate program proposals for new activities or unfunded requirements. By not retaining this process, NNSA would lose an important tool for assessing the necessity to fund certain activities in order to meet its mission requirements. Furthermore, NNSA no longer follows its policy to issue the AFR. Without such a document, NNSA and DOE have no formal record of the Administrator’s rationale and methodology for deciding on resource trade-offs during the programming phase of the PPBE process. We recognize that NNSA needs to hold confidential, internal budgetary and resource trade-off deliberations; however, we do not believe that this need supersedes NNSA policy or the benefits provided by documented decision making during programming, which one senior NNSA official described to us as an important component of NNSA’s PPBE process. Not issuing the AFR (or some similarly precise documentation) places the Administrator in conflict with official NNSA policy and with an important PPBE precept—the importance of transparency. Finally, NNSA developed draft guidance in 2010 to assist NNSA program managers in identifying reliable M&O contractor practices for estimating costs for operations and sustainment activities. Such guidance would better equip NNSA program managers to more accurately evaluate the reasonableness of cost estimates, but this guidance is in draft form and NNSA has no plans to complete and issue it. Without such guidance, NNSA program managers are limited in their ability to assess the reliability and credibility of budget estimates. 
Recommendations for Executive Action To enhance NNSA’s ability to better ensure the validity of its budget submissions, and to decide on resource trade-offs, we recommend that the Secretary of Energy take the following seven actions: Direct the DOE Office of Budget to formally evaluate DOE Order 130.1, revise it as necessary, and communicate any revisions to the NNSA Administrator so that the agency will have updated provisions for assessing the quality of its budget estimates. Direct the Administrator of NNSA to: Develop a formal process, or amend its budget validation review process, to ensure that all budget estimates are thoroughly reviewed by site and headquarters program offices, and that these reviews are timed to inform NNSA, DOE, OMB, and congressional budget decisions. Once this process is developed, incorporate a formal mechanism to evaluate the status of recommendations made during previous budget validation reviews so that NNSA can measure M&O contractors’ and programs’ progress in responding to deficiencies in their budget estimates. Combine the integrated priorities lists for each of the program offices funded within the Weapons Activities appropriation into a single integrated priorities list, as stipulated in NNSA instructions, to better inform DOE about which program activities would be affected by changes to this appropriation. Reinstitute an independent analytical capability to provide senior decision makers with independent program reviews, including an analysis of different options for deciding on resource trade-offs, and to help NNSA make the best decisions about what activities to fund and whether they are affordable. As part of this capability, formally retain the Requirements and Resources Assessment process to review proposed new activities and unfunded requirements. Reinstitute the issuance of the Administrator’s Final Recommendations to document the Administrator’s rationale and methodology for deciding on resource trade-offs to support his budget proposal to DOE and to better facilitate NNSA’s participation in DOE’s budget process. Complete and formally issue the Program Managers’ Guide to Understanding and Reviewing Cost Estimates for Operations and Sustainment Activities so that program managers will be better equipped to evaluate the reasonableness of cost estimates. Agency Comments and Our Evaluation We provided DOE with a draft of this report for its review and comment. In its written comments, NNSA, responding on behalf of DOE, provided observations on the report’s findings and stated that it generally agreed in principle with six of our seven recommendations and did not concur with one. NNSA did not concur with our recommendation to combine the integrated priorities lists for all program offices funded by the Weapons Activities appropriation into a single integrated priorities list, as is stipulated by NNSA instructions for the programming phase of PPBE. NNSA agrees that the integrated priorities lists are a useful tool to facilitate NNSA and DOE decision-making. However, NNSA states that it believes reaching management consensus on a single integrated priorities list for these program offices would be a difficult, time-consuming process and that its current approach for deciding on resource trade-offs is effective and efficient. 
We acknowledge that NNSA uses a variety of tools in addition to integrated priorities lists to conduct programming activities, but we continue to believe that combining the integrated priorities lists for all program offices funded by the Weapons Activities appropriation could enhance the agency’s ability to support its decisions on resource trade-offs for DOE consideration during the Strategic Resources Review. However, NNSA stated in its comments that it would consider the development of more robust integrated priority lists if circumstances require changes to its current approach. NNSA further acknowledged that aspects of its PPBE process could be improved but disagreed with our report’s characterization of its budget estimate review processes as not being thorough. NNSA commented that it believes that our conclusions overemphasize some procedural areas for potential improvement, without accurately considering the cumulative effectiveness of NNSA’s PPBE process as a whole. We continue to believe that the agency’s processes for reviewing budget estimates are not sufficiently thorough to ensure the credibility and reliability of those estimates and do not meet the provisions defined in DOE Order 130.1. Specifically, the reviews conducted by site and headquarters program office officials are informal and undocumented, and NNSA’s budget validation review process—the agency’s formal process for assessing M&O contractor- and program-developed budget estimates—does not assess the accuracy of budget estimates and is conducted for a small portion of the agency’s annual budget. NNSA’s letter is reproduced in appendix II. NNSA also provided technical comments, which we incorporated throughout the report as appropriate. We are sending this report to the Secretary of Energy, the Administrator of NNSA, the appropriate congressional committees, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology The objectives of our review were to determine (1) the current structure of the National Nuclear Security Administration’s (NNSA) planning, programming, budgeting, and evaluation (PPBE) process; (2) the extent to which NNSA reviews its budget estimates; and (3) how NNSA decides on resource trade-offs in its PPBE process. To determine the current structure of NNSA’s PPBE process, we reviewed the NNSA policies and other headquarters-developed instructions and guidance documents that define how the process is designed to function. We also reviewed program-specific PPBE documentation from the Offices of Defense Programs, Defense Nuclear Nonproliferation, and Naval Reactors; these three offices correspond to NNSA’s primary missions and collectively account for approximately 85 percent of the President’s fiscal year 2013 NNSA budget submission to Congress. We also interviewed officials from NNSA’s Office of Management and Budget, which is responsible for managing NNSA’s PPBE process, as well as the offices of Defense Programs, Defense Nuclear Nonproliferation, and Naval Reactors to discuss how NNSA’s PPBE process is designed to function. 
To determine the extent to which NNSA reviews its budget estimates, we reviewed DOE Order 130.1 and NNSA policies, instructions, and guidance that define how such reviews are to be conducted. We also analyzed documentation of the formal budget validation reviews conducted by NNSA for the last five review cycles, as well as the results of two NNSA workgroups that evaluated the budget validation review process. Furthermore, we interviewed officials involved in the development, oversight, or execution of NNSA budget estimate reviews from the NNSA Offices of the Administrator, Management and Budget, Defense Programs, Defense Nuclear Nonproliferation, Naval Reactors, and Acquisition and Project Management; the site offices for Los Alamos, Sandia, and the Y-12 National Security Complex, and the Naval Reactors Laboratory Field Office; DOE officials from the Office of Budget; and M&O contractor officials from Los Alamos and Sandia National Laboratories, the Y-12 National Security Complex, and Bettis Atomic Power Laboratory. Because NNSA’s Office of Naval Reactors is organized as a separate entity within NNSA reporting both to NNSA and the U.S. Navy, we also met with Navy officials from its Offices of Financial Management and Budgeting, and Cost Engineering and Industrial Analysis. To determine how NNSA decides on resource trade-offs, we reviewed NNSA policies, instructions, and guidance for its programming process. Based on these documents, we identified the tools that NNSA uses, or has used, to assist NNSA management in deciding on resource trade-offs and reviewed documentation of how these tools were applied by program offices and NNSA management during the programming phases of previous PPBE cycles. We also interviewed officials from the NNSA Offices of the Administrator, Management and Budget, Defense Programs, Defense Nuclear Nonproliferation, Naval Reactors to discuss how they decide on, and document, resource trade-offs. We conducted this performance audit from July 2011 to July 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the U.S. Department of Energy Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Dan Feehan (Assistant Director), Robert Baney, Matthew Tabbert, and Eugene Wisnoski made significant contributions to this report. Cheryl Peterson, Jeremy Sebest, Rebecca Shea, Carol Herrnstadt Shulman, and Kiki Theodoropoulos provided technical assistance.
NNSA, a semiautonomous agency within DOE, is responsible for the nation’s nuclear weapons, nonproliferation, and naval reactors programs. Since its inception in 2000, the agency has faced challenges in its ability to accurately identify the costs of major projects. In addition, both the DOE Inspector General, in 2003, and GAO, in 2007, reported concerns with NNSA’s PPBE process, specifically in how NNSA validates budget estimates and decides on resource allocations or trade-offs. GAO was asked to review how NNSA manages programming and budgeting through its PPBE process. GAO examined (1) the current structure of NNSA’s PPBE process, (2) the extent to which NNSA reviews its budget estimates, and (3) how NNSA decides on resource trade-offs in its PPBE process. To carry out its work, GAO reviewed NNSA policies, instructions, guidance, and internal reports documenting the agency’s PPBE process and interviewed NNSA, DOE, and M&O contractor officials. The National Nuclear Security Administration’s (NNSA) planning, programming, budgeting, and evaluation (PPBE) process provides a framework for the agency to plan, prioritize, fund, and evaluate its program activities. Formal policies guide NNSA and management and operating (M&O) contractors through each of four phases of the agency’s PPBE cycle—planning, programming, budgeting, and evaluation. These phases appear to be sequential, but the process is continuous and concurrent because of the amount of time required to develop priorities and review resource requirements, with at least two phases ongoing at any time. NNSA does not thoroughly review budget estimates before it incorporates them into its proposed annual budget. Instead, NNSA relies on informal, undocumented reviews of such estimates and its own budget validation review process—the formal process for assessing budget estimates. Neither of these processes adheres to Department of Energy (DOE) Order 130.1, which defines departmental provisions for the thoroughness, timing, and documentation of budget reviews. NNSA officials said the agency does not follow the order because it expired in 2003. Nevertheless, the order is listed as current on DOE’s website, and a senior DOE budget official confirmed that it remains in effect, although it is outdated in terminology and organizational structure. Additionally, according to NNSA officials, the agency’s trust in its contractors minimizes the need for formal review of its budget estimates. GAO identified three key problems in NNSA’s budget validation review process. First, this process does not inform NNSA, DOE, Office of Management and Budget, or congressional budget development decisions because it occurs too late in the budget cycle—after the submission of the President’s budget to Congress. Second, this process is not sufficiently thorough to ensure the credibility and reliability of NNSA’s budget because it is limited to assessing the processes used to develop budget estimates rather than the accuracy of the resulting estimates and is conducted for a small portion of NNSA’s budget—approximately 1.5 percent of which received such review in 2011. Third, other weaknesses in this process, such as no formal evaluative mechanism to determine if corrective actions were taken in response to previous findings, limit the process’s effectiveness in assessing NNSA’s budget estimates. NNSA uses a variety of management tools to decide on resource trade-offs during the programming phase of the PPBE process. 
One of these tools, integrated priority lists—which rank program activities according to their importance for meeting mission requirements—is to provide senior managers with an understanding of how various funding scenarios would affect program activities. However, NNSA has weakened its ability to gauge the effects of resource trade-offs. For example, in 2010, NNSA disbanded its Office of Integration and Assessments, created in response to DOE Inspector General and GAO recommendations that NNSA establish an independent analysis unit to perform such functions as reviewing proposals for program activities and verifying cost estimates. NNSA agreed with these recommendations and, in 2009, instituted the office to identify, analyze, and assess options for deciding on resource trade-offs. Without an independent analytical capability, NNSA may have difficulty making the best decisions about what activities to fund and whether they are affordable.
Background SBA was created in 1953 to assist and protect the interests of small businesses, in part by addressing constraints in the supply of credit for these firms. The 7(a) program, named after the section of the Small Business Act that authorized it, is SBA’s largest business loan program. The program is intended to serve creditworthy small business borrowers who cannot obtain credit through a conventional lender at reasonable terms and do not have the personal resources to provide financing themselves. Under the 7(a) program, SBA guarantees loans made by commercial lenders to small businesses for working capital and other general business purposes. These lenders are mostly banks, but some are nondepository lenders, including small business lending companies (SBLCs). The guarantee assures the lender that if a borrower defaults on a loan, SBA will purchase the loan and the lender will receive an agreed-upon portion (generally between 50 percent and 85 percent) of the outstanding balance. For a majority of 7(a) loans, SBA relies on lenders with delegated authority to process and service 7(a) loans and to ensure that borrowers meet the program’s eligibility requirements. To be eligible for the 7(a) program, a business must be an operating for-profit small firm (according to SBA’s size standards) located in the United States and meet the “credit elsewhere” requirement, including the personal resources test. Within the 7(a) program, there are several delivery methods—including regular 7(a), the Preferred Lenders Program (PLP), and SBA Express. Under the regular (nondelegated) 7(a) program, SBA makes the loan approval decision, including the credit determination. Under PLP and SBA Express, SBA delegates to the lender the authority to make loan approval decisions, including credit determinations, without prior review by SBA. The maximum loan amount under the SBA Express program is $350,000 (as opposed to $5 million for 7(a) loans). This program allows lenders to utilize, to the maximum extent possible, their respective loan analyses, procedures, and documentation. In return for the expanded authority and autonomy provided by the program, SBA Express lenders agree to accept a maximum SBA guarantee of 50 percent. Regular (nondelegated) 7(a) loans and delegated 7(a) loans made by PLP lenders generally have a maximum guarantee of 75 or 85 percent, depending on the loan amount. In June 2007, under its own authority, SBA established the Patriot Express pilot loan program, which has features that are similar to those of the SBA Express and other 7(a) loan programs. Like the SBA Express program, the Patriot Express program allows lenders to use their own loan analyses and documents to expedite loan decisions for eligible borrowers. However, the Patriot Express program has a different guarantee rate than SBA Express and different eligibility requirements. Patriot Express borrowers must have a business that is owned and controlled (51 percent or more) by the following members of the military community: veterans (other than those dishonorably discharged), active duty military participating in the military’s Transition Assistance Program, reservists or National Guard members, a spouse of any of these groups, a widowed spouse of a service member who died while in service, or a widowed spouse of a veteran who died of a service-connected disability. 
Like the 7(a) program, the Patriot Express program provides the same loan guarantee to SBA-approved lenders on loan amounts up to $500,000, and the loan proceeds can be used for the same purposes. SBA initially intended to operate the Patriot Express pilot for about 3 years, after which it would evaluate the program. However, SBA announced on December 14, 2010, that it would continue to operate the program for at least 3 more years to allow the agency to evaluate the program. SBA determined that it was premature to assess the results of the pilot because most of the loans were made in the previous 2 years and there had not been enough time to measure their performance. Appendix II compares the key features of the Patriot Express program to those of the regular 7(a) and SBA Express programs. Figure 1 depicts the Patriot Express loan process, including the roles played by the lender and SBA in the transaction and the fees associated with the loans. A lender may request that SBA honor its guarantee by purchasing the loan if a borrower is in default on an SBA loan for more than 60 calendar days and if the borrower is unable to cure the loan after working with the lender. The lender is required by regulation to liquidate all business personal property collateral before demanding that SBA honor the guarantee. As shown in figure 2, after the lender has liquidated all business personal property collateral, it submits the purchase request to one of SBA's Office of Financial Program Operations centers, which processes loan guarantee requests. The center reviews the lender's package to determine if it has complied with SBA rules and regulations. If SBA finds that the lender has complied with the agency's rules and regulations and conducted proper due diligence when originating the loan, SBA honors the guarantee and pays the lender the guaranteed portion of the outstanding loan amount. According to SBA officials, the 7(a) program—including its subprograms, such as SBA Express and Patriot Express—is projected to be a "zero subsidy" program in fiscal year 2014, meaning that the program does not require annual appropriations of budget authority for new loan guarantees. To offset some of the costs of the program, such as the costs of purchasing defaulted loans, SBA assesses lenders two fees on each 7(a) loan, including Patriot Express loans. The guarantee fee must be paid by the lender at the time of application for the guarantee or within 90 days of the loan being approved, depending upon the loan term. This fee is based on the amount of the loan and the level of the guarantee, and lenders can pass the fee on to the borrower. The ongoing servicing fee must be paid annually by the lender and is based on the outstanding balance of the guaranteed portion of the loan. SBA's Office of Credit Risk Management is responsible for overseeing 7(a) lenders, including those with delegated authority. SBA created this office in fiscal year 1999 to better ensure consistent and appropriate supervision of SBA's lending partners. The office is responsible for managing all activities regarding lender oversight, including lender risk ratings and lender activities, and preparing written reports based on such oversight. Patriot Express Loans Default at a Higher Rate Than Other SBA Loans, and Costs Have Exceeded Overall Program Income Since 2007 From 2007 through 2012, SBA made 8,511 Patriot Express loans. The majority of these loans were valued below $150,000, and close to half were uncollateralized loans valued below $25,000. 
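The purchase-and-offset mechanics described earlier in this passage (SBA buys the guaranteed share of the outstanding balance on a defaulted loan, net of collateral proceeds, and recoups part of the cost through guarantee and servicing fees) can be sketched numerically. The figures and fee rates below are illustrative assumptions only, not SBA's published fee schedule or purchase rules.

```python
# Illustrative sketch of the guarantee mechanics described above
# (assumed numbers; not SBA's actual fee schedule or purchase procedures).

def guarantee_purchase_amount(outstanding_balance: float,
                              guarantee_pct: float,
                              collateral_proceeds: float = 0.0) -> float:
    """Amount SBA would pay the lender when honoring the guarantee: the
    guaranteed share of the outstanding balance, reduced by any collateral
    proceeds applied before purchase."""
    return max(outstanding_balance - collateral_proceeds, 0.0) * guarantee_pct

# Example: a $100,000 Patriot Express loan (85 percent guarantee, per the
# passage, for loans of $150,000 or less) defaults with $80,000 outstanding
# and $10,000 recovered from collateral before purchase.
payout = guarantee_purchase_amount(80_000, 0.85, 10_000)
print(f"SBA purchase amount: ${payout:,.0f}")   # $59,500

# Fees partially offset such costs.  Hypothetical rates for illustration only:
guarantee_fee = 0.02 * 0.85 * 100_000     # one-time fee tied to the guaranteed portion
servicing_fee = 0.005 * 0.85 * 80_000     # annual fee on the outstanding guaranteed balance
print(f"Illustrative fees: ${guarantee_fee:,.0f} up front, ${servicing_fee:,.0f} per year")
```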
Although Patriot Express loans represent a fraction of SBA's larger loan portfolio and are concentrated among 11 lenders, these loans have defaulted at higher rates compared to similar SBA loans made in the same time frame. At the current default and recovery rates, the costs of the Patriot Express program will likely continue to exceed overall program income. Prior to reorganization in May 2007, the office was called the Office of Lender Oversight. Lenders Have Made 8,511 Patriot Express Loans Since 2007 From the start of the program through the fourth quarter of 2012, lenders made a total of 8,511 Patriot Express loans. Taken together, these loans are valued at $702,753,406, with an average of about $82,570 per loan. As shown in figure 3, after a rapid expansion in the first 2 years of the program from 2007 through 2009, the number of Patriot Express loans declined from 2,176 approved in 2009 to 869 approved in 2012. Similarly, the total loan amounts of Patriot Express loans approved each year grew from approximately $67 million in 2007 to over $150 million in 2008 and 2009, but have since decreased. The higher numbers of Patriot Express loans approved in 2009 and 2010 may be attributable, in part, to the American Recovery and Reinvestment Act of 2009 (ARRA) and subsequent legislation, which provided funding to temporarily subsidize the overall 7(a) guarantee program's fees and to increase the maximum loan guarantee percentage from 75 or 85 percent to 90 percent, with the exception of loans approved under the SBA Express 7(a) subprogram. With a 5 to 15 percent increase in the maximum allowed guarantee through ARRA, lenders had a greater incentive to approve SBA loans in general (including Patriot Express loans), knowing that SBA would guarantee a higher percentage of the loan. Figure 3 also shows that average loan amounts have varied over the years. For loans approved in 2007, the average loan amount was about $100,000, decreasing to about $70,000 in 2009, and increasing since then to just under $100,000 in 2012. Based on our analysis of SBA data from 2007 through 2012, about 67 percent of borrowers used Patriot Express loans for working capital, and about half of these loans funded businesses that were either new or had been in existence for less than 2 years. The majority of Patriot Express loans approved since the program's inception are valued at 30 percent or less of the maximum loan limit, and about half are small enough that they do not require collateral. Although SBA allows Patriot Express loans of up to $500,000, about 84.2 percent of the loans made since 2007 (7,166) were below $150,000. Further, 41.2 percent of Patriot Express loans (3,509) were $25,000 or less. More than 64 percent of loans up to $25,000 were provided by one lender, and this lender accounted for about 26 percent of total loans in the program. This lender primarily provided loans between $5,000 and $25,000, and its average Patriot Express loan made from 2008 through 2012 was $9,759. As noted previously, loans under the Patriot Express program below $25,000 do not require collateral. The Patriot Express program is highly concentrated in a small number of lenders. For example, the top 11 lenders (in terms of number of loans made) represent 52 percent of the Patriot Express loans made since the program's inception (see table 1). These top 11 lenders accounted for 27.55 percent of the total amount approved for the Patriot Express program. 
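Figures like the roughly $82,570 average loan size and the lender concentration shares cited in this passage reduce to straightforward aggregation of loan-level records. The sketch below is a minimal example under an assumed record layout; it is not SBA's data schema or analysis code.

```python
# Minimal aggregation sketch (assumed record layout, not SBA's schema) showing
# how averages and lender concentration shares like those cited above are derived.
from collections import defaultdict

def summarize(loans: list[dict]) -> dict:
    total = sum(l["amount"] for l in loans)
    by_lender = defaultdict(float)
    for l in loans:
        by_lender[l["lender"]] += l["amount"]
    top_share = max(by_lender.values()) / total if total else 0.0
    return {
        "count": len(loans),
        "total": total,
        "average": total / len(loans) if loans else 0.0,
        "largest_lender_dollar_share": top_share,
    }

# Tiny illustrative portfolio (made-up numbers):
loans = [
    {"lender": "Lender A", "amount": 9_759},
    {"lender": "Lender A", "amount": 12_000},
    {"lender": "Lender B", "amount": 250_000},
]
print(summarize(loans))

# The report's program-wide average comes from the same arithmetic:
print(round(702_753_406 / 8_511))   # about 82,570 per Patriot Express loan
```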
This concentration is explained, in part, by one lender that focuses on providing low-dollar loans to veteran-owned businesses and represents about 26 percent of the Patriot Express program, as discussed previously. In contrast, the remaining 782 lenders that participate in Patriot Express have approved fewer loans. For example, 246 of these 782 lenders approved one loan each since the program began in 2007. As previously discussed, in addition to reviewing data on the Patriot Express program from 2007 through 2012, we also reviewed similar data from two other SBA loan programs: the SBA Express program and SBA’s 7(a) program. SBA Express and the 7(a) program, which are not limited to borrowers in the military community, are significantly larger than the Patriot Express program. Since 2007, the SBA Express program has surpassed Patriot Express in total number of loans (156,280) and total amount ($10.9 billion) approved, but the average loan amounts for Patriot Express are larger than those for SBA Express. SBA Express has seen a decline in loan numbers and amounts approved since 2007 (see fig. 4). The number of SBA Express loans approved each year declined by about 50 percent from 2007 through 2008, and that number has remained at lower levels since then. SBA officials told us that part of the decline from 2007 through 2008 may have been due to the economic downturn, which prompted lenders to cut back on these loans. Figure 4 also shows the total value of SBA Express loans peaked in 2007 ($2.9 billion) but then decreased by nearly half in 2008 ($1.7 billion). The total value of SBA Express loans then increased to about $2 billion in 2011 before falling to about $1.3 billion in 2012. The 7(a) program is also significantly larger than the Patriot Express program in all measures, including total numbers of loans approved, average loan amounts, and total loan amounts approved. Annually, the total numbers of 7(a) loans approved have declined since peaking in 2010 at 19,131, while the average loan amount for 7(a) approvals annually has steadily increased from about $470,784 in 2007 to $716,489 in 2012 (see fig. 5). The total value of 7(a) loans approved within each year has been relatively steady, as shown in figure 5, ranging from around $7.7 billion to around $9.2 billion, with the exception of 2010, when the total value of loans approved was around $12 billion. Table 2 shows the total numbers of loans, total dollar values, and average loan amounts approved for Patriot Express, SBA Express, and 7(a) from June 2007 through 2012. Additionally, the table shows the relative percentage of loans made and dollar values for each program when compared among all three programs. When comparing the three programs since the inception of Patriot Express in June 2007 through the end of 2012, Patriot Express is significantly smaller than SBA Express and 7(a) in terms of number of total loans approved (3.76 percent) and dollar amount (1.15 percent). However, the average loan amount for Patriot Express is larger than the average loan approved under SBA Express. When comparing loans approved in each year from the inception of Patriot Express through December 31, 2012, Patriot Express loans (with the exception of 2007) defaulted at a higher rate than SBA Express or 7(a) loans (see fig. 6). For loans approved in 2009, the default rate for Patriot Express was 17 percent, approximately three times that of SBA Express and 7(a) loans. 
Additionally, the default rate for Patriot Express loans approved in 2010 was 7.4 percent, again more than three times that of SBA Express and 7(a) loans. Loans approved in more recent years have had a shorter amount of time during which to observe defaults, which may at least partially explain lower default rates in more recent years of the program. The higher default rates for Patriot Express are generally consistent with one of the key measures of creditworthiness that SBA collects, the Small Business Portfolio Solutions (SBPS) scores. For example, 61.6 percent and 52.1 percent of 7(a) and SBA Express loans approved from 2007 through 2012 had SBPS scores of 180 or greater, compared to just 48.3 percent of Patriot Express loans approved in the same time period. Finally, although the economic downturn may account for some of the overall higher default rates in all three programs from 2007 through 2009, Patriot Express has maintained a higher default rate compared to SBA Express and 7(a) since 2008. The default rates for the Patriot Express program are generally higher for the smaller loan amounts. For example, as shown in figure 7, loans under $10,000, which represent 21.3 percent of all Patriot Express loans from 2007 through 2012, had an overall 22 percent default rate. Additionally, Patriot Express loans under $25,000, which represent 41.2 percent of loans made in the same period, had a default rate of 20 percent. Our analysis of SBA data identified a concentration of low-dollar, uncollateralized Patriot Express loans with significantly higher default rates (compared to other Patriot Express loans) that were approved by a single lender. In 2009, the peak year for Patriot Express, this lender accounted for about 39 percent of Patriot Express loans approved, as shown in figure 8. Patriot Express loans approved by this lender have been defaulting at rates as high as 38 percent for loans approved in 2008 and 25 percent for loans approved in 2009, approximately 13 percentage points higher than loans approved by other lenders in the same years, also shown in figure 8. Although overall default rates have decreased since 2008, the default rates for this lender remain significantly higher than those of all other lenders. For example, in 2009, at 25 percent, the default rate of the one lender was more than double that of the remaining lenders, at 12 percent. In May 2013, SBA decided not to renew this lender’s delegated authority to make SBA loans, which includes its authority to make Patriot Express loans. Figure 9 shows the default rates of Patriot Express, SBA Express, and 7(a) by loan amounts. When comparing default rates with different loan amounts based on program requirements, the performance of Patriot Express loans improves as loan amounts increase. For example, the largest improvement in performance for Patriot Express loans was between loans of less than $25,000 and loans valued from $25,000 to $150,000; for loans in this range, the default rate drops by almost half, from 20 percent to 12 percent. As mentioned earlier, more than 64 percent of loans up to $25,000 were provided by one lender. However, even when loans approved by this one lender were excluded, the default rate for loans up to $25,000 did not change significantly. 
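The size-bucket comparisons in this passage (for example, a roughly 20 percent default rate for loans of $25,000 or less versus about 12 percent for loans between $25,000 and $150,000) amount to grouping loans by approved amount and dividing defaults by loans approved. The sketch below is a minimal illustration under an assumed record layout, not the analysis actually run on SBA's loan-level data.

```python
# Sketch of the bucket-level default-rate calculation described above
# (assumed record layout; the report's figures come from SBA's loan-level data).

BUCKETS = [(0, 25_000), (25_000, 150_000), (150_000, 350_000), (350_000, 500_000)]

def default_rates_by_size(loans: list[dict]) -> dict:
    rates = {}
    for lo, hi in BUCKETS:
        in_bucket = [l for l in loans if lo < l["amount"] <= hi]
        if in_bucket:
            defaulted = sum(1 for l in in_bucket if l["defaulted"])
            rates[f"${lo:,}-${hi:,}"] = defaulted / len(in_bucket)
    return rates

# Illustrative records only:
sample = [
    {"amount": 10_000, "defaulted": True},
    {"amount": 20_000, "defaulted": False},
    {"amount": 100_000, "defaulted": False},
    {"amount": 100_000, "defaulted": True},
]
print(default_rates_by_size(sample))
# {'$0-$25,000': 0.5, '$25,000-$150,000': 0.5}
```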
Consistent with overall SBA lending through Patriot Express, SBA Express, and 7(a), available data suggest that the numbers of loans made to veterans through these programs are currently at similar levels, but overall lending to veterans through these programs has decreased over the past 8 years. Although some SBA loans made to veterans may not be identified, the available data using the veteran status field in SBA's database show that the differences in levels of lending to veterans across the 7(a), SBA Express, and Patriot Express programs have been smaller over the last 2 years, as shown in figure 10. For example, in 2012, 664 loans were made to veterans through the Patriot Express program, 551 loans through the SBA Express program, and 391 loans through the 7(a) program. In comparison, in 2009 there were more than twice as many Patriot Express loans made to veterans compared to SBA Express loans and 7(a) loans. The trends shown in figure 10 are consistent with overall lending from 2007 through 2012 in terms of total loans made under Patriot Express, SBA Express, and 7(a). Although veterans have been able to access capital through the Patriot Express, SBA Express, and 7(a) loan programs, overall lending to veterans peaked in 2004—at which time only the 7(a) and SBA Express programs existed—and has continued to decrease since then, even after the Patriot Express program started in 2007. Between 2004 and 2012, the number of loans made to veterans decreased 77 percent, from about 7,000 loans in 2004 to 1,600 loans in 2012. Further, even with the introduction of the Patriot Express program in 2007, the overall levels of lending to veterans through all three SBA programs have remained lower than the overall level of lending to veterans before the program's inception. A number of factors could have contributed to this decrease in overall lending to veterans through SBA programs, including more conservative lender credit standards and the economic downturn in 2008. In addition, as mentioned previously, veteran status information is self-reported by 7(a) and SBA Express borrowers, and the veteran status field may not accurately and consistently capture all veterans who have received a loan through these programs. In addition to a decrease in the total number of loans, the total dollar amount of loans made to veterans through Patriot Express, SBA Express, and 7(a) also decreased from 2007 through 2012. As shown in figure 11, the overall dollar amount of loans to veterans through these three programs decreased from 2007 through 2009 before spiking in 2010 and continuing to decline again through 2012. The trends shown in figure 11 are consistent with overall lending in terms of total value of loans made under the Patriot Express, SBA Express, and 7(a) programs from 2007 through 2012. In May 2013, SBA announced a new initiative to increase lending to veteran entrepreneurs by $475 million over the next 5 years across all SBA loan programs. Figure 12 shows the default rates of Patriot Express, SBA Express, and 7(a) loans made to veterans by approval year. Loans made to veterans through these programs in 2007 and 2008 had higher default rates than those in more recent years, which may be at least partially explained by the longer time periods these loans have had in which to observe defaults. While the default rates for veteran loans for SBA Express and 7(a) have decreased for more recent loan cohorts, the Patriot Express default rates for veteran loans remained relatively high. 
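The 77 percent decline in veteran lending cited in this passage is simple percent-change arithmetic on the rounded counts quoted above; a quick check:

```python
# Percent-change check for the decline in SBA loans to veterans cited above
# (rounded counts from the passage: about 7,000 loans in 2004, about 1,600 in 2012).
loans_2004 = 7_000
loans_2012 = 1_600
decline = (loans_2004 - loans_2012) / loans_2004
print(f"Decline: {decline:.0%}")   # about 77%
```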
For example, Patriot Express loans made to veterans in 2009 and 2010 defaulted more than twice as often as loans made to veterans through SBA Express and 7(a). According to our analysis of SBA's data on Patriot Express, program costs exceed the fees collected, funds recovered from borrowers in default, and other funds collected by SBA to offset the costs of the program. SBA's costs for the Patriot Express program are primarily based on the guaranteed portion of the purchased loan. As described earlier, when a loan defaults, the lender asks SBA to honor the guarantee (that is, purchase the loan). For the Patriot Express program, as indicated previously, the guaranteed portion is 85 percent for loans of $150,000 or less and 75 percent for loans over $150,000. The exact amount that SBA purchases is offset by any proceeds of sale of collateral prior to purchase. Following default, if SBA determines that it will honor the guarantee, SBA purchases these loans from the lender at either 85 percent or 75 percent, depending on the approved value of the loan. These costs are partially offset by guarantee fees that SBA collects at origination and annual fees it collects from lenders. Additional offsets are based on recoveries in the form of borrower payments following purchase or from proceeds from the liquidation of collateral that was not liquidated within 60 days following default of the loan. According to SBA officials, Patriot Express lenders are required to liquidate non-real-estate collateral prior to purchase, unless situations arise that would prevent them from liquidating, such as a bankruptcy or stay on liquidation. In these situations, SBA will purchase a loan prior to full liquidation. As shown in table 3, from fiscal years 2007 through 2012, SBA purchased $45.3 million in Patriot Express loans. These default costs were offset by $12.9 million in collected fees and $1.3 million in recoveries, resulting in $31.1 million in losses for this period (excluding future revenues from fees and potential additional recoveries). Based on these cash flows, the Patriot Express program has had an overall recovery rate of 2.87 percent since 2008—that is, of $45.3 million in Patriot Express loans that SBA purchased from 2008 through 2012, SBA has recovered almost $1.3 million (2.87 percent) of the funds. This low recovery rate for Patriot Express makes it more likely that the program will continue operating at a loss. In addition, SBA provided projected cash flows for the Patriot Express program, which show projected losses of $36 million, including future revenues from fees and potential recoveries. Stakeholders Reported Benefits and Challenges, but SBA Has Not Evaluated the Effects of the Patriot Express Pilot Selected loan recipients and lenders, as well as veteran service organizations we met with, identified various benefits and challenges to Patriot Express, but SBA has not evaluated the effects of the Patriot Express pilot. Lenders and borrowers we met with most frequently identified supporting veteran businesses and providing veterans with a streamlined application process as benefits of the program. Low awareness among veterans of the program and participating lenders was among the most frequently cited challenges by selected lenders, borrowers, and veteran service organizations. In addition to Patriot Express, veterans also access capital through alternate SBA-guaranteed loan products and other means. 
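The loss and recovery-rate figures in this passage follow directly from the cash flows quoted above; a short arithmetic check (dollar amounts in millions, as reported in the passage):

```python
# Arithmetic check of the Patriot Express cash flows quoted above
# (fiscal years 2007-2012; dollar figures in millions).
purchases  = 45.3   # guaranteed portions of defaulted loans purchased by SBA
fees       = 12.9   # guarantee and servicing fees collected
recoveries = 1.3    # amounts recovered after purchase

net_loss = purchases - fees - recoveries
recovery_rate = recoveries / purchases

print(f"Net loss to date: ${net_loss:.1f} million")   # about $31.1 million
print(f"Recovery rate:    {recovery_rate:.2%}")       # about 2.87%
```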
SBA provides optional training and counseling through a variety of resources to help veteran entrepreneurs navigate the options available to them. However, as with some of its previous pilot loan programs, SBA has not conducted an evaluation of the Patriot Express program to assess the extent to which it is achieving its objectives, including an assessment of its effect on eligible borrowers. Our previous work has shown that an evaluation gives an agency the opportunity to refine the design of a program and determine whether program operations have resulted in the desired benefits for participants. Helping Veterans Expand Their Business and Providing a Streamlined Loan Process Were among the Cited Benefits of the Program Participating loan recipients and lenders, as well as veteran service organizations we met with, identified supporting veteran businesses as a top benefit of the Patriot Express program. Specifically, 21 of the 24 Patriot Express loan recipients we met with said that the loan had enabled them to start their business, expand operations, or keep their business open during challenging times. In addition, four of the six recipients we spoke with who received a line of credit through the program said that having available credit increased their attractiveness as a potential contractor because it signaled to other businesses that they could pay for the costs to complete projects. Ten loan recipients believed that if they had not received the loan, they would currently not be in business because the loan provided capital at a critical point in time. The remaining 14 loan recipients believed that they would still be in business if they had not received the loan but would have faced difficult decisions to cover the costs, including firing staff and foregoing key projects. All loan recipients we met with said that they would apply for the program again based on their experience, and 6 recipients had pursued and received another Patriot Express loan. Likewise, the three veteran service organizations that we met with stated that the program benefited veterans who obtained Patriot Express loans. The Patriot Express program provides veterans with a streamlined application process, and loan recipients and lenders we met with noted that this was a benefit of the program. Six of the eight lenders and one veteran service organization we met with said that the program provided veterans with a less onerous application process and reduced SBA paperwork requirements, particularly when compared to SBA’s 7(a) loan program. For example, SBA requires borrowers to submit additional documents to apply for a 7(a) loan, such as monthly cash-flow projections, income statements, and balance sheets for the last 3 years. Further, since 7(a) borrowers must pledge all available collateral up to the loan amount, SBA requires borrowers to complete a schedule of collateral of all real estate and personal property used to secure the loan and provide supporting documents for such collateral, including real estate appraisals and environmental investigation reports. Almost all loan recipients we met with reported that they had a positive experience with the Patriot Express loan application process, including satisfaction with the amount of documentation required. In addition, nearly all loan recipients said that they received the loan proceeds in a timely manner, ranging from a few days to 3 months from the time they applied for the loan. 
Selected loan recipients, lenders, and veteran service organizations also identified other benefits to the program, such as providing veterans with favorable loan terms. For example, nearly all lenders, one veteran service organization, and officials from the National Association of Government Guaranteed Lenders (NAGGL) said that the program provided veterans with more favorable loan terms than an SBA Express loan, such as lower interest rates or higher maximum loan amounts. In addition, seven loan recipients we met with said that the Patriot Express loan terms provided a more cost-effective credit alternative to fund their small business expenses compared to other financing options. For instance, four recipients stated that receiving a Patriot Express loan saved them from using credit cards and other expensive lines of credit to obtain the necessary capital for their business. Finally, borrowers, lenders, and veteran service organizations we met with said that having a dedicated program solely for those in the military community was a benefit. For example, 10 Patriot Express loan recipients said that they appreciated that the program targeted veterans specifically and noted that it played a large role in their decision to obtain the loan. In addition, one lender said that having a loan program that also targets the business needs of spouses of service members or reservists is valuable, particularly if the business is jointly owned by the couple, because it provides access to capital to expand the business if one spouse is deployed. Further, two veteran service organizations we met with stressed that having a program for veterans also helped to initiate conversations between the veteran entrepreneur and the lender about other small business resources and financing options available. Low Awareness of the Program and Its Participating Lenders Were Reported as Challenges, among Others Lack of Awareness of the Program Selected loan recipients, lenders, and veteran service organizations said that a low awareness of the Patriot Express program among the military community was among the most frequently cited challenges. Specifically, over half of the Patriot Express loan recipients, six of the eight lenders, and two veteran service organizations we met with said that SBA could do more to increase outreach to veteran entrepreneurs and better market the program to the military community. In addition, five loan recipients did not know about the program until they approached a lender for financing and were notified about it. Further, awareness of the program among selected veteran entrepreneurs who have not participated in the program was also low. For example, 11 of the 16 veterans that received 7(a) loans and all 15 SBA Express veteran loan recipients that we were able to contact were unaware that Patriot Express existed. SBA officials said the agency tries to increase awareness of the program through district offices, resource partners, and lenders. For example, SBA officials noted that there is a veteran loan specialist at each SBA district office who could recommend specific small business resources, including the Patriot Express program, to veteran entrepreneurs. Additionally, SBA officials said that their resource partners, such as Small Business Development Centers (SBDC) and SCORE (formerly the Service Corps of Retired Executives) chapters, could advertise the program through hosted events that discuss potential options for financing small business needs. 
Five loan recipients we met with said that they learned about the program through SBA resource partners, including SBDCs and SCORE counselors, and two noted that these resources further helped them to find a participating lender. For example, one loan recipient said that the SBDC staff member who told him about the program also recommended a lender, assisted him with his loan application, and followed up with him after the loan was approved. SBA officials also said that they have reached out to NAGGL to increase marketing of the program at the lender level. According to NAGGL officials, NAGGL hosted roundtables at its 2013 Lender Leadership Summit and Lending Technical Conference to discuss ways that lenders can better serve veteran entrepreneurs, including the Patriot Express program. Although NAGGL does not participate in marketing SBA programs to borrowers, NAGGL officials said that individual lenders typically advertise certain SBA loans based on their involvement with those programs. For example, some lenders we met with noted that they try to increase awareness by marketing themselves as Patriot Express lenders, particularly if they have branches in locations with large concentrations of veterans. These lenders also partnered with veteran groups at their branch locations and presented their loan products, including Patriot Express loans, to interested members at events hosted by veteran groups. One lender, however, noted that it was difficult to market SBA loan products at its branches because identifying borrowers who can qualify for SBA loans can be challenging. According to this lender, pursuant to SBA's "credit elsewhere" requirement, the lender needs to first evaluate a borrower's ability to obtain credit against its own lending policies for conventional loans in order to determine if an SBA loan product is appropriate for the borrower. This approach is consistent with what we have previously reported regarding how lenders make credit elsewhere decisions. Patriot Express and 7(a) loan recipients we met with stated that low awareness of which lenders make Patriot Express loans is also a challenge to the program. For example, 7 of the 24 Patriot Express recipients and 3 of the 4 7(a) veteran loan recipients we met with reported that SBA could provide better information about which lenders currently participate in the program. A majority of these 10 recipients found that the search for a participating lender was difficult and required many phone calls and visits to lenders. Three recipients also noted that the SBA resources they used incorrectly identified banks as participating lenders. For example, one veteran said that he spent significant time away from his business to contact six banks—which the district SBA office said were participating lenders—and found that none of them participated in Patriot Express. Additionally, two 7(a) veteran loan recipients said they initially sought financing through the Patriot Express program but settled for a 7(a) loan when they could not find a participating lender. Further, two Patriot Express loan recipients told us that they paid fees to a third-party entity that could identify lenders that made Patriot Express loans. All 10 of these recipients stated that having a consolidated and up-to-date list of participating lenders would have been helpful to their search for a loan. 
SBA officials said that they did not have a list of participating lenders on their website because the agency did not want to appear to be steering borrowers toward financing their businesses through loans, especially loans from particular lenders. Rather, SBA officials stated that prospective veteran borrowers interested in the program should first contact an SBA district office or SBDC to determine if financing through a loan would be suitable for their business. Further, SBA officials said that if financing through a loan was the best solution for the veteran, SBDCs would then give the veteran a list of local lenders that participate in the program. As mentioned previously, two of the loan recipients we met with found a lender through these SBA resources, such as SBDCs and SCORE counselors. Other challenges reported by selected borrowers, lenders, and veteran service organizations included high fees associated with the loan, stringent collateral requirements, and limited maximum loan amount. High Fees: Six Patriot Express loan recipients and five lenders we met with said that the SBA guarantee fees were unaffordable for some veterans and suggested that they should be reduced or waived. These six Patriot Express loan recipients also noted that the lender packaging fees were unaffordable and suggested that they should be reduced or waived as well. According to SBA officials, the guarantee fee plays an important role in the continuation of the loan guarantee program because fees are collected to offset potential losses from defaulted and purchased loans. SBA officials also noted that the guarantee fee is ultimately the responsibility of the lender, though often it is passed on to the borrower. In addition, SBA guidance establishes limits to the amount of packaging and other fees a lender can charge based on a percentage of the loan amount. SBA officials said that issues regarding potentially excessive fees charged at origination could be identified either through complaints from the SBA OIG’s hotline or during SBA’s 7(a) lender on-site examinations, which are discussed in the next section of this report. According to SBA officials, there has only been one complaint about fees, which was reported to the SBA OIG hotline. SBA officials said they resolved the issue by confirming that the fees were inconsistent with SBA guidance and working with the lender to compensate the borrower. Stringent Collateral Requirements: Three Patriot Express loan recipients noted that they struggled to meet the collateral requirements for their loans. Additionally, three lenders felt that the SBA collateral requirement for Patriot Express loans above $350,000—for which the borrower must make all collateral available to the lender up to the loan amount— was excessive and a disincentive for prospective veteran borrowers to participate in the program. According to SBA officials, the agency is considering some modifications to the collateral requirements for regular 7(a) that would still maintain a strong underwriting process. To the extent those changes are adopted, they would apply as well to Patriot Express loans in excess of $350,000. Limited Maximum Loan Amount: Two Patriot Express loan recipients, two veteran service organizations, and one lender we met with said that the current maximum loan amount for the program was challenging because certain projects and contracts require more than $500,000. 
For example, one veteran service organization we met with noted that veterans who are federal contractors often need a loan for more than $500,000 to win a contract. SBA officials noted that the agency has not considered changing the maximum loan amount for Patriot Express loans. Veterans Also Access Capital through Alternate SBA Programs and Other Means Veterans access capital through other SBA-guaranteed loan products, including 7(a), SBA Express, and Small Loan Advantage (SLA) loans. These loan products have some terms that are similar to those of Patriot Express and some that are different, as shown in figure 14. As shown above, there are several similarities and differences between the programs, and three lenders we met with reported that deciding which SBA loan products to offer veteran borrowers was challenging. For example, Patriot Express loans offer veteran recipients lower maximum interest rates, but higher guarantee percentages and fees compared to SBA Express. Additionally, while regular 7(a) loans can provide veterans with similar loan terms and fees, these loans typically have longer processing times than Patriot Express loans due to the increased SBA paperwork requirements previously discussed. While Patriot Express and SLA have some similar loan terms, SBA officials identified other differences in the programs. Three of the eight lenders we met with said that deciding what product to offer a veteran entrepreneur was difficult because the loan terms and underwriting process for a Patriot Express loan were similar to those of other SBA loans they offered. Additionally, seven of the eight lenders believed that if the Patriot Express program were not available, veterans could still access capital through these other SBA loan programs. While 7(a) and SBA Express are alternatives to Patriot Express, loan recipients noted that other ways veterans could access capital were less advantageous and all loan recipients we met with were not aware of any veteran-specific loan guarantee programs aside from Patriot Express. For example, nine recipients said that veterans could finance their small business needs through conventional loans or credit cards, but they stated these options may be more expensive than a Patriot Express loan because they typically have higher interest rates. Two recipients considered bringing on an investor, which would inject capital into their business, but would require the recipient to give up ownership of a part of the business to the new investor. Finally, five recipients thought about financing their business through their personal savings accounts, but said that this option could have depleted their savings and a few noted that it might not have been enough to cover the amount of capital needed. SBA Offers Several Veteran-Specific Training and Counseling Efforts SBA provides training and counseling to veteran entrepreneurs through a variety of resources, although Patriot Express loan recipients are not required to use them. According to SBA officials, the agency delivers training and counseling to veterans through the following ways: Cooperative agreements: SBA has cooperative agreements with 16 organizations that serve as Veteran Business Outreach Centers (VBOC), which offer services such as business plan preparations and veteran entrepreneur counseling for service-disabled veterans. 
Additionally, SBA has cooperative agreements with other resource partners through which veteran entrepreneurs can receive training and counseling, including SBDCs, SCORE chapters, and Women's Business Centers (WBC). According to SBA data on veteran participation in training and counseling offered by the aforementioned resource partners (VBOCs, SBDCs, SCORE chapters, and WBCs) from fiscal year 2008 through fiscal year 2012, overall veteran participation remained steady from 2008 through 2010. However, it increased over 40 percent from approximately 115,000 veterans in 2010 to about 163,000 veterans in 2012. Further, veteran participation in training and counseling offered through VBOCs also increased beginning in 2011, from about 45,000 veterans in 2010 to about 90,000 veterans in 2012. As of June 2013, about 36,000 veterans had received training and counseling through SCORE, SBDCs, and WBCs. SBA-sponsored activities: According to SBA officials, some SBA-sponsored activities may be provided in coordination with the previously mentioned resource partners, and veterans can also receive training and counseling through these efforts. For example, Operation Boots to Business leverages SBA's resource partner network—VBOCs, SBDCs, SCORE chapters, and WBCs—and SBA's partnership with, among other entities, Syracuse University's Institute for Veterans and Military Families to provide an entrepreneurship training program for transitioning service members. Operation Boots to Business consists of several phases, including a 2-day training session on creating a feasibility analysis for a business plan and an 8-week online course on the fundamentals of small business ownership, including marketing, accounting, and finance. As of March 2013, a total of 1,390 veterans (1,309 for the 2-day session and 81 for the online course) had participated in this effort. SBA participation in third-party activities: Veteran entrepreneurs can access training and counseling services provided through SBA's participation in third-party activities, including events hosted by other federal agencies and nonprofit entities. For example, SBA awarded a 3-year grant to Syracuse University to create the Entrepreneurship Bootcamp for Veterans with Disabilities (EBV), which provided small business management training to post-9/11 veterans with disabilities. According to SBA, 463 veterans participated in EBV during this 3-year grant period. In 2010, SBA provided Syracuse University with funding for two additional programs that support veteran entrepreneurship: Veteran Women Igniting the Spirit of Entrepreneurship (V-WISE), which focuses on the training and mentorship of women veterans and spouses, and Operation Endure and Grow (OEG), which features an 8-week online course geared toward National Guard and Reserve members, their families, and their business partners. As of April 2013, 857 women veterans, female spouses and partners of active service members, and transitioning female members of the military community had participated in V-WISE, and 168 reservists had received training through OEG. Veterans who have participated in certain training and counseling efforts have generally found them to be helpful. For example, SBA's Office of Veterans Business Development (OVBD) conducts an annual VBOC client satisfaction survey, which shows that client satisfaction with VBOC services had increased from 85 percent in 2008 to 93 percent in 2012. 
According to OVBD officials, the survey results are used to, among other things, identify areas for improvement and new training topics. OVBD officials said they are responsible for collecting feedback surveys for the VBOC program only. Veterans we met with who participated in these efforts also found them to be helpful. Specifically, 14 of the 28 loan recipients we met with—Patriot Express loan recipients as well as 7(a) veteran loan recipients—participated in an SBA-sponsored training or counseling session, and the most commonly used resources among these recipients were SBDCs and SCORE counselors. Eight of the recipients said these sessions were helpful in starting and growing their business, such as assisting in the development of business plans and marketing strategies, and they noted that these sessions were free. Two loan recipients suggested that SBA develop more advanced workshops for seasoned entrepreneurs, but acknowledged that these training and counseling resources would be helpful for first-time business owners. As with Some of Its Previous Pilot Programs, SBA Has Not Evaluated Patriot Express SBA has not evaluated the Patriot Express program's performance or its effect on eligible borrowers. GAO's guide for designing evaluations states that an evaluation gives an agency the opportunity to refine the design of a program and provides a useful tool to determine whether program operations have resulted in the desired benefits for participants. In addition, evaluations can inform future program decisions. Program evaluations are individual, systematic studies that use research methods to assess how well a program, operation, or project is achieving its objectives and the reasons why it may or may not be performing as expected. Program evaluations are distinct from routine monitoring or performance measurement activities in that performance measurement entails the ongoing monitoring of a program's progress, whereas program evaluation typically assesses the achievement of a program's objectives and other aspects of performance in the context in which the program operates. At a minimum, a well-developed and documented program evaluation plan includes measurable objectives, standards for performance, methods for data collection, and time frames for completion. Incorporating these elements and executing the plan can help ensure that the implementation of a pilot generates performance information needed to make effective management decisions about the future of the program. In addition, recent legislation has highlighted the importance of program evaluation for federal agencies. Specifically, Congress updated the Government Performance and Results Act of 1993 (GPRA) with the GPRA Modernization Act of 2010 (GPRAMA), which requires agencies to describe program evaluations that were used to establish or revise strategic goals. When Patriot Express was created in 2007 under SBA's authority to initiate pilots, SBA indicated that it would evaluate the program's performance and make a decision whether to modify or continue the program after December 31, 2010. In December 2010, SBA announced through a Federal Register notice that it would extend the pilot through 2013 in order to have more time to evaluate the effect of the program and determine whether any changes need to be made. According to SBA officials, they have not established any measurable goals for the pilot, but have begun to hold meetings on what information they will need to assess the performance of Patriot Express loans. 
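The minimum evaluation-plan elements listed in this passage (measurable objectives, standards for performance, methods for data collection, and time frames for completion) can be made concrete as a simple structured record. The sketch below is purely illustrative; the field names and example entries are assumptions for this example, not an SBA or GAO template.

```python
# Illustrative structure (not an SBA or GAO template) capturing the minimum
# evaluation-plan elements named above: measurable objectives, standards for
# performance, methods for data collection, and time frames for completion.
from dataclasses import dataclass, field

@dataclass
class EvaluationPlan:
    program: str
    measurable_objectives: list[str] = field(default_factory=list)
    performance_standards: list[str] = field(default_factory=list)
    data_collection_methods: list[str] = field(default_factory=list)
    completion_timeframe: str = ""

# Hypothetical entries, drawn loosely from the issues discussed in this report:
plan = EvaluationPlan(
    program="Patriot Express pilot",
    measurable_objectives=["Expand access to capital for veteran-owned small businesses"],
    performance_standards=["Default rates comparable to SBA Express and 7(a) cohorts"],
    data_collection_methods=["Loan-level performance data", "Borrower and lender surveys"],
    completion_timeframe="Before the pilot's scheduled end date",
)
print(plan)
```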
However, although SBA officials said that they have begun to hold meetings, the program extension ends in only a few months on December 31, 2013. As of August 2013, SBA had not established a plan for the evaluation of the program, and such a plan should include clear and measurable objectives, standards for performance, methods for data collection, and time frames for completion. In addition, SBA has taken several actions in an attempt to increase lending to veterans across its programs, but these initiatives have not been substantiated by findings from an evaluation of the Patriot Express program or the current state of SBA lending to veterans. As mentioned previously, SBA announced a new initiative to increase loans to veteran entrepreneurs by $475 million over the next 5 years across all SBA loan programs. Because SBA had not conducted an evaluation of the pilot, the agency had little information available to inform such decisions, such as a comparison of benefits that veterans receive from Patriot Express in relation to those received by veterans participating in other SBA loan programs. SBA has conducted performance measurement and monitoring activities—such as internally reporting the number of Patriot Express loans made each quarter and deciding not to renew a top lender's delegated authority to make Patriot Express loans based on ongoing monitoring, as previously mentioned—but these activities are not the same as program evaluation. Because there are many more 7(a) loans, which therefore pose a greater risk to SBA than the smaller volume of Patriot Express loans, SBA officials told us that they have focused more resources on evaluating the performance of 7(a) loans. In addition to Patriot Express, SBA has authorized other pilot loan programs that it has subsequently not evaluated when making decisions about the future of those programs. For example, in 2010, SBA's OIG conducted an assessment of the Community Express program, which was established in 1999, to determine, among other things, whether the program was properly structured to ensure success and minimize the risk of fraud. This assessment was completed in response to a concern presented by SBA to the SBA OIG regarding poor performance of the Community Express program (SBA, Office of Inspector General, Assessment of the Community Express Pilot Loan Program, Report No. 10-12 (Washington, D.C.: Aug. 25, 2010)). The Community Express program authorized approved lenders to adopt streamlined and expedited loan procedures to provide financial and technical assistance to borrowers in the nation's underserved communities. In this assessment, the SBA OIG found that SBA did not establish measurable performance goals and outcomes for evaluating the Community Express program until 9 years after the pilot's inception. Further, though the OIG determined that these performance measures were adequate, SBA had extended the pilot without using the measures to assess the program's effectiveness. Similarly, in 2006 the OIG found that SBA had not reviewed the SBA Express program—which was initiated in 1995 as a pilot—to determine, among other things, if final rules and regulations would be developed. Rather than evaluate the program to develop regulations, SBA continued to extend the program as a pilot for 9 years until Congress made it permanent in 2004. 
Because of this lack of review and establishment of regulations, the OIG recommended in 2006 that the agency issue regulations to, among other things, ensure that SBA has legally enforceable rules to manage the program. SBA agreed that regulations were needed for the program, but did not establish such regulations, according to OIG officials. The Administrator of SBA has the authority to suspend, modify, or waive rules for a limited period of time to test new programs or ideas through pilot programs, but this authorization does not include a specific requirement for SBA to conduct a pilot evaluation. Congress has established an annual limit for the number of loans made through pilots within the 7(a) program. Specifically, no more than 10 percent of all 7(a) loans guaranteed in a fiscal year can be made through a pilot program. According to SBA officials, a pilot program's duration and the number of times the agency can extend it depend on the length of time needed to complete testing of the pilot. However, as shown by SBA's experience with the Patriot Express, Community Express, and SBA Express pilots, SBA does not always test pilots or evaluate their effects when initiating pilot programs under its own authority. Without designing and conducting evaluations of the pilot programs it conducts under its own authority, SBA has little information to assess the performance of the programs and their effects on eligible borrowers, which could be used in decisions on the future of these pilots, including the Patriot Express program. For example, information on the financial performance of veteran-owned businesses participating in various SBA loan programs could help inform policy decisions. Further, the information drawn from an evaluation of Patriot Express could also be used to inform training and counseling resources for veterans. In turn, input from veteran borrowers participating in SBA loan programs and from counselors at SBA resource partners assisting veteran borrowers could provide a basis for improvements in existing SBA loan programs. SBA's Internal Controls May Not Provide Assurance of Borrower Eligibility SBA has two primary internal control activities to ensure lender compliance with borrower eligibility requirements—on-site examinations and purchase reviews. However, these reviews may not provide the agency with reasonable assurance that Patriot Express loans are only made to eligible borrowers. SBA only reviews a small number of Patriot Express loans for eligibility as part of on-site examinations, and although it examines eligibility as part of purchase reviews, these reviews occur only for loans that have defaulted, in some cases long after an ineligible borrower may have received proceeds from a Patriot Express loan. In addition, although SBA officials told us that they expect borrowers to maintain their eligibility throughout the term of the loan, SBA has not developed procedures to provide reasonable assurance that Patriot Express loans continue to serve eligible borrowers after a loan is disbursed. Internal control standards for federal agencies and GAO's fraud-prevention framework state that oversight programs should be designed to ensure that ongoing monitoring occurs in the course of normal operations. Furthermore, the intent of the Patriot Express program is to support eligible members of the military community. 
Without greater review of Patriot Express transactions during on-site examinations of lenders and requirements for lenders to ensure that borrowers remain eligible after disbursement, there is an increased risk that the proceeds of Patriot Express loans will be provided to or used by borrowers who do not qualify for the program. SBA Has Reviewed Few Patriot Express Loans during On-Site Examinations of the Largest Participating Lenders GAO's fraud-prevention framework identifies three elements needed to minimize fraud: (1) up-front preventive controls, (2) detection and monitoring, and (3) investigations and prosecutions. For Patriot Express, SBA addresses the first element of the framework through the steps lenders are required to take under their delegated authority to ensure borrower eligibility at loan origination. It addresses the third element through the steps it must take to refer potential cases of fraud to its OIG for investigation and possible prosecution. However, we found that SBA's detection and monitoring—the second element of the framework—could be strengthened. One of SBA's primary monitoring activities to provide reasonable assurance that Patriot Express loans are made only to eligible borrowers is the review of loans it performs as part of its on-site examinations of lenders. However, since the program's inception in 2007, SBA has reviewed only a small number of Patriot Express loans for the 10 largest Patriot Express lenders. SBA does not conduct specific Patriot Express program examinations. Instead, it reviews a lender's compliance with Patriot Express program eligibility requirements as part of its examination of the lender's 7(a) program or as part of a safety and soundness examination of an SBLC. These examinations are known as risk-based reviews or safety and soundness examinations for SBLCs. During these reviews, SBA draws a sample of loans from a lender's files to assess, among other things, whether the loans met specific program eligibility requirements at the time of approval. For example, if an SBA examiner selects a Patriot Express loan, the examiner is expected to review the lender's documents to determine whether that loan was provided to a veteran or other eligible member of the military community. The lenders must document in their files how they determined the borrower's eligibility for the Patriot Express program, including what Department of Defense and Department of Veterans Affairs documents they used to verify veteran status. Additionally, the examiner is expected to review lender documentation to determine whether the veteran or other eligible borrower owned 51 percent or more of the small business at the time of loan approval. As part of the risk-based review, SBA's examiners are required to compile a list of all eligibility deficiencies by issue type and errors, and identify any trends of deficiencies that warrant lender attention. In this context, 7(a) refers to (1) regular (nondelegated) 7(a) loans, (2) delegated 7(a) loans made by PLP lenders, and (3) all subprograms including Patriot Express and SBA Express. We reviewed the most recent 7(a) risk-based examination and an SBLC safety and soundness examination for the 10 largest Patriot Express lenders and found that with the exception of three lenders, SBA examined few Patriot Express loans. As table 4 shows, for the first three lenders, SBA sampled at least six Patriot Express loans during the examination. 
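The examination step described above (drawing a sample from a lender's loan files and checking eligibility documentation for any Patriot Express loans in the sample) can be outlined in a short sketch. This is an illustrative outline under assumed record fields, not SBA's examination procedure or its sample-size methodology.

```python
# Illustrative sketch (not SBA's actual examination procedure) of drawing a
# sample from a lender's loan files and flagging Patriot Express loans whose
# eligibility documentation appears missing or incomplete.
import random

def review_sample(loan_files: list[dict], sample_size: int, seed: int = 0) -> list[dict]:
    rng = random.Random(seed)
    sample = rng.sample(loan_files, min(sample_size, len(loan_files)))
    findings = []
    for loan in sample:
        if loan["program"] != "Patriot Express":
            continue
        # An examiner would look for documentation of military-community status
        # and of 51 percent ownership at the time of approval.
        if not (loan.get("has_status_docs") and loan.get("ownership_pct_at_approval", 0) >= 51):
            findings.append({"loan_id": loan["id"], "issue": "eligibility documentation deficiency"})
    return findings

# Tiny made-up file set for illustration:
files = [
    {"id": 1, "program": "Patriot Express", "has_status_docs": True,  "ownership_pct_at_approval": 60},
    {"id": 2, "program": "Patriot Express", "has_status_docs": False, "ownership_pct_at_approval": 100},
    {"id": 3, "program": "SBA Express"},
]
print(review_sample(files, sample_size=3))   # flags loan 2
```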
However, for the remaining lenders, SBA sampled one or two loans at two of the lenders and did not sample any Patriot Express loans at the other five lenders. For the five lenders in table 4 for which SBA sampled at least one Patriot Express loan, four were found by SBA to be in compliance with eligibility requirements. For the remaining lender, SBA did not report on its assessment of eligibility requirements in the examination. SBA officials said SBA examined few or no Patriot Express loans for seven of these 10 lenders because Patriot Express comprised a small percentage of these lenders' overall lending. At six of the seven lenders, the Patriot Express loan volume from the program's inception to the year prior to the examination ranged from 1 percent to 8 percent of their overall SBA lending activities. However, while these percentages are relatively small, in a program that has a specific target population—veterans and other eligible members of the military community—assessing lenders' compliance with eligibility requirements is particularly important to help ensure that the guaranteed loans are assisting only eligible veteran entrepreneurs as intended. The monitoring of borrower eligibility that occurs through on-site examinations is a key internal control and fraud-prevention element for Patriot Express because the loan program serves a specific population with loan provisions intended only for this population of borrowers. Another primary internal control that SBA uses to monitor borrower eligibility is the purchase reviews that it conducts for loans that have defaulted and for which the lender is seeking the guarantee payment. As part of the purchase review, an SBA official must review documentation relied upon by the lender to determine whether the borrower was eligible for the program. However, purchase reviews are only conducted for loans that have defaulted and would not identify ineligible borrowers who continue to make their loan payments. Additionally, ineligible borrowers may have the loan for years before ultimately defaulting. Because SBA conducts so few on-site examinations of Patriot Express loans, opportunities to identify these ineligible borrowers prior to a default are limited. For a program with a specific target population, an increased emphasis on reviewing borrower eligibility is important. Without sampling more Patriot Express loans during examinations, SBA may have difficulty identifying deficiencies related to eligibility. This, in turn, could increase the risk to SBA of Patriot Express loans being provided to borrowers who do not qualify for the program. SBA Has Not Provided Clarity to Lenders Regarding Borrowers' Ongoing Ownership Requirements Although SBA requires lenders to assess borrowers' eligibility for Patriot Express at the time of loan approval, it does not require them to reassess eligibility, including the 51 percent ownership requirement, after the loan has been disbursed. SBA does not have a stated requirement for borrowers to maintain their eligibility after the loan has been disbursed, but SBA officials told us that they do expect borrowers to maintain 51 percent ownership after a loan has been disbursed to remain eligible for the program. SBA requires that borrowers certify that they will not change the ownership structure or sell the business without the consent of the lender. 
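If lenders were required to reassess the 51 percent ownership condition after disbursement, as the passage notes is not currently the case, the check itself would be simple. The sketch below is a hypothetical illustration under an assumed record layout, not an SBA requirement or system.

```python
# Hypothetical post-disbursement check (not an SBA requirement or system)
# illustrating the ongoing 51 percent ownership condition discussed above.
from datetime import date

def ownership_still_eligible(ownership_history: list[dict], as_of: date) -> bool:
    """ownership_history entries are assumed to look like
    {"effective": date(...), "eligible_owner_pct": 60.0}, ordered by date."""
    current = None
    for entry in ownership_history:
        if entry["effective"] <= as_of:
            current = entry
    return current is not None and current["eligible_owner_pct"] >= 51.0

history = [
    {"effective": date(2009, 1, 15), "eligible_owner_pct": 60.0},   # at approval
    {"effective": date(2010, 6, 1),  "eligible_owner_pct": 20.0},   # veteran's stake later sold down
]
print(ownership_still_eligible(history, date(2009, 12, 31)))  # True
print(ownership_still_eligible(history, date(2011, 1, 1)))    # False
```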
Additionally, SBA officials told us that in the event of a borrower default, a lender could lose the SBA guarantee if the borrower had sold his or her business to an individual who does not qualify for a Patriot Express loan. However, in the examples below, lenders may not be aware of changes in ownership structure or sale of the business if the borrower has not informed lenders of such actions and the lender is not periodically reassessing Patriot Express eligibility after the loan has been disbursed. Borrowers may initially be approved as meeting Patriot Express eligibility requirements at the time of loan approval, but subsequent events may affect their eligibility and result in the loan being used by an ineligible borrower. For example, according to SBA OIG officials, a business may recruit a veteran to pose as the majority business owner in order to be eligible for a Patriot Express loan and add the veteran to legal ownership documents that would be provided to the lender when applying for the loan. Once the loan is disbursed, however, the business could reduce the ownership interest or remove the veteran as an owner of the business. Such cases could also involve the businesses giving the veteran a kickback after the loan was disbursed. In another example, after the loan has been disbursed, an eligible Patriot Express borrower might sell all or part of his or her ownership interest in the qualifying business. In these examples, an ineligible party benefits from the Patriot Express loan proceeds. These examples illustrate the importance of effective monitoring and detection activities, which are key internal controls and an element of the fraud-prevention framework. Detection and monitoring controls include activities such as periodically evaluating lender procedures to provide reasonable assurance that only eligible borrowers obtain loans and benefit from the program as intended. Such assurance is particularly important in a program that has specific eligibility requirements and was created to serve a specific population. Four of six lenders we spoke with thought that borrowers needed to remain eligible for the loan after disbursement, but these four lenders stated that they did not think that they needed to check on borrowers to make sure that they remain eligible after loan disbursement. The other two lenders we spoke with told us that they did not think ongoing borrower eligibility was a requirement of the program. In the absence of formal SBA eligibility procedures to ensure that only borrowers who maintain 51 percent ownership receive assistance after a loan has been disbursed, Patriot Express loan proceeds may ultimately be used by those other than the intended program beneficiaries. As a result, SBA may not have reasonable assurance that Patriot Express loans are serving the intended population. Conclusions Prior to 2007, SBA served the small business needs of veteran entrepreneurs through its 7(a) and SBA Express programs. SBA established the Patriot Express Pilot Loan initiative in 2007 as a targeted effort to provide veterans and other eligible members of the military community access to capital to establish or expand small businesses. However, the effect this initiative has had on the small business financing needs of veterans and other entrepreneurs in the military community is unknown. 
While SBA recently announced an initiative to increase overall lending to veteran small businesses by $475 million over the next 5 years, the role of the Patriot Express pilot initiative is unclear given that SBA has yet to evaluate the effectiveness of the program. Based on our analysis, with the exception of 2007, Patriot Express loans made to veterans have had a relatively high default rate, and losses for the initiative have exceeded its income. Moreover, SBA has not conducted an evaluation of the pilot initiative that would include standards for pilot performance, comparative measures with other programs that may also serve veterans, methods for data collection, evaluation of data on the performance of the loans, data and analysis from external reports and evaluations, and time frames for completion. Although SBA officials said that they have begun to hold meetings on what information they will need to assess the performance of Patriot Express loans, SBA has not established a plan to evaluate the program, and only a few months remain before the current extension of the program is set to end. Program evaluations can be useful in informing future program decisions, including SBA’s planned efforts to expand lending to veterans. In addition, the lack of an evaluation or an evaluation plan for Patriot Express follows a pattern for SBA pilot loan programs. As with the Patriot Express pilot initiative, SBA has authorized other pilot loan programs in the past that it has subsequently not evaluated when making decisions about the future of those programs. SBA’s past experience with pilots raises questions about its commitment and capacity to fully implement pilots that include a rigorous evaluation. Without evaluations of pilot initiatives, SBA lacks the information needed to determine if a pilot program is achieving its intended goals and whether it should be cancelled, modified, or expanded. Finally, SBA’s reliance on lenders to assess borrowers’ eligibility for Patriot Express highlights the importance of strong internal controls over lenders to ensure that only eligible borrowers are served by the program. Federal internal control guidance and GAO’s fraud-prevention framework indicate that program controls should include monitoring and detection. However, SBA currently samples few Patriot Express loans during on-site examinations. In addition, while SBA expects borrowers to maintain 51 percent ownership after a loan has been disbursed, SBA has not developed procedures to require lenders to verify that the 51 percent ownership requirement is maintained, nor does it monitor the lenders’ activities to ensure eligibility after disbursement. As a result, SBA’s internal controls may not provide the necessary assurance that Patriot Express loans are made to and used by only eligible members of the military community—the intended mission of the program. Recommendations for Executive Action As SBA considers whether or not to extend the Patriot Express Pilot Loan program, we recommend that the Administrator of SBA design and implement an evaluation plan for the pilot program that assesses how well the Patriot Express pilot is achieving program goals and objectives regarding its performance and its effect on eligible borrowers. 
The evaluation plan should include information such as evaluation of SBA data on performance of Patriot Express loans; evaluation of borrowers served by Patriot Express in relation to veteran borrowers served by other SBA loan programs; and review of relevant SBA OIG reports and other external studies. To help ensure that SBA makes informed decisions on the future of pilot programs it creates under its own authority, we recommend that the Administrator of SBA require the agency to design an evaluation plan for any such pilot program prior to implementation—including an assessment of the program’s performance and its effect on program recipients—and to consider the results of such an evaluation before any pilot is extended. To help ensure that Patriot Express loans are only provided to members of the military community eligible to participate in the program, we recommend that the Administrator of SBA strengthen existing internal controls, including sampling a larger number of Patriot Express loans during examinations; developing a requirement in SBA’s Standard Operating Procedures for lenders to verify the eligibility of the borrower, including the 51 percent ownership requirement, after the loan has been disbursed; and periodically monitoring the lenders’ implementation of this eligibility requirement. Agency Comments We provided the Administrator of the Small Business Administration with a draft of this report for review and comment. On August 26, 2013, the SBA liaison—Program Manager, Office of Congressional and Legislative Affairs—provided us with the following comment via email on the draft. He stated that the agency will consider the findings from this report as it reviews the extension of the Patriot Express Pilot Loan Program. SBA also provided technical comments, which we incorporated into the report where appropriate. We are sending copies of this report to SBA, appropriate congressional committees and members, and other interested parties. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology Our objectives were to examine (1) trends in the Patriot Express program and related Small Business Administration (SBA) guarantee programs, including performance of these loans, and what is known about the costs of the Patriot Express program, (2) the benefits and challenges of the Patriot Express program for members of the military community eligible to participate as well as training and counseling opportunities available to them, and (3) what internal controls SBA has in place to ensure that the Patriot Express program is available only to eligible members of the military community. To describe trends in the Patriot Express program, including how Patriot Express loans approved from 2007 through 2012 have performed, we obtained SBA loan-level data on loans approved from the second quarter of 2007 through the fourth quarter of 2012 for Patriot Express and from the first quarter of 2007 through the fourth quarter of 2012 for the 7(a), and SBA Express programs. We took a number of steps to develop a dataset we could use for our analyses. 
We excluded loans with missing disbursement dates unless they had positive balances at some point in their history, which to us indicated loan activity. Additionally, we excluded loans that in December 2012 were indicated to have been cancelled. Once we arrived at our final dataset, we analyzed it for various performance measures, including default rates. A loan was defined as defaulted (purchased) if it had a purchase date on or after the approval date. (A simplified sketch of these calculations appears later in this appendix.) Specifically, we analyzed the default rates by the following categories: Cohort analysis—Using the loan approval date data field, we identified loans for all three programs and grouped them into calendar-year cohorts reflecting loans approved from 2007 through 2012. Once these loans were identified, we calculated the default rates, total number of loans, and total loan values approved from 2007 through 2012 for all three programs. Loan amount—Using the gross amount approved data field, we identified the number of loans by loan amount that were approved for all three programs from 2007 through 2012. We grouped these loans into major categories based on requirements of the programs. For example, we focused on loans below $25,000 because the Patriot Express and SBA Express programs require no collateral for these loans. We selected the next category, loans valued between $25,001 and $150,000, based on the guarantee percentage change from 85 percent to 75 percent for Patriot Express and 7(a) that occurs at $150,000. We selected the next two categories, loans valued between $150,001 and $350,000 and between $350,001 and $500,000, to capture the maximum allowable loans for SBA Express and Patriot Express, respectively. Additionally, we focused on loans valued between $500,001 and $1,000,000 and between $1,000,001 and $5,000,000 to account for the larger loan amounts for 7(a). Once these loans were identified by loan amount, we calculated the default rates for all three programs based on loans approved from 2007 through 2012. Lender concentration—Using the main bank data field, we identified the top 11 lenders based on the number of approved Patriot Express loans from 2007 through 2012. Once these lenders were identified, we calculated the default rates, average loan amounts, and total loan amounts approved from 2007 through 2012. Additionally, we calculated the relative percentage of loans made by each of the top 11 lenders compared to the overall number of Patriot Express loans approved from 2007 through 2012. After we identified that one lender accounted for 26 percent of all Patriot Express loans approved, we calculated the relative percentage and default rates of this one lender compared to all other lenders from 2007 through 2012. Veteran status—Using a data field that identifies borrowers based on their veteran status, we identified borrowers that self-identified as either a veteran, service-disabled veteran, or Vietnam-era veteran in each of the three programs. Once these loans were identified, we calculated the default rates, total number of loans, and total loan values approved from 2001 through 2012 for SBA Express and 7(a), and from 2007 through 2012 for Patriot Express. New Business—Using the new or existing business data field and information provided by SBA, we identified new businesses that had been in operation 2 years or less prior to loan approval, and existing businesses that had been in operation for more than 2 years at the time of loan approval.
Once these loans were identified, we calculated the relative percentage of new businesses for loans approved from 2007 through 2012. Use of Proceeds—Using the loan proceeds data field and information provided by SBA, we identified the most common use of loan proceeds for Patriot Express loans approved from 2007 through 2012. Small Business Portfolio Scores (SBPS)—Using a data field that identifies borrowers by their SBPS scores, based on available data, we grouped businesses based on having a low (139 or lower), medium (140-179) or high (180 or greater) SBPS score. We then calculated the default rates, total number of loans, total value of loans, and relative percentage of loans for Patriot Express, SBA Express and 7(a). For all of our analyses on the performance of Patriot Express, 7(a), and SBA Express loans, we did not weight default rates by loan amount. In addition, for each analysis we did not include loans with missing values. To assess data reliability, we interviewed SBA representatives from the Office of Performance and Systems Management and the Office of Credit Risk Management about how they collected data and helped ensure data integrity. We also reviewed internal agency procedures for ensuring data reliability. In addition, we conducted reasonableness checks on the data to identify any missing, erroneous, or outlying figures, and when necessary, submitted follow-up questions to SBA officials at the Office of Performance and Systems Management and the Office of Credit Risk Management to clarify our understanding of the data. Through our electronic data testing, we identified irregularities in the data in a small percentage of cases, such as loans with approval amounts in excess of what we understood to be the limits of the program or loans with disbursal dates, but zero dollars disbursed. However, SBA was able to explain these cases as being due to periods in which the limits of the program were temporarily expanded, or provided other explanations. We did not find more than a minimal amount of missing values in fields relating to approved amount, approval year of purchase, and key variables for our analysis of performance. As such, we determined that the data were sufficiently reliable for our purposes. To describe what is known about the costs of the Patriot Express program from 2007 through 2012, we obtained and analyzed SBA cash-flow data on SBA purchases of defaulted loans, as well as data on offsets, which include the following three categories: (1) upfront fees generated by the program at time of approval, (2) annual fees based on loans in a lender’s portfolio in good standing, and (3) recoveries either from the proceeds of attached collateral to the defaulted loans or subsequent payments on loans following purchase by SBA. Additionally, we reviewed SBA guidance, the agency’s standard operating procedures, and inspector general reports to obtain more information on cash-flow data. To assess data reliability, we interviewed SBA representatives from the Office of Financial Analysis and Modeling, the Office of Performance and Systems Management, and the Office of Credit Risk Management to understand how they collect data and help ensure the integrity of the cash-flow data, as well as how they use these data for budgetary purposes. We also submitted follow-up questions to SBA officials at both the Office of Financial Analysis and Modeling and the Office of Credit Risk Management to clarify our understanding of the data. 
We determined that the data were sufficiently reliable for our purposes. To assess the effect of the Patriot Express program on members of the military community eligible to participate in the program, we conducted semi-structured interviews with a sample of 24 Patriot Express loan recipients about how the Patriot Express loan affected their businesses and their views on how the program could be improved. We selected this nongeneralizable, stratified random sample of loan recipients to reflect two factors: the recipient’s loan amount and the number of Patriot Express loans their lender had made from the program’s inception through 2012. While the results of these interviews could not be generalized to all Patriot Express loan recipients, they provided insight into the benefits and challenges of the program. Table 5 below highlights selected characteristics of the Patriot Express loan recipients we interviewed. To obtain the perspectives of veteran entrepreneurs who were aware of the Patriot Express program and appeared to meet the eligibility requirements for a Patriot Express loan but instead obtained an SBA Express or 7(a) loan, we attempted to contact a nongeneralizable sample of veterans who participated in these two other programs. Of the 15 SBA Express veteran loan recipients and 16 7(a) veteran loan recipients whom we were able to contact, we interviewed four veteran entrepreneurs who obtained a 7(a) loan. We conducted interviews with these recipients to inquire about their experiences with the 7(a) loan and to obtain their views on the Patriot Express program. We also interviewed a sample of lenders to obtain their perspectives on the benefits and challenges of the Patriot Express program. We selected the 10 lenders that made the greatest number of Patriot Express loans from 2007 through 2012. The selected lenders made approximately 48 percent of the Patriot Express loans over this period and consisted of various types of lending institutions, including large banks, a credit union, and a small business lending company (SBLC). While the results of these interviews could not be generalized to all lenders participating in the Patriot Express program, they provided insight into the key differences in administering the program as compared to other SBA loan programs. To obtain a broader set of lender perspectives on the program, we interviewed representatives from the National Association of Government Guaranteed Lenders (NAGGL), a trade organization representing SBA 7(a) lenders. We also interviewed representatives from three veteran service organizations with an interest in veteran entrepreneurship—namely, the Veteran Entrepreneurship Task Force (VET Force), the Veteran Chamber of Commerce, and the American Legion—to gather information on the benefits and challenges of the program that their members have experienced. Finally, we interviewed SBA officials from the Offices of Capital Access and Veterans Business Development who are responsible for managing and promoting the program. We interviewed these officials to obtain their perspectives on identified benefits and challenges of the program, promotion of the program and its lenders, and efforts to evaluate the program’s effect on members of the military community eligible to participate.
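To make the loan-performance and cost calculations described earlier in this appendix more concrete, the sketch below outlines how the default-rate analyses and the net-cost figure (guarantee purchases less the three offsets) could be reproduced from loan-level and cash-flow data. This is a simplified illustration under stated assumptions, not SBA’s or GAO’s actual analysis code; the column names (for example, program, approval_date, purchase_date, gross_approved) are hypothetical stand-ins for the SBA data fields discussed above, and the exclusion and default rules follow the definitions in this appendix.

import pandas as pd

# Illustrative sketch only -- not SBA's or GAO's actual analysis code.
# Column names (program, approval_date, purchase_date, disbursement_date,
# max_balance, cancelled, gross_approved) are hypothetical stand-ins for the
# SBA data fields described in this appendix.

def prepare_loans(loans: pd.DataFrame) -> pd.DataFrame:
    # Apply the exclusions described above: drop cancelled loans and loans
    # with missing disbursement dates unless they showed other loan activity
    # (a positive balance at some point in their history).
    loans = loans.copy()
    loans["approval_date"] = pd.to_datetime(loans["approval_date"])
    loans["purchase_date"] = pd.to_datetime(loans["purchase_date"])
    active = loans["disbursement_date"].notna() | (loans["max_balance"] > 0)
    loans = loans[active & ~loans["cancelled"]]
    # A loan is defaulted (purchased) if it has a purchase date on or after
    # its approval date.
    loans["defaulted"] = (loans["purchase_date"].notna()
                          & (loans["purchase_date"] >= loans["approval_date"]))
    return loans

def default_rates_by_cohort(loans: pd.DataFrame) -> pd.DataFrame:
    # Unweighted default rates by program and calendar-year approval cohort.
    loans = loans.assign(cohort=loans["approval_date"].dt.year)
    return (loans.groupby(["program", "cohort"])
                 .agg(loan_count=("defaulted", "size"),
                      total_approved=("gross_approved", "sum"),
                      default_rate=("defaulted", "mean"))
                 .reset_index())

def default_rates_by_loan_size(loans: pd.DataFrame) -> pd.DataFrame:
    # Group loans into the dollar bands used in this report, then compute
    # unweighted default rates for each program and band.
    bands = [0, 25_000, 150_000, 350_000, 500_000, 1_000_000, 5_000_000]
    labels = ["<=25k", "25k-150k", "150k-350k", "350k-500k", "500k-1M", "1M-5M"]
    loans = loans.assign(size_band=pd.cut(loans["gross_approved"],
                                          bins=bands, labels=labels))
    return (loans.groupby(["program", "size_band"], observed=True)["defaulted"]
                 .mean()
                 .rename("default_rate")
                 .reset_index())

def net_program_cost(purchases, upfront_fees, annual_fees, recoveries):
    # Net cost of the program: guarantee purchases on defaulted loans minus
    # the three offsets (upfront fees, annual fees, and recoveries).
    return purchases - (upfront_fees + annual_fees + recoveries)

In this sketch the default rates are unweighted counts of loans, consistent with the approach described above, and net_program_cost returns a positive value when purchases exceed the combined offsets, as was the case for Patriot Express from 2007 through 2012.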
To describe other ways in which veteran entrepreneurs accessed capital, as part of our interviews with Patriot Express and 7(a) loan recipients, as well as selected lenders and veteran service organizations, we also inquired about other ways in which veterans can gain access to capital. To describe the training and counseling efforts SBA has in place for veteran entrepreneurs, we obtained and reviewed reports by the Interagency Task Force on Veterans Small Business Development from 2011 and 2012. We also reviewed SBA documents related to training and counseling resources and SBA information on the number of veterans that have used these resources from 2008 through 2012. We also interviewed SBA officials responsible for these efforts. To describe the perspectives of veteran entrepreneurs on the effectiveness of SBA’s training and counseling efforts, we reviewed results from SBA’s annual Veteran Business Outreach Center client satisfaction survey from 2008 through 2012. We also interviewed the selected veteran service organizations and Patriot Express and 7(a) loan recipients on their perspective on the quality of training and counseling efforts sponsored by SBA. To determine SBA’s prior experience with pilots initiated under its own authority, we obtained and reviewed pertinent regulations on SBA’s authority to initiate pilots and applicable limitations. We also reviewed two SBA Office of Inspector General (OIG) reports pertaining to SBA’s experience with the Community Express and SBA Express pilot programs. To assess how well SBA has conducted pilot programs, including Patriot Express, we reviewed components identified in our previous work as key features of a program evaluation and an evaluation plan. To evaluate SBA’s internal controls related to ensuring that the Patriot Express program is available only to members of the military community eligible to participate in the program, we reviewed SBA’s standard operating procedures related to borrower eligibility requirements. Also, as part of our interviews with the selected lenders and borrowers previously discussed, we inquired about the documentation used to establish eligibility for the program. To determine how SBA oversees lenders to ensure they are complying with the Patriot Express eligibility requirements, we reviewed SBA’s standard operating procedures related to lender oversight. We also obtained copies of examination reports for the top 10 Patriot Express lenders (based on the number of loans made) from 2007 through 2012. We reviewed these reports to determine the number of Patriot Express loans sampled during the examination and SBA’s disposition on whether the lender was complying with SBA rules and regulations related to borrower eligibility. Additionally, we interviewed officials from the Office of Credit Risk Management to inquire about SBA’s oversight of its lenders as it relates to the Patriot Express program. To determine how SBA reviews defaulted loans as part of its purchase review, we reviewed SBA’s standard operating procedures related to these reviews, as well as an SBA OIG report on improper payments, which also described the purchase reviews. We also met with officials from SBA’s Office of Financial Program Operations to understand how SBA staff review submissions from lenders requesting that SBA purchase defaulted loans. 
Finally, to help assess the extent to which the Patriot Express program could be susceptible to fraud and abuse, we reviewed SBA’s internal control standards related to ensuring that Patriot Express loans were made to eligible members of the military community. We compared these internal controls to federal internal control standards, as well as to GAO’s Fraud Prevention Framework. We also interviewed officials from SBA’s Office of Inspector General to learn about scenarios under which the Patriot Express program could be susceptible to fraud and abuse. We conducted this performance audit from November 2012 to September 2013 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comparison of SBA’s 7(a), SBA Express, and Patriot Express Loan Programs In addition to the Patriot Express pilot program, there are several delivery methods within the SBA 7(a) program, including regular (nondelegated) 7(a), delegated 7(a) loans made by lenders in the Preferred Lenders Program (PLP), and SBA Express loans. While all delivery methods provide a borrower with an SBA-guaranteed loan, there are several similarities and differences between these three programs, such as eligibility restrictions, maximum loan amounts, and percent of guarantee. Table 6 below compares the key features of these three loan programs discussed throughout this report. Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Andrew Pauline (Assistant Director), Benjamin Bolitzer, Daniel Kaneshiro, José R. Peña, Christine Ramos, Jessica Sandler, Jennifer Schwartz, Jena Sinkfield, and Andrew Stavisky made key contributions to this report.
In June 2007, SBA established the Patriot Express Pilot Loan Program within its 7(a) loan guarantee program to provide small businesses owned and operated by veterans and other eligible members of the military community access to capital. Through Patriot Express, SBA guarantees individual small business loans that lenders originate. GAO was asked to evaluate the program. This report examines (1) trends in the volume and performance of Patriot Express and related SBA loan programs; (2) the effect of the program on eligible members of the military community; and (3) SBA internal controls to ensure that only eligible borrowers participate. GAO analyzed data on performance and costs of Patriot Express and other similar SBA loan programs from 2007 through 2012; interviewed selected borrowers, lenders, and veteran service organizations; and reviewed SBA internal control guidance on borrower eligibility. Patriot Express loans valued at about $703 million have defaulted at a higher rate than loans under the Small Business Administration's (SBA) other related loan guarantee programs, and losses for Patriot Express have exceeded its income. With the exception of loans approved in 2007, Patriot Express loans have defaulted at a higher rate than loans made under SBA's main 7(a) program or loans made under SBA's streamlined loan guarantee program (SBA Express). The Patriot Express program's overall default rate was significantly higher for smaller loans, especially for loans below $25,000 (20 percent). Additionally, one lender accounted for more than 64 percent of these smaller loans and experienced higher default rates than the remaining lenders. From 2007 through 2012, losses in the Patriot Express program exceeded income by $31.1 million (not accounting for future fee revenues or funds recovered from loans in default). Selected borrowers and lenders, as well as veteran service organizations GAO met with, reported various benefits and challenges to the Patriot Express program, but SBA has yet to evaluate the effect of this pilot program on eligible members of the military community. Borrowers and lenders said that some benefits of the program were that it helped veterans expand their businesses and allowed them to take advantage of the streamlined application process. Some challenges they identified were low awareness of the program and which lenders participated in the program. In 2010, SBA extended the Patriot Express pilot through 2013 to allow time to evaluate the effect of the program. To date, SBA has not evaluated the program or established a plan of what it intends to do to evaluate it. SBA officials told us that they focused their resources on evaluating 7(a) loans because there are many more of them and, therefore, they pose a greater risk to SBA than Patriot Express loans. In addition to Patriot Express, SBA has previously initiated other pilot programs that it has not evaluated. GAO has found that a program evaluation gives an agency the opportunity to refine program design, assess if program operations have resulted in the desired benefits, and, for pilots, determine whether to make the programs permanent. Without conducting evaluations of pilot programs, SBA lacks the information needed to assess their performance and their effects on eligible participants and decide whether to extend these programs, including Patriot Express. 
SBA's internal controls over lenders may not provide reasonable assurance that Patriot Express loans are only made to eligible members of the military community and that only these members benefit from loan proceeds. SBA relies on lenders to verify and document borrower eligibility at the time of loan approval. One of SBA's controls over lenders' compliance with eligibility requirements consists of sampling loan files during examinations of the 7(a) program, but few Patriot Express loans are reviewed. Patriot Express is intended to assist only eligible members of the military community and SBA officials told us that they expect borrowers to maintain eligibility after the loan is disbursed. But SBA has not developed procedures for lenders to provide reasonable assurance that borrowers maintain this eligibility. Federal internal control standards and GAO's fraud-prevention framework indicate that ongoing monitoring is an important component of an effective internal control system. Without enhanced internal controls, particularly with respect to monitoring of borrowers, SBA lacks assurance that Patriot Express loans are serving only eligible borrowers.
GAO_GAO-06-266T
Background FAA’s safety oversight system is made up of a number of programs for airlines and other entities. Safety oversight programs for airlines provide for their initial certification, periodic surveillance, and inspection. Since 1985, FAA has used National Work Program Guidelines (NPG), its traditional inspection program for airlines, as a primary means of ensuring that airlines comply with safety regulations. In NPG, an FAA committee of program managers identifies an annual minimum set of required inspections that are to be undertaken to ensure that airlines are in compliance with their operating certificates. In 1998, the agency implemented the Air Transportation Oversight System (ATOS), which currently oversees the nation’s largest 15 commercial airlines and cargo carriers, with the goal of eventually including all commercial passenger and cargo airlines in it. ATOS emphasizes a system safety approach that extends beyond periodically checking airlines for compliance with regulations to the use of technical and managerial skills to identify, analyze, and control hazards and risks. For example, under ATOS, inspectors develop surveillance plans for each airline, based on data analysis and assessment of risks, and adjust the plans periodically based on inspection results. However, the agency has been delayed in placing a significant number of other passenger airlines in ATOS, resulting in 99 passenger airlines, which we refer to as non-legacy airlines, continuing to be overseen through NPG, a process that is not risk-based or system safety oriented. In 2002, FAA added the Surveillance and Evaluation Program (SEP) to the NPG inspection program to incorporate principles of ATOS into its oversight of non-legacy passenger airlines. The two programs are used together to establish the number of annual inspections for non-legacy airlines. (Appendix 1 describes each inspection program.) Figure 1 illustrates some typical activities covered during inspections. FAA’s safety oversight programs for other aspects of the aviation industry—including manufacturers of aircraft and aircraft parts, repair stations, flight schools, aviation maintenance technician schools, pilots, and mechanics—involve certification, surveillance, and inspection by FAA’s safety inspectors, engineers, flight surgeons, and designated representatives. FAA authorizes about 13,400 private individuals and about 180 organizations (called “designees”) to act as its representatives to conduct many safety certification activities, such as administering flight tests to pilots, inspecting repair work by maintenance facilities, conducting medical examinations of pilots, and approving designs for aircraft parts. These designees are grouped into 18 different programs and are overseen by three FAA offices—Flight Standards Service, Aerospace Medicine, and Aircraft Certification Service—all of which are under the Office of Aviation Safety (see fig. 2). Since 1990, FAA has emphasized gaining compliance from the aviation industry through cooperative means by establishing industry partnership programs with the aviation community that allow participants, such as airlines and pilots, to self-report violations of safety regulations and help identify safety deficiencies, and potentially mitigate or avoid fines or other legal action. For example, the Voluntary Disclosure Program encourages the self-reporting of manufacturing problems and safety incidents by participants that can include air carriers and repair stations. 
Appendix II describes the industry partnership programs. When violations of statutory and regulatory requirements are identified through inspections, through the partnership programs in certain cases, or through other methods, FAA has a variety of enforcement tools that it may use to respond to them, including administrative actions (such as issuing a warning notice or a letter of correction that includes the corrective actions the violator will take) and legal sanctions (such as levying a fine or suspending or revoking a pilot or other FAA-issued certificate). FAA’s Safety Oversight System Focuses on Risk Identification and Mitigation Through System Safety, Leveraging of Resources, and Enforcement of Safety Regulations, but Benefits Are Not Being Fully Realized In recent reports, we found that FAA’s safety oversight system has programs that focus on risk identification and mitigation through a system safety approach, the leveraging of resources, and enforcement of safety regulations, but that the benefits of these programs are not being fully realized. In our recent report on FAA’s oversight of non-legacy airlines, we found that the focus on risk identification through the addition of SEP has many strengths and enhances the efficiency of FAA’s oversight activities. Rather than relying on NPG’s customary method of conducting a set number of inspections of an airline’s operations, SEP emphasizes a system safety approach that uses risk analysis techniques. SEP allows for the efficient use of inspection staff and resources by prioritizing workload based on the areas of highest risk, and it includes a requirement that inspectors verify that corrective actions have occurred. For example, FAA has developed risk assessment worksheets for SEP, aligned with key airline systems, that guide inspectors through identifying and prioritizing risks. The worksheets guide inspectors to organize the results of their previous inspections and surveillance into a number of areas, such as flight operations and personnel training, in order to identify specific risks in each area and target the office’s resources to mitigating those risks. The development of a system safety approach addresses our long-standing concern that FAA did not have a methodology for assessing airline safety risks so that it could target limited inspection resources to high-risk conditions. Another strength of SEP, consistent with findings in our past reports, is that it relies on teams of inspectors, which are generally more effective than individual inspectors in their ability to collectively identify concerns. However, the benefits of FAA’s system safety approach for the inspection of non-legacy airlines could be enhanced by a more complete implementation of SEP and by addressing other challenges. The inspection workload for non-legacy airlines is still heavily oriented toward the NPG’s non-risk-based activities. For example, as shown in table 1, from fiscal years 2002 through 2004, 77 percent of inspection activities required for the top 25 non-legacy airlines in terms of the number of enplanements were identified through NPG, and the remaining percentage of inspection activities were identified based on risk through SEP. Although inspectors can replace NPG-identified activities with SEP-identified activities that they deem to constitute a greater safety risk, we found that FAA inspectors interpret agency emphasis on NPG as discouraging this practice.
In order to ensure that all inspectors who oversee non-legacy airlines have a complete and timely understanding of the agency’s policies relating to the inspection process, we recommended in September 2005 that FAA improve communication with and training of inspectors in this area. Another way that FAA attempts to enhance the efficiency of its oversight activities is through its designee programs. We reported that FAA maximizes its resources by allowing designees to perform about 90 percent of certification-related activities, thus allowing FAA to better concentrate its limited staff resources on the most safety-critical functions. For example, while designees conduct routine certification functions, such as approvals of aircraft technologies that the agency and designees have had previous experience with, FAA focuses on new and complex aircraft designs or design changes. In addition, the use of designees expands FAA’s access to technical expertise within the aviation community. For the aviation industry, the designee programs enable individuals and organizations to obtain required FAA certifications—such as approvals of the design, production, and airworthiness of aircraft—in a timely manner, thus reducing delays and costs to the industry that might result from scheduling direct reviews by FAA staff. For example, officials from an aircraft manufacturer told us that the use of designees has added significantly to the company’s ability to enhance and improve daily operations by decreasing certification delivery time and increasing the flexibility and utilization of company resources. In addition, designees are convenient to the aviation industry due to their wide dispersal throughout the United States. However, concerns about the consistency and adequacy of designee oversight that FAA field offices provide have been raised by experts and other individuals we interviewed. For example, designees and industry officials that we spoke with indicated that FAA’s level of oversight and interpretation of rules differ among regions and among offices within a region, which limits FAA’s assurance that designees’ work is performed uniformly in accordance with FAA’s standards and policy. Experts also ranked this issue as a top weakness. Table 2 shows the top five weaknesses identified by our experts. Experts also made a number of suggestions to strengthen the designee program, including clearly defining and following agency criteria for selecting designees and increasing penalties for designees found to violate standards or who do not exercise proper judgment. To improve management control of the designee programs, and thus increase assurance that designees meet FAA’s performance standards, we recommended that FAA develop mechanisms to improve the compliance of FAA program and field offices with existing policies and incorporate, as appropriate, suggestions from our expert panel. In response to our recommendations, FAA is planning, among other things, to form a team to identify and share best practices for overseeing designee programs. FAA also leverages its resources through its industry partnership programs. These partnership programs are designed to assist the agency in receiving safety information, including reports of safety violations. 
According to FAA officials, the Aviation Safety Action Program, Aviation Safety Reporting Program, and Voluntary Disclosure Reporting Program augment FAA’s enforcement activities and allow FAA to be aware of many more safety incidents than are discovered during inspections and surveillance. In addition, the Flight Operational Quality Assurance Program provides safety information in the form of recorded flight data from participating airlines. FAA has established some management controls over its partnership programs, such as procedures to track actions taken to correct safety incidents reported under the programs, but the agency lacks management controls to measure and evaluate the performance of these programs, an issue that we will discuss later in the testimony. FAA’s enforcement process, which is intended to ensure industry compliance with safety regulations, is another important element of its safety oversight system. FAA’s policy for assessing legal sanctions against entities or individuals that do not comply with aviation safety regulations is intended to deter future violations. FAA has established some management controls over its enforcement efforts, with procedures that provide guidance on identifying regulated entities and individuals that are subject to inspections or surveillance actions, determining workload priorities on the basis of the timing and type of inspection to be performed, detecting violations of safety regulations, tracking the actions that are taken by the entities and individuals to correct the violations and achieve compliance with regulations, and imposing punitive sanctions or remedial conditions on the violators. These procedures provide FAA inspectors, managers, and attorneys with a process to handle violations of safety regulations that are found during routine inspections. However, we found that the effect of FAA’s legal sanctions on deterrence is unclear, and that recommendations for sanctions are sometimes changed on the basis of factors that are not associated with the merits of the case. We found that from fiscal years 1993 through 2003, attorneys in FAA’s Office of the Chief Counsel authorized a 52 percent reduction in the civil monetary penalties assessed from a total of $334 million to $162 million. FAA officials told us that the agency sometimes reduces sanctions in order to prioritize attorneys’ caseloads by closing the cases more quickly through negotiating a lower fine. Economic literature on deterrence suggests that although negative sanctions (such as fines and certificate suspensions) can deter violations, if the violator expects sanctions to be reduced, he or she may have less incentive to comply with regulations. In effect, the goal of preventing future violations is weakened when the penalties for present violations are lowered for reasons not related to the merits of the case. In addition, FAA lacks management controls to measure and evaluate its enforcement process, which we discuss later in this testimony. FAA Has Made Training an Integral Part of Its Safety Oversight System but Several Actions Could Improve Results FAA’s use of a risk-based system safety approach to inspections requires inspectors to apply data analysis and auditing skills to identify, analyze, assess, and control potential hazards and risks. Therefore, it is important that inspectors are well-trained in this approach and have sufficient knowledge of increasingly complex aircraft, aircraft parts, and systems to effectively identify safety risks. 
It is also important that FAA’s large cadre of designees is well-trained in federal aviation regulations and FAA policies. FAA has made training an integral part of its safety inspection system and has established mandatory training requirements for its workforce as well as designees. FAA provides inspectors with extensive training in federal aviation regulations; inspection and investigative techniques; and technical skills, such as flight training for operations inspectors. The agency provides its designees with an initial indoctrination that covers federal regulations and agency policies, and refresher training every 2 to 3 years. We have reported that FAA has generally followed effective management practices for planning, developing, delivering, and assessing the impact of its technical training for safety inspectors, although some practices have yet to be fully implemented. In its planning activities for training, FAA has linked technical training efforts to its goal of safer air travel and has identified technical proficiencies needed to improve safety inspectors’ performance in meeting this goal. For example, FAA’s Offices of Flight Standards and Aircraft Certification have identified gaps in several of the competencies required to conduct system safety inspections, including risk assessment, data analysis, systems thinking, and designee oversight. According to FAA, it is working to correct these gaps. We have also identified gaps in the training provided to inspectors in the Office of Flight Standards who oversee non-legacy airlines, and have recommended that FAA improve inspectors’ training in areas such as system safety and risk management to ensure that these inspectors have a complete and timely understanding of FAA’s inspection policies. We have identified similar competency gaps related to designee oversight. For example, FAA does not require refresher training concerning designee oversight, which increases the risk that staff do not retain the information, skills, and competencies required to perform their oversight responsibilities. We recommended that FAA provide additional training for staff who directly oversee designees. We did not identify any specific gaps in the competencies of designees. In prioritizing funding for course development activities, FAA does not explicitly consider which projects are most critical. Figure 3 describes the extent to which FAA follows effective management practices in planning training. In developing its training curriculum for inspectors, FAA also for the most part follows effective management practices, such as developing courses that support changes in inspection procedures resulting from regulatory changes or agency initiatives. On the other hand, FAA develops technical courses on an ad hoc basis rather than as part of an overall curriculum for each inspector specialty—such as air carrier operations, maintenance, and cabin safety—because the agency has not systematically identified the technical skills and competencies each type of inspector needs to effectively perform inspections. Figure 4 describes the extent to which FAA follows effective management practices in developing training. In delivering training, FAA has also generally followed effective management practices. (See fig. 5.) 
For example, FAA has established clear accountability for ensuring that inspectors have access to technical training, developed a way for inspectors to choose courses that meet job needs and further professional development, and offers a wide array of technical and other courses. However, both FAA and its inspectors recognize the need for more timely selection of inspectors for technical training. In addition, FAA acknowledges the need to increase communication between inspectors and management with respect to the training program, especially to ensure that inspectors have bought into the system safety approach to inspections. FAA offers numerous technical courses from which inspectors can select to meet job needs. However, from our survey of FAA’s inspectors, we estimate that only about half think that they have the technical knowledge needed for their jobs. FAA officials told us that inspectors’ negative views stem from their wanting to acquire proficiencies that are not as crucial in a system safety environment. We also found a disparity between inspectors and FAA concerning the receipt of requested training. We estimated that 28 percent of inspectors believe that they get the technical training that they request. However, FAA’s records show that FAA approves about 90 percent of these requests, and inspectors are making good progress in receiving training. Over half of the inspectors have completed at least 75 percent of technical training that FAA considers essential. FAA officials told us that inspectors’ negative views on their technical knowledge and the training they have received stem from their not accepting FAA’s move to a system safety approach. That is, the inspectors are concerned about acquiring individual technical proficiency that is not as crucial in a system safety environment. Given that it has not completed assessing whether training for each inspector specialty meets performance requirements, FAA is not in a position to make definitive conclusions concerning the adequacy of inspector technical training. FAA also generally followed effective management practices in evaluating training. The agency requires that each training course receive a systematic evaluation every 3 years to determine if the course is up to date and relevant to inspectors’ jobs, although training officials noted that many courses have yet to undergo such an evaluation. However, FAA collects limited information on the effectiveness of training, and its evaluations have not measured the impact of training on FAA’s mission goals, such as reducing accidents. Training experts acknowledge that isolating performance improvements resulting from training programs is difficult for any organization. (See fig. 6.) While FAA follows many effective management practices in its training program, the agency also recognizes the need for improvements, including (1) systematically assessing inspectors’ needs for technical and other training, (2) better timing of technical training so that inspectors receive it when it is needed to perform their jobs, and (3) better linking the training provided to achieving agency goals of improving aviation safety. FAA has begun to act in these areas, and we believe that if effectively implemented, the actions should improve the delivery of training and ultimately improve aviation safety. Therefore, it is important for FAA to follow through with its efforts. 
As a result, we recommended in September 2005, among other things, that in order to ensure that inspector technical training needs are identified and met in a timely manner, FAA systematically assess inspectors’ technical training needs, better align the timing of training with when inspectors need the training to do their jobs, and gain inspectors’ acceptance of changes made or planned to their training. It is important that both FAA’s inspection workforce and FAA-certified aviation mechanics are knowledgeable about increasingly complex aircraft, aircraft parts, and systems. While we did not attempt to assess the technical proficiency that FAA’s workforce requires and will require in the near future, FAA officials said that inspectors do not need a substantial number of technical training courses because inspectors are hired with a high degree of technical knowledge of aircraft and aircraft systems. They further indicated that inspectors can sufficiently keep abreast of many of the changes in aviation technology through FAA and industry training courses and on-the-job training. However, in its certification program for aviation mechanics, we found that FAA standards for minimum requirements for aviation courses at FAA-approved aviation maintenance technician schools and its requirements for FAA-issued mechanics certificates have not kept pace with the latest technologies. In 2003, we reported that those standards had not been updated in more than 50 years. We recommended that FAA review the curriculum and certification requirements and update both. FAA plans to make changes in the curriculum for FAA-approved aviation maintenance technicians that reflect up-to-date aviation technologies and to finalize and distribute a revised Advisory Circular in March 2006 that describes the curriculum changes. FAA then plans to allow the aviation industry time to implement the recommended curriculum changes before changing the requirements for FAA-issued mechanics certificates. FAA Has Evaluated Some Safety Programs, but the Lack of Evaluative Systems and Nationwide Data Impedes FAA’s Ability to Continuously Monitor Its Safety Programs It is important for FAA to have effective evaluative processes and accurate nationwide data on its numerous safety oversight programs so that program managers and other officials have assurance that the safety programs are having their intended effect. Such processes and data are especially important because FAA’s workforce is so dispersed worldwide—with thousands of staff working out of more than 100 local offices—and because FAA’s use of a risk-based system safety approach represents a cultural shift from its traditional inspection program. Evaluation is important to understanding whether the cultural shift has effectively occurred. Our most recent work has shown a lack of such processes, and limitations in the data, for FAA’s inspection programs for non-legacy airlines, designee programs, industry partnership programs, and enforcement program. In response to recommendations that we have made regarding these programs, some improvements are being made. On the positive side, as we mentioned earlier, our most recent work found that FAA generally follows effective management practices in evaluating individual technical training courses.
FAA has not evaluated its inspection oversight programs for non-legacy airlines—which include SEP and NPG—to determine how the programs contribute to the agency’s mission and overall safety goals, and its nationwide inspection database lacks important information that could help it perform such evaluations—such as whether risks identified through SEP have been mitigated. In addition, the agency does not have a process to examine the nationwide implications of or trends in the risks that inspectors have identified through their risk assessments—information it would need to proactively determine risk trends at the national level on a continuous basis. FAA’s evaluation office instead conducts analyses of the types of inspections generated under SEP by airline and FAA region, according to FAA. We recommended that FAA develop a continuous evaluative process for activities under SEP and link SEP to the performance-related goals and measures developed by the agency, track performance toward these goals, and determine appropriate program changes. FAA is considering our recommendation, but its plan to place the remaining non-legacy airlines in the ATOS program by the end of fiscal year 2007 might make this recommendation unnecessary, according to the agency. Since FAA’s past efforts to move airlines to ATOS have experienced delays, we believe that this recommendation is still valid. We also found that FAA lacked requirements or criteria for periodically evaluating its designee programs. In 2004, we reported that the agency had evaluated 6 of its 18 designee programs over the previous 7 years and had plans to evaluate 2 more, although it had no plans to evaluate the remaining 10 programs because of limited resources. FAA conducted these evaluations on an ad hoc basis usually at the request of headquarters directors or regional office managers. In addition, we found that FAA’s oversight of designees is hampered, in part, by the limited information on designees’ performance contained in the various designee databases. These databases contain descriptive information on designees, such as their types of designations and status (i.e., active or terminated). More complete information would allow the agency to gain a comprehensive picture of whether staff are carrying out their responsibilities to oversee designees. To improve management control of the designee programs, and thus increase assurance that designees meet the agency’s performance standards, we recommended that FAA establish a process to evaluate all designee programs and strengthen the effectiveness of its designee databases by improving the consistency and completeness of information in them. To address our recommendations, FAA expects to develop a plan to evaluate all designee programs on a recurring basis and intends to establish a team that will examine ways to improve automated information related to designees. In addition, we found that FAA does not evaluate the effects of its industry partnership and enforcement programs to determine if stated program goals, such as deterrence of future violations, are being achieved. For example, little is known about nationwide trends in the types of violations reported under the partnership programs or whether systemic, nationwide causes of those violations are identified and addressed. Furthermore, FAA’s enforcement policy calls for inspectors and legal counsel staff to recommend or assess enforcement sanctions that would potentially deter future violations. 
However, without an evaluative process, it is not known whether the agency's practice of generally closing cases with administrative actions rather than legal sanctions, and at times reducing the amount of fines, as mentioned earlier in this testimony, weakens the deterrent effect that would be expected from sanctions. FAA's ability to evaluate the impact of its enforcement efforts is also hindered by the lack of useful nationwide data. FAA inspection offices maintain independent, site-specific databases because they find the nationwide enforcement database—the Enforcement Information System (EIS)—less useful than it could be, owing to missing or incomplete historical information about enforcement cases. As a result of incomplete data on individual cases, FAA inspectors lack the complete compliance history of violators when assessing sanctions. We recommended that FAA develop evaluative processes for its enforcement activities and partnership programs and use them to create performance goals, track performance toward those goals, and determine appropriate program changes. We also recommended that FAA take steps to improve the usefulness of the EIS database by enhancing the completeness of enforcement information. FAA expects to address some of these issues as it revises its enforcement policy, which is expected to be issued later in fiscal year 2006. In addition, FAA has established a database workgroup that is developing long- and short-term solutions to address the problems with EIS.
Recommendations We Have Made to Improve FAA's Safety Oversight System
In order to help FAA fully realize the benefits of its safety oversight system, we have made a number of recommendations to address weaknesses that we identified in our reviews. These recommendations have not been fully implemented, although in some cases FAA has taken steps toward addressing them. Evaluative processes and relevant data are particularly important as FAA works to change its culture by incorporating a system safety approach into its oversight, and we have recommended that FAA develop continuous evaluative processes for its oversight programs for non-legacy airlines, its designee programs, and its industry partnership and enforcement programs, and systematically assess inspectors' technical training needs. In addition, FAA's nationwide databases need improvements in their comprehensiveness and ease of use. Without comprehensive nationwide data, FAA does not have the information needed to evaluate its safety programs and gain assurance that they are having the intended results. We have recommended that FAA improve the completeness of its designee and enforcement databases. Continuous improvements in these areas are critical to FAA's ability to have a robust "early warning system" and maintain one of the safest aviation systems in the world.
Contacts and Acknowledgments
For further information on this testimony, please contact Dr. Gerald Dillingham at (202) 512-2834 or by email at [email protected]. Individuals making key contributions to this testimony include Brad Dubbs, Phillis Riley, Teresa Spisak, and Alwynne Wilbur.
Appendix I: Description of FAA's Inspection Programs
Table 3 describes the Federal Aviation Administration's (FAA) three inspection processes for overseeing airlines: the Air Transportation Oversight System (ATOS), the National Work Program Guidelines (NPG), and the Surveillance and Evaluation Program (SEP).
Many of the elements of ATOS, such as the use of data to identify risks and the development of surveillance plans by inspectors, are incorporated in the SEP process. The NPG process, in contrast, is not focused on the use of data and relies on an established set of inspections that are not risk based.
Appendix II: Description of FAA's Partnership Programs
Aviation Safety Action Program (ASAP)
Participation: Participants include employees of air carriers and repair stations that have entered into a memorandum of understanding with the Federal Aviation Administration (FAA). The memoranda can cover employee groups, such as pilots, maintenance employees, dispatchers, or flight attendants. Each employee group is covered by a separate memorandum of understanding. As of June 2004, FAA had accepted 54 memoranda of understanding and received over 80,000 ASAP reports, which may or may not include safety violations, according to FAA officials.
Purpose: ASAP seeks to improve aviation safety through the voluntary self-reporting of safety incidents under the procedures set forth in the memorandum of understanding. Under the program, FAA does not take enforcement action against employees who voluntarily self-report safety violations when the report is sole-source (that is, the report is the only way FAA would have learned about the incident) and pursues only administrative action for reports that are not sole-source. Incidents that involve alcohol, drugs, criminal activity, or an intentional disregard for safety are not eligible for self-reporting under ASAP.
Process: Each memorandum of understanding is a voluntary partnership between FAA, the airline, and an employee group. Although employee groups are not always included, FAA encourages their participation. The memorandum of understanding ensures that employees who voluntarily disclose violations of FAA safety regulations in accordance with the procedures and guidelines of ASAP will receive administrative action or no action in lieu of legal enforcement action. Once a memorandum of understanding is approved, employees can begin reporting violations that fall under the agreement. When a violation occurs, an employee notifies the Event Review Committee, which includes representatives from FAA and the airline or the repair station and generally includes the appropriate employee association. The committee must be notified in writing within the time limit specified in the memorandum of understanding. The committee then determines whether to accept the report under the ASAP program. If the report is accepted (it meets the acceptance criteria in the memorandum and does not involve criminal activity, substance abuse, controlled substances, or alcohol), then the committee determines the action to take. That action may include remedial training or administrative action, but it will not include a legal sanction.
Results: FAA does not know the overall program results because it does not have a national, systematic process in place to evaluate the overall success of ASAP. However, FAA cites examples that describe ASAP's contribution to enhanced aviation safety. These examples include identifying deficiencies in aircraft operations manuals, airport equipment, and runways. In July 2003, FAA's Compliance and Enforcement Review recommended that FAA evaluate the use and effectiveness of this program.
Aviation Safety Reporting Program (ASRP)
Participation: Participants are all users of the national airspace system, including air traffic controllers and employees of air carriers and repair stations.
Purpose: The program is designed to improve aviation safety by offering limited immunity to individuals who voluntarily report safety incidents. ASRP was founded after TWA Flight 514 crashed on approach to landing in December 1974 because the crew misinterpreted information on the approach chart. This accident occurred only 6 weeks after the crew of another plane had made the same error.
Process: The National Aeronautics and Space Administration (NASA) administers this program. When a safety incident occurs, a person may submit a form and incident report to NASA. Four types of forms can be submitted to NASA: (1) Air Traffic Control, (2) General Reports (includes Pilots), (3) Flight Attendants, and (4) Maintenance Personnel. At least two aviation safety analysts read these forms and the incident reports that accompany them. The NASA analysts screen the incident reports for urgent safety issues, which are referred for immediate action to the appropriate FAA office or aviation authority. NASA analysts also edit the report's narrative to eliminate any identifying information. In addition, each report has a tear-off portion, which is separated and returned to the individual who reported the incident as a receipt confirming the report's acceptance into the ASRP. When a safety violation that has been previously reported under ASRP comes to the attention of FAA, the agency issues a legal sanction, which is then waived. Reports that are not eligible to have a legal sanction waived include those involving deliberate violations, criminal offenses, or accidents; reports filed by participants who have committed a violation of federal aviation regulations or law within the last 5 years; and reports filed later than 10 days following an incident.
Results: While FAA and NASA do not know the overall program results because they do not have a formal national evaluation program to measure the overall effectiveness of the program, the agencies widely disseminate information generated from the program to aircraft manufacturers and others. ASRP reports are compiled into a database known as the Aviation Safety Reporting System. When a potentially hazardous condition is reported, such as a defect in a navigational aid or a confusing procedure, NASA sends a safety alert to aircraft manufacturers, FAA, airport representatives, and other aviation groups. The database is used for a monthly safety bulletin that includes excerpts from incident reports with supporting commentary by FAA safety experts. NASA officials estimate that the bulletin is read by over 150,000 people. In addition, individuals and organizations can request a search of the database for information on particular aircraft and aviation safety subjects, including human performance errors and safety deficiencies. Further, NASA has used the database to analyze operational safety issues, such as general aviation incidents, pilot and controller communications, and runway incursions.
Flight Operational Quality Assurance (FOQA)
Participation: Participants include air carriers that equip their airplanes to record flight data. As of March 2004, 13 airlines had FAA-approved FOQA programs, and approximately 1,400 airplanes were equipped for the program.
Purpose: FOQA is designed to enhance aviation safety through the analysis of digital flight data generated during routine flights.
Process: Air carriers that participate in the program equip their aircraft with special data acquisition devices or use the airplanes' flight data recorders to collect data and determine whether the aircraft are deviating from standard procedures. These data include engine temperatures, descent rate, and deviations from the flight path. When the aircraft lands, data are transmitted from the aircraft to the airline's FOQA ground station, where they are extracted and analyzed by software programs for flight trends and possible safety problems. The FOQA data are combined with data from maintenance databases, weather conditions, and other safety reporting systems, such as ASAP, in order to identify trends in flight operations. The analysis typically focuses on events that fall outside the normal boundaries specified by the manufacturer's operational limitations and the air carrier's operational standards. FOQA data are collected and analyzed by individual air carriers. The data on safety trends are made available to FAA in an aggregated form with no identification of individual carriers. According to FAA officials, air carriers do not want to release these data to any outside party (including FAA) because of concerns that the data could then be publicly released. Air carriers pay for the special flight data recorders that can record FOQA data, which cost approximately $20,000 each. Although this can be an expensive investment for some air carriers, most newer aircraft models come with the data recorder built into the airplane. The International Civil Aviation Organization (ICAO) has recommended that airlines from member countries implement a FOQA program. FAA has notified ICAO that the program will remain voluntary in the United States.
Results: Although FAA has no formal national evaluation program to measure the overall results or effectiveness of FOQA programs, FAA cites examples that describe FOQA's contribution to enhanced aviation safety. For example, one FOQA program highlighted a high rate of descent when airplanes landed at a particular airport. On the basis of the information provided through FOQA, air traffic controllers at the airport were able to develop alternative approach procedures to decrease the rate of descent.
Voluntary Disclosure Reporting Program (VDRP)
Participation: Participants include air carriers, repair stations, and production approval holders.
Purpose: FAA initiated the program to promote aviation safety by encouraging the voluntary self-reporting of manufacturing and quality control problems and of safety incidents involving FAA requirements for maintenance, flight operations, drug and alcohol prevention programs, and security functions.
Process: Upon discovering a safety violation, participants can voluntarily disclose the violation to FAA within 24 hours. The initial notification should include a description of the violation, how and when the violation was discovered, and the corrective steps necessary to prevent repeat violations. Within 10 days of filing the initial notification to FAA, the entity is required to provide a written report that cites the regulations violated, describes how the violation was detected, explains how the violation was inadvertent, and describes the proposed comprehensive fix.
FAA may pursue legal action if the participant discloses violations during, or in anticipation of, an FAA inspection. The violation must be reported immediately after being detected, must be inadvertent, and must not indicate that a certificate holder is unqualified, and the disclosure must include the immediate steps taken to terminate the apparent violation. If these conditions are met and the FAA inspector has approved the comprehensive fix, the inspector prepares a letter of correction and the case is considered closed, with the possibility of being reopened if the comprehensive fix is not completed.
Results: FAA does not know the overall program results because it does not have a process to measure the overall effectiveness of the program nationwide. A 2003 internal FAA report recommended that the agency evaluate the use and effectiveness of this program.
Selected GAO Reports on Aviation Safety
Aviation Safety: System Safety Approach Needs Further Integration into FAA's Oversight of Airlines. GAO-05-726. Washington, D.C.: September 28, 2005.
Aviation Safety: FAA Management Practices for Technical Training Mostly Effective; Further Actions Could Enhance Results. GAO-05-728. Washington, D.C.: September 7, 2005.
Aviation Safety: Oversight of Foreign Code-Share Safety Program Should Be Strengthened. GAO-05-930. Washington, D.C.: August 5, 2005.
Aviation Safety: FAA Needs to Strengthen the Management of Its Designee Programs. GAO-05-40. Washington, D.C.: October 8, 2004.
Aviation Safety: Better Management Controls Are Needed to Improve FAA's Safety Enforcement and Compliance Efforts. GAO-04-646. Washington, D.C.: July 6, 2004.
Aviation Safety: Information on FAA's Data on Operational Errors at Air Traffic Control Towers. GAO-03-1175R. Washington, D.C.: September 23, 2003.
Aviation Safety: FAA Needs to Update the Curriculum and Certification Requirements for Aviation Mechanics. GAO-03-317. Washington, D.C.: March 6, 2003.
Aviation Safety: FAA and DOD Response to Similar Safety Concerns. GAO-02-77. Washington, D.C.: January 22, 2002.
Aviation Safety: Safer Skies Initiative Has Taken Initial Steps to Reduce Accident Rates by 2007. GAO/RCED-00-111. Washington, D.C.: June 30, 2000.
Aviation Safety: FAA's New Inspection System Offers Promise, but Problems Need to Be Addressed. GAO/RCED-99-183. Washington, D.C.: June 28, 1999.
Aviation Safety: Weaknesses in Inspection and Enforcement Limit FAA in Identifying and Responding to Risks. GAO/RCED-98-6. Washington, D.C.: February 27, 1998.
This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The U.S. commercial aviation industry has an extraordinary safety record. However, when passenger airlines have accidents or serious incidents, regardless of their rarity, the consequences can be tragic. The Federal Aviation Administration (FAA) works to maintain a high level of safety through an effective safety oversight system. Keys to this system are to (1) establish programs that focus resources on areas of highest safety risk and on mitigating risks; (2) provide training and communication to ensure that inspectors can consistently carry out the agency's oversight programs; and (3) have processes and data to continuously monitor, evaluate, and improve the numerous oversight programs that make up the safety oversight system. This statement focuses on these three key areas and is based on recent GAO reports on FAA's inspection oversight programs, industry partnership programs, enforcement program, and training program.
FAA's safety oversight system includes programs that focus on risk identification and mitigation through a risk-based system safety approach, leveraging of resources through designee and partnership relationships, and enforcement of safety regulations, but the benefits of these programs are not being fully realized. For example, FAA's system safety approach includes the addition of a program that emphasizes risk identification to its traditional inspection program for overseeing some airlines, which is not based on risk. However, the benefits of this approach could likely be enhanced if the inspection workload were not as heavily oriented toward the traditional inspection program's non-risk-based activities. FAA leverages its resources through its designee programs, in which designated individuals and organizations perform about 90 percent of certification-related activities, and through its industry partnership programs, which are designed to assist the agency in receiving safety information. An outgrowth of FAA's inspection process is its enforcement program, which is intended to ensure industry compliance with safety regulations. However, GAO has expressed concerns that this program may not be as effective as it could be in deterring violations.
FAA has made training an integral part of its safety oversight system, but several actions could improve the results of its training efforts, including ensuring that inspectors are well trained in FAA's system safety approach and have sufficient knowledge of increasingly complex aircraft and systems to effectively identify safety risks. FAA has established mandatory training requirements for its workforce and designees. GAO has reported that FAA has generally followed effective management practices for planning, developing, delivering, and assessing the impact of its technical training for safety inspectors.
GAO has also found inadequate evaluative processes and limitations in the data for FAA's inspection programs, designee programs, industry partnership programs, and enforcement program. For example, FAA lacked requirements or criteria for evaluating its designee programs. In another example, FAA's nationwide enforcement database is not as useful as it could be because of missing or incomplete historical information about enforcement cases.
Background
SSA pays retirement and disability benefits to both citizen and noncitizen workers who pay Social Security taxes and meet certain entitlement requirements. SSA also pays benefits to dependents of living workers and survivors of deceased workers who are entitled to benefits. Retirement, disability, and survivor benefits are known as Title II Social Security benefits. Historically, SSA paid benefits to noncitizens regardless of their work authorization status and/or lawful presence. SSA records earnings information for workers, regardless of their citizenship status, from earnings reports (IRS Form W-2, Wage and Tax Statement) submitted by employers and self-employed individuals. Workers in Social Security covered employment ("covered employment") contribute to Social Security through either payroll taxes or self-employment taxes. The earnings from these jobs are reported under a worker's SSN, if the individual has been assigned one. In cases where SSA is unable to match a worker's earnings report with a valid SSN, SSA records the worker's earnings in its Earnings Suspense File (ESF), which electronically tracks such earnings. If workers later receive work authorization and SSNs, SSA will credit the previously unmatched earnings to them, if they can show that such earnings in the ESF belong to them. SSA later determines whether a worker accrues enough work credits (also referred to as "quarters of coverage") to receive benefits. In addition, workers must meet certain age requirements and, in the case of disability benefits, have medical certification of their disability. An individual typically needs to work at least 10 years (the equivalent of 40 work credits) and be at least 62 years old to qualify for retirement benefits. Fewer work credits are needed for disability benefits. In general, these applicants must also show recent employment history and that they have worked for a certain number of years prior to their disability, both of which vary with the worker's age. Dependents and survivors of workers may also qualify for benefits based on the workers' entitlement. However, noncitizen workers and their dependents or survivors applying for benefits after 1996 must also prove that they meet certain lawful presence requirements to receive benefit payments.
New Restrictions Now Prevent Payment of Benefits to Noncitizens Unauthorized to Work in the United States
While SSA previously paid benefits to all individuals who met Social Security entitlement requirements, without regard to their work authorization status, the Social Security Protection Act (SSPA) now prevents payment of benefits to noncitizens who lack authorization. According to a June 2005 Pew Hispanic Center report, about 6.3 million workers of the approximately 24 million noncitizens living in the United States in 2004 lacked such authorization. To qualify for benefits under Section 211 of the SSPA, a claim based on a noncitizen worker assigned an SSN after 2003 must show that the worker meets one of the following requirements: the worker has authorization to work in the United States or was temporarily admitted into the United States at any time as a business visitor or as a crewman under specified provisions of the Immigration and Nationality Act. Congress passed the SSPA in March 2004 but made its provisions retroactive to benefit applications based on SSNs issued on or after January 1, 2004.
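To make the entitlement rules described above concrete, the sketch below shows, in Python, how a simplified retirement claim screen incorporating the Section 211 test might be expressed. It is illustrative only: the record layout and field names are hypothetical assumptions, the disability rules and the lawful presence requirement are omitted, and nothing here reflects SSA's actual systems or adjudication logic.

from dataclasses import dataclass
from datetime import date

# Hypothetical record layout; the field names are illustrative, not SSA's actual schema.
@dataclass
class Worker:
    birth_date: date
    work_credits: int                   # "quarters of coverage"
    ssn_issue_date: date
    has_work_authorization: bool
    qualifying_past_admission: bool     # e.g., admitted as a business visitor or crewman

SECTION_211_CUTOFF = date(2004, 1, 1)   # Section 211 applies to SSNs issued on or after this date
RETIREMENT_CREDITS = 40                 # roughly 10 years of covered work
RETIREMENT_AGE = 62

def age_on(birth_date: date, as_of: date) -> int:
    years = as_of.year - birth_date.year
    if (as_of.month, as_of.day) < (birth_date.month, birth_date.day):
        years -= 1
    return years

def meets_section_211(worker: Worker) -> bool:
    # Workers assigned SSNs before 2004 are not subject to the new restriction.
    if worker.ssn_issue_date < SECTION_211_CUTOFF:
        return True
    return worker.has_work_authorization or worker.qualifying_past_admission

def eligible_for_retirement(worker: Worker, as_of: date) -> bool:
    # Simplified screen: enough work credits, old enough, and the Section 211 test.
    return (worker.work_credits >= RETIREMENT_CREDITS
            and age_on(worker.birth_date, as_of) >= RETIREMENT_AGE
            and meets_section_211(worker))

# Example: a worker with 44 credits whose SSN was issued in mid-2004, with no work
# authorization or qualifying past admission, fails the Section 211 test.
worker = Worker(date(1942, 5, 1), 44, date(2004, 6, 15), False, False)
print(eligible_for_retirement(worker, date(2005, 12, 31)))   # False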
Although the provisions of Section 211 apply directly to noncitizen workers, they can also affect the entitlement of any person applying for a benefit on the worker's record. For example, if a noncitizen worker is ineligible for benefits under Section 211, a child claiming benefits on the worker's record would also be disallowed, regardless of the child's citizenship or immigration status. Noncitizens assigned SSNs before January 1, 2004, are not affected by Section 211 restrictions. For noncitizens who meet the conditions of Section 211 or are exempt from its requirements, SSA counts all earnings from covered employment—including those from periods of unauthorized employment—toward their Social Security benefit. However, unauthorized workers no longer qualify for benefits if they were assigned an SSN on or after January 1, 2004, and do not meet the additional eligibility requirements under Section 211. In addition, since 1996, noncitizens and their noncitizen dependents or survivors must be lawfully present in the United States to receive benefits. If such noncitizens are entitled to benefits but do not meet the lawful presence requirement, SSA approves their benefit application but places their benefits in a suspended status until they establish lawful presence. However, a noncitizen living outside of the United States may receive benefits under certain conditions. For example, a noncitizen may receive benefits outside of the United States if he or she is a citizen of certain countries that have agreements with the United States permitting such payments.
Other Initiatives Are Intended to Reduce Unauthorized Work
In addition to Section 211, there are other initiatives to reduce unauthorized work activity by noncitizens. The Immigration Reform and Control Act of 1986 requires employers to review certain documents and certify whether their workers are authorized to work in the United States, and it makes it illegal for employers to knowingly hire unauthorized workers. To assist employers with this effort, SSA and DHS offer services to help them verify whether a noncitizen is authorized to work in the United States. For example, SSA and DHS jointly operate an employee verification service called the Basic Pilot Program, which assists employers in verifying the employment eligibility of newly hired workers based on DHS and SSA records. In addition, Congress recently passed the REAL ID Act of 2005, which could make it more difficult for noncitizens to engage in unauthorized employment by placing restrictions on state issuance of driver's licenses and personal identification cards. Under the law, beginning in May 2008, federal agencies may not accept for any official purpose driver's licenses or identification cards issued by a state unless the state meets certain minimum standards. These standards must include requirements that the state (1) receives proof of and verifies, among other things, the person's SSN, or verifies that the person is not eligible for one, and (2) receives valid documentary evidence that the person is in the United States legally. Also, the law requires that driver's licenses and identification cards issued to certain noncitizens expire when the individual's authorized stay in the United States ends or, if there is no definite authorized period of stay, after 1 year. Despite these initiatives, however, there is evidence that many noncitizens are able to engage in unauthorized employment.
For example, in an August 2005 study, the SSA Office of Inspector General reviewed 100 cases randomly selected from 1,382 records of individuals who had earnings posted to their Social Security earnings records for work performed before they received their SSNs in 2000, and it found that 85 of those cases involved noncitizens who were not authorized to work in the United States.
SSA Has Tightened the Criteria for Issuing Nonwork SSNs
SSNs were originally created to record workers' earnings; however, over the years SSA has assigned them to individuals for various nonwork purposes (called "nonwork SSNs"), such as general identification. In recent years, SSA has tightened the criteria for assigning such SSNs. SSA also assigns SSNs to noncitizens who are authorized to work in the United States, which are known as work-authorized SSNs. In fiscal year 2005, SSA issued 1.1 million original SSNs to noncitizens, fewer than 15,000 of which were nonwork SSNs. As of 2003, SSA had assigned some 7 million nonwork SSNs. SSA started tightening the requirements for assigning nonwork SSNs in 1996, when the Internal Revenue Service began assigning taxpayer identification numbers to assist individuals who did not qualify for an SSN in filing their taxes. SSA further tightened the requirements for assigning nonwork SSNs, primarily as a result of the terrorist attacks of September 11, 2001, limiting them to noncitizens only when (1) a federal statute or regulation requires that they be assigned an SSN to receive a particular benefit or service to which they are entitled or (2) a state or local law requires that they be assigned an SSN to receive entitled public assistance benefits.
SSA Provided Guidance and Training to Its Staff to Implement Section 211, but Lacked Internal Controls for Assuring Proper Determinations
SSA has issued guidance and provided training to assist staff in processing benefit claims covered by Section 211; however, we found some improper determinations by staff and a lack of internal controls for detecting such errors. The improper determinations consisted of 17 approved claims involving workers who were assigned nonwork SSNs after 2003, which should not have been approved, and 1 claim that was improperly disapproved. SSA agreed with our assessment and attributed the errors to staff's lack of familiarity with the new Section 211 requirements. Additionally, we found that letters sent to claimants to inform them of disapproval decisions did not always provide them with information on their right to appeal the decision and other required information.
SSA Provided Guidance and Training to Assist Staff in Properly Applying Section 211
SSA has provided guidance and training to assist staff in reaching proper determinations for claims covered by Section 211. With the SSPA's passage in March 2004 and retroactive effective date of January 1, 2004, SSA acted quickly to provide guidance to its field offices by issuing an emergency message on Section 211 in April 2004. This message explained the various provisions of the new law and instructed staff to hold all noncitizen claims that could have a potential Section 211 issue until detailed guidance could be developed. In August 2004, SSA issued detailed guidance through its Program Operations Manual System (POMS). The guidance explained the new requirements for approving claims under Section 211 and provided several hypothetical scenarios to illustrate how the guidance should be applied. Some SSA regional offices provided additional written guidance on Section 211.
For example, one regional office provided staff with guidance that compared claims processing procedures in effect before the passage of the SSPA with those required under Section 211. Although SSA's benefit application process is the same for citizens and noncitizens, Section 211 imposes additional requirements for claims based on a noncitizen worker assigned an SSN after 2003. For such claims, SSA's guidance on Section 211 directs field office staff to determine whether the worker has work authorization or a record of prior entry into the United States for certain work purposes. This determination is in addition to the existing requirement that noncitizens residing in the United States who file for benefits be lawfully present to receive benefit payments or meet other conditions to receive benefit payments outside of the United States. To process applications for benefits, SSA field office staff meet with applicants to explain the benefits for which they might qualify and review the evidence supporting the claim. After a claims representative makes the initial determination, a field office supervisor or an experienced colleague reviews the claim for the appropriateness of the decision. Once a claims determination is made, SSA requires that field office staff send applicants a letter notifying them of the decision. For claims disallowed as a result of Section 211 in which the primary worker lacked an SSN, SSA guidance requires field office staff to send a copy of the disallowance letter to agency headquarters. SSA headquarters uses this information to monitor the number of such cases, because there is currently no way to track this information in SSA's system without an SSN. SSA also provided training to field office staff to assist them in properly applying Section 211. In September 2004, SSA headquarters provided interactive video training on the SSPA as part of its monthly training on newly issued transmittals, which included a general discussion of the requirements of Section 211, among other topics. SSA later circulated a written summary of the broadcast to field offices for training purposes. In November 2004, SSA headquarters issued a transmittal to its 800-number call centers to assist staff in addressing inquiries about Section 211. Additionally, managers at three of the four field offices we visited told us they used peer group discussions and more specific training to supplement the headquarters training. One field office manager developed and administered a test to assess staff's understanding of the Section 211 requirements.
SSA Lacked Internal Controls to Detect Erroneously Decided Claims, and Several Disapproval Letters Lacked Required Information
As part of our review, SSA provided us with records on all of the approximately 177,000 approved and disapproved claims that involved noncitizen workers—and were therefore possibly covered by Section 211—that had been decided from January 2004 to December 2005. (See table 1.) These records included information on the type of claim, when the SSN was assigned, and whether the claimants were lawfully present. The majority of these claims, roughly 94 percent, were for retirement or disability benefits. In assessing SSA's claims determinations, we found that 18 were erroneous: 17 approved claims based on noncitizen workers who had been assigned a nonwork SSN after 2003, and 1 disapproved claim in which SSA erroneously applied Section 211 to a survivor's parent who was not the primary worker.
In 17 of the 19 approved cases we reviewed in which the primary workers had been assigned a nonwork SSN after 2003, we found that the determinations were erroneous because the workers lacked the work authorization or past qualifying work experience required under Section 211. Our review of SSA's records for the 17 erroneously decided claims showed that SSA paid benefits for 13 of the claims. In total, over 2004 and 2005, SSA paid out approximately $110,000 for these claims, almost all of it in the form of recurring monthly payments. For the remaining four claims, SSA never began benefit payments because of the beneficiaries' lack of lawful presence or for other reasons. When we discussed the erroneously approved cases with SSA officials, they agreed that the cases had been improperly decided and said that the errors possibly resulted from some claims representatives' lack of familiarity with the new requirements of Section 211. Also, in an earlier discussion with SSA officials, we asked whether they had considered installing an automated systems control to identify potentially erroneous claims. The officials told us that although the agency had considered such a control, SSA management decided that it was not needed because of the low number of claims involving Section 211 that had been processed overall. For the 41 claims disapproved as a result of Section 211, we found that proper determinations had been made in all but one case. In assessing these cases, we reviewed all of the case file documentation. In some cases the documentation included only the letter notifying the claimant(s) of the disapproval decision, and in other cases it included this letter and a combination of other documents, such as wage and earnings statements and immigration documentation. SSA disapproved 38 of the 41 claims because the primary worker lacked work authorization and had never been issued an SSN. Although the workers for the remaining three claims had been assigned SSNs after 2003, their claims were disapproved because they lacked work authorization. In several of the cases, it appeared that the primary workers had been employed in the United States and had paid Social Security taxes, as documented by wage and earnings statements and other tax information included in the files. In some instances, the claimants said that the SSN the worker had used had been made up or belonged to someone else. In the one claim that was incorrectly decided, SSA based its disapproval of a survivor's claim for a child on the widow's lack of an SSN, rather than on the primary worker, who had been assigned an SSN prior to 2004. After further review of this claim, SSA officials agreed that the claim had been improperly disapproved based on Section 211 but stated that the claim would remain in a disapproved status pending additional evidence supporting the child's relationship to the deceased worker. In reviewing the 41 letters sent to claimants to inform them of disapproval decisions based on Section 211, we found that SSA staff did not always provide the claimants with information on their appeals rights and other required information. For example, in most cases, the letters did not inform claimants of their right to representation for appeals or refer them to a pamphlet explaining their right to question the decision, as required by SSA's guidance. Also, in several cases, the letters did not apprise claimants of their right to appeal the decision or provide instructions on how to file an appeal.
SSA field managers and staff told us that these inconsistencies occur because they lack a standardized format for preparing such disapproval letters. They suggested that automating the letters would help ensure that they provide all required information to claimants. When claimants do not receive such information, they could fail to file an appeal or secure representation on their behalf. As a result, claimants who might be found eligible for benefits upon appeal would not receive benefits to which they may be entitled.
Section 211 Has Impacted a Small but Potentially Growing Number of Claims, but May Not Restrict Benefits to Certain Temporary Noncitizen Workers
Though its impact may grow over time, Section 211 has not yet significantly reduced benefits to noncitizens; the law's restrictions, however, may not prevent benefits for certain temporary noncitizen workers who could engage in work not authorized by their visas. As of December 2005, only 41 of some 72,000 disapproved noncitizen-related claims had been disapproved due to Section 211, because SSA determined that the workers involved in the claims lacked the necessary work authorization. While the number of disapproved claims could increase as more noncitizens file retirement or disability claims in the coming years, there are still certain temporary workers who, upon receiving an SSN, could engage in employment not authorized by their visas. If these noncitizens remain in the country long enough after their visas expire, they could potentially earn enough work credits in such employment to eventually qualify for benefits.
Section 211 Has Not Yet Significantly Reduced the Number of Noncitizens Receiving Benefits
Because Section 211 does not apply to claims based on noncitizen workers assigned SSNs prior to 2004, the law has not significantly reduced the number of noncitizens receiving benefits. However, the number of disapproved claims will likely increase as unauthorized workers file for benefits in the coming years. During 2004 and 2005, SSA disallowed roughly 72,000 of some 177,000 claims involving noncitizen workers, of which only 41 were disallowed because the workers lacked the work authorization required under Section 211. In addition to the Section 211 exemptions, according to SSA officials, the minimal impact of the law to date may also reflect unauthorized workers not applying for benefits after concluding that they would not be eligible. As of December 2005, SSA had approved roughly 60 percent of the approximately 177,000 claims, almost all of which involved noncitizens who were assigned a work-authorized SSN prior to 2004. Our review also showed that the roughly 72,000 disallowed benefit claims involving a noncitizen worker were almost always disallowed for reasons other than Section 211. Almost 54,000 (74 percent) were disapproved because the primary worker upon whom the claim was based did not have sufficient work credits to qualify for disability benefits, which require fewer than the 40 work credits generally required for retirement benefits. In addition, approximately 19,000 (26 percent) claims were disapproved because the primary worker did not have sufficient work credits to qualify the claimant(s) for retirement or survivor benefits (fig. 1). Although SSA has disallowed only 41 claims as a result of Section 211 requirements, the number will increase in future years as more unauthorized workers reach retirement age or become disabled.
While the 41 disallowed claims affected workers who had applied for retirement or disability benefits, they predominantly affected claimants applying for survivor benefits. In fact, 31 of the 41 claims were for survivor benefits. In several cases, these claims involved survivors who were U.S. citizens. In some of these cases, survivors of deceased workers were denied benefits because the worker did not meet the requirements of Section 211, even though the worker had enough work credits to qualify the claimants for survivor benefits. While SSA data for the approximately 105,000 claims approved during 2004 and 2005 show that 97 percent of the workers assigned SSNs before 2004 had work-authorized SSNs, there are millions of noncitizens assigned nonwork SSNs before 2004 who may qualify for benefits in the coming years because Section 211 does not affect them. As figure 2 shows, 3,130 claims were based on noncitizen workers issued nonwork SSNs before 2004.
Section 211 May Not Restrict Benefits to Certain Noncitizen Temporary Workers Who Engage in Unauthorized Work
Even with Section 211 restrictions, opportunities may still exist for certain noncitizens assigned SSNs after 2003 to collect benefits without current work authorization. For example, some temporary workers—often referred to as nonimmigrants—legally admitted into the United States may receive benefits based on work not authorized by their visas. Currently, the Social Security Act directs SSA to take steps to issue SSNs to certain noncitizen visa holders granted permission to work in the United States by DHS under certain temporary visas. Such noncitizens include, among others, college students, camp counselors, and international dignitaries. (We selected certain visa categories under which noncitizens temporarily in the United States were most likely to receive a work-authorized SSN, based on information received from SSA. See app. II for a detailed description of the nonimmigrant classifications we used.) Between 2000 and 2004, SSA issued approximately 1 million SSNs to these noncitizens, and as figure 3 shows, the number of these SSNs increased substantially after 2001. By using their work-authorized SSNs, these workers could engage in employment covered by Social Security but not authorized by their visas (which is considered illegal employment). If these workers accumulate enough work credits by overstaying their visas and meet age and other entitlement requirements, they would qualify for benefits based on the work-authorized designation of their SSNs. SSA's Office of the Inspector General estimated that of the approximately 795,000 temporary visa holders who had received an SSN, regardless of visa type, during fiscal year 2000 alone, some 32,000 had either continued working after their immigration status expired or may have allowed someone else to use their SSN to work after they left the United States. SSA officials acknowledged that it was possible for these temporary workers to obtain benefits by using their SSNs to engage in employment not authorized under their visas. However, they said that the likelihood of this occurring was low, because such individuals would probably not stay in the country long enough to accrue sufficient work credits or meet lawful presence requirements. As the Office of the Inspector General report demonstrated, however, temporary visa holders do, in many instances, continue working after their visas expire.
Also, if temporary visa holders accrue sufficient work credits and meet other eligibility requirements, they may be able to receive benefits without meeting the lawful presence requirement under certain conditions. For example, such temporary visa holders could receive benefits outside of the United States if they are citizens of certain countries that have agreements with the United States permitting such payments. Should such instances occur, SSA would be limited in its ability to detect them because it does not have the mechanisms to distinguish between individuals' authorized and unauthorized employment.
Conclusions
Section 211 has imposed new restrictions on the payment of Social Security benefits to noncitizens who work without authorization, but, not surprisingly, few have been denied benefits thus far. Under the law, noncitizens may continue to have earnings from unauthorized employment credited toward their benefits entitlement if they received their SSN in 2003 or earlier, or if their nonwork SSN was assigned after 2003 and they later obtain work authorization. Over time, however, this provision of the law will likely exert a greater impact on benefits paid based on unauthorized work. Although Section 211 will not prevent all such benefit payments, as in the case of certain temporary visa holders, the new law is making a small but potentially growing difference. It will be important for SSA to continue to monitor the law's impact and, to the extent practicable, identify the remaining situations permitting benefit payments based on unauthorized work if they prove significant and measurable. Meanwhile, SSA needs to take actions to ensure that Section 211 is properly administered. Our findings show that, in implementing Section 211, SSA has taken steps to prevent the payment of benefits for claims involving workers who lack work authorization, but additional actions are needed to ensure that claims are properly decided and that all claimants receive necessary information concerning the decision. Because we identified 17 claims that had been approved in error, developing an internal control to identify potentially erroneous claims decisions could reduce future errors. Additionally, it is important that SSA staff receive additional training on the proper application of Section 211 for claims approved after 2004 in which workers lack work authorization. Without such measures, benefits may be paid to those who are not entitled to them and denied to those who are. Given that the number of unauthorized workers who reach retirement age or become disabled, and who are therefore subject to Section 211, will likely increase over time, these measures could help SSA ensure the integrity of the Social Security program and avoid erroneous payments. Also, with regard to disapproved claims, SSA has not developed a way to ensure that all unsuccessful applicants receive information on their right to appeal the decision and on whom to contact with questions about the decision, as required by its own policy. As a result, applicants who do not receive such information may not understand that they can appeal the decision, the process for filing an appeal, and the time frame within which such action must be taken.
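The internal control discussed above, and recommended below, amounts to a simple screen over decided claims. The following Python sketch illustrates one way such an edit might flag approved claims for review; the record layout, field names, and decision codes are hypothetical assumptions, and the sketch does not represent SSA's systems or the edit check SSA later added to its Disability Insured Status Calculator Online program.

from datetime import date

SECTION_211_CUTOFF = date(2004, 1, 1)

def flag_section_211_claims(claims):
    # Return approved claims that warrant a second look under Section 211: the primary
    # worker's SSN was issued on or after January 1, 2004, and the record shows neither
    # work authorization nor a qualifying past admission. Field names are hypothetical.
    flagged = []
    for claim in claims:
        if claim["decision"] != "approved":
            continue
        if claim["ssn_issue_date"] < SECTION_211_CUTOFF:
            continue                      # Section 211 does not apply to this claim
        if claim["work_authorized"] or claim["qualifying_admission"]:
            continue                      # Section 211 condition appears to be met
        flagged.append(claim)
    return flagged

# Example: only the second claim is flagged for review.
sample = [
    {"claim_id": 1, "decision": "approved", "ssn_issue_date": date(2002, 3, 1),
     "work_authorized": False, "qualifying_admission": False},
    {"claim_id": 2, "decision": "approved", "ssn_issue_date": date(2004, 7, 9),
     "work_authorized": False, "qualifying_admission": False},
]
for claim in flag_section_211_claims(sample):
    print("Claim", claim["claim_id"], "flagged for Section 211 review")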
Recommendations for Executive Action
To assure proper benefit eligibility determinations and appeals processes, we recommend that the Commissioner of Social Security: establish a control to identify potentially erroneous claims decisions for unauthorized workers assigned SSNs after 2003, such as an electronic edit check to identify such claims; provide enhanced training to staff to assist them in properly processing claims covered by Section 211; and develop a standardized format for disapproval letters to ensure that staff provide applicants with all required information regarding the disapproval decision.
Agency Comments
We obtained written comments on a draft of this report from the Commissioner of SSA. SSA's comments are reproduced in appendix III. SSA also provided additional technical comments, which have been included in the report as appropriate. SSA agreed with our recommendations and discussed various actions it is taking to address them. In response to our first recommendation, SSA stated that it had implemented a new edit check in its Disability Insured Status Calculator Online program to screen for whether individuals meet the disability insured status rules. To assist staff in making proper claims determinations, SSA stated that the edit check generates an alert when an individual's SSN issue date is January 1, 2004, or later and provides staff with a copy of the claims processing procedures relating to Section 211. While we commend SSA for its swift implementation of this action, we believe that this improvement still leaves room for erroneous claims determinations to go undetected. One reason is that SSA's action provides such alerts only for disability claims, potentially leaving thousands of retirement, dependent, and survivor claims susceptible to error. Also, this action still relies solely on SSA staff to make proper determinations. However, as our review demonstrated, this step alone is not sufficient to detect claims that were improperly decided. Therefore, we believe that SSA should install an automated systems edit to identify potentially erroneous claims decisions, as we recommended. In response to our second recommendation, SSA stated that it was updating its claims processing procedures relating to Section 211 of the SSPA and would provide staff with training on the new update when it is completed. Regarding our third recommendation, SSA stated that, as part of its update to the Section 211 guidance, it would require staff to use a notice that provides standardized appeals language and information on the disapproval decision. This notice is located in SSA's Distributed Online Correspondence System, which is separate from the Program Operations Manual System that contains the Section 211 guidance. While existing guidance on Section 211 instructs staff to include appeals language and other required information in letters explaining disapproval decisions, it does not provide the exact language that staff are to include in the letters. Consequently, staff must use their discretion in determining what language should be included. As our review found, this resulted in several letters that did not provide unsuccessful claimants with information on their right to appeal the disapproval decision and other required information.
While providing staff with such standardized language is a step forward, it will require SSA staff to combine language from the Section 211 guidance explaining why the worker did not meet the requirements of Section 211 with the standardized language from the notice. We believe that having staff prepare the letters using information from two different sources could increase the likelihood that some required information will be omitted. Thus, we still believe that a standardized letter containing all of the required information regarding the disapproval decision is needed. As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days after its issue date. At that time, we will send copies of this report to the Commissioner of SSA, the Secretary of DHS, the Commissioner of IRS, and other interested parties. Copies will also be made available at no charge on GAO's Web site at http://www.gao.gov. If you have questions concerning this report, please call me at (202) 512-7215. Contact points for our Offices of Congressional Relations and Public Affairs, respectively, are Gloria Jarmon, who may be reached at (202) 512-4470, and Paul Anderson, who may be reached at (202) 512-4800.
Appendix I: Objectives, Scope, and Methodology
In assessing the Social Security Administration's (SSA) implementation of Section 211 and the adequacy of its policies and procedures, we reviewed the law and discussed its legal interpretation with GAO and SSA attorneys. We also reviewed prior GAO, SSA Office of Inspector General (OIG), Congressional Research Service (CRS), and other reports on the new law and related issues. We also reviewed various documents detailing SSA's guidance on Section 211. In particular, we examined relevant sections of SSA's Program Operations Manual System (POMS) that explained the procedures for processing claims covered by Section 211. We obtained information on the training provided to staff from officials in SSA headquarters in Baltimore, Maryland, and in the four field offices we visited (the Williamsburg Field Office in Brooklyn, New York, and the Culver City, Redlands, and Porterville Field Offices in California). We selected the four field offices because of the geographic proximity of multiple offices in a single state and because the information that we had at the time of our visits showed that the offices had individually or collectively—within their region—processed a large number of claims that had been disapproved as a result of Section 211 requirements. To ascertain whether SSA made proper decisions for claims involving primary workers who were noncitizens, we reviewed (1) all 19 approved claims in which SSA had assigned a nonwork SSN to the noncitizen worker after 2003 and (2) all 41 disapproved claims in which SSA had reached its decision as a result of the Section 211 requirements. To identify claims possibly covered by Section 211, we obtained data on claims that SSA had approved for benefit payments involving noncitizen workers between January 2004 and December 2005. SSA provided information on these claims from its electronic Master Beneficiary Record file, which maintains data on all benefit claims. From these files, we obtained data such as the filing date for the claim, the type of SSN assigned to the primary worker, the date the SSN was assigned to the worker, and the type of claim, among other information.
We reviewed these data from the Master Beneficiary Record for the 19 approved claims and discussed each of the claims with SSA officials. For the 41 claims that had been disapproved due to Section 211 requirements, we reviewed all of the available documentation associated with each claim and discussed the claims with SSA officials. In some cases the file documentation included only the letter notifying the claimant(s) of the disapproval decision, and in other cases it included a combination of other documents, such as earnings statements and immigration documents. Additionally, we discussed with managers and staff in the four SSA field offices we visited the claims that they had disapproved based on Section 211. We did not review any approved cases in the four field offices, because information on the approved cases for those offices was not available at the time. To more generally assess the extent to which Section 211 had affected the payment of benefits for claims that involved primary workers who were noncitizens—and that were therefore possibly covered by Section 211—we obtained data on all such claims that SSA had decided between January 2004 and December 2005. These data showed that SSA had decided a total of approximately 177,000 claims, of which some 105,000 had been approved and 72,000 had been disapproved. To determine whether there are circumstances under which certain noncitizens could still receive benefits based on unauthorized employment, we interviewed SSA headquarters officials and managers and staff in the four field offices we visited. We also obtained data from SSA on certain noncitizens issued temporary work visas that make them eligible to receive work-authorized SSNs. SSA officials identified 23 temporary visa categories that qualify individuals for such SSNs (app. II lists the 23 visa categories). We obtained data from SSA on the number of SSNs it had assigned to individuals for each of the visa types between 2000 and 2004. SSA's data showed that it had assigned almost 1 million SSNs to these temporary workers. We compared SSA's data with the number of temporary work visas that the Department of State had issued for the 23 visa types between 2000 and 2004 and found that SSA's overall numbers were reasonable. We also discussed with officials at the Internal Revenue Service and the Department of Homeland Security their policies regarding noncitizens issued temporary work visas. We conducted our work between February 2005 and January 2006 in accordance with generally accepted government auditing standards.
Appendix II: Examples of Temporary Visa Categories Issued SSNs
Appendix III: Comments from the Social Security Administration
Appendix IV: GAO Contact and Staff Acknowledgments
GAO Contact
Acknowledgments
The following team members made key contributions to this report: Blake Ainsworth, Assistant Director; Susan Bernstein; Mary Crenshaw; Jason Holsclaw; Kevin Jackson; Mimi Nguyen; Daniel Schwimer; Vanessa Taylor; and Paul Wright.
Continued high levels of unauthorized immigrant workers in the United States have fostered concerns about whether they should be eligible for Social Security benefits. Until recently, the Social Security Administration (SSA) allowed noncitizens to collect benefits, regardless of their work authorization status, provided that they met certain legal presence requirements. However, in March 2004, Congress passed the Social Security Protection Act, which under Section 211, requires that noncitizens assigned a Social Security number (SSN) after 2003 have work authorization from current or past qualifying work to collect benefits. This report describes (1) the steps SSA has taken to implement Section 211 and how effective SSA's policies and procedures are in preventing improper benefit decisions, and (2) how Section 211 has affected the payment of benefits to unauthorized workers. SSA has issued guidance and provided training to assist staff in processing benefit claims under Section 211, but the absence of certain internal controls has allowed some errors to go undetected. SSA issued detailed guidance in August 2004 and subsequently provided staff with training on the law, which some SSA field offices supplemented with additional training. Although SSA's policies and procedures were fairly detailed, GAO found several incorrect claims determinations and a lack of internal review for preventing them. With regard to the provisions of Section 211, GAO found that SSA improperly approved 17 of the 19 claims that involved noncitizen workers who had been issued SSNs after 2003 and who lacked required work authorization. GAO also found that 1 of the 41 claims that SSA disapproved was improper. SSA officials stated that the improper determinations were likely due to staff's unfamiliarity with the new requirements. In addition, GAO found that letters sent to claimants informing them of disapproval decisions did not always contain all required information. Because Section 211 does not apply to noncitizens who were assigned SSNs before 2004, few noncitizens have been affected by the law thus far. Only 41 (less than 1 percent) of the approximately 72,000 noncitizen-related claims SSA disapproved during 2004 and 2005 were due to Section 211. It is likely that the number of disapprovals based on the law will grow as more unauthorized workers file for benefits in coming years. However, opportunities may exist for certain noncitizens who receive their SSNs after 2003 to collect benefits without current work authorization. For example, noncitizens who are issued SSNs under temporary work visas may be able to engage in work not authorized under their visas and subsequently claim benefits based on that work. Although SSA officials told GAO the likelihood of this occurring was low, the SSA Inspector General reported in 2005 that a significant number of temporary visa holders overstayed their visas.
Background BIE’s Indian education programs derive from the federal government’s trust responsibility to Indian tribes, a responsibility established in federal statutes, treaties, court decisions, and executive actions. It is the policy of the United States to fulfill this trust responsibility for educating Indian children by working with tribes to ensure that education programs are of the highest quality, among other things. In accordance with this trust responsibility, Interior is responsible for providing a safe and healthy environment for students to learn. BIE’s mission is to provide Indian students with quality education opportunities. Students attending BIE schools generally must be members of federally recognized Indian tribes, or descendants of members of such tribes, and reside on or near federal Indian reservations. All BIE schools—both tribally-operated and BIE-operated—receive almost all of their funding to operate from federal sources, namely, Interior and Education. Specifically, these elementary and secondary schools received approximately $830 million in fiscal year 2014—including about 75 percent, or about $622 million from Interior and about 24 percent, or approximately $197 million, from Education. BIE schools also received small amounts of funding from other federal agencies (about 1 percent), mainly the Department of Agriculture, which provides reduced-price or free school meals for eligible low-income children. (See fig. 1). While BIE schools are primarily funded through Interior, they receive annual formula grants from Education, similar to public schools. Specifically, schools receive Education funds under Title I, Part A of the Elementary and Secondary Education Act (ESEA) of 1965, as amended, and the Individuals with Disabilities Education Act. Title I—the largest funding source for kindergarten through grade 12 under ESEA—provides funding to expand and improve educational programs in schools with students from low-income families and may be used for supplemental services to improve student achievement, such as instruction in reading and mathematics. An Education study published in 2012 found that all BIE schools were eligible for Title I funding on a school-wide basis because they all had at least 40 percent of children from low-income households in school year 2009-10. Further, BIE schools receive Individuals with Disabilities Education Act funding for special education and related services, such as physical therapy or speech therapy. BIE schools tend to have a higher percent of students with special needs than students in public schools nationally. BIE schools’ educational functions are primarily the responsibility of BIE, while their administrative functions are divided mainly between two other Interior offices. The Bureau of Indian Education develops educational policies and procedures, supervises program activities, and approves schools’ expenditures. Three Associate Deputy Directors are responsible for overseeing multiple BIE local education offices that work directly with schools to provide technical assistance. Some BIE local offices also have their own facility managers. The Office of the Deputy Assistant Secretary of Management oversees many of BIE’s administrative functions, including acquisitions and contract services, financial management, budget formulation, and property management. 
This office is also responsible for developing policies and procedures and providing technical assistance and funding to Bureau of Indian Affairs (BIA) regions and BIE schools to address their facility needs. Professional staff in this division—including engineers, architects, facility managers, and support personnel—are tasked with providing expertise in all facets of the facility management process. The Bureau of Indian Affairs administers a broad array of social services and other supports to tribes at the regional level. Regarding school facility management, BIA oversees the day-to-day implementation and administration of school facility construction and repair projects through its regional field offices. Currently there are 12 regional offices, and 9 of them have facility management responsibilities. These responsibilities include performing school health and safety inspections to ensure compliance with relevant requirements and providing technical assistance to BIE schools on facility issues. In September 2013, we reported that BIE student performance on national and state assessments and graduation rates were below those of Indian students in public schools. For example, in 2011, 4th grade estimated average reading scores were 22 points lower for BIE students than for Indian students in public schools. In 4th grade mathematics, BIE students scored 14 points lower, on average, than Indian students in public schools in 2011. (See fig. 2.) We also reported that BIE 8th grade students in 2011 had consistently lower scores on average than Indian students in public schools. Furthermore, students in BIE schools had relatively low rates of graduation from high school compared to Indian students in public schools in the 2010-2011 school year. Specifically, the graduation rate for BIE students for that year was 61 percent—placing BIE students in the bottom half of graduation rates for Indian students in states where BIE schools are located. In these states, the Indian student graduation rates ranged from 42 to 82 percent. Organizational Fragmentation and Poor Communication Undermine Indian Affairs' Administration of BIE Schools Indian Affairs' administration of BIE schools—which has undergone multiple realignments over the past 10 years—is fragmented. In addition to BIE, multiple offices within BIA and the Office of the Deputy Assistant Secretary for Management have responsibilities for educational and administrative functions for BIE schools. Notably, when the Assistant Secretary for Indian Affairs was asked at a February 2015 hearing to clarify the responsibilities that various offices have over BIE schools, he responded that the current structure is "a big part of the problem" and that the agency is currently in the process of realigning the responsibilities various entities have with regard to Indian education, adding that it is a challenging and evolving process. Indian Affairs provided us with a chart on offices with a role in supporting and overseeing just BIE school facilities, which shows numerous offices across three organizational divisions. (See fig. 3.) The administration of BIE schools has undergone several reorganizations over the years to address persistent concerns with operational effectiveness and efficiency. In our 2013 report, we noted that for a brief period from 2002 to 2003, BIE was responsible for its own administrative functions, according to BIE officials.
However, in 2004 its administrative functions were centralized under the Office of the Deputy Assistant Secretary for Management. More recently, in 2013 Indian Affairs implemented a plan to decentralize some administrative responsibilities for schools, delegating certain functions to BIA regions. Further, in June 2014, the Secretary of the Interior issued an order to restructure BIE by the start of school year 2014-2015 to centralize the administration of schools, decentralize services to schools, and increase the capacity of tribes to directly operate them, among other goals. Currently, Indian Affairs' restructuring of BIE is ongoing. In our 2013 report, we found that the challenges associated with the fragmented administration of BIE schools were compounded by recurrent turnover in leadership over the years, including frequent changes in the tenure of acting and permanent assistant secretaries of Indian Affairs from 2000 through 2013. We also noted that frequent leadership changes may complicate efforts to improve student achievement and negatively affect an agency's ability to sustain focus on key initiatives. Indian Affairs' administration of BIE schools has also been undermined by the lack of a strategic plan for guiding its restructuring of BIE's administrative functions and carrying out BIE's mission to improve education for Indian students. We have previously found that key practices for organizational change suggest that effective implementation of a results-oriented framework, such as a strategic plan, requires agencies to clearly establish and communicate performance goals, measure progress toward those goals, determine strategies and resources to effectively accomplish the goals, and use performance information to make the decisions necessary to improve performance. We noted in our 2013 report that BIE officials said that developing a strategic plan would help its leadership and staff pursue goals and collaborate effectively to achieve them. Indian Affairs agreed with our recommendation to develop such a plan and recently reported that it had taken steps to do so. However, the plan has yet to be finalized. Fragmented administration of schools may also contribute to delays in providing materials and services to schools. For example, our previous work found that the Office of the Deputy Assistant Secretary for Management's lack of knowledge about the schools' needs and lack of expertise in relevant education laws and regulations resulted in critical delays in procuring and delivering school materials and supplies, such as textbooks. In another instance, we found that the Office of the Deputy Assistant Secretary for Management's processes led to an experienced speech therapist's contract being terminated at a BIE school in favor of a less expensive contract with another therapist. However, because the new therapist was located in a different state and could not travel to the school, the school was unable to fully implement students' individualized education programs in the timeframe required by the Individuals with Disabilities Education Act. In addition, although BIE accounted for approximately 34 percent of Indian Affairs' budget, several BIE officials reported that improving student performance was often overshadowed by other agency priorities, which hindered Indian Affairs' staff from seeking and acquiring expertise in education issues.
In our 2013 report, we also found that poor communication among Indian Affairs offices and with schools about educational services and facilities undermines administration of BIE schools. According to school officials we interviewed, communication between Indian Affairs' leadership and BIE is weak, resulting in confusion about policies and procedures. We have reported that working relations between BIE and the Office of the Deputy Assistant Secretary for Management's leadership are informal and sporadic, and BIE officials noted having difficulty obtaining timely updates from the Office of the Deputy Assistant Secretary for Management on its responses to requests for services from schools. In addition, there is a lack of communication between Indian Affairs' leadership and schools. BIE and school officials in all four states we visited reported that they were unable to obtain definitive answers to policy or administrative questions from BIE's leadership in Washington, D.C. and Albuquerque, NM. For example, school officials in one state we visited reported that they requested information from BIE's Albuquerque office in the 2012-2013 school year about the amount of Individuals with Disabilities Education Act funds they were to receive. The Albuquerque office subsequently provided them with three different dollar amounts. The school officials were eventually able to obtain the correct amount of funding from their local BIE office. Similarly, BIE and school officials in three states reported that they often do not receive responses from BIE's Washington, D.C. and Albuquerque offices to questions they pose via email or phone. Further, one BIE official stated that meetings with BIE leadership are venues for conveying information from management to the field, rather than opportunities for a two-way dialogue. We testified recently that poor communication has also led to confusion among some BIE schools about the roles and responsibilities of the various Indian Affairs offices responsible for facility issues. For example, the offices involved in facility matters continue to change, due partly to two reorganizations of BIE, BIA, and the Office of the Deputy Assistant Secretary for Management over the past 2 years. BIE and tribal officials at some schools we visited said they were unclear about what office they should contact about facility problems or to elevate problems that are not addressed. At one school we visited, a BIE school facility manager submitted a request in February 2014 to replace a water heater so that students and staff would have hot water in the elementary school. However, the school did not designate this repair as an emergency. Therefore, BIA facility officials told us that they were not aware of this request until we brought it to their attention during our site visit in December 2014. Even after we did so, it took BIE and BIA officials over a month to approve the purchase of a new water heater, which cost about $7,500. As a result, students and staff at the elementary school went without hot water for about a year. We have observed difficulties in providing support for the most basic communications, such as the availability of up-to-date contact information for BIE and its schools. For example, BIE schools and BIA regions use an outdated national directory with contact information for BIE and school officials, which was last updated in 2011. This may impair communications, especially given significant turnover of BIE and school staff.
It may also hamper the ability of schools and BIA officials to share timely information with one another about funding and repair priorities. In one BIA region we visited, officials have experienced difficulty reaching certain schools by email and sometimes rely on sending messages by fax to obtain schools' priorities for repairs. This situation is inconsistent with federal internal control standards that call for effective internal communication throughout an agency. In 2013, we recommended that Interior develop a communication strategy for BIE to update its schools and key stakeholders on critical developments. We also recommended that Interior include a communication strategy—as part of an overall strategic plan for BIE—to improve communication within Indian Affairs and between Indian Affairs and BIE staff. Indian Affairs agreed to these two recommendations and recently reported taking some steps to address them. However, it did not provide us with documentation that shows it has fully implemented the recommendations. Staff Capacity to Support Schools Is Limited Limited staff capacity poses another challenge to addressing BIE school needs. According to key principles of strategic workforce planning, the appropriate geographic and organizational deployment of employees can further support organizational goals and strategies and enable an organization to have the right people with the right skills in the right place. In 2013 we reported that staffing levels at BIA regional offices were not adjusted to meet the needs of BIE schools in regions with varying numbers of schools, ranging from 2 to 65. Therefore, we noted that it is important to ensure that each BIA regional office has an appropriate number of staff who are familiar with education laws and regulations and school-related needs to support the BIE schools in its region. Consequently, in 2013 we recommended that Indian Affairs revise its strategic workforce plan to ensure that its employees providing administrative support to BIE have the requisite knowledge and skills to help BIE achieve its mission and are placed in the appropriate offices to ensure that regions with a large number of schools have sufficient support. Indian Affairs agreed to implement the recommendation but has not yet done so. BIA regional offices also have limited staff capacity for addressing BIE school facility needs due to steady declines in staffing levels for over a decade, gaps in technical expertise, and limited institutional knowledge. For example, our preliminary analysis of Indian Affairs data shows that about 40 percent of BIA regional facility positions are currently vacant, including regional facility managers, architects, and engineers who typically serve as project managers for school construction and provide technical expertise. Our work and other studies have cited the lack of capacity of Indian Affairs' facility staff as a longstanding agency challenge. Further, officials at several schools we visited said they face similar staff capacity challenges. For example, at one elementary school we visited, the number of maintenance employees has decreased over the past decade from six employees to one full-time employee and a part-time assistant, according to school officials. As a result of the staffing declines, school officials said that facility maintenance staff may sometimes defer needed maintenance. Within BIE, we also found limited staff capacity in another area of school operations—oversight of school expenditures.
As we reported in November 2014, the number of key local BIE officials monitoring these expenditures had decreased from 22 in 2011 to 13, due partly to budget cuts. These officials had many additional responsibilities for BIE schools similar to school district superintendents of public schools, such as providing academic guidance. As a result, the remaining 13 officials had an increased workload, making it challenging for them to effectively oversee schools. For example, we found that one BIE official in North Dakota was also serving in an acting capacity for an office in Tennessee and was responsible for overseeing and providing technical assistance to schools in five other states—Florida, Louisiana, Maine, Mississippi, and North Carolina. Further, we reported that the challenges that BIE officials confront in overseeing school expenditures are exacerbated by a lack of financial expertise and training. For example, although key local BIE officials are responsible for making important decisions about annual audit findings, such as whether school funds are being spent appropriately, they are not auditors or accountants. Additionally, as we reported in November 2014, some of these BIE officials had not received recent training on financial oversight. Without adequate staff and training, we reported that BIE will continue struggling to adequately monitor school expenses. Consequently, we recommended in 2014 that Indian Affairs develop a comprehensive workforce plan to ensure that BIE has an adequate number of staff with the requisite knowledge and skills to effectively oversee BIE school expenditures. Indian Affairs agreed with our recommendation but has not yet taken any action. Inconsistent Accountability Hampers Management of School Construction and Monitoring of School Spending Our work has shown that another management challenge, inconsistent accountability, hinders Indian Affairs in the areas of (1) managing school construction and (2) monitoring overall school expenditures. Specifically, this challenge hinders its ability to ensure that Indian students receive a quality education in a safe environment that is conducive to learning. Inconsistent Accountability for School Construction In our February 2015 testimony on BIE school facilities, we reported that Indian Affairs had not provided consistent accountability on some recent school construction projects. According to agency and school officials we interviewed, some recent construction projects, including new roofs and buildings, went relatively well, while others faced numerous problems. The problems we found with construction projects at some schools suggest that Indian Affairs is not fully or consistently using management practices to ensure contractors perform as intended. For example, officials at three schools said they encountered leaks with roofs installed within the past 11 years. At one BIE-operated school we visited, Indian Affairs managed a project in which a contractor completed a $3.5 million project to replace roofs in 2010, but the roofs have leaked since their installation, according to agency documents. These leaks have led to mold in some classrooms and numerous ceiling tiles having to be removed throughout the school. (See fig. 4.) In 2011, this project was elevated to a senior official within Indian Affairs, who was responsible for facilities and construction. He stated that the situation was unacceptable and called for more forceful action by the agency. 
Despite numerous subsequent repairs of these roofs, school officials and regional Indian Affairs officials told us in late 2014 that the leaks and damage to the structure continue. They also said that they were not sure what further steps, if any, Indian Affairs would take to resolve the leaks or hold the contractors or suppliers accountable, such as filing legal claims against the contractor or supplier if appropriate. At another school we visited, construction problems included systems inside buildings as well as building materials. For example, in the cafeteria's kitchen at one BIE-operated school, a high-voltage electrical panel was installed next to the dishwashing machine, which posed a potential electrocution hazard. School facility staff told us that although the building inspector and project manager for construction approved this configuration before the building opened, safety inspectors later noted that it was a safety hazard. (See fig. 5.) In South Dakota, a school we visited recently encountered problems constructing a $1.5 million building for bus maintenance and storage using federal funds. According to Indian Affairs and school officials, although the project was nearly finished at the time of our visit in December 2014, Indian Affairs, the school, and the contractor still had not resolved various issues, including drainage and heating problems. Further, the new building for bus maintenance has one hydraulic lift, but the building is not long enough for a large school bus to fit on the lift when the exterior door is closed. Thus, staff using the lift would need to maintain or repair a large bus with the door open, which is not practical in the cold South Dakota winters. (See fig. 6.) According to Indian Affairs officials, part of the difficulty with this federally funded project resulted from the school's use of a contractor responsible for both the design and construction of the project, which limited Indian Affairs' ability to oversee it. Indian Affairs officials said that this arrangement, known as "design-build," may sometimes have advantages, such as faster project completion times, but may also give greater discretion to the contractor responsible for both the design and construction of the building. For example, Indian Affairs initially raised questions about the size of the building to store and maintain buses. However, agency officials noted that the contractor was not required to incorporate Indian Affairs' comments on the building's design or obtain its approval for the project's design, partly because Indian Affairs' policy does not appear to address approval of the design in a "design-build" project. Further, neither the school nor Indian Affairs used particular financial incentives to ensure satisfactory performance by the contractor. Specifically, according to school officials, the school had already paid the firm nearly the full amount of the project before final completion, leaving it little financial leverage over the contractor. We will continue to monitor such issues as we complete our ongoing work on BIE school facilities and consider any recommendations that may be needed to address these issues. Uneven Accountability for School Spending In our 2014 report on BIE school spending, we found that BIE's oversight did not ensure that school funds were spent appropriately on educational services, although external auditors had determined that there were serious financial management issues at some schools.
Specifically, auditors identified $13.8 million in unallowable spending by 24 BIE schools as of July 2014. Additionally, in one case, an annual audit found that a school lost about $1.2 million in federal funds that were illegally transferred to an offshore bank account. The same school had accumulated at least another $6 million in federal funds in a U.S. bank account. As of June 2014, BIE had not determined how the school accrued that much in unspent federal funds. Further, instead of using a risk-based approach to its monitoring efforts, BIE indicated that it relies primarily on ad hoc suggestions by staff regarding which schools to target for greater oversight. For example, BIE failed to increase its oversight of expenditures at one school where auditors found that the school's financial statements had to be adjusted by about $1.9 million and that its accounting of federal funds was unreliable during a 3-year period we reviewed. We recommended that Indian Affairs develop a risk-based approach to oversee school expenditures to focus BIE's monitoring activities on schools that auditors have found to be at the greatest risk of misusing federal funds. Indian Affairs agreed with this recommendation but has not yet implemented it. In addition, we found that BIE did not use certain tools to monitor school expenditures. For example, BIE did not have written procedures to oversee schools' use of Indian School Equalization Program funds, which accounted for almost half of their total operating funding in fiscal year 2014. In 2014, we recommended that Indian Affairs develop written procedures, including for Interior's Indian School Equalization Program, to consistently document its monitoring activities and the actions taken to resolve financial weaknesses identified at schools. While Indian Affairs generally agreed, it has not yet taken this action. Without a risk-based approach and written procedures for overseeing school spending—both integral to federal internal control standards—there is little assurance that federal funds are being used for their intended purpose to provide BIE students with needed instructional and other educational services. In conclusion, Indian Affairs has been hampered by systemic management challenges related to BIE's programs and operations that undermine its mission to provide Indian students with quality education opportunities and safe environments that are conducive to learning. In light of these management challenges, we have recommended several improvements to Indian Affairs on its management of BIE schools. While Indian Affairs has generally agreed with these recommendations and reported taking some steps to address them, it has not yet fully implemented them. Unless steps are promptly taken to address these challenges to Indian education, it will be difficult for Indian Affairs to ensure the long-term success of a generation of students. We will continue to monitor these issues as we complete our ongoing work and consider any additional recommendations that may be needed to address these issues. Chairman Rokita, Ranking Member Fudge, and Members of the Subcommittee, this concludes my prepared statement. I will be pleased to answer any questions that you may have. GAO Contact and Staff Acknowledgments For future contact regarding this testimony, please contact Melissa Emrey-Arras at (617) 788-0534 or [email protected]. Key contributors to this testimony were Elizabeth Sirois (Assistant Director), Edward Bodine, Matthew Saradjian, and Ashanta Williams.
Also, providing legal or technical assistance were James Bennett, David Chrisinger, Jean McSween, Jon Melhus, Sheila McCoy, and James Rebbe. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
BIE is responsible for providing quality education opportunities to Indian students. It currently oversees 185 schools, serving about 41,000 students on or near Indian reservations. Poor student outcomes raise questions about how well BIE is achieving its mission. In September 2013, GAO reported that BIE student performance has been consistently below that of Indian students in public schools. This testimony discusses Indian Affairs' management challenges in improving Indian education, including (1) its administration of schools, (2) staff capacity to address schools' needs, and (3) accountability for managing school construction and monitoring school spending. This testimony is based on GAO reports issued in September 2013 and November 2014, as well as GAO's February 2015 testimony, which presents preliminary results from its ongoing review of BIE school facilities. A full report on school facilities will be issued later this year. GAO reviewed relevant laws and regulations; analyzed agency data; and conducted site visits to schools, which were selected based on their geographic diversity and other factors. GAO has made several recommendations in its earlier reports; it is not making any new recommendations in this statement. GAO has reported for several years on how systemic management challenges within the Department of the Interior's Office of the Assistant Secretary–Indian Affairs (Indian Affairs) continue to hamper efforts to improve Bureau of Indian Education (BIE) schools. Over the past 10 years, Indian Affairs has undergone several organizational realignments, resulting in multiple offices across different units being responsible for BIE schools' education and administrative functions. Indian Affairs' fragmented organization has been compounded by frequent turnover in its leadership over a 13-year period and its lack of a strategic plan for BIE. Further, fragmentation and poor communication among Indian Affairs offices have led to confusion among schools about whom to contact about problems, as well as delays in the delivery of key educational services and supplies, such as textbooks. Key practices for organizational change suggest that agencies develop a results-oriented framework, such as a strategic plan, to clearly establish and communicate performance goals and measure their progress toward them. In 2013, GAO recommended that Interior develop a strategic plan for BIE and a strategy for communicating with schools, among other recommendations. Indian Affairs agreed with and reported taking some steps to address the two recommendations. However, it has not fully implemented them. Limited staff capacity poses another challenge to addressing BIE school needs. According to key principles for effective workforce planning, the appropriate deployment of employees enables organizations to have the right people, with the right skills, in the right place. However, Indian Affairs data indicate that about 40 percent of its regional facility positions, such as architects and engineers, are vacant. Similarly, in 2014, GAO reported that BIE had many vacancies in positions to oversee school spending. Further, remaining staff had limited financial expertise and training. Without adequate staff and training, Indian Affairs will continue to struggle in monitoring and supporting schools. GAO recommended that Interior revise its workforce plan so that employees are placed in the appropriate offices and have the requisite knowledge and skills to better support schools.
Although Indian Affairs agreed with this recommendation, it has not yet implemented it. Inconsistent accountability hampers management of BIE school construction and monitoring of school spending. Specifically, GAO has found that Indian Affairs did not consistently oversee some construction projects. For example, at one school GAO visited, Indian Affairs spent $3.5 million to replace multiple roofs in 2010. The new roofs already leak, causing mold and ceiling damage, and Indian Affairs has not yet adequately addressed the problems, resulting in continued leaks and damage to the structure. Inconsistent accountability also impairs BIE's monitoring of school spending. In 2014, GAO found that BIE does not adequately monitor school expenditures using written procedures or a risk-based monitoring approach, contrary to federal internal control standards. As a result, BIE failed to provide effective oversight of schools when they misspent millions of dollars in federal funds. GAO recommended that the agency develop written procedures and a risk-based approach to improve its monitoring. Indian Affairs agreed but has yet to implement these recommendations.
Background Information security is a critical consideration for any organization that depends on information systems and computer networks to carry out its mission or business. It is especially important for government agencies, where maintaining the public’s trust is essential. The dramatic expansion in computer interconnectivity and the rapid increase in the use of the Internet have revolutionized the way our government, our nation, and much of the world communicate and conduct business. Although this expansion has created many benefits for agencies such as IRS in achieving their missions and providing information to the public, it also exposes federal networks and systems to various threats. The Federal Bureau of Investigation has identified multiple sources of threats, including foreign nation states engaged in information warfare, domestic criminals, hackers, virus writers, and disgruntled employees or contractors working within an organization. In addition, the U.S. Secret Service and the CERT Coordination Center studied insider threats, and stated in a May 2005 report that “insiders pose a substantial threat by virtue of their knowledge of, and access to, employer systems and/or databases.” Without proper safeguards, systems are unprotected from individuals and groups with malicious intent who can intrude and use their access to obtain sensitive information, commit fraud, disrupt operations, or launch attacks against other computer systems and networks. These concerns are well founded for a number of reasons, including the dramatic increase in reports of security incidents, the ease of obtaining and using hacking tools, and steady advances in the sophistication and effectiveness of attack technology. For example, the Office of Management and Budget (OMB) cited a total of 5,146 incidents reported to the U.S. Computer Emergency Readiness Team (US-CERT) by federal agencies during fiscal year 2006, an increase of 44 percent from the previous fiscal year. Our previous reports, and those by inspectors general, describe persistent information security weaknesses that place federal agencies, including IRS, at risk of disruption, fraud, or inappropriate disclosure of sensitive information. Accordingly, we have designated information security as a governmentwide high-risk area since 1997, a designation that remains in force today. Recognizing the importance of securing federal agencies’ information systems, Congress enacted the Federal Information Security Management Act (FISMA) in December 2002 to strengthen the security of information and systems within federal agencies. FISMA requires each agency to develop, document, and implement an agencywide information security program for the information and systems that support the operations and assets of the agency, using a risk-based approach to information security management. Such a program includes developing and implementing security plans, policies, and procedures; testing and evaluating the effectiveness of controls; assessing risk; providing specialized training; planning, implementing, evaluating, and documenting remedial action to address information security deficiencies; and ensuring continuity of operations. IRS has demanding responsibilities in collecting taxes, processing tax returns, and enforcing the nation’s tax laws, and relies extensively on computerized systems to support its financial and mission-related operations. 
In fiscal years 2007 and 2006, IRS collected about $2.7 trillion and $2.5 trillion, respectively, in tax payments; processed hundreds of millions of tax and information returns; and paid about $292 billion and $277 billion, respectively, in refunds to taxpayers. Further, the size and complexity of IRS adds unique operational challenges. The agency employs tens of thousands of people in 10 service center campuses, 3 computing centers, and numerous other field offices throughout the United States. IRS also collects and maintains a significant amount of personal and financial information on each American taxpayer. The confidentiality of this sensitive information must be protected; otherwise, taxpayers could be exposed to loss of privacy and to financial loss and damages resulting from identity theft or other financial crimes. The Commissioner of Internal Revenue has overall responsibility for ensuring the confidentiality, integrity, and availability of the information and information systems that support the agency and its operations. FISMA requires the chief information officers (CIO) at federal agencies to be responsible for developing and maintaining an information security program. Within IRS, this responsibility is delegated to the Chief of Mission Assurance and Security Services (MA&SS). The Chief of MA&SS is responsible for developing policies and procedures regarding information technology security; establishing a security awareness and training program; conducting security audits; coordinating the implementation of logical access controls into IRS systems and applications; providing physical and personnel security; and, among other things, monitoring IRS security activities. To help accomplish these goals, MA&SS has developed and published information security policies, guidelines, standards, and procedures in the Internal Revenue Manual, the Law Enforcement Manual, and other documents. The Modernization and Information Technology Services organization, led by the CIO, is responsible for developing security controls for systems and applications; conducting annual tests of systems; implementing, testing, and validating the effectiveness of remedial actions; ensuring that continuity of operations requirements are addressed for all applications and systems it owns; and mitigating technical vulnerabilities and validating the mitigation strategy. In July 2007, IRS began undergoing an organizational realignment that dissolved MA&SS and moved responsibilities for managing the servicewide information security program to a newly created position— the Associate CIO for Cybersecurity. Objectives, Scope, and Methodology The objectives of our review were to determine (1) the status of IRS’s actions to correct or mitigate previously reported information security weaknesses and (2) whether controls over key financial and tax processing systems were effective in ensuring the confidentiality, integrity, and availability of financial and sensitive taxpayer information. This review was performed in connection with our audit of IRS’s financial statements for the purpose of supporting our opinion on internal controls over the preparation of those statements. To determine the status of IRS’s actions to correct or mitigate previously reported information security weaknesses, we identified and reviewed its information security policies, procedures, practices, and guidance. 
We reviewed prior GAO reports to identify previously reported weaknesses and examined IRS's corrective action plans to determine for which weaknesses IRS reported corrective actions as being completed. For those instances where IRS reported it had completed corrective actions, we assessed the effectiveness of those actions. We evaluated IRS's implementation of these corrective actions for two data centers and one additional facility. To determine whether controls over key financial and tax processing systems were effective, we tested the effectiveness of information security controls at three data centers. We concentrated our evaluation primarily on threats emanating from sources internal to IRS's computer networks and focused on three critical applications and their general support systems that directly or indirectly support the processing of material transactions that are reflected in the agency's financial statements. Our evaluation was based on our Federal Information System Controls Audit Manual, which contains guidance for reviewing information system controls that affect the confidentiality, integrity, and availability of computerized information. Using National Institute of Standards and Technology (NIST) standards and guidance, and IRS's policies, procedures, practices, and standards, we evaluated controls by testing the complexity and expiration of passwords on servers to determine if strong password management was enforced; analyzing users' system authorizations to determine whether they had more permissions than necessary to perform their assigned functions; observing data transmissions across the network to determine whether sensitive data were being encrypted; observing whether system security software was logging successful changes to key datasets; testing and observing physical access controls to determine if computer facilities and resources were being protected from espionage, sabotage, damage, and theft; inspecting key servers and workstations to determine whether critical patches had been installed or were up-to-date; and examining access responsibilities to determine whether incompatible functions were segregated among different individuals. Using the requirements identified by FISMA, which establish key elements for an effective agencywide information security program, we evaluated IRS's implementation of its security program by analyzing IRS's risk assessment process and risk assessments for key IRS systems to determine whether risks and threats were documented; analyzing IRS's policies, procedures, practices, and standards to determine their effectiveness in providing guidance to personnel responsible for securing information and information systems; analyzing security plans to determine if management, operational, and technical controls were in place or planned and whether security plans were updated; examining training records for personnel with significant responsibilities to determine if they received training commensurate with those responsibilities; analyzing test plans and test results for key IRS systems to determine whether management, operational, and technical controls were tested at least annually and based on risk; observing IRS's process to correct weaknesses and determining whether remedial action plans complied with federal guidance; and examining contingency plans for key IRS systems to determine whether those plans had been tested or updated.
We also reviewed or analyzed previous reports from the Treasury Inspector General for Tax Administration (TIGTA) and GAO, and discussed with key security representatives and management officials whether information security controls were in place, adequately designed, and operating effectively. IRS Has Made Limited Progress in Correcting Previously Reported Weaknesses IRS has made limited progress toward correcting previously reported information security weaknesses. It has corrected or mitigated 29 of the 98 information security weaknesses that we reported as unresolved at the time of our last review. IRS corrected weaknesses related to access controls and personnel security, among others. For example, it has implemented controls for user IDs for certain critical servers by assigning each user a unique logon account and password and removing unneeded (guest-level) accounts; improved physical protection for its procurement system by limiting computer room access to only those individuals needing it to perform their duties; developed a security plan for a key financial system; and updated servers that had been running unsupportable operating systems. In addition, IRS has made progress in improving its information security program. For example, the agency is in the process of completing an organizational realignment and has several initiatives underway that are designed to improve information security, such as forming councils and committees to foster coordination and collaboration on information technology security policies, procedures, and practices. IRS also has established six enterprisewide objectives for improving information security, including initiatives for protecting and encrypting data, securing information technology assets, and building security into new applications. Although IRS has moved to correct previously identified security weaknesses, 69 of them—or about 70 percent—remain open or unmitigated. For example, IRS continues to, among other things, use passwords that are not complex; grant excessive electronic access to individuals not warranting such access; allow sensitive data to cross its internal network unencrypted; allow changes to occur on the mainframe that are not properly monitored; ineffectively remove physical access authorizations into sensitive areas; install patches in an untimely manner; and improperly segregate incompatible duties. Such weaknesses increase the risk of compromise of critical IRS systems and information. Significant Weaknesses Continue to Place Financial and Taxpayer Information at Risk In addition to this limited progress, other significant weaknesses in controls intended to restrict access to data and systems, as well as other information security controls, continue to threaten the confidentiality and availability of IRS's financial and tax processing systems and information, and limit assurance of the integrity and reliability of its financial and taxpayer information. Unresolved, previously reported weaknesses and newly identified ones increase the risk of unauthorized disclosure, modification, or destruction of financial and sensitive taxpayer information. IRS Did Not Sufficiently Control Access to Information Resources A basic management objective for any organization is to protect the resources that support its critical operations from unauthorized access.
Organizations accomplish this objective by designing and implementing controls that are intended to prevent, limit, and detect unauthorized access to computing resources, programs, information, and facilities. Inadequate access controls diminish the reliability of computerized information and increase the risk of unauthorized disclosure, modification, and destruction of sensitive information and disruption of service. Access controls include those related to user identification and authentication, authorization, cryptography, audit and monitoring, and physical security. IRS did not ensure that it consistently implemented effective access controls in each of these areas, as the following sections in this report demonstrate. Controls for Identifying and Authenticating Users Were Not Consistently Enforced A computer system must be able to identify and authenticate different users so that activities on the system can be linked to specific individuals. When an organization assigns unique user accounts to specific users, the system is able to distinguish one user from another—a process called identification. The system also must establish the validity of a user's claimed identity by requesting some kind of information, such as a password, that is known only by the user—a process known as authentication. The combination of identification and authentication—such as user account/password combinations—provides the basis for establishing individual accountability and for controlling access to the system. The Internal Revenue Manual requires IRS to enforce strong passwords for authentication (defined as a minimum of eight characters, containing at least one numeric or special character, and a mixture of at least one uppercase and one lowercase letter). In addition, IRS policy states that user accounts should be removed from the system or application if users have not logged on in 90 days. Furthermore, the Internal Revenue Manual requires that passwords be protected from unauthorized disclosure when stored. IRS did not always enforce strong password management on systems at the three sites reviewed. For example, several user account passwords on UNIX systems did not meet password length or complexity requirements. Allowing weak passwords increases the likelihood that passwords will be compromised and used by unauthorized individuals to gain access to sensitive IRS information. In addition, some user accounts for servers supporting the administrative accounting system had not been used in approximately 180 days but still remained active at all three sites. Allowing inactive user accounts to remain on the system increases the likelihood of unauthorized individuals using these dormant accounts to gain access to sensitive IRS data. Further, passwords and associated user IDs were stored in clear text on an intranet Web site that was accessible to unauthenticated users. As a result, individuals accessing this Web site could view these passwords and use them to gain unauthorized access to IRS systems. Such access could be used to alter data flowing to and from the agency's administrative accounting system.
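The password and account-dormancy rules described above lend themselves to simple automated checks. The following is a minimal sketch of such a check, written in Python for illustration only; the account names and logon dates are hypothetical, and the sketch is not drawn from IRS's actual tools or configuration.

```python
# Minimal illustration (hypothetical accounts, not IRS tooling): check passwords
# against the Internal Revenue Manual rules quoted above and flag accounts that
# have been inactive for 90 days or more.
import re
from datetime import datetime, timedelta

def is_strong_password(password):
    """True if the password is at least 8 characters long, contains at least one
    numeric or special character, and mixes uppercase and lowercase letters."""
    return (len(password) >= 8
            and re.search(r"[0-9]|[^A-Za-z0-9]", password) is not None
            and re.search(r"[A-Z]", password) is not None
            and re.search(r"[a-z]", password) is not None)

def dormant_accounts(last_logons, days=90):
    """Return the accounts whose last logon is older than `days` days."""
    cutoff = datetime.utcnow() - timedelta(days=days)
    return sorted(acct for acct, last in last_logons.items() if last < cutoff)

if __name__ == "__main__":
    print(is_strong_password("Ab3$efgh"))   # True: meets all three rules
    print(is_strong_password("password"))   # False: no digit or special character, no uppercase
    # Hypothetical logon history; the first account would be flagged for removal.
    history = {"svc_transfer": datetime(2007, 1, 2), "jdoe": datetime.utcnow()}
    print(dormant_accounts(history))        # ['svc_transfer']
```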
Users Were Routinely Given More System Access Than Needed to Perform Their Jobs Authorization is the process of granting or denying access rights and permissions to a protected resource, such as a network, a system, an application, a function, or a file. A key component of granting or denying access rights is the concept of "least privilege." Least privilege is a basic principle for securing computer resources and information. This principle means that users are granted only those access rights and permissions they need to perform their official duties. To restrict legitimate users' access to only those programs and files they need to do their work, organizations establish access rights and permissions. "User rights" are allowable actions that can be assigned to users or to groups of users. File and directory permissions are rules that regulate which users can access a particular file or directory and the extent of that access. To avoid unintentionally authorizing users' access to sensitive files and directories, an organization must give careful consideration to its assignment of rights and permissions. IRS policy states that the configuration and use of system utilities are based on least privilege and are limited to those individuals who require them to perform their assigned functions. IRS permitted excessive access to systems by granting rights and permissions that gave users more access than they needed to perform their assigned functions. For example, one data center allowed all mainframe users access to powerful system management functions, including storage management and mainframe hardware configurations. In addition, the center did not tightly restrict the ability to modify mainframe operating system configurations. Approximately 60 persons had access to commands that could allow them to make significant changes to the operating system, increasing the risk of inadvertent or deliberate disruption of system operations. Furthermore, IRS did not properly restrict file permission privileges. Excessive file privileges were given to an administrative accounting subsystem's file transfer account. As a result, any user with access to accounts on this server could gain unauthorized access to other servers within the administrative accounting system infrastructure. Sensitive Data Were Not Always Encrypted Cryptography underlies many of the mechanisms used to enforce the confidentiality and integrity of critical and sensitive information. A basic element of cryptography is encryption. Encryption can be used to provide basic data confidentiality and integrity by transforming plain text into cipher text using a special value known as a key and a mathematical process known as an algorithm. IRS policy requires the use of encryption for transferring sensitive but unclassified information between IRS facilities. The National Security Agency also recommends disabling protocols that do not encrypt information, such as user ID and password combinations, transmitted across the network. IRS did not always ensure that sensitive data were protected by encryption. Although IRS had an initiative underway to encrypt its laptops, certain data were not encrypted. For example, at two data centers, administrator access to a key IRS application involved unencrypted logins, which could reveal usernames, passwords, and other credentials. By not encrypting data, IRS is at increased risk that an unauthorized individual could gain unwarranted access to its systems or sensitive information.
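To illustrate the kind of protection the encryption policy calls for, the sketch below encrypts a sensitive payload before it is sent across a network and decrypts it on the receiving side. It is a minimal illustration only: it uses the open-source Python cryptography package's Fernet interface (an assumption for illustration, not a tool named in this report), the credential string is hypothetical, and a real deployment would rely on managed key distribution and approved cryptographic modules.

```python
# Minimal sketch (hypothetical payload, not IRS's actual mechanism): encrypt a
# sensitive record before it crosses the network, using the open-source
# "cryptography" package (pip install cryptography).
from cryptography.fernet import Fernet

key = Fernet.generate_key()      # in practice, keys would come from a managed key store
cipher = Fernet(key)

payload = b"user=admin;password=example-only"  # hypothetical credentials
token = cipher.encrypt(payload)  # ciphertext is safe to send across the network
print(token)

# The receiving side, holding the same key, recovers the plaintext.
print(cipher.decrypt(token))     # b'user=admin;password=example-only'
```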
Logging Procedures Did Not Effectively Capture Changes to Mainframe Datasets To establish individual accountability, monitor compliance with security policies, and investigate security violations, it is crucial to determine what, when, and by whom specific actions have been taken on a system. Organizations accomplish this by implementing system or security software that provides an audit trail—logs of system activity—that they can use to determine the source of a transaction or attempted transaction and to monitor users’ activities. The way in which organizations configure system or security software determines the nature and extent of information that can be provided by the audit trail. To be effective, organizations should configure their software to collect and maintain audit trails that are sufficient to track security-relevant events. IRS policy requires that audit records be created, protected, and retained to enable the monitoring, analysis, investigation, and reporting of unlawful, unauthorized, or inappropriate information system activity. Although IRS had implemented logging capabilities for the servers reviewed, it did not effectively capture changes to datasets on the mainframe, which supports the agency’s general ledger for tax administration. Specifically, it did not configure its security software to log successful changes to datasets that contain parameters and procedures on the mainframe used to support production operations of the operating system, system utilities, and user applications. By not recording changes to these datasets, IRS is at increased risk that unapproved or inadvertent changes that compromise security controls or disrupt operations are made and not detected. Weaknesses in Physical Security Controls Reduced Their Effectiveness Physical security controls are essential for protecting computer facilities and resources from vandalism and sabotage, theft, accidental or deliberate destruction, and unauthorized access and use. Physical security controls should prevent, limit, and detect access to facility grounds, buildings, and sensitive work areas and the agency should periodically review the access granted to computer facilities and resources to ensure this access is still appropriate. Examples of physical security controls include perimeter fencing, surveillance cameras, security guards, and locks. The absence of adequate physical security protections could lead to the loss of life and property, the disruption of functions and services, and the unauthorized disclosure of documents and information. NIST requires that designated officials within the organization review and approve the access list and authorization credentials. Similarly, IRS policy requires that branch chiefs validate the need of individuals to access a restricted area based on authorized access lists, which are prepared monthly. To further address physical security, the Internal Revenue Manual requires periodic review of all mechanical key records. Although IRS has implemented physical security controls, certain weaknesses reduce the effectiveness of these controls in protecting and controlling physical access to assets at IRS facilities, such as the following: One data center allowed at least 17 individuals access to sensitive areas without justifying a need based on their job duties. The same data center did not always remove physical access authorizations into sensitive areas in a timely manner for employees who no longer needed it to perform their jobs. 
For example, a manager reviewed an access listing dated March 2007 and identified 54 employees whose access was to be removed; however, at the time of our site visit in June 2007, 29 of the 54 employees still had access. Another data center did not perform monthly reviews of an authorized access list to verify that employees continued to warrant access to secure computing areas; according to agency officials, they instead perform a review every 6 months or whenever a change occurs. The same data center also did not perform a periodic review of records accounting for mechanical keys used to gain access to sensitive areas. As a result, IRS is at increased risk of unauthorized access to, and disclosure of, financial and taxpayer information, inadvertent or deliberate disruption of services, and destruction or loss of computer resources. Weaknesses in Other Information Security Controls Increased Risk In addition to access controls, other important controls should be in place to ensure the confidentiality, integrity, and availability of an organization's information. These controls include policies, procedures, and techniques for securely configuring information systems and segregating incompatible duties. Weaknesses in these areas increase the risk of unauthorized use, disclosure, modification, or loss of IRS's information and information systems. Configuration Management Policies Were Not Fully Implemented The purpose of configuration management is to establish and maintain the integrity of an organization's work products. Organizations can better ensure that only authorized applications and programs are placed into operation by establishing and maintaining baseline configurations and monitoring changes to these configurations. According to IRS policy, changes to baseline configurations should be monitored and controlled. Patch management, a component of configuration management, is an important factor in mitigating software vulnerability risks. Up-to-date patch installation can help diminish vulnerabilities associated with flaws in software code. Attackers often exploit these flaws to read, modify, or delete sensitive information; disrupt operations; or launch attacks against other organizations' systems. According to NIST, the practice of tracking patches allows organizations to identify which patches are installed on a system and provides confirmation that the appropriate patches have been applied. IRS's patch management policy also requires that patches be implemented in a timely manner and that critical patches be applied within 72 hours to minimize vulnerabilities. IRS did not always effectively implement configuration management policies. For example, one data center did not ensure that its change control system properly enforced change controls over two key applications residing on the mainframe. The current configuration could allow individuals to make changes without being logged by the agency's automated configuration management system. Furthermore, servers at these locations did not have critical patches installed in a timely manner. For example, at the time of our site visit in July 2007, one site had not installed critical patches released in February 2007 on two servers. As a result, IRS has limited assurance that only authorized changes are being made to its systems and that they are protected against new vulnerabilities.
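NIST's point about tracking patches, and a 72-hour rule for critical patches, can be sketched in a few lines. The inventory format, patch identifiers, and release dates below are hypothetical assumptions for illustration only; the sketch simply shows the kind of comparison a patch-tracking process performs.

```python
# Hedged sketch of patch tracking: compare installed patches against required
# critical patches and flag any missing beyond a (hypothetical) 72-hour window.
from datetime import datetime, timedelta

CRITICAL_WINDOW = timedelta(hours=72)

# Hypothetical data for illustration; a real inventory would come from the
# configuration management system, not hard-coded values.
required_critical = {
    "KB-2007-021": datetime(2007, 2, 14),   # patch ID -> release date
    "KB-2007-058": datetime(2007, 6, 30),
}
installed = {"KB-2007-058"}                  # patches actually present on a server

def overdue_patches(now: datetime) -> list[str]:
    """Return critical patches that are missing past the 72-hour deadline."""
    return [
        patch_id
        for patch_id, released in required_critical.items()
        if patch_id not in installed and now - released > CRITICAL_WINDOW
    ]

if __name__ == "__main__":
    print(overdue_patches(datetime(2007, 7, 15)))   # -> ['KB-2007-021']
```

A check of this kind would have flagged the February 2007 patches that were still missing at the time of the July 2007 site visit.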
Incompatible Duties Were Not Always Appropriately Segregated Segregation of duties refers to the policies, procedures, and organizational structures that help ensure that no individual can independently control all key aspects of a process or computer-related operation and thereby gain unauthorized access to assets or records. Often, organizations segregate duties by dividing responsibilities among two or more individuals or organizational groups. This diminishes the likelihood that errors and wrongful acts will go undetected, because the activities of one individual or group will serve as a check on the activities of the other. Inadequate segregation of duties increases the risk that erroneous or fraudulent transactions could be processed, improper program changes implemented, and computer resources damaged or destroyed. The Internal Revenue Manual requires that IRS divide and separate duties and responsibilities of incompatible functions among different individuals, so that no individual shall have all of the necessary authority and system access to disrupt or corrupt a critical security process. IRS did not always properly segregate incompatible duties. For example, mainframe system administration functions were not appropriately segregated. IRS configured a user group that granted access to a broad range of system functions beyond the scope of any single administrator’s job duties. Granting this type of access to individuals who do not require it to perform their official duties increases the risk that sensitive information or programs could be improperly modified, disclosed, or deleted. In addition, at one data center, physical security staff who set user proximity card access to sensitive areas were also allowed to determine whether employees needed access or not, rather than leaving the decision to cognizant managers. As a result, staff could be allowed improper access to sensitive areas. IRS Has Not Fully Implemented Its Information Security Program A key reason for the information security weaknesses in IRS’s financial and tax processing systems is that it has not yet fully implemented its agencywide information security program to ensure that controls are effectively established and maintained. 
FISMA requires each agency to develop, document, and implement an information security program that, among other things, includes periodic assessments of the risk and magnitude of harm that could result from the unauthorized access, use, disclosure, disruption, modification, or destruction of information and information systems; policies and procedures that (1) are based on risk assessments, (2) cost-effectively reduce risks, (3) ensure that information security is addressed throughout the life cycle of each system, and (4) ensure compliance with applicable requirements; plans for providing adequate information security for networks, facilities, and systems; security awareness training to inform personnel of information security risks and of their responsibilities in complying with agency policies and procedures, as well as training personnel with significant security responsibilities for information security; periodic testing and evaluation of the effectiveness of information security policies, procedures, and practices, performed with a frequency depending on risk, but no less than annually, and that include testing of management, operational, and technical controls for every system identified in the agency's required inventory of major information systems; a process for planning, implementing, evaluating, and documenting remedial action to address any deficiencies in its information security policies, procedures, or practices; and plans and procedures to ensure continuity of operations for information systems that support the operations and assets of the agency. Although IRS continued to make important progress in developing and documenting a framework for its information security program, key components of the program had not been fully or consistently implemented. Although a Risk Assessment Process Was Implemented, Potential Risks Were Not Always Assessed According to NIST, risk is determined by identifying potential threats to the organization and vulnerabilities in its systems, determining the likelihood that a particular threat may exploit vulnerabilities, and assessing the resulting impact on the organization's mission, including the effect on sensitive and critical systems and data. Identifying and assessing information security risks are essential to determining what controls are required. Moreover, by increasing awareness of risks, these assessments can generate support for the policies and controls that are adopted in order to help ensure that these policies and controls operate as intended. OMB Circular A-130, appendix III prescribes that risk be reassessed when significant changes are made to computerized systems—or at least every 3 years. Consistent with NIST guidance, IRS requires its risk assessment process to detail the residual risk assessed and potential threats, and to recommend corrective actions for reducing or eliminating the vulnerabilities identified. Although IRS had implemented a risk assessment process, it did not always effectively evaluate potential risks for the systems we reviewed. The six risk assessments that we reviewed were current, documented the residual risk assessed and potential threats, and recommended corrective actions for reducing or eliminating the vulnerabilities they identified. However, IRS did not identify many of the vulnerabilities that we identified in this report and did not assess the risks associated with them. As a result, potential risks to these systems may be unknown.
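The risk determination described above, which combines the likelihood that a threat exploits a vulnerability with the resulting impact on the mission, can be expressed as a simple qualitative calculation. The ratings, weights, and thresholds below are illustrative assumptions only; they are not IRS's or NIST's actual scoring values.

```python
# Hedged sketch of a qualitative risk determination in the spirit of NIST
# guidance: risk grows with both the likelihood of exploitation and the
# impact on the mission. Ratings and thresholds here are assumptions.
LEVELS = {"low": 1, "moderate": 2, "high": 3}

def risk_score(likelihood: str, impact: str) -> int:
    """Combine likelihood and impact ratings into a single numeric score."""
    return LEVELS[likelihood] * LEVELS[impact]

def residual_risk(likelihood: str, impact: str) -> str:
    """Map the score back to a qualitative rating for the risk assessment."""
    score = risk_score(likelihood, impact)
    if score >= 6:
        return "high"
    if score >= 3:
        return "moderate"
    return "low"

if __name__ == "__main__":
    # A vulnerability that is likely to be exploited and would disrupt
    # tax processing would be documented as high residual risk.
    print(residual_risk("high", "moderate"))   # -> 'high'
```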
We have previously identified this weakness and recommended that the agency update its risk assessments to include vulnerabilities we identified. IRS is in the process of taking corrective action. Although IRS Policies and Procedures Were Generally Adequate, Guidance for Logging Mainframe Activity Was Unclear Another key element of an effective information security program is to develop, document, and implement risk-based policies, procedures, and technical standards that govern security over an agency's computing environment. If properly implemented, policies and procedures should help reduce the risk that could come from unauthorized access or disruption of services. Technical security standards provide consistent implementation guidance for each computing environment. Developing, documenting, and implementing security policies are the primary mechanisms by which management communicates its views and requirements; these policies also serve as the basis for adopting specific procedures and technical controls. In addition, agencies need to take the actions necessary to effectively implement or execute these procedures and controls. Otherwise, agency systems and information will not receive the protection that the security policies and controls should provide. IRS has developed and documented information security policies, standards, and guidelines that generally provide appropriate guidance to personnel responsible for securing information and information systems; however, guidance for securing mainframe systems was not always clear. For example, the Internal Revenue Manual does not always specify when successful system changes should be logged. Further, although IRS policy provides general requirements for protection of audit logs, the manual for mainframe security software does not provide detailed guidance on what logs to protect and how to protect them. As a result, IRS has reduced assurance that these system changes are being captured and that its systems and the information they contain, including audit logs, are being sufficiently protected. Security Plans Adequately Documented Management, Operational, and Technical Controls An objective of system security planning is to improve the protection of information technology resources. A system security plan provides an overview of the system's security requirements and describes the controls that are in place or planned to meet those requirements. OMB Circular A-130 requires that agencies develop system security plans for major applications and general support systems, and that these plans address policies and procedures for providing management, operational, and technical controls. Furthermore, IRS policy requires that security plans describing the security controls in place or planned for its information systems be developed, documented, implemented, reviewed annually, and updated a minimum of every 3 years or whenever there is a significant change to the system. The six security plans we reviewed documented the management, operational, and technical controls in place at the time the plans were written, and the more recent plans mapped those controls directly to controls prescribed by NIST. According to IRS officials, at the time of our review, they were in the process of updating two of these plans to more accurately reflect the current operating environment. The remaining four plans appropriately reflected the current operating environment.
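The mapping described above, in which a security plan ties each documented control to a control prescribed by NIST, can be sketched as structured data. The system name, control entries, and statuses below are illustrative assumptions only, not the contents of any actual IRS security plan.

```python
# Hedged sketch: representing a system security plan's control mapping as data,
# so in-place and planned controls can be tied to NIST SP 800-53 families.
# The system name and control entries are hypothetical examples.
from dataclasses import dataclass

@dataclass
class ControlEntry:
    control_id: str        # e.g., "AU-2" is in the Audit and Accountability family
    description: str
    status: str            # "in place" or "planned"

SECURITY_PLAN = {
    "system": "Example general support system",
    "controls": [
        ControlEntry("AC-6", "Enforce least privilege for user accounts", "in place"),
        ControlEntry("AU-2", "Log successful changes to production datasets", "planned"),
        ControlEntry("CP-4", "Test the contingency plan at least annually", "planned"),
    ],
}

def planned_controls(plan: dict) -> list[ControlEntry]:
    """List controls that are documented but not yet implemented."""
    return [c for c in plan["controls"] if c.status == "planned"]

if __name__ == "__main__":
    for entry in planned_controls(SECURITY_PLAN):
        print(f"{entry.control_id}: {entry.description}")
```

Keeping the plan in a structured form of this kind also makes the annual review straightforward, since planned controls can be listed and checked against what is actually in place.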
Although Training Was Provided, Employees with Significant Security Responsibilities at One Center Did Not Receive the Needed Training People are one of the weakest links in attempts to secure systems and networks. Therefore, an important component of an information security program is providing required training so that users understand system security risks and their own role in implementing related policies and controls to mitigate those risks. IRS policy requires that personnel performing information technology security duties meet minimum continuing professional education hours in accordance with their roles. Personnel performing technical security roles are required by IRS to have 12, 8, or 4 hours of specialized training per year, depending on their specific role. Although IRS has made progress in providing security personnel with a job-related training curriculum, IRS did not ensure that all employees with significant security responsibilities received adequate training. For example, based on the documentation we reviewed, all 40 employees selected at one data center met the required minimum training hours; however, 6 of 10 employees reviewed at another center did not. According to IRS officials, these six employees with significant security responsibilities were not identified by their managers for the required training. Until managers identify individuals requiring specialized training, IRS is at increased risk that individuals will not receive the training necessary to perform their security-related responsibilities. Although Controls Were Tested and Evaluated, Tests Were Not Always Comprehensive Another key element of an information security program is to test and evaluate policies, procedures, and controls to determine whether they are effective and operating as intended. This type of oversight is a fundamental element because it demonstrates management’s commitment to the security program, reminds employees of their roles and responsibilities, and identifies and mitigates areas of noncompliance and ineffectiveness. Although control tests and evaluations may encourage compliance with security policies, the full benefits are not achieved unless the results improve the security program. FISMA requires that the frequency of tests and evaluations be based on risks and occur no less than annually. IRS policy also requires periodic testing and evaluation of the effectiveness of information security policies and procedures, as well as reviews to ensure that the security requirements in its contracts are implemented and enforced. IRS tested and evaluated information security controls for each of the systems we reviewed. The more current tests and evaluations had detailed methodologies, followed NIST guidance, and documented the effectiveness of the tested controls. However, the scopes of these tests were not sufficiently comprehensive to identify significant vulnerabilities. For example, although IRS and GAO examined controls over the same systems, we identified unencrypted passwords on an internal Web site that IRS had not. Our test results also showed that contractors did not always follow agency security policies and procedures. To illustrate, contractors had inappropriately stored clear-text passwords and sensitive documents on internal agency Web sites. Although IRS had numerous procedures to provide contractor oversight, it had not detected its contractors’ noncompliance with its policies. 
Because IRS had not identified these weaknesses, it has limited assurance that appropriate controls were being effectively implemented. Remedial Action Plans Were Not Always Complete, and Corrective Actions Were Not Effective A remedial action plan is a key component described in FISMA. Such a plan assists agencies in identifying, assessing, prioritizing, and monitoring progress in correcting security weaknesses that are found in information systems. In its annual FISMA guidance to agencies, OMB requires agencies' remedial action plans, also known as plans of action and milestones, to include the resources necessary to correct identified weaknesses. According to IRS policy, the agency should document weaknesses found during security assessments as well as document any planned, implemented, and evaluated remedial actions to correct any deficiencies. The policy further requires that IRS track the status of resolution of all weaknesses and verify that each weakness is corrected. IRS has developed and implemented a remedial action process to address deficiencies in its information security policies, procedures, and practices. However, this remedial action process was not working as intended. For example, IRS had identified weaknesses but did not always identify necessary resources to fix them. Specifically, we reviewed remedial action plans for five of the six systems and found that plans for four of them had not identified what, if any, resources were necessary to support the corrective actions. Subsequent to our site visits, IRS provided additional information on resources to support corrective actions for three of them. In addition, the verification process used to determine whether remedial actions were implemented was not always effective. IRS indicated that it had corrected or mitigated 39 of the 98 previously reported weaknesses. However, of those 39 weaknesses, 10 still existed at the time of our review. Furthermore, one facility had actually corrected fewer than half of the weaknesses reported as being resolved. We have previously identified a similar weakness and recommended that IRS implement a revised remedial action verification process that ensures actions are fully implemented, but the condition continued to exist at the time of our review. Without a sound remediation process, IRS will not have assurance that the proper resources will be applied to known vulnerabilities or that those vulnerabilities will be properly mitigated. Contingency Plans Were Not Always Complete or Tested Continuity of operations planning, which includes contingency planning, is a critical component of information protection. To ensure that mission-critical operations continue, it is necessary to be able to detect, mitigate, and recover from service disruptions while preserving access to vital information. It is important that these plans be clearly documented, communicated to potentially affected staff, and updated to reflect current operations. In addition, testing contingency plans is essential to determine whether the plans will function as intended in an emergency situation. FISMA requires that agencywide information security programs include plans and procedures to ensure continuity of operations. IRS contingency planning policy requires that essential IRS business processes be identified and that contingency plans be tested at least annually. Although the systems reviewed had contingency plans, the plans were not always complete or tested.
For example, for three of the six plans, IRS had not identified essential business processes. Further, the agency had not annually tested two of the plans, which were both dated September 2005. IRS informed us that these issues will be addressed during current certifications and accreditations for those systems. However, until IRS identifies these essential processes and sufficiently tests the plans, increased risk exists that it will not be able to effectively recover and continue operations when an emergency occurs. Conclusions IRS has made only limited progress in correcting or mitigating previously reported weaknesses, implementing controls over key financial systems, and developing and documenting a framework for its agencywide information security program. Information security weaknesses—both old and new—continue to impair the agency’s ability to ensure the confidentiality, integrity, and availability of financial and taxpayer information. These deficiencies represent a material weakness in IRS’s internal controls over its financial and tax processing systems. A key reason for these weaknesses is that the agency has not yet fully implemented critical elements of its agencywide information security program. The financial and taxpayer information on IRS systems will remain particularly vulnerable to insider threats until the agency (1) fully implements a comprehensive agencywide information security program that includes enhanced policies and procedures, appropriate specialized training, comprehensive tests and evaluations, sufficient contractor oversight, updated remedial action plans, and a complete continuity of operations process; and (2) begins to address weaknesses across the service, its facilities, and computing resources. As a result, financial and taxpayer information is at increased risk of unauthorized disclosure, modification, or destruction, and IRS management decisions may be based on unreliable or inaccurate financial information. Recommendations for Executive Action To help establish effective information security over key financial processing systems, we recommend that you take the following seven actions to implement an agencywide information security program: Update policies and procedures for configuring mainframe operations to ensure they provide the necessary detail for controlling and logging changes. Identify individuals with significant security responsibilities to ensure they receive specialized training. Expand scope for testing and evaluating controls to ensure more comprehensive testing. Enhance contractor oversight to better ensure that contractors’ noncompliance with IRS information security policies is detected. Update remedial action plans to ensure that they include what, if any, resources are required to implement corrective actions. Identify and prioritize critical IRS business processes as part of contingency planning. Test contingency plans at least annually. We are also making 46 detailed recommendations in a separate report with limited distribution. These recommendations consist of actions to be taken to correct specific information security weaknesses related to user identification and authentication, authorization, cryptography, audit and monitoring, physical security, configuration management, and segregation of duties. Agency Comments In providing written comments (reprinted in app. 
I) on a draft of this report, the Acting Commissioner of Internal Revenue agreed that IRS has not yet fully implemented critical elements of its agencywide information security program, and stated that the security and privacy of taxpayer information is of great concern to the agency. She recognized that there is significant work to be accomplished to address IRS’s information security deficiencies, and stated that the agency is taking aggressive steps to correct previously reported weaknesses and improve its overall information security program. She also noted that IRS has taken many actions to strengthen its information security program, such as installing automatic disk encryption on its total deployed inventory of approximately 52,000 laptops, and creating a team of security and computer experts to improve mainframe controls. Further, she stated that the agency is committed to securing its computer environment, and will develop a detailed corrective action plan addressing each of our recommendations. This report contains recommendations to you. As you know, 31 U.S.C. 720 requires the head of a federal agency to submit a written statement of the actions taken on our recommendations to the Senate Committee on Homeland Security and Governmental Affairs and to the House Committee on Oversight and Government Reform not later than 60 days from the date of the report and to the House and Senate Committees on Appropriations with the agency’s first request for appropriations made more than 60 days after the date of this report. Because agency personnel serve as the primary source of information on the status of recommendations, GAO requests that the agency also provide it with a copy of your agency’s statement of action to serve as preliminary information on the status of open recommendations. We are sending copies of this report to interested congressional committees and the Secretary of the Treasury. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions regarding this report, please contact Gregory Wilshusen at (202) 512-6244 or Nancy Kingsbury at (202) 512-2700. We can also be reached by e-mail at [email protected] and [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix II. Appendix I: Comments from the Internal Revenue Service Appendix II: GAO Contacts and Staff Acknowledgments Staff Acknowledgments In addition to the persons named above, Gerard Aflague, Bruce Cain, Larry Crosland, Mark Canter, Denise Fitzpatrick, David Hayes (Assistant Director), Nicole Jarvis, Jeffrey Knott (Assistant Director), George Kovachick, Kevin Metcalfe, Eugene Stevens, and Amos Tevelow made key contributions to this report.
The Internal Revenue Service (IRS) relies extensively on computerized systems to carry out its demanding responsibilities to collect taxes (about $2.7 trillion in fiscal year 2007), process tax returns, and enforce the nation's tax laws. Effective information security controls are essential to ensuring that financial and taxpayer information is adequately protected from inadvertent or deliberate misuse, fraudulent use, improper disclosure, or destruction. As part of its audit of IRS's fiscal years 2007 and 2006 financial statements, GAO assessed (1) IRS's actions to correct previously reported information security weaknesses and (2) whether controls were effective in ensuring the confidentiality, integrity, and availability of financial and sensitive taxpayer information. To do this, GAO examined IRS information security policies and procedures, guidance, security plans, reports, and other documents; tested controls over key financial applications at three IRS data centers; and interviewed key security representatives and management officials. IRS made limited progress toward correcting previously reported information security weaknesses. It has corrected or mitigated 29 of the 98 information security weaknesses that GAO reported as unresolved at the time of its last review. For example, IRS implemented controls for user IDs for certain critical servers, improved physical protection for its procurement system, developed a security plan for a key financial system, and upgraded servers that had been using obsolete operating systems. In addition, IRS established enterprisewide objectives for improving information security, including initiatives for protecting and encrypting data, securing information technology assets, and building security into new applications. However, about 70 percent of the previously identified information security weaknesses remain unresolved. For example, IRS continues to, among other things, use passwords that are not complex, grant excessive access to individuals who do not need it, and install patches in an untimely manner. In addition to this limited progress, other significant weaknesses in various controls continue to threaten the confidentiality and availability of IRS's financial processing systems and information, and limit assurance of the integrity and reliability of its financial and taxpayer information. IRS has not consistently implemented effective controls to prevent, limit, or detect unauthorized access to computing resources from within its internal network. For example, IRS did not always (1) enforce strong password management for properly identifying and authenticating users, (2) authorize user access to only permit access needed to perform job functions, (3) encrypt sensitive data, (4) effectively monitor changes on its mainframe, and (5) physically protect its computer resources. In addition, IRS faces risks to its financial and taxpayer information due to weaknesses in implementing its configuration management policies, as well as appropriately segregating incompatible job duties. Accordingly, GAO has reported a material weakness in IRS's internal controls over its financial and tax processing systems. A key reason for the weaknesses is that the agency has not yet fully implemented its agencywide information security program to ensure that controls are effectively established and maintained. As a result, IRS is at increased risk of unauthorized disclosure, modification, or destruction of financial and taxpayer information.
GAO_GAO-08-162
Background In response to various attacks, State has continually assessed and updated its security standards and physical security measures at posts around the world. After the 1998 embassy bombings in Nairobi, Kenya, and Dar es Salaam, Tanzania, State initiated the Capital Security Construction program (also referred to as the New Embassy Compound program), a multiyear effort to replace approximately 200 facilities with new facilities that meet State’s updated security standards. As of the end of fiscal year 2007, State had obligated more than $5.9 billion for this program, awarded contracts for the construction of 78 new embassy and consulate compounds, and completed more than 50 new facilities. State currently plans to contract for 80 more new facilities through 2014. To complement its efforts to move overseas U.S. government employees into more secure facilities, State initiated efforts to enhance physical security at existing facilities. After the 1998 embassy bombings, State initiated a new physical security upgrades program called the World-Wide Security Upgrade Program, which focused on enhancing perimeter security measures. In response to the September 11 terrorist attacks, State focused on ensuring that embassies and consulates had adequate safe areas for staff in case of an attack on the facilities. Since 2004, State has taken a more comprehensive approach to physical security upgrades by reviewing the entire range of physical security needs at posts through CSUP. State has identified the following four goals for CSUP: to provide physical security protection to the extent practical for existing facilities; to provide physical security upgrades to meet current security standards for those facilities that will not be replaced by a NEC in the near-term; to initiate physical security upgrades at facilities that are not part of the chancery compound, including annexes, public diplomacy facilities, and warehouses; and to provide security upgrades to nongovernmental facilities (“soft targets”) frequented by U.S. citizens. From fiscal year 1999 through 2007, State had obligated more than $1.2 billion for security upgrades. Since fiscal year 2004 and the initiation of CSUP, OBO has undertaken approximately 55 major projects costing over $1 million that enhance physical security at posts that are not going to be replaced with a new facility in the near future, if at all. OBO’s Long-Range Overseas Buildings Plan calls for it to undertake an average of 13 major CSUP projects per year through 2012. CSUP provides several categories of security upgrades to help posts meet physical security standards, such as perimeter security measures (including anti-climb walls, fences, compound access control facilities, bollards, cameras, and security lighting); forced entry/ballistic resistant doors and windows; safe areas for U.S. personnel in case of emergency; and stand-alone mail screening facilities. In addition, OBO has obligated approximately $58 million per year of CSUP funds for minor post-managed security upgrade projects, such as minor residential security upgrades, maintenance, repair, and replacement of existing forced entry/ballistic resistant doors and windows, and modular mail screening facilities. The Overseas Security Policy Board, which includes representatives from more than 20 U.S. intelligence, foreign affairs, and other agencies, is responsible for considering, developing, and promoting security policies and standards that affect U.S. 
government agencies under the authority of the Chief of Mission at a post. This responsibility includes reviewing and issuing uniform guidance on physical security standards for embassies, consulates, and other overseas office space. State incorporates the board's physical security standards in its "Foreign Affairs Handbook" and "Foreign Affairs Manual." With respect to existing office buildings, the standards apply to the maximum extent feasible or practicable. State has identified five key Overseas Security Policy Board standards to protect overseas diplomatic office facilities against terrorism and other dangers (see fig. 1). First, the Secure Embassy Construction and Counterterrorism Act of 1999 requires that office facilities be at least 100 feet from uncontrolled areas, such as a street where vehicles can pass without being checked by security officials. This distance is meant to help protect the buildings and occupants against threats such as bomb blasts. Second, State requires high perimeter walls or fences that are difficult to climb, thereby deterring those who might attack the compound on foot. Third, State requires anti-ram barriers to ensure that vehicles cannot breach the facility perimeter to get close to the building and detonate a bomb. The fourth standard requires blast-resistant construction techniques and materials. These materials include reinforced concrete and steel construction and blast-resistant windows. Coupled with a 100-foot setback, blast-resistant construction provides the best possible protection against vehicle bomb attack, according to DS officials. State's fifth security standard is controlled access of pedestrians and vehicles at the perimeter of a compound. Compound access control facilities allow guards to screen personnel and visitors before they enter the compound to verify that they have legitimate business at the embassy or consulate and that they bring nothing onto the compound that could be potentially harmful or used to surreptitiously gather intelligence. Similarly, the facilities allow guards to search vehicles before they are permitted to enter the compound. CSUP Planning Process Balances Security Needs of Posts and Includes Input from Stakeholders OBO has a threat- and vulnerability-based planning process for its CSUP projects that includes input from DS's analysis of security threats and vulnerabilities and from post officials. The DS analysis currently focuses on embassy and consulate compounds, though DS is developing a risk-based prioritization process that considers the number of personnel, threats, and vulnerabilities at each facility, including off-compound facilities. OBO has improved its process for developing projects by conducting more comprehensive needs assessments of posts, including off-compound facilities, early in the design phase. OBO Planning Reflects DS Security Analysis and Input from Post OBO prioritizes which posts will receive upgrades based in part on assessments from DS of the physical security conditions and threat levels at each post. Each year, DS ranks all 262 posts based on their threat levels and vulnerabilities. With input from posts' security officers and the intelligence community, DS determines the threat level for terrorism and political violence. DS also determines the vulnerabilities of each post in several categories, including protection from chemical and biological attack, seismic and blast resistance, the strength of the construction and façade, and the amount of setback.
Once these determinations are made, DS ranks the posts. The resulting list of rankings is used by OBO and other stakeholders to plan NEC projects. For CSUP planning, posts that are scheduled for an NEC project within the next 2 to 3 years are removed from the list, and DS and OBO reevaluate the list, factoring in the number of people at post, to create a priority list for CSUP projects. OBO then modifies the list to balance various factors. First, OBO removes facilities that cannot be further upgraded, such as many leased facilities. Second, OBO adds facilities that may have been removed, such as vulnerable off-compound facilities at posts where NEC projects are planned. Third, OBO has security engineers conduct a thorough assessment of each post’s needs. Fourth, OBO alters the list to account for external factors, such as difficulty getting a host government’s approval on a project, which would move a project down the list. Finally, OBO develops its 6-year list of CSUP projects based on expected funds and places these projects in the Long-Range Overseas Buildings Plan. If OBO experiences budget constraints, it will delay projects—moving future projects to subsequent fiscal years—rather than reduce their scope, according to State officials. Once a project is placed on the Long-Range Overseas Buildings Plan, an OBO team undertakes an assessment visit to the post to determine what the project should include. OBO consults with DS and the post and reviews Office of Inspector General security inspections in order to determine the scope of the project. One year prior to a project’s start date, OBO then develops an initial planning survey in which OBO seeks agreement between its engineers and the post’s Regional Security Officer. The initial planning survey is then sent in draft form for approval by OBO and post officials, including the Regional Security Officer, administrative officer, and facilities manager. Once this process is completed, OBO works with its contract design firm to develop conceptual design plans. State’s contracting offices use these plans to advertise for bids to complete the design and construct the improvements using a design-build contract. After a firm has been awarded the contract, it will develop and submit interim and then final plans for OBO’s review. OBO consults with post officials, including the Regional Security Officer, in reviewing the designs to help ensure that proposed upgrades meet each post’s security needs before giving the firm authorization to proceed with construction. DS Priority Assessments Focus on Main Compounds, but Efforts Are Being Made to Address All Post Facilities According to OBO and DS officials, the DS physical security assessment is currently based on the physical security needs of each post’s main compound but does not factor in the security of facilities located outside the main embassy or consulate compound, even though hundreds of such facilities exist. We noted that, in several cases, these off-compound facilities lacked required physical security measures. For example, we found that one post compound, following the conclusion of its CSUP project, met most security standards, but a nearby off-compound office facility did not have setback, blast–resistant walls and windows, a controlled access facility for pedestrians and vehicles, a safe area, and other security features. 
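The ranking inputs described in this section (threat levels, assessed vulnerabilities, and the number of people at post) can be combined into a simple composite score. The weights, rating scales, and post data below are illustrative assumptions only and are not DS's or OBO's actual methodology.

```python
# Hedged sketch of a composite prioritization score for security upgrade
# planning, combining threat level, vulnerability rating, and post population.
# Weights, scales, and post data are illustrative assumptions only.
from dataclasses import dataclass

@dataclass
class Post:
    name: str
    threat_level: int          # 1 (low) .. 4 (critical), hypothetical scale
    vulnerability: int         # 1 (few gaps) .. 5 (many gaps, e.g., no setback)
    personnel: int             # U.S. government staff at the facility

def priority_score(post: Post) -> float:
    """Higher scores indicate posts that would be considered sooner."""
    return post.threat_level * 2.0 + post.vulnerability * 1.5 + post.personnel / 100

def rank_posts(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=priority_score, reverse=True)

if __name__ == "__main__":
    posts = [
        Post("Post A (urban, no setback)", threat_level=4, vulnerability=5, personnel=350),
        Post("Post B (meets most standards)", threat_level=2, vulnerability=2, personnel=120),
    ]
    for p in rank_posts(posts):
        print(f"{p.name}: {priority_score(p):.1f}")
```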
OBO and DS are currently working to better address the needs of all facilities, including the hundreds of annexes located off compound, and improve CSUP project prioritization. OBO officials commented that newer projects take into account the needs of all facilities at a post, whether they are on compound or not. For example, at one post we visited, we saw a CSUP project for an off-compound office facility. Moreover, DS is developing a new risk-based process to prioritize CSUP projects that will rate the vulnerabilities of each overseas building with office space, including annexes, and factor in the number of personnel and threat levels to better set priorities. According to a DS official, the formula needs to be validated and, if successful, staff needs to be trained on its use before beginning implementation. State expects to complete these steps by March 2008. OBO Has Taken Steps to Conduct More Comprehensive Needs Assessments during Project Design OBO is taking additional steps to more comprehensively address post security needs and improve CSUP planning processes. According to OBO, CSUP initially focused on perimeter security, but as new standards have been put in place and perimeter projects completed, the program has broadened its focus to ensure that posts meet all physical security standards to the extent feasible. For example, in 2004, terrorists rushed on foot past the barriers blocking a car being inspected at the vehicular gate of the consulate in Jeddah, Saudi Arabia. In response, State began to install additional fencing and a secondary gate, called a man trap, at vehicle entry points at posts to prevent attackers on foot from accessing the compounds. Moreover, the Overseas Security Policy Board is currently considering the addition of a new security standard requiring man traps. In addition, OBO officials noted that they meet monthly to improve processes for project planning and execution, including those involving CSUP. One result of these meetings has been a decision to conduct OBO’s initial planning surveys earlier in the design process to gain a better understanding of post’s security needs. Another result of these meetings is that OBO created a more comprehensive survey instrument to better identify all vulnerabilities at the post for consideration in the CSUP project. CSUP Projects Generally Completed within Contractual Time Frames and Costs, and OBO Has Project Management Procedures to Help Ensure Completion While most CSUP projects we reviewed have been completed within their contractual time frames and costs, OBO found it necessary to modify all but one of the contracts to extend project time frames, adjust costs, or both. Since the beginning of fiscal year 2004, OBO has contracted for 47 projects valued at $1 million or more that were subsequently completed by September 30, 2007. In reviewing schedule performance data, we found that 96 percent of projects were completed within 30 days of their contractual completion date (see fig. 2). However, we found that OBO modified the contracts to extend their completion dates for 81 percent of the projects. On average, OBO extended the contracts by 4 months—an average increase of 26 percent. Many of these extensions did not result in increased costs to the government. For each of the 47 projects, OBO paid the contractor the amount specified in the fixed-price contracts—an average project cost of $2.6 million. 
In reviewing cost data, we found that OBO increased the contract cost for 34 projects, at an average increase of 17 percent, and decreased the contract cost for 11 projects, at an average decrease of 5 percent (see fig. 2). The net change in the cost of the 47 projects was an increase of $10 million. Cost increases were generally due to changes in the scope of the projects, while cost decreases were generally due to a reduction in expected local tax costs. Our past assessments of domestic government renovation projects found that work on existing facilities presented a number of difficulties and challenges, making renovations especially susceptible to cost increases stemming from unexpected conditions. We found that, for such projects, government agencies generally budget 5 to 15 percent of project cost for unexpected changes. OBO cited factors outside the contractor’s control as the cause of most of the delays and cost increases, such as unusually lengthy local permitting processes, previously unidentified underground utilities that needed to be moved, design changes that OBO made during construction work, and project changes requested by the post. For example, OBO extended the deadline 10 months for completion of perimeter fencing upgrades and a new CAC facility at a U.S. consulate in Asia because of delays in receiving approval from local authorities to proceed with the work. In addition, in response to a request from officials at a U.S. embassy in Europe, OBO added to the scope of the planned CSUP project, including a new CAC facility, and modified the contract to pay the contractor an additional $874,000 for the added work. However, in cases where OBO found that contractor error was the cause of a delay or cost increase, OBO held the contractor accountable. For example, at a U.S. mission in Europe, OBO found instances where the contractor’s work did not conform to contract specifications and required the contractor to redo the work. OBO did not compensate the contractor for the additional costs associated with replacing the substandard work. Similarly, at a U.S. consulate in Europe, the contractor was more than 6 months late in completing the security upgrades; OBO, therefore, assessed the contractor a penalty of almost $60,000. OBO has project management procedures to help ensure the security upgrades it contracted for are completed and have enhanced posts’ compliance with physical security standards. For each CSUP project, OBO assigns a project manager who is responsible for the effective completion of the project. However, because CSUP projects are generally small and OBO has limited resources, project managers are not usually able to be on site full time during the project. Project managers visit posts to ensure the work contracted for is being done and, in many cases, rely on post officials, including the Regional Security Officers and facility managers, to provide additional monitoring of the work. In our visits to 11 posts, we found that, in most cases, the work called for in the projects had been done or was under way. However, at one location, we found that one component of the project—strengthening the room where the post’s emergency generator is located—was removed from the scope of the project because, according to post officials, it would have unexpectedly required creating new office space to relocate people during the work, adding costs that could not be covered by the CSUP budget. 
OBO decided to remove this work from the scope of the project and initiate a new project in the future to address this physical security need. CSUP Has Enhanced Physical Security, but Site Conditions at Many Posts Limit Ability to Adhere to All Security Standards Completed CSUP projects have achieved their objective of enhancing the security at posts by bringing posts into better compliance with security standards. Major CSUP projects have enhanced physical security at 47 embassies and consulates since fiscal year 2004, and OBO currently expects to complete all major CSUP projects, barring extensive changes to current security standards or expected funding, by 2018. CSUP security enhancements have encompassed constructing compound access control facilities at the perimeter of the compounds at 25 posts (see fig. 3 for an example); building safe areas for post officials in case of attack at 25 posts; improving compound walls, fencing, and barriers at 22 posts (see fig. 4 for examples); and strengthening the interior walls and doors that create a "hard line" that separates American staff from visitors at 8 posts. At the 11 posts we visited with ongoing or completed CSUP projects, we found that the projects had enhanced posts' compliance with State's physical security standards as detailed in the "Foreign Affairs Handbook" and "Foreign Affairs Manual." The projects we viewed added or enhanced pedestrian and vehicle access points, replaced perimeter fencing to meet anti-climb requirements, installed bollards and barriers at key points to meet anti-ram requirements, built safe areas for post officials in case of attack, enhanced the hard line separating post employees from visitors, and installed forced entry/ballistic-resistant windows and doors. Nevertheless, without building a new facility, many posts are unable to meet all security standards for a variety of reasons beyond the scope of CSUP. We found that none of the posts we visited complied fully with current security standards because of conditions that were outside the scope of CSUP projects. For example, most of the posts we visited were located in dense urban areas that prevented them from achieving a 100-foot setback from the street, one of the key security standards (see fig. 5 for an example). OBO and DS officials acknowledged that, at many locations, it is not feasible to increase the setback by acquiring land and closing off nearby streets. In other cases, officials stated that the buildings themselves were not structurally capable of handling heavy forced entry/ballistic-resistant windows or other upgrades. And in other cases, officials commented that host nations or cities would not allow certain upgrades to be implemented, such as removing trees to create a clear zone around the embassy or changing the facade of historic buildings. Finally, current plans for the NEC program do not include the replacement of 61 of 262 embassies and consulates. Several of these facilities were built after physical security standards were strengthened in response to terrorist attacks against U.S. facilities in Beirut, Lebanon, in the 1980s. State officials acknowledged that other facilities may not be replaced due to cost and political concerns. As a result, many buildings and their occupants may remain vulnerable to attack. Agency Comments and Our Evaluation The Department of State provided written comments on a draft of this report, which are reproduced in appendix II.
State agreed with our findings, noting that the report accurately describes State's CSUP efforts. State also provided us with technical suggestions and clarifications that we have addressed in this report, as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested Members of Congress and the Secretary of State. We also will make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact Charles Michael Johnson, Jr., at (202) 512-7331 or [email protected]. Contact points for our Office of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Appendix I: Scope and Methodology To discuss the factors that the Bureau of Overseas Buildings Operations (OBO) considers as it plans and prioritizes Compound Security Upgrades Program (CSUP) projects, we reviewed Department of State (State) prioritization and planning documents concerning the assignment of post threat levels, assessments of the security vulnerabilities of posts, and CSUP. We discussed CSUP prioritization and planning, as well as changes to those processes in response to recent attacks, with officials from OBO and State's Bureau of Diplomatic Security (DS) in Washington, D.C., and overseas, including post officials such as Deputy Chiefs of Mission, Regional Security Officers, facilities managers, and General Services Officers, and with contractors overseas. In addition, we reviewed past GAO audit work on related issues. (See Related GAO Products at the end of this report.) To help confirm the accuracy of our analysis, we discussed our findings with State personnel involved in CSUP. To assess the extent to which CSUP projects met cost and schedule projections, we analyzed data that OBO provided specifically for the purposes of our review. Our scope included all 47 projects contracted since fiscal year 2004, completed by the end of fiscal year 2007, and valued at $1 million or more and, therefore, excluded smaller projects such as those designed to enhance the security of schools and other non-U.S. government properties frequented by U.S. personnel and their dependents. For each CSUP project, OBO provided data on the originally contracted completion date and cost, the modifications to the contracted completion date and cost, and the actual date of substantial completion and final contract cost for completed projects. We reviewed contracting documents to verify that the data were sufficiently reliable for the purposes of this report. To assess the extent to which CSUP projects included the security upgrades called for in the contract, we reviewed OBO's project management procedures. We interviewed project managers in Washington, D.C., and facilities managers, administrative officers, and regional security officers at 11 posts to verify the role and responsibilities of the project managers. We also inspected the ongoing or completed CSUP work at these posts to verify that the projects encompassed all of the security upgrades called for under the contract.
To review the extent to which State's CSUP efforts have enhanced posts' ability to comply with State's physical security standards, we reviewed the project authorization memoranda, contract modifications, and the OBO summary document on each of the 47 CSUP projects. These documents allowed us to identify the type of physical security upgrades that were installed at all 47 facilities. We discussed over 50 completed, ongoing, and planned projects with OBO officials. To confirm our initial findings, we traveled to 11 posts in Latin America, Europe, and the Middle East that had recently completed or ongoing CSUP projects. We selected these countries to ensure regional coverage, a range of project types, and a mix of ongoing and completed projects; however, as this was not a generalizable sample, our findings do not necessarily apply to all posts. We are not naming the specific countries we visited for this review due to security concerns. We developed a physical security needs checklist based upon State's "Foreign Affairs Handbook," "Foreign Affairs Manual," and OBO's own needs assessment documentation. We applied our checklist consistently at all 11 posts. Our checklist did not, however, attempt to assess State's procedures for utilizing physical security upgrades. For example, the checklist did not assess whether posts use new CACs properly to screen vehicles or people. At each post, we conducted a review of the security needs and received briefings on the recently completed, ongoing, or planned CSUP projects. We met with relevant post personnel, including Deputy Chiefs of Mission, Regional Security Officers, facilities managers, and General Services Officers, as well as contractors, to discuss the physical security needs at post, CSUP project management and implementation, and post-specific limitations on receiving certain physical security upgrades. We conducted this performance audit from November 2006 through January 2008, in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings based on our audit objectives. Appendix II: Comments from the Department of State Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, David C. Maurer, Assistant Director; Michael J. Courts, Assistant Director; Valérie L. Nowak; Thomas M. Costa; Martin H. de Alteriis; Michael W. Armes; Leslie K. Locke; Ramon J. Rodriguez; Joseph P. Carney; Ian A. Ferguson; Etana Finkler; and Jason L. Bair made key contributions to this report. Related GAO Products Embassy Construction: State Has Made Progress Constructing New Embassies, but Better Planning Is Needed for Operations and Maintenance Requirements. GAO-06-641. Washington, D.C.: June 30, 2006. Overseas Security: State Department Has Not Fully Implemented Key Measures to Protect U.S. Officials from Terrorist Attacks Outside of Embassies. GAO-05-688T. Washington, D.C.: May 10, 2005. Overseas Security: State Department Has Not Fully Implemented Key Measures to Protect U.S. Officials from Terrorist Attacks Outside of Embassies. GAO-05-642. Washington, D.C.: May 9, 2005. Embassy Construction: Achieving Concurrent Construction Would Help Reduce Costs and Meet Security Goals. GAO-04-952. Washington, D.C.: September 28, 2004.
Embassy Construction: State Department Has Implemented Management Reforms, but Challenges Remain. GAO-04-100. Washington, D.C.: November 4, 2003.
Overseas Presence: Conditions of Overseas Diplomatic Facilities. GAO-03-557T. Washington, D.C.: March 20, 2003.
Embassy Construction: Better Long-Term Planning Will Enhance Program Decision-making. GAO-01-11. Washington, D.C.: January 22, 2001.
State Department: Overseas Emergency Security Program Progressing, but Costs Are Increasing. GAO/NSIAD-00-83. Washington, D.C.: March 8, 2000.
Following the 1998 embassy bombings, the Department of State (State) determined that more than 85 percent of diplomatic facilities did not meet security standards and were vulnerable to terrorist attacks. State's Bureau of Overseas Buildings Operations (OBO) has undertaken a program to replace or upgrade the security of these facilities. As of 2007, OBO had constructed more than 50 new embassies and moved nearly 15,000 staff to safer facilities. However, most remaining facilities will not be replaced in the near term. To address these facilities, OBO has obligated about $140 million per year for its Compound Security Upgrade Program (CSUP). GAO was asked to (1) describe the process that OBO follows to prioritize and plan CSUP projects, including stakeholder involvement; (2) determine the extent to which CSUP projects met contracted cost and time frames and whether OBO has procedures to ensure security upgrades are installed; and (3) assess whether State's CSUP efforts have enhanced posts' abilities to comply with State's physical security standards. To address these objectives, GAO reviewed pertinent State documents, met with State officials in Washington, D.C., and overseas, and traveled to 11 posts in Latin America, Europe, and the Middle East. State provided written comments on a draft of this report and agreed with GAO's findings. OBO has a threat- and vulnerability-based process for prioritizing which posts receive CSUP projects and a planning process that utilizes input from State's Bureau of Diplomatic Security (DS) and post officials. DS assessments are currently based on the physical security of each post's main compound, although many posts have facilities located outside the compound. DS is developing a prioritization process that will factor in the number of personnel, threat levels, and vulnerabilities at each facility, including those off compound. OBO has improved its planning processes by conducting a comprehensive survey of posts' physical security needs, including off-compound facilities. GAO found that 96 percent of 47 projects undertaken since fiscal year 2004 were completed within 30 days of their contractual completion date. However, OBO modified 81 percent of the contracts to extend their completion dates. GAO also found that while OBO paid the contractors the amount specified in the contracts, contract modifications resulted in cost adjustments to all but two contracts, which GAO found in prior work is not uncommon in government renovation projects. OBO cited factors outside the contractors' control, such as lengthy local permitting issues, as the cause of most delays and cost increases. To help ensure that the security upgrades contracted for are completed, OBO assigns a project manager who is responsible for the project's completion and relies on regional and post officials to provide additional monitoring. CSUP projects have enhanced posts' compliance with physical security standards by constructing compound access control facilities, safe areas for post personnel, and compound walls and barriers. However, at the 11 posts GAO visited, site conditions prevented the posts from adhering fully to these standards. For example, more than one post's urban location prevented it from achieving a 100-foot setback from the street, a key security standard. As a result, many buildings and their occupants may remain vulnerable to attack.
GAO_GAO-09-831T
Background
Our analysis of initial estimates of Recovery Act spending provided by the Congressional Budget Office (CBO) suggested that about $49 billion would be paid out by the federal government to states and localities in fiscal year 2009, which runs through September 30. However, our analysis of the latest information available on actual federal outlays reported on www.recovery.gov indicates that in the 4 months since enactment, the federal Treasury has paid out approximately $29 billion to states and localities, which is about 60 percent of payments estimated for fiscal year 2009. Although this pattern may not continue for the remaining 3-1/2 months, at present spending is slightly ahead of estimates. More than 90 percent of the $29 billion in federal outlays has been provided through the increased Federal Medical Assistance Percentage (FMAP) grant awards and the State Fiscal Stabilization Fund administered by the Department of Education. Figure 1 shows the original estimate of federal outlays to states and localities under the Recovery Act compared with actual federal outlays as reported by federal agencies on www.recovery.gov. According to the Office of Management and Budget (OMB), an estimated $149 billion in Recovery Act funding will be obligated to states and localities in fiscal year 2009. Our work for our July bimonthly report focused on nine federal programs, selected primarily because they have begun disbursing funds to states and include programs with significant amounts of Recovery Act funds, programs receiving significant increases in funding, and new programs. Recovery Act funding of some of these programs is intended for further disbursement to localities. Together, these nine programs are estimated to account for approximately 87 percent of federal Recovery Act outlays to states and localities in fiscal year 2009. Figure 2 shows the distribution by program of anticipated federal Recovery Act spending in fiscal year 2009 to states and localities.
States and Localities Are Using Recovery Act Funds for Purposes of the Act and to Help Address Fiscal Stresses
Increased FMAP Has Helped States Finance Their Growing Medicaid Programs, but Concerns Remain about Compliance with Recovery Act Provisions
The Recovery Act provides eligible states with an increased FMAP for 27 months between October 1, 2008, and December 31, 2010. On February 25, 2009, the Centers for Medicare & Medicaid Services (CMS) made increased FMAP grant awards to states, and states may retroactively claim reimbursement for expenditures that occurred prior to the effective date of the Recovery Act. For the third quarter of fiscal year 2009, the increases in FMAP for the 16 states and the District of Columbia compared with the original fiscal year 2009 levels are estimated to range from 6.2 percentage points in Iowa to 12.24 percentage points in Florida, with the FMAP increase averaging almost 10 percentage points. When compared with the first two quarters of fiscal year 2009, the FMAP in the third quarter of fiscal year 2009 is estimated to have increased in 12 of the 16 states and the District. From October 2007 to May 2009, overall Medicaid enrollment in the 16 states and the District increased by 7 percent. In addition, each of the states and the District experienced an enrollment increase during this period, with the largest number of programs experiencing increases of 5 to 10 percent. However, the percentage increase in enrollment varied widely, ranging from just under 3 percent in California to nearly 20 percent in Colorado.
With regard to the states' receipt of the increased FMAP, all 16 states and the District had drawn down increased FMAP grant awards totaling just over $15.0 billion for the period from October 1, 2008, through June 29, 2009, which amounted to 86 percent of funds available. In addition, except for the initial weeks that increased FMAP funds were available, the weekly rate at which the sample states and the District have drawn down these funds has remained relatively constant. States reported that they are using or are planning to use the funds that have become freed up as a result of increased FMAP for a variety of purposes. Most commonly, states reported that they are using or planning to use freed-up funds to cover their increased Medicaid caseload, to maintain current benefits and eligibility levels, and to help finance their respective state budgets. Several states noted that given the poor economic climate in their respective states, these funds were critical in their efforts to maintain Medicaid coverage at current levels. Medicaid officials from many states and the District raised concerns about their ability to meet the Recovery Act requirements and, thus, maintain eligibility for the increased FMAP. While officials from several states spoke positively about CMS's guidance related to FMAP requirements, at least nine states and the District reported they wanted CMS to provide additional guidance regarding (1) how they report daily compliance with prompt pay requirements, (2) how they report monthly on increased FMAP spending, and (3) whether certain programmatic changes would affect their eligibility for funds. For example, Medicaid officials from several states told us they were hesitant to implement minor programmatic changes, such as changes to prior authorization requirements, pregnancy verifications, or ongoing rate changes, out of concern that doing so would jeopardize their eligibility for increased FMAP. In addition, at least three states raised concerns that glitches related to new or updated information systems used to generate provider payments could affect their eligibility for these funds. Specifically, Massachusetts Medicaid officials said they are implementing a new provider payment system that will generate payments to some providers on a monthly rather than daily basis and would like guidance from CMS on the availability of waivers for the prompt payment requirement. A CMS official told us that the agency is in the process of finalizing its guidance to states on reporting compliance with the prompt payment requirement of the Recovery Act but did not know when this guidance would be publicly available. However, the official noted that, in the near term, the agency intends to issue a new Fact Sheet, which will include questions and answers on a variety of issues related to the increased FMAP. Due to the variability of state operations, funding processes, and political structures, CMS has worked with states on a case-by-case basis to discuss and resolve issues that arise. Specifically, communications between CMS and several states indicate efforts to clarify issues related to the contributions to the state share of Medicaid spending by political subdivisions or to rainy-day funds.
States Are Using Highway Infrastructure Funds Mainly for Pavement Improvements and Are Generally Complying with Recovery Act Requirements
The Recovery Act provides funding to the states for restoration, repair, and construction of highways and other eligible surface transportation projects.
The act requires that 30 percent of these funds be suballocated for projects in metropolitan and other areas of the state. In March 2009, $26.7 billion was apportioned to all 50 states and the District of Columbia (District) for highway infrastructure and other eligible projects. As of June 25, 2009, $15.9 billion of the funds had been obligated for over 5,000 projects nationwide, and $9.2 billion had been obligated for nearly 2,600 projects in the 16 states and the District that are the focus of GAO's review. Almost half of Recovery Act highway obligations nationwide have been for pavement improvements. Specifically, $7.8 billion of the $15.9 billion obligated nationwide as of June 25, 2009, is being used for projects such as reconstructing or rehabilitating deteriorated roads, including $3.6 billion for road resurfacing projects. Many state officials told us they selected a large percentage of resurfacing and other pavement improvement projects because they did not require extensive environmental clearances, were quick to design, could be quickly obligated and bid, could employ people quickly, and could be completed within 3 years. In addition, $2.7 billion, or about 17 percent of Recovery Act funds nationally, has been obligated for pavement-widening projects, and around 10 percent has been obligated for the replacement, improvement, or rehabilitation of bridges. As of June 25, 2009, $233 million had been reimbursed nationwide by the Federal Highway Administration (FHWA) and $96.4 million had been reimbursed in the 16 states and the District. States are just beginning to get projects awarded so that contractors can begin work, and U.S. Department of Transportation (DOT) officials told us that although funding has been obligated for more than 5,000 projects, it may be months before states can request reimbursement. Once contractors mobilize and begin work, states make payments to these contractors for completed work and may request reimbursement from FHWA. FHWA told us that once funds are obligated for a project, it may take 2 or more months for a state to bid and award the work to a contractor and have work begin. According to state officials, because an increasing number of contractors are looking for work, bids for Recovery Act contracts have come in under estimates. State officials told us that bids for the first Recovery Act contracts were ranging from around 5 percent to 30 percent below the estimated cost. Several state officials told us they expect this trend to continue until the economy substantially improves and contractors have enough other work. Funds appropriated for highway infrastructure spending must be used as required by the Recovery Act. States are required to do the following: Ensure that 50 percent of apportioned Recovery Act funds are obligated within 120 days of apportionment (before June 30, 2009) and that the remaining apportioned funds are obligated within 1 year. The 50 percent rule applies only to funds apportioned to the state and not to the 30 percent of funds required by the Recovery Act to be suballocated, primarily based on population, for metropolitan, regional, and local use. The Secretary of Transportation is to withdraw and redistribute to other states any amount that is not obligated within these time frames. Give priority to projects that can be completed within 3 years and to projects located in economically distressed areas. These areas are defined by the Public Works and Economic Development Act of 1965, as amended.
According to this act, to qualify as an economically distressed area, an area must meet one or more of three criteria related to income and unemployment based on the most recent federal or state data. Certify that the state will maintain the level of spending for the types of transportation projects funded by the Recovery Act that it planned to spend as of the day the Recovery Act was enacted. As part of this certification, the governor of each state is required to identify the amount of funds the state plans to expend from state sources from February 17, 2009, through September 30, 2010. All states have met the first Recovery Act requirement that 50 percent of their apportioned funds be obligated within 120 days. Of the $18.7 billion nationally that is subject to this provision (roughly the 70 percent of the $26.7 billion apportionment that was not required to be suballocated), 69 percent was obligated as of June 25, 2009. The percentage of funds obligated nationwide and in each of the states included in our review is shown in figure 3. The second Recovery Act requirement is to give priority to projects that can be completed within 3 years and to projects located in economically distressed areas. Officials from most states reported they expect all or most projects funded with Recovery Act funds to be completed within 3 years. We found that due to the need to select projects and obligate funds quickly, many states first selected projects based on other factors and only later identified to what extent these projects fulfilled the requirement to give priority to projects in economically distressed areas. According to the American Association of State Highway and Transportation Officials, in December 2008, 2 months prior to enactment of the Recovery Act, states had already identified more than 5,000 “ready-to-go” projects as possible selections for federal stimulus funding. Officials from several states also told us they had selected projects prior to the enactment of the Recovery Act and that they only gave consideration to economically distressed areas after they received guidance from DOT. States also based project selection on other priorities. State officials we met with said they considered factors tied to their own state priorities, such as geographic distribution and a project's potential for job creation or other economic benefits. The use of state planning criteria or funding formulas to distribute federal and state highway funds was one factor that we found affected states' implementation of the Recovery Act's prioritization requirements. According to officials in North Carolina, for instance, the state used its statutory Equity Allocation Formula to determine how highway infrastructure investment funds would be distributed. Similarly, in Texas, state officials said they first selected highway preservation projects by allocating a specific amount of funding to each of the state's 25 districts, where projects were identified that addressed the most pressing needs. Officials then gave priority for funding to those projects that were in economically distressed areas. We also found some instances of states developing their own eligibility requirements using data or criteria not specified in the Public Works and Economic Development Act, as amended. According to the act, the Secretary of Commerce, not individual states, has the authority to determine the eligibility of an area that does not meet the first two criteria of the act. In each of these cases, FHWA approved the use of the states' alternative criteria, but it is not clear on what authority FHWA approved these criteria.
For example:
Arizona based the identification of economically distressed areas on home foreclosure rates and disadvantaged business enterprises—data not specified in the Public Works Act. Arizona officials said they used alternative criteria because the initial determination of economic distress based on the act's criteria excluded three of Arizona's largest and most populous counties. According to state officials, these counties contain substantial areas that are clearly economically distressed and include all or substantial portions of major Indian reservations and many towns and cities hit especially hard by the economic downturn.
Illinois based its classification on increases in the number of unemployed persons and the unemployment rate, whereas the act bases this determination on how a county's unemployment rate compares with the national average unemployment rate. According to FHWA, Illinois opted to explore other means of measuring recent economic distress because the initial determination of economic distress based on the act's criteria did not appear to accurately reflect the recent economic downturn in the state. Illinois's use of alternative criteria resulted in 21 counties being identified as economically distressed that would not have been so classified following the act's criteria.
In commenting on a draft of our report, DOT agreed that states must give priority to projects located in economically distressed areas, but said that states must balance all the Recovery Act project selection criteria when selecting projects, including giving preference to activities that can be started and completed expeditiously, using funds in a manner that maximizes job creation and economic benefit, and other factors. While we agree with DOT that there is no absolute primacy of economically distressed area projects in the sense that they must always be started first, the specific directives in the act that apply to highway infrastructure are that priority is to be given to projects that can be completed in 3 years and are located in economically distressed areas. DOT also stated that the basic approach used by selected states to apply alternative criteria is consistent with the Public Works and Economic Development Act and its implementing regulations on economically distressed areas because it makes use of flexibilities provided by the act to more accurately reflect changing economic conditions. However, the result of DOT's interpretation would be to allow states to prioritize projects based on criteria that are not mentioned in the highway infrastructure investment portion of the Recovery Act or the Public Works Act, without the involvement of the Secretary or Department of Commerce. We plan to continue to monitor states' implementation of the economically distressed area requirements and interagency coordination at the federal level in future reports. Finally, the states are required to certify that they will maintain the level of state effort for programs covered by the Recovery Act. With one exception, the states have completed these certifications, but they face challenges. Maintaining a state's level of effort can be particularly important in the highway program. We have found that the preponderance of evidence suggests that increasing federal highway funds influences states and localities to substitute federal funds for funds they otherwise would have spent on highways.
As we previously reported, substitution makes it difficult to target an economic stimulus package so that it results in a dollar-for-dollar increase in infrastructure investment. Most states revised the initial certifications they submitted to DOT. As we reported in April, many states submitted explanatory certifications—such as stating that the certification was based on the “best information available at the time”—or conditional certifications, meaning that the certification was subject to conditions or assumptions, future legislative action, future revenues, or other conditions. On April 22, 2009, the Secretary of Transportation sent a letter to each of the nation's governors and provided additional guidance, including that conditional and explanatory certifications were not permitted, and gave states the option of amending their certifications by May 22. Each of the 16 states and the District selected for our review resubmitted their certifications. According to DOT officials, the department has concluded that the form of each certification is consistent with the additional guidance, with the exception of Texas. Texas submitted an amended certification on May 27, 2009, which contained qualifying language explaining that the Governor could not certify any expenditure of funds until the legislature passed an appropriation act. According to DOT officials, as of June 25, 2009, the status of Texas' revised certification remains unresolved. Texas officials told us the state plans to submit a revised certification letter, removing the qualifying language. For the remaining states, while DOT has concluded that the form of the revised certifications is consistent with the additional guidance, it is currently evaluating whether the states' method of calculating the amounts they planned to expend for the covered programs is in compliance with DOT guidance. States face drastic fiscal challenges, and most states project that their fiscal year 2009 and 2010 revenue collections will be well below earlier estimates. In the face of these challenges, some states told us that meeting the maintenance-of-effort requirements over time will be difficult. For example, federal and state transportation officials in Illinois told us that to meet its maintenance-of-effort requirements in the face of lower-than-expected fuel tax receipts, the state would have to use general fund or other revenues to cover any shortfall in the level of effort stated in its certification. Mississippi transportation officials are concerned about the possibility of statewide, across-the-board spending cuts in 2010. According to the Mississippi transportation department's budget director, the agency will try to absorb any budget reductions in 2010 by reducing administrative expenses to maintain the state's level of effort.
Most States We Visited Have Received State Fiscal Stabilization Funds and Have Planned to Allocate Most Education Stabilization Funds to LEAs
The Recovery Act created a State Fiscal Stabilization Fund (SFSF) in part to help state and local governments stabilize their budgets by minimizing budgetary cuts in education and other essential government services, such as public safety. Beginning in March 2009, the Department of Education issued a series of fact sheets, letters, and other guidance to states on the SFSF.
Specifically, a March fact sheet, the Secretary's April letter to Governors, and program guidance issued in April and May mention that the purposes of the SFSF include helping stabilize state and local budgets, avoiding reductions in education and other essential services, and ensuring that local educational agencies (LEAs) and public institutions of higher education (IHEs) have resources to “avert cuts and retain teachers and professors.” The documents also link educational progress to economic recovery and growth and identify four principles to guide the distribution and use of Recovery Act funds: (1) spend funds quickly to retain and create jobs; (2) improve student achievement through school improvement and reform; (3) ensure transparency, public reporting, and accountability; and (4) invest one-time Recovery Act funds thoughtfully to avoid unsustainable continuing commitments after the funding expires, known as the “funding cliff.” After providing assurances to maintain state support for education at least at fiscal year 2006 levels, states are required to use the education stabilization fund to restore state support to the greater of fiscal year 2008 or 2009 levels for elementary and secondary education, public IHEs, and, if applicable, early childhood education programs. States must distribute these funds to school districts using the primary state education formula but maintain discretion in how funds are allocated to public IHEs. If, after restoring state support for education, additional funds remain, the state must allocate those funds to school districts according to the Elementary and Secondary Education Act of 1965 (ESEA), Title I, Part A funding formula. On the other hand, if a state's education stabilization fund allocation is insufficient to restore state support for education, then a state must allocate funds in proportion to the relative shortfall in state support to public school districts and public IHEs. Education stabilization funds must be allocated to school districts and public IHEs and cannot be retained at the state level. Once education stabilization funds are awarded to school districts and public IHEs, they have considerable flexibility over how they use those funds. School districts are allowed to use education stabilization funds for any allowable purpose under ESEA, the Individuals with Disabilities Education Act (IDEA), the Adult Education and Family Literacy Act, or the Carl D. Perkins Career and Technical Education Act of 2006 (Perkins Act), subject to some prohibitions on using funds for, among other things, sports facilities and vehicles. In particular, Education's guidance states that because allowable uses under the Impact Aid provisions of ESEA are broad, school districts have discretion to use education stabilization funds for a broad range of purposes, such as salaries of teachers, administrators, and support staff and purchases of textbooks, computers, and other equipment. The Recovery Act allows public IHEs to use education stabilization funds in such a way as to mitigate the need to raise tuition and fees, as well as for the modernization, renovation, and repair of facilities, subject to certain limitations. However, the Recovery Act prohibits public IHEs from using education stabilization funds for such things as increasing endowments; modernizing, renovating, or repairing sports facilities; or maintaining equipment.
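The restoration and allocation rules described above amount to a simple algorithm. The sketch below, written in Python purely for illustration, restates that logic; the function and variable names are ours rather than Education's, the shortfall amounts (support restored to the greater of fiscal year 2008 or 2009 levels, less current support) are assumed to be computed by the caller, and the many additional conditions in the program's guidance are omitted.

```python
def allocate_education_stabilization(award, k12_shortfall, ihe_shortfall, title1_shares):
    """Illustrative sketch of the SFSF education stabilization allocation rules.

    award          -- the state's education stabilization allocation, in dollars
    k12_shortfall  -- amount needed to restore elementary and secondary education
                      support to the greater of fiscal year 2008 or 2009 levels
    ihe_shortfall  -- amount needed to restore public IHE support to the same benchmark
    title1_shares  -- {district: share} under the ESEA Title I, Part A formula
                      (shares are assumed to sum to 1)
    """
    total_shortfall = k12_shortfall + ihe_shortfall

    if award >= total_shortfall:
        # Enough to restore both sectors fully; any remainder flows to school
        # districts according to the Title I, Part A formula.
        remainder = award - total_shortfall
        extra_to_districts = {d: remainder * s for d, s in title1_shares.items()}
        return k12_shortfall, ihe_shortfall, extra_to_districts

    # Insufficient to restore support fully: allocate in proportion to the
    # relative shortfall in each sector.
    k12_amount = award * k12_shortfall / total_shortfall
    ihe_amount = award - k12_amount
    return k12_amount, ihe_amount, {}
```

Within the elementary and secondary amount, a state would then distribute funds to districts using its primary state education formula, as described above, while retaining discretion over how the IHE amount is divided among institutions.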
Education's SFSF guidance expressly prohibits states from placing restrictions on LEAs' use of education stabilization funds, beyond those in the law, but allows states some discretion in placing limits on how IHEs may use these funds. The SFSF provides states and school districts with additional flexibility, subject to certain conditions, to help them address fiscal challenges. For example, the Secretary of Education is granted authority to permit waivers of state maintenance-of-effort (MOE) requirements if a state certifies that state education spending will not decrease as a percentage of total state revenues. Education issued guidance on the MOE requirement, including the waiver provision, on May 1, 2009. Also, the Secretary may permit a state or school district to treat education stabilization funds as nonfederal funds for the purpose of meeting MOE requirements for any program administered by Education, subject to certain conditions. As of June 29, 2009, Education had not provided specific guidance on the process for states and school districts to apply for the Secretary's approval. States have broad discretion over how the $8.8 billion in the SFSF government services fund is used. The Recovery Act provides that these funds must be used for public safety and other government services and that these services may include assistance for education, as well as modernization, renovation, and repairs of public schools or IHEs. On April 1, 2009, Education made at least 67 percent of each state's SFSF funds available, subject to the receipt of an application containing state assurances, information on state levels of support for education and estimates of restoration amounts, and baseline data demonstrating state status on each of the four education reform assurances. If a state could not certify that it would meet the MOE requirement, Education required it to certify that it would meet the requirements for receiving a waiver—that is, that education spending would not decrease relative to total state revenues. In determining the state level of support for elementary and secondary education, Education required states to use their primary formula for distributing funds to school districts but also allowed states some flexibility in broadening this definition. For IHEs, states have some discretion in how they establish the state level of support, with the provision that they cannot include support for capital projects, research and development, or amounts paid in tuition and fees by students. In order to meet statutory requirements for states to establish their current status regarding each of the four required programmatic assurances, Education provided each state with the option of using baseline data Education had identified or providing another source of baseline data. Some of the data provided by Education were derived from self-reported data submitted annually by the states to Education as part of their Consolidated State Performance Reports (CSPR), but Education also relied on data from third parties, including the Data Quality Campaign (DQC), the National Center for Educational Achievement (NCEA), and Achieve. Education has reviewed applications for completeness as they arrive and has awarded states their funds once it determined that all assurances and required information had been submitted. Education set the application deadline for July 1, 2009.
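Expressed schematically, the waiver condition described above compares education spending as a share of total state revenues across years. The exact years compared and the definitions of spending and revenues are set out in Education's guidance, so the inequality below is only an illustrative restatement, not the regulatory test itself:

\[
\frac{\text{education spending}_{\text{waiver year}}}{\text{total state revenues}_{\text{waiver year}}}
\;\ge\;
\frac{\text{education spending}_{\text{preceding year}}}{\text{total state revenues}_{\text{preceding year}}}
\]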
On June 24, 2009, Education issued guidance to states informing them that they must amend their applications if there are changes to the reported levels of state support that were used to determine maintenance of effort or to calculate restoration amounts. As of June 30, 2009, of the 16 states and the District of Columbia covered by our review, only Texas had not submitted an SFSF application. Pennsylvania recently submitted an application but had not yet received funding. The remaining 14 states and the District of Columbia had submitted applications, and Education had made available to them a total of about $17 billion in initial funding. As of June 26, 2009, only 5 of these states had drawn down SFSF Recovery Act funds. In total, about 25 percent of available funds had been drawn down by these states. Three of the selected states—Florida, Massachusetts, and New Jersey—said they would not meet the maintenance-of-effort requirements but would meet the eligibility requirements for a waiver and that they would apply for a waiver. Most of the states' applications show that they plan to provide the majority of education stabilization funds to LEAs, with the remainder of funds going to IHEs. Several states and the District of Columbia estimated in their applications that they would have funds remaining beyond those that would be used to restore education spending in fiscal years 2009 and 2010. These funds can be used to restore education spending in fiscal year 2011, with any amount left over to be distributed to LEAs. States have flexibility in how they allocate education stabilization funds among IHEs but, once they establish their state funding formula, not in how they allocate the funds among LEAs. Florida and Mississippi allocated funds among their IHEs, including universities and community colleges, using formulas based on factors such as enrollment levels. Other states allocated SFSF funds taking into consideration the budget conditions of the IHEs. Regarding LEAs, most states planned to allocate funds based on states' primary funding formulas. Many states are using a state formula based on student enrollment weighted by characteristics of students and LEAs. For example, Colorado's formula accounts for the number of students at risk, while the formula used by the District allocates funds to LEAs using weights for each student based on the relative cost of educating students with specific characteristics. For example, an official from Washington, D.C. Public Schools said a student who is an English language learner may cost more to educate than a similar student who is fluent in English. States may use the government services portion of SFSF for education but have discretion to use the funds for a variety of purposes. Officials from Florida, Illinois, New Jersey, and New York reported that their states plan to use some or most of their government services funds for educational purposes. Other states are applying the funds to public safety. For example, according to state officials, California is using the government services fund for its corrections system, and Georgia will use the funds for salaries of state troopers and staff of forensic laboratories and state prisons. Officials in many school districts told us that SFSF funds would help offset state budget cuts and would be used to maintain current levels of education funding. However, many school district officials also reported that using SFSF funds for education reforms was challenging given other, more pressing fiscal needs.
Although their plans are generally not finalized, officials in many school districts we visited reported that their districts are preparing to use SFSF funds to prevent teacher layoffs, hire new teachers, and provide professional development programs. Most school districts will use the funding to help retain jobs that would have been cut without SFSF funding. For example, Miami-Dade officials estimate that the stabilization funds will help them save nearly 2,000 teaching positions. State and school district officials in eight states we visited (California, Colorado, Florida, Georgia, Massachusetts, Michigan, New York, and North Carolina) also reported that SFSF funding will allow their state to retain positions, including teaching positions that would have been eliminated without the funding. In the Richmond County School System in Georgia, officials noted they plan to retain positions that support the district's schools, such as teachers, paraprofessionals, nurses, media specialists, and guidance counselors. Local officials in Mississippi reported that budget-related hiring freezes had hindered their ability to hire new staff, but because of SFSF funding, they now plan to hire. In addition, local officials in a few states told us they plan to use the funding to support teachers. For example, officials in the Waterloo Community and Ottumwa Community School Districts in Iowa as well as officials from Miami-Dade County in Florida cited professional development as a potential use of funding to support teachers. Although school districts are preventing layoffs and continuing to provide educational services with the SFSF funding, most did not indicate they would use these funds to pursue educational reform. School district officials cited a number of barriers, which include budget shortfalls, lack of guidance from states, and insufficient planning time. In addition to retaining and creating jobs, school districts have considerable flexibility to use these resources over the next 2 years to advance reforms that could have long-term impact. However, a few school district officials reported that they lacked the capacity to address reform efforts when faced with teacher layoffs and deep budget cuts. In Flint, Michigan, officials reported that SFSF funds will be used to cope with budget deficits rather than to advance programs such as early childhood education or the repair of public school facilities. According to the Superintendent of Flint Community Schools, the infrastructure in Flint is deteriorating, and no new school buildings have been built in over 30 years. Flint officials said they would like to use SFSF funds for renovating buildings and other programs, but the SFSF funds are needed to maintain current education programs. Officials in many school districts we visited reported having inadequate guidance from their state on using SFSF funding, making reform efforts more difficult to pursue. School district officials in most states we visited reported they lacked adequate guidance from their state to plan and report on the use of SFSF funding. Without adequate guidance and time for planning, school district officials told us that preparing for the funds was difficult. At the time of our visits, several school districts were unaware of their funding amounts, which, officials in two school districts said, created additional challenges in planning for the 2009-2010 school year.
One charter school we visited in North Carolina reported that layoffs would be required unless the state notifies it soon how much SFSF funding it will receive. State officials in North Carolina, as well as in several other states, told us they are waiting for the state legislature to pass the state budget before finalizing SFSF funding amounts for school districts.
IHEs Plan to Use SFSF Funds for Faculty Salaries and Other Purposes and Expect the Funds to Save Jobs and Mitigate Tuition Increases
Although many IHEs had not finalized plans for using SFSF funds, the most common expected use for the funds at the IHEs we visited was to pay salaries of IHE faculty and staff. Officials at most of the IHEs we visited told us that, due to budget cuts, their institutions would have faced difficult reductions in faculty and staff if they were not receiving SFSF funds. Other IHEs expect to use SFSF funds later in the year to pay salaries of certain employees. Several IHEs we visited are considering other uses for SFSF funds. Officials at the Borough of Manhattan Community College in New York City want to use some of their SFSF funds to buy energy-saving light bulbs and to make improvements in the college's very limited space, such as by creating tutoring areas and study lounges. Northwest Mississippi Community College wants to use some of the funds to increase e-learning capacity to serve the institution's rapidly increasing number of students. Several other IHEs plan to use some of the SFSF funds for student financial aid. Because many IHEs expect to use SFSF funds to pay salaries of current employees that they likely would not have been able to pay without the SFSF funds, IHE officials said that SFSF funds will save jobs. Officials at several IHEs noted that this will have a positive impact on the educational environment, such as by preventing increases in class size and enabling the institutions to offer the classes that students need to graduate. In addition to preserving existing jobs, some IHEs anticipate creating jobs with SFSF funds. Besides saving and creating jobs at IHEs, officials noted that SFSF monies will have an indirect impact on jobs in the community. IHE officials also noted that SFSF funds will indirectly improve employment because some faculty being paid with the funds will help unemployed workers develop new skills, including skills in fields, such as health care, that have a high demand for trained workers. State and IHE officials also believe that SFSF funds are reducing the size of tuition and fee increases. Our report provides additional details on the use of Recovery Act funds for these three programs in the 16 selected states and the District.
Other Selected Programs
In addition to Medicaid FMAP, Highway Infrastructure Investment, and SFSF, we also reviewed six other programs receiving Recovery Act funds. These programs are:
Title I, Part A of the Elementary and Secondary Education Act of 1965 (ESEA)
Parts B and C of the Individuals with Disabilities Education Act (IDEA)
Workforce Investment Act (WIA) Youth Program
Public Housing Capital Fund
Edward Byrne Memorial Justice Assistance Grant (JAG) Program
Weatherization Assistance Program
Additional detail regarding the states' and localities' use of funds for these programs is available in the full report also being released today, GAO-09-829.
Individual state summaries for the 16 selected states and the District are accessible through GAO's recovery page at www.gao.gov/recovery and in an electronic supplement, GAO-09-830SP.
Recovery Act Funding Helped States Address Budget Challenges
State revenue continued to decline, and states used Recovery Act funding to reduce some of their planned budget cuts and tax increases to close current and anticipated budget shortfalls for fiscal years 2009 and 2010. Of the 16 states and the District, 15 estimate that fiscal year 2009 general fund revenue collections will be less than in the previous fiscal year. For two of the selected states—Iowa and North Carolina—revenues were lower than projected but not less than in the previous fiscal year. As shown in figure 4, data from the Bureau of Economic Analysis (BEA) also indicate that the rate of state and local revenue growth has generally declined since the second quarter of 2005 and was negative in the fourth quarter of 2008 and the first quarter of 2009. Officials in most of the selected states and the District expect these revenue trends to contribute to budget gaps (estimated revenues less than estimated disbursements) anticipated for future fiscal years. All of the 16 states and the District forecasted budget gaps in state fiscal year 2009-2010 before budget actions were taken. Consistent with one of the purposes of the act, states' use of Recovery Act funds to stabilize their budgets helped them minimize and avoid reductions in services as well as tax increases. States took a number of actions to balance their budgets in fiscal year 2009-2010, including staff layoffs, furloughs, and program cuts. The use of Recovery Act funds affected the size and scope of some states' budgeting decisions, and many of the selected states reported they would have had to make further cuts to services and programs without the receipt of Recovery Act funds. For example, California, Colorado, Georgia, Illinois, Massachusetts, Michigan, New York, and Pennsylvania budget officials all stated that current or future budget cuts would have been deeper without the receipt of Recovery Act funds. Recovery Act funds helped cushion the impact of states' planned budget actions, but officials also cautioned that current revenue estimates indicate that additional state actions will be needed to balance future-year budgets. Future actions to stabilize state budgets will require continued awareness of the maintenance-of-effort (MOE) requirements for some federal programs funded by the Recovery Act. For example, Massachusetts officials expressed concerns regarding MOE requirements attached to federal programs, including those funded through the Recovery Act, as future across-the-board spending reductions could pose challenges for maintaining spending levels in these programs. State officials said that MOE requirements that call for maintaining spending levels based upon prior-year fixed dollar amounts will pose more of a challenge than those based upon a percentage of program spending relative to total state budget expenditures. In addition, some states reported accelerating their use of Recovery Act funds to stabilize deteriorating budgets. Many states, such as Colorado, Florida, Georgia, Iowa, New Jersey, and North Carolina, also reported tapping into their reserve or rainy-day funds in order to balance their budgets.
In most cases, the receipt of Recovery Act funds did not prevent the selected states from tapping into their reserve funds, but a few states reported that without the receipt of Recovery Act funds, withdrawals from reserve funds would have been greater. Officials from Georgia stated that although they have already used reserve funds to balance their fiscal year 2009 and 2010 budgets, they may use additional reserve funds if, at the end of fiscal year 2009, revenues are lower than the most recent projections. In contrast, New York officials stated they were able to avoid tapping into the state's reserve funds due to the funds made available as a result of the increased Medicaid FMAP funds provided by the Recovery Act.
Approaches to Developing Exit Strategies for End of Recovery Act Funding Influenced by Nature of State Budget Processes
States' approaches to developing exit strategies for the use of Recovery Act funds reflect the balanced-budget requirements in place for all of our selected states and the District. Budget officials referred to the temporary nature of the funds and fiscal challenges expected to extend beyond the timing of funds provided by the Recovery Act. Officials discussed a desire to avoid what they referred to as the “cliff effect” associated with the dates when Recovery Act funding ends for various federal programs. Budget officials in some of the selected states are preparing for the end of Recovery Act funding by using funds for nonrecurring expenditures and hiring limited-term positions to avoid creating long-term liabilities. A few states reported that although they are developing preliminary plans for the phasing out of Recovery Act funds, further planning has been delayed until revenue and expenditure projections are finalized.
States Have Implemented Various Internal Control Programs; However, Single Audit Guidance and Reporting Does Not Adequately Address Recovery Act Risk
Given that Recovery Act funds are to be distributed quickly, effective internal controls over the use of funds are critical to help ensure effective and efficient use of resources, compliance with laws and regulations, and accountability over Recovery Act programs. Internal controls include management and program policies, procedures, and guidance that help ensure effective and efficient use of resources; compliance with laws and regulations; prevention and detection of fraud, waste, and abuse; and the reliability of financial reporting. Management is responsible for the design and implementation of internal controls, and the states in our sample have a range of approaches for implementing their internal controls. Some states have internal control requirements in their state statutes, and others have undertaken internal control programs as management initiatives. In our sample, 7 states (California, Colorado, Florida, Michigan, Mississippi, New York, and North Carolina) have statutory requirements for internal control programs and activities. An additional 9 states (Arizona, Georgia, Illinois, Iowa, Massachusetts, New Jersey, Ohio, Pennsylvania, and Texas) have undertaken various internal control programs. In addition, the District of Columbia has taken limited actions related to its internal control program. An effective internal control program helps manage change in response to shifting environments and evolving demands and priorities, such as changes related to implementing the Recovery Act.
Risk assessment and monitoring are key elements of internal controls, and the states in our sample and the District have undertaken a variety of actions in these areas. Risk assessment involves performing comprehensive reviews and analyses of program operations to determine if internal and external risks exist and to evaluate the nature and extent of risks that have been identified. Approaches to risk analysis can vary across organizations because of differences in missions and the methodologies used to qualitatively and quantitatively assign risk levels. Monitoring activities include the systematic process of reviewing the effectiveness of the operation of the internal control system. These activities are conducted by management, oversight entities, and internal and external auditors. Monitoring enables stakeholders to determine whether the internal control system continues to operate effectively over time. Monitoring also provides information and feedback to the risk assessment process.
Challenges Exist in Tracking Recovery Act Funds
States and localities are responsible for tracking and reporting on Recovery Act funds. OMB has issued guidance to the states and localities that provides for separate identification—“tagging”—of Recovery Act funds so that specific reports can be created and transactions can be specifically identified as Recovery Act funds. The flow of federal funds to the states varies by program, the grantor agencies have varied grants management processes, and grants vary substantially in their types, purposes, and administrative requirements. Several states and the District of Columbia have created unique codes for their financial systems in order to tag the Recovery Act funds. Most state and local program officials told us that they will apply the existing controls and oversight processes that they currently apply to other program funds to oversee Recovery Act funds. In addition to being an important accountability mechanism, audit results can provide valuable information for use in management's risk assessment and monitoring processes. The single audit report, prepared to meet the requirements of the Single Audit Act, as amended (Single Audit Act), is a source of information on internal control and compliance findings and the underlying causes and risks. The report is prepared in accordance with OMB's implementing guidance in OMB Circular No. A-133, Audits of States, Local Governments, and Non-Profit Organizations, which provides guidance to auditors on selecting federal programs for audit and the related internal control and compliance audit procedures to be performed. In our April 23, 2009, report, we reported that the guidance and criteria in OMB Circular No. A-133 do not adequately address the substantial added risks posed by the new Recovery Act funding. Such risks may result from (1) new government programs, (2) the sudden increase in funds or programs that are new to the recipient entity, and (3) the expectation that some programs and projects will be delivered faster so as to inject funds into the economy. With some adjustment, the single audit could be an effective oversight tool for Recovery Act programs, addressing risks associated with all three of these factors.
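A minimal sketch of the fund “tagging” approach described above: a state accounting system might carry a dedicated funding-source code on each transaction so that Recovery Act activity can be filtered and reported separately. The code value, field names, and dollar amounts below are invented for illustration and are not drawn from any state's system or from OMB's guidance.

```python
from dataclasses import dataclass

# Hypothetical funding-source code used to tag Recovery Act (ARRA) transactions.
ARRA_CODE = "ARRA-2009"

@dataclass
class Transaction:
    program: str          # e.g., "Highway Infrastructure Investment"
    amount: float         # expenditure amount, in dollars
    funding_source: str   # funding-source code, e.g., ARRA_CODE or "STATE-GF"

def arra_expenditures_by_program(transactions):
    """Total expenditures tagged with the Recovery Act code, grouped by program."""
    totals = {}
    for t in transactions:
        if t.funding_source == ARRA_CODE:
            totals[t.program] = totals.get(t.program, 0.0) + t.amount
    return totals

# Example ledger: two tagged transactions and one regular state-funded transaction.
ledger = [
    Transaction("Highway Infrastructure Investment", 250_000.00, ARRA_CODE),
    Transaction("Weatherization Assistance Program", 40_000.00, ARRA_CODE),
    Transaction("Highway Infrastructure Investment", 100_000.00, "STATE-GF"),
]
print(arra_expenditures_by_program(ledger))
```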
Our April 2009 report on the Recovery Act included recommendations that OMB adjust the current audit process to:
focus the risk assessment auditors use to select programs to test for compliance with 2009 federal program requirements on Recovery Act funding;
provide for review of the design of internal controls during 2009 over programs to receive Recovery Act funding, before significant expenditures in 2010; and
evaluate options for providing relief related to audit requirements for low-risk programs to balance new audit responsibilities associated with the Recovery Act.
Since April, although OMB has taken several steps in response to our recommendations, these actions do not sufficiently address the risks leading to our recommendations. To focus auditor risk assessments on Recovery Act-funded programs and to provide guidance on internal control reviews for Recovery Act programs, OMB is working within the framework defined by existing mechanisms—Circular No. A-133 and the Compliance Supplement. In this context, OMB has made limited adjustments to its single audit guidance and is planning to issue additional guidance in mid-July 2009.
Focusing Auditors' Program Risk Assessments on Programs with Recovery Act Funding
On May 26, 2009, OMB issued the 2009 edition of the Circular A-133 Compliance Supplement. The new Compliance Supplement is intended to focus auditor risk assessment on Recovery Act funding by, among other things, (1) requiring that auditors specifically ask auditees about and be alert to expenditure of funds provided by the Recovery Act and (2) providing an appendix that highlights some areas of the Recovery Act impacting single audits. The appendix adds a requirement that large programs and program clusters with Recovery Act funding cannot be assessed as low risk for the purposes of program selection without clear documentation of the reasons they are considered low risk. It also calls for recipients to separately identify expenditures for Recovery Act programs on the Schedule of Expenditures of Federal Awards. However, OMB has not yet identified program groupings critical to auditors' selection of programs to be audited for compliance with program requirements. OMB Circular A-133 relies heavily on the amount of federal expenditures in a program during a fiscal year and whether findings were reported in the previous period to determine whether detailed compliance testing is required for that year. Although OMB is considering ways to cluster programs for single audit selection to make it more likely that Recovery Act programs would be selected as major programs subject to internal control and compliance testing, the dollar formulas would not change under this plan. This approach may not provide sufficient assurance that smaller, but nonetheless significant, Recovery Act-funded programs would be selected for audit. In addition, the 2009 Compliance Supplement does not yet provide specific auditor guidance for new programs funded by the Recovery Act, or for new compliance requirements specific to Recovery Act funding within existing programs, that may be selected as major programs for audit. OMB acknowledges that additional guidance is called for and plans to address some Recovery Act-related compliance requirements by mid-July 2009.
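For context on the dollar formulas mentioned above, the sketch below approximates how Circular No. A-133's sliding scale identifies larger ("Type A") programs, which drive major program selection. The thresholds are paraphrased from the Circular as we understand them and this is not audit guidance; auditors should rely on the Circular and the Compliance Supplement themselves.

```python
def type_a_threshold(total_federal_expenditures):
    """Approximate Circular A-133 'Type A' dollar threshold (illustrative only).

    The sliding scale below paraphrases the Circular's formula; the text of
    the Circular governs.
    """
    total = total_federal_expenditures
    if total <= 100_000_000:           # $100 million or less
        return max(300_000, 0.03 * total)
    elif total <= 10_000_000_000:      # more than $100 million, up to $10 billion
        return max(3_000_000, 0.003 * total)
    else:                              # more than $10 billion
        return max(30_000_000, 0.0015 * total)

def is_type_a(program_expenditures, total_federal_expenditures):
    """A program exceeding the threshold is a larger ('Type A') program and is
    generally a candidate for detailed compliance testing unless assessed as
    low risk; smaller Recovery Act programs can fall below the threshold."""
    return program_expenditures > type_a_threshold(total_federal_expenditures)

# Example: a $2 million Recovery Act program in a state expending $500 million in
# total federal awards falls below the roughly $3 million threshold.
print(type_a_threshold(500_000_000))      # 3000000.0
print(is_type_a(2_000_000, 500_000_000))  # False
```

The example illustrates the concern noted above: because selection is driven largely by dollar amounts and prior findings, a smaller but significant Recovery Act program may not be selected for detailed testing.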
Reviewing the Design of Internal Controls over Recovery Act-funded Programs before Significant Expenditures in 2010
To provide additional focus on internal control reviews, OMB has drafted guidance it plans to finalize in July 2009 that indicates the importance of such reviews and encourages auditors to communicate weaknesses to management early in the audit process, but does not add requirements for auditors to take these steps. Addressing this recommendation through the existing audit framework, however, would not change the reporting time frames and therefore would not address our concern that internal controls over Recovery Act programs should be reviewed before significant funding is expended. In addition, if the guidance is limited to major programs, it may not adequately consider Recovery Act program risks. Further, if this is done within the current single audit framework and reporting timelines, the auditor evaluation of internal control and related reporting will occur too late—after significant levels of federal expenditures have already occurred.
Providing Relief to Balance Expected Increased Workload
While OMB has noted the increased responsibilities falling on those responsible for performing single audits, it has not issued any proposals or plans to address this recommendation to date. A recent survey conducted by the staff of the National State Auditors Association (NSAA) highlighted the need for relief to overburdened state audit organizations that have experienced staffing reductions and furloughs. OMB officials told us they are considering reducing auditor workload by decreasing the number of risk assessments of smaller federal programs. Auditors conduct these risk assessments as part of the planning process to identify which federal programs will be subject to detailed internal control and compliance testing. We believe that this step alone will not provide sufficient relief to balance out the additional audit requirements for Recovery Act programs. Without action now, audit coverage of Recovery Act programs will not be sufficient to address Recovery Act risks, and the audit reporting that does occur will come after significant expenditures have already taken place. Congress is currently considering a bill that could provide some financial relief to auditors lacking the staff capacity necessary to handle the increased audit responsibilities associated with the Recovery Act. H.R. 2182 would amend the Recovery Act to provide for enhanced state and local oversight of activities conducted pursuant to the act. As passed by the House, H.R. 2182 would allow state and local governments to set aside 0.5 percent of Recovery Act funds, in addition to funds already allocated to administrative expenditures, to conduct planning and oversight. Chairman Towns, Ranking Member Issa, and this Committee are to be commended for their leadership in crafting H.R. 2182.
Single Audit Reporting Will Not Facilitate Timely Reporting of Recovery Act Program Findings and Risks
The single audit reporting deadline is too late to provide audit results in time for the audited entity to take action on deficiencies noted in Recovery Act programs. The Single Audit Act requires that recipients submit their single audit reports to the federal government no later than 9 months after the end of the period being audited. As a result, an audited entity may not receive the feedback needed to correct an identified internal control or compliance weakness until the latter part of the subsequent fiscal year.
For example, states that have a fiscal year end of June 30th have a reporting deadline of March 31st, which leaves program management only 3 months to take corrective action on any audit findings before the end of the subsequent fiscal year. For Recovery Act programs, significant expenditure of funds could occur during the period prior to the audit report being issued. The timing problem is exacerbated by the extensions to the 9-month deadline that are routinely granted by the awarding agencies, consistent with OMB guidance. For example, 13 of the 17 states in our sample have a June 30 fiscal year end, and 7 of these 13 states requested and received extensions for the March 31, 2009, submission of their fiscal year 2008 reporting package. The Health and Human Services Office of Inspector General (HHS OIG) is the cognizant agency for most of the states, including all of the states selected for review under the Recovery Act. According to an HHS OIG official, beginning in May 2009 HHS OIG adopted a policy of no longer approving requests for extensions of the due dates for single audit reporting package submissions. OMB officials have stated that they plan to stop allowing extensions of the reporting package due dates but have not issued any official guidance or memorandum to the agencies, OIGs, or federal award recipients. In order to realize the single audit's full potential as an effective Recovery Act oversight tool, OMB needs to take additional action to focus auditors' efforts on areas that can provide the most efficient, and most timely, results. As federal funding of Recovery Act programs accelerates in the next few months, we are particularly concerned that the Single Audit process may not provide the timely accountability and focus needed to assist recipients in making necessary adjustments to internal controls so that those controls achieve sufficient strength and capacity to provide assurance that the money is being spent as effectively as possible to meet program objectives. Efforts to Assess the Impact of Recovery Act Spending As recipients of Recovery Act funds and as partners with the federal government in achieving Recovery Act goals, states and local units of government are expected to invest Recovery Act funds with a high level of transparency and to be held accountable for results under the Recovery Act. Under the Recovery Act, direct recipients of the funds, including states and localities, are expected to report quarterly on a number of measures, including the use of funds and an estimate of the number of jobs created and the number of jobs retained. These measures are part of the recipient reports required under section 1512(c) of the Recovery Act and will be submitted by recipients starting in October 2009. OMB guidance described recipient reporting requirements under the Recovery Act's section 1512 as the minimum performance measures that must be collected, leaving it to federal agencies to determine additional information that would be required for oversight of individual programs funded by the Recovery Act, such as the Department of Energy Weatherization Assistance Program and the Department of Justice Edward Byrne Memorial Justice Assistance Grant (JAG) Program. In general, states are adapting information systems, issuing guidance, and beginning to collect data on jobs created and jobs retained, but questions remain about how to count jobs and measure performance under Recovery Act-funded programs. 
Over the last several months, OMB met regularly with state and local officials, federal agencies, and others to gather input on the reporting requirements and implementation guidance. OMB also worked with the Recovery Accountability and Transparency Board to design a nationwide data collection system that will reduce information reporting burdens on recipients by simplifying reporting instructions and providing a user-friendly mechanism for submitting required data. OMB will be testing this system in July. In response to requests for more guidance on the recipient reporting process and required data, OMB, after soliciting responses from an array of stakeholders, issued additional implementing guidance for recipient reporting on June 22, 2009. Among other areas, the new OMB guidance clarifies that recipients of Recovery Act funds are required to report only on jobs directly created or retained by Recovery Act-funded projects, activities, and contracts. Recipients are not expected to report on the employment impact on materials suppliers ("indirect" jobs) or on the local community ("induced" jobs). The OMB guidance also provides additional instruction on estimating the number of jobs created and retained by Recovery Act funding. OMB's guidance on the implementation of recipient reporting should help answer many of the questions and concerns raised by state and local program officials. However, federal agencies may need to do a better job of communicating the OMB guidance in a timely manner to their state counterparts and, as appropriate, issuing clarifying guidance on required performance measurement. OMB's guidance for reporting on job creation aims to shed light on the immediate uses of Recovery Act funding; however, reports from recipients of Recovery Act funds must be interpreted with care. For example, even accurate, consistent reports will reflect only a portion of the likely impact of the Recovery Act on national employment, since Recovery Act resources are also made available through tax cuts and benefit payments. OMB noted that a broader view of the overall employment impact of the Recovery Act will be covered in the estimates generated by the Council of Economic Advisers (CEA) using a macroeconomic approach. According to CEA, it will consider the direct jobs created and retained reported by recipients to supplement its analysis. Concluding Observations and Recommendations Since enactment of the Recovery Act in February 2009, OMB has issued three sets of guidance—on February 18, April 3, and, most recently, June 22, 2009—to announce spending and performance reporting requirements to help prime recipients and subrecipients of federal Recovery Act funds comply with these requirements. OMB has reached out to Congress, federal, state, and local government officials, grant and contract recipients, and the accountability community to get a broad perspective on what is needed to meet the high expectations set by Congress and the administration. Further, according to OMB's June guidance, it has worked with the Recovery Accountability and Transparency Board to deploy a nationwide data collection system at www.federalreporting.gov. As work proceeds on the implementation of the Recovery Act, OMB and the cognizant federal agencies have opportunities to build on the early efforts by continuing to address several important issues. 
These issues can be placed broadly into three categories, which have been revised from our last report to better reflect evolving events since April: (1) accountability and transparency requirements, (2) reporting on impact, and (3) communications and guidance. Accountability and Transparency Requirements Recipients of Recovery Act funding face a number of implementation challenges in this area. The act includes new programs and significant increases in funds out of normal cycles and processes. There is an expectation that many programs and projects will be delivered faster so as to inject funds into the economy, and the administration has indicated its intent to assure transparency and accountability over the use of Recovery Act funds. Issues regarding the Single Audit process and administrative support and oversight are important. Single Audit: The Single Audit process needs adjustments to provide appropriate risk-based focus and the necessary level of accountability over Recovery Act programs in a timely manner. In our April 2009 report, we reported that the guidance and criteria in OMB Circular No. A-133 do not adequately address the substantial added risks posed by the new Recovery Act funding. Such risks may result from (1) new government programs, (2) the sudden increase in funds or programs that are new to the recipient entity, and (3) the expectation that some programs and projects will be delivered faster so as to inject funds into the economy. With some adjustment, the Single Audit could be an effective oversight tool for Recovery Act programs because it can address risks associated with all three of these factors. April report recommendations: Our April report included recommendations that OMB adjust the current audit process to focus the risk assessment auditors use to select programs to test for compliance with 2009 federal program requirements on Recovery Act funding; provide for review of the design of internal controls during 2009 over programs to receive Recovery Act funding, before significant expenditures in 2010; and evaluate options for providing relief related to audit requirements for low-risk programs to balance new audit responsibilities associated with the Recovery Act. Status of April report recommendations: OMB has taken some actions and has other planned actions to help focus the program selection risk assessment on Recovery Act programs and to provide guidance on auditors' reviews of internal controls for those programs. However, we remain concerned that OMB's planned actions would not achieve the level of accountability needed to effectively respond to Recovery Act risks and would not provide for timely reporting on internal controls for Recovery Act programs. Therefore, we are re-emphasizing our previous recommendations in this area. To help auditors with single audit responsibilities meet the increased demands imposed on them by Recovery Act funding, we recommend that the Director of OMB take the following four actions: Consider developing requirements for reporting on internal controls during 2009 before significant Recovery Act expenditures occur as well as ongoing reporting after the initial report. Provide more focus on Recovery Act programs through the Single Audit to help ensure that smaller programs with high risk have audit coverage in the area of internal controls and compliance. Evaluate options for providing relief related to audit requirements for low-risk programs to balance new audit responsibilities associated with the Recovery Act. 
To the extent that options for auditor relief are not provided, develop mechanisms to help fund the additional Single Audit costs and efforts for auditing Recovery Act programs. Administrative Support and Oversight States have been concerned about the burden imposed by new requirements, increased accounting and management workloads, and strains on information systems and staff capacity at a time when they are under severe budgetary stress. April report recommendation: In our April report, we recommended that the Director of OMB clarify what Recovery Act funds can be used to support state efforts to ensure accountability and oversight, especially in light of enhanced oversight and coordination requirements. Status of April report recommendation: On May 11, 2009, OMB released a memorandum clarifying how state grantees could recover administrative costs of Recovery Act activities. Matter for Congressional Consideration Because a significant portion of Recovery Act expenditures will be in the form of federal grants and awards, the Single Audit process could be used as a key accountability tool over these funds. However, the Single Audit Act, enacted in 1984 and most recently amended in 1996, did not contemplate the risks associated with the current environment, in which large amounts of federal awards are being expended quickly through new programs, greatly expanded programs, and existing programs. The current Single Audit process is largely driven by the amount of federal funds expended by a recipient in order to determine which federal programs are subject to compliance and internal control testing. Not only does this model potentially miss smaller programs with high risk, but it also relies on audit reporting 9 months after the end of a grantee's fiscal year—far too late to preemptively correct deficiencies and weaknesses before significant expenditures of federal funds occur. Congress is considering a legislative proposal in this area and could address the following issues: To the extent that appropriate adjustments to the Single Audit process are not accomplished under the current Single Audit structure, Congress should consider amending the Single Audit Act or enacting new legislation that provides for more timely internal control reporting, as well as audit coverage for smaller Recovery Act programs with high risk. To the extent that additional audit coverage is needed to achieve accountability over Recovery Act programs, Congress should consider mechanisms to provide additional resources to support those charged with carrying out the Single Audit Act and related audits. Reporting on Impact Under the Recovery Act, responsibility for reporting on jobs created and retained falls to nonfederal recipients of Recovery Act funds. As such, states and localities have a critical role in identifying the degree to which Recovery Act goals are achieved. Performance reporting is broader than the jobs reporting required under section 1512 of the Recovery Act. OMB guidance requires that agencies collect and report performance information consistent with the agency's program performance measures. As described earlier in this report, some agencies have imposed additional performance measures on projects or activities funded through the Recovery Act. 
April report recommendation: In our April report, we recommended that given questions raised by many state and local officials about how best to determine both direct and indirect jobs created and retained under the Recovery Act, the Director of OMB should continue OMB's efforts to identify appropriate methodologies that can be used to (1) assess jobs created and retained from projects funded by the Recovery Act; (2) determine the impact of Recovery Act spending when job creation is indirect; and (3) identify those types of programs, projects, or activities that in the past have demonstrated substantial job creation or are considered likely to do so in the future, and consider whether the approaches taken to estimate jobs created and jobs retained in these cases can be replicated or adapted to other programs. Status of April report recommendation: OMB has been meeting on a regular basis with state and local officials, federal agencies, and others to gather input on reporting requirements and implementation guidance and has worked with the Recovery Accountability and Transparency Board on a nationwide data collection system. On June 22, OMB issued additional implementation guidance on recipient reporting of jobs created and retained. This guidance is responsive to much of what we said in our April report. It states that there are two different types of jobs reports under the Recovery Act and clarifies that recipient reports are to cover only direct jobs created or retained. "Indirect" jobs (employment impact on suppliers) and "induced" jobs (employment impact on communities) will be covered in Council of Economic Advisers (CEA) quarterly reports on employment, economic growth, and other key economic indicators. Consistent with the statutory language of the act, OMB's guidance states that these recipient reporting requirements apply to recipients who receive funding through discretionary appropriations, not to those receiving funds through either entitlement or tax programs or to individuals. It clarifies that the prime recipient, and not the subrecipient, is responsible for reporting section 1512 information on jobs created or retained. The June 2009 guidance also provides detailed instructions on how to calculate and report jobs as full-time equivalents (FTE). It also describes in detail the data model and reporting system to be used for the required recipient reporting on jobs. The guidance provided for reporting job creation aims to shed light on the immediate uses of Recovery Act funding and is reasonable in that context. It will be important, however, to interpret the recipient reports with care. As noted in the guidance, these reports are only one of the two distinct types of reports seeking to describe the jobs impact of the Recovery Act. CEA's quarterly reports will cover the impact on employment, economic growth, and other key economic indicators. Further, the recipient reports will not reflect the impact of resources made available through tax provisions or entitlement programs. Recipients are required to report no later than 10 days after the end of the calendar quarter. The first of these reports is due on October 10, 2009. After prime recipients and federal agencies perform data quality checks, detailed recipient reports are to be made available to the public no later than 30 days after the end of the quarter. Initial summary statistics will be available on www.recovery.gov. 
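As noted above, OMB's June 2009 guidance instructs recipients to report jobs created and retained as full-time equivalents (FTE). The following is a minimal sketch of how such an estimate might be computed, assuming the common convention of dividing Recovery Act-funded hours worked by the hours in a full-time schedule for the quarter; the worker hours and schedule length shown are hypothetical and are not taken from OMB's guidance.

```python
# Illustrative sketch only: expresses jobs as full-time equivalents (FTE) for one
# reporting quarter, assuming the convention of dividing hours worked on Recovery
# Act-funded activities by the hours in a full-time schedule for that quarter.
# All names and figures below are hypothetical, not taken from OMB's guidance.

FULL_TIME_HOURS_PER_QUARTER = 520  # assumed full-time schedule: 40 hours x 13 weeks

def quarterly_fte(funded_hours_worked, full_time_hours=FULL_TIME_HOURS_PER_QUARTER):
    """Return jobs expressed as FTEs for one reporting quarter."""
    return funded_hours_worked / full_time_hours

# Example: a hypothetical subrecipient whose Recovery Act-funded project employed
# three workers for 400, 520, and 130 hours, respectively, during the quarter.
hours_by_worker = [400, 520, 130]
print(round(quarterly_fte(sum(hours_by_worker)), 2))  # -> 2.02 FTE
```

A fixed denominator of this kind is what allows part-time and partial-quarter employment to be expressed as fractional FTEs and summed consistently across recipients.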
The guidance explicitly does not mandate a specific methodology for conducting quality reviews. Rather, federal agencies are directed to coordinate the application of definitions of material omission and significant reporting error to "ensure consistency" in the conduct of data quality reviews. Although recipients and federal agency reviewers are required to perform data quality checks, none are required to certify or approve data for publication. It is unclear how any issues identified during data quality reviews would be resolved and how frequently data quality problems would be identified in the reviews. We will continue to monitor data quality and the recipient reporting requirements. Our recommendations: To increase consistency in recipient reporting of jobs created and retained, the Director of OMB should work with federal agencies to have them provide program-specific examples of the application of OMB's guidance on recipient reporting of jobs created and retained. This would be especially helpful for programs that have not previously tracked and reported such metrics. Because performance reporting is broader than the jobs reporting required by section 1512, the Director of OMB should also work with federal agencies—perhaps through the Senior Management Councils—to clarify what new or existing program performance measures—in addition to jobs created and retained—recipients should collect and report in order to demonstrate the impact of Recovery Act funding. In addition to providing these program-specific examples of guidance, the Director of OMB should work with federal agencies to use other channels to educate state and local program officials on reporting requirements, such as Web- or telephone-based information sessions or other forums. Communications and Guidance Funding notification and program guidance: State officials expressed concerns regarding communication on the release of Recovery Act funds and their inability to determine when to expect federal agency program guidance. Once funds are released, there is no easily accessible, real-time procedure for ensuring that appropriate officials in states and localities are notified. Because half of the estimated spending programs in the Recovery Act will be administered by nonfederal entities, states wish to be notified when funds are made available to them for their use as well as when funding is received by other recipients within their state that are not state agencies. OMB does not have a master timeline for issuing federal agency guidance. OMB's preferred approach is to issue guidance incrementally. This approach potentially produces a more timely response and allows for mid-course corrections; however, this approach also creates uncertainty among state and local recipients responsible for implementing programs. We continue to believe that OMB can strike a better balance between developing timely and responsive guidance and providing a longer range time line that gives some structure to states' and localities' planning efforts. April report recommendation: In our April report, we recommended that to foster timely and efficient communications, the Director of OMB should develop an approach that provides dependable notification to (1) prime recipients in states and localities when funds are made available for their use, (2) states—where the state is not the primary recipient of funds but has a statewide interest in this information—and (3) all nonfederal recipients on planned releases of federal agency guidance and, if known, whether additional guidance or modifications are recommended. 
Status of April recommendation: OMB has made important progress in the type and level of information provided in its reports on Recovery.gov. Nonetheless, OMB has additional opportunities to more fully address the recommendations we made in April. By providing a standard format across disparate programs, OMB has improved its Funding Notification reports, making it easier for the public to track when funds become available. Agencies update their Funding Notification reports for each program individually whenever they make funds available. OMB has taken the additional step of disaggregating financial information, i.e., federal obligations and outlays by Recovery Act program and by state, in its Weekly Financial Activity Report. Both reports are available on www.recovery.gov. Our recommendation: The Director of OMB should continue to develop and implement an approach that provides easily accessible, real-time notification to (1) prime recipients in states and localities when funds are made available for their use, and (2) states—where the state is not the primary recipient of funds but has a statewide interest in this information. In addition, OMB should provide a long range time line for the release of federal guidance for the benefit of nonfederal recipients responsible for implementing Recovery Act programs. Recipient financial tracking and reporting guidance: In addition to employment-related reporting, OMB's guidance calls for the tracking of funds by the prime recipient, recipient vendors, and subrecipients receiving payments. OMB's guidance also allows that "prime recipients may delegate certain reporting requirements to subrecipients." Either the prime recipient or the subrecipient must report the D-U-N-S number (or an acceptable alternative) for any vendor or subrecipient receiving payments greater than $25,000. In addition, the prime recipient must report what was purchased and the amount, as well as the total number and amount of subawards of less than $25,000. By requiring the DUNS number, OMB's guidance provides a way to identify subrecipients by project, but this alone does not ensure data quality. The approach to tracking funds is generally consistent with the Federal Funding Accountability and Transparency Act (FFATA). Like the Recovery Act, the FFATA requires a publicly available Web site—USAspending.gov—to report financial information about entities awarded federal funds. Yet, significant questions have been raised about the reliability of the data on USAspending.gov, primarily because what is reported by the prime recipients is dependent on the unknown data quality and reporting capabilities of their subrecipients. For example, earlier this year, more than 2 years after passage of FFATA, the Congressional Research Service (CRS) questioned the reliability of the data on USAspending.gov. We share CRS's concerns about USAspending.gov, including incomplete and inaccurate data and other data quality problems. More broadly, these concerns also pertain to recipient financial reporting in accordance with the Recovery Act and its federal reporting vehicle, www.FederalReporting.gov, currently under development. Our recommendation: To strengthen the effort to track the use of funds, the Director of OMB should (1) clarify what constitutes appropriate quality control and reconciliation by prime recipients, especially for subrecipient data, and (2) specify who should best provide formal certification and approval of the data reported. 
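One illustration of the kind of quality control and reconciliation contemplated in the recommendation above is a simple automated completeness check against the $25,000 reporting threshold. The sketch below is purely illustrative: the field names and records are hypothetical and do not reflect OMB's actual data model or the www.FederalReporting.gov schema.

```python
# Illustrative sketch: flag payment records that appear to need a DUNS number (or
# an acceptable alternative) under the $25,000 reporting threshold described above.
# Field names and records are hypothetical, not OMB's actual data model.

THRESHOLD = 25_000

records = [
    {"payee": "Vendor A", "amount": 30_000, "duns": "123456789"},
    {"payee": "Subrecipient B", "amount": 26_500, "duns": None},
    {"payee": "Vendor C", "amount": 12_000, "duns": None},  # below threshold
]

def missing_duns(records, threshold=THRESHOLD):
    """Return payees over the threshold that lack a DUNS (or alternative) identifier."""
    return [r["payee"] for r in records if r["amount"] > threshold and not r["duns"]]

print(missing_duns(records))  # -> ['Subrecipient B']
```

A prime recipient could run checks of this kind on subrecipient submissions before certifying quarterly data, which is one way the open questions about who certifies and approves reported data could be made concrete.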
Agency-specific guidance: The Department of Transportation (DOT) and the Federal Highway Administration (FHWA) have yet to provide clear guidance regarding how states are to implement the Recovery Act requirement that economically distressed areas are to receive priority in the selection of highway projects for funding. We found substantial variation both in how states identified areas in economic distress and in how they prioritized project selection for these areas. As a result, it is not clear whether areas most in need are receiving priority in the selection of highway infrastructure projects, as Congress intended. While it is true that states have discretion in selecting and prioritizing projects, it is also important that this goal of the Recovery Act be met. Our recommendation: To ensure states meet Congress's direction to give areas with the greatest need priority in project selection, the Secretary of Transportation should develop clear guidance on identifying and giving priority to economically distressed areas that are in accordance with the requirements of the Recovery Act and the Public Works and Economic Development Act of 1965, as amended, and more consistent procedures for FHWA to use in reviewing and approving states' criteria. Agency Comments and Our Evaluation We received comments on a draft of our report and its recommendations from the U.S. Office of Management and Budget (OMB) and the U.S. Department of Transportation (DOT). U.S. Office of Management and Budget: OMB concurs with the overall objectives of our recommendations made to OMB in our report. OMB offered clarifications regarding the area of Single Audit and did not concur with some of our conclusions related to communications. What follows summarizes OMB's comments and our responses. Single Audit Act OMB agreed with the overall objectives of our recommendations and offered clarifications regarding the area of Single Audit. OMB also noted it believes that the new requirements for more rigorous internal control reviews will yield important short-term benefits and that the steps taken by state and local recipients to immediately initiate controls will withstand increased scrutiny later in the process. OMB commented that it has already taken and is planning actions to focus program selection risk assessment on Recovery Act programs and to increase the rigor of state and local internal controls on Recovery Act activities. However, our report points out that OMB has not yet completed critical guidance in these areas. Unless OMB changes the risk assessment process conducted for federal programs under Circular A-133, smaller but significantly risky Recovery Act programs may not receive adequate attention and scrutiny under the Single Audit process. OMB acknowledged that acceleration of internal control reviews could cause more work for state auditors, for which OMB and Congress should explore potential options for relief. This is consistent with the recommendations we make in this report. OMB also noted that our draft report did not offer a specific recommendation for achieving acceleration of internal control reporting. Because there are various ways to achieve the objective of early reporting on internal controls, we initially chose not to prescribe a specific method. 
For instance, OMB could require specific internal control certifications from federal award recipients meeting certain criteria as of a specified date, such as December 31, 2009, before significant Recovery Act expenditures occur. Those certifications could then be reviewed by the auditor as part of the regular single audit process. Alternatively, or in addition, OMB could require that the internal control portion of the single audit be completed early, with a report submitted 60 days after the recipient's year end. We look forward to continuing our dialogue with OMB on various options available to achieve the objective of early reporting on internal controls. We will also continue to review OMB's guidance in the area of single audits as such guidance is being developed. Communications OMB has made important progress relative to some communications. In particular, we agree with OMB's statements that it requires agencies to post guidance and funding information to agency Recovery Act websites, disseminates guidance broadly, and seeks out and responds to stakeholder input. In addition, OMB is planning a series of interactive forums to offer training and information to Recovery Act recipients on the process and mechanics of recipient reporting; these forums could also serve as a vehicle for additional communication. Moving forward and building on the progress it has made, OMB can take the following additional steps related to funding notification and guidance. First, OMB should require direct notification to key state officials when funds become available within a state. OMB has improved Funding Notification reports by providing a standard format across disparate programs, making it easier for the public to track when funds become available. However, these reports do not provide an easily accessible, real-time notification of when funds are available. OMB recognized the shared responsibilities of federal agencies and states in its April 3, 2009, guidance when it noted that federal agencies should expect states to assign a responsible office to oversee data collection to ensure quality, completeness, and timeliness of data submissions for recipient reporting. In return, states have expressed a need to know when funds flow into the state, regardless of which level of government or governmental entity within the state receives the funding, so that they can meet the accountability objectives of the Recovery Act. We continue to recommend more direct notification to (1) prime recipients in states and localities when funds are made available for their use, and (2) states that are not the primary recipient of funds but have a statewide interest in this information. Second, OMB should provide a long range time line for the release of federal guidance. In an attempt to be responsive to emerging issues and questions from the recipient community, OMB's preferred approach is to issue guidance incrementally. This approach potentially produces a more timely response and allows for mid-course corrections; however, it also creates uncertainty among state and local recipients. State and local officials expressed concerns that this incremental approach hinders their efforts to plan and administer Recovery Act programs. 
As a result, we continue to believe OMB can strike a better balance between developing timely and responsive guidance and providing a longer range time line so that states and localities can better anticipate which programs will be affected and when new guidance is likely to be issued. OMB's consideration of a master schedule and its acknowledgement of the extraordinary proliferation of program guidance in response to Recovery Act requirements seem to support a more structured approach. We appreciate that a longer range time line would need to be flexible so that OMB could also continue to issue guidance and clarifications in a timely manner as new issues and questions emerge. U.S. Department of Transportation: DOT generally agreed to consider the recommendation that it develop clear guidance on identifying and giving priority to economically distressed areas and more consistent procedures for reviewing and approving states' criteria. DOT agreed that states must give priority to projects located in economically distressed areas, but said that states must balance all the Recovery Act project selection criteria when selecting projects, including giving preference to activities that can be started and completed expeditiously, using funds in a manner that maximizes job creation and economic benefit, and other factors. While we agree with DOT that there is no absolute primacy of economically distressed area projects in the sense that they must always be started first, the specific directives in the act that apply to highway infrastructure are that priority is to be given to projects that can be completed in 3 years and are located in economically distressed areas. DOT also stated that the basic approach used by selected states to apply alternative criteria is consistent with the Public Works and Economic Development Act and its implementing regulations on economically distressed areas because it makes use of flexibilities provided by the Public Works Act to more accurately reflect changing economic conditions. However, the result of DOT's interpretation would be to allow states to prioritize projects based on criteria that are not mentioned in the highway infrastructure investment portion of the Recovery Act or the Public Works Act, without the involvement of the Secretary or the Department of Commerce. We plan to continue to monitor states' implementation of the economically distressed area requirements and interagency coordination at the federal level in future reports. Mr. Chairman, Representative Issa, and Members of the Committee, this concludes my statement. I would be pleased to respond to any questions you may have. Contacts For further information on this testimony, please contact J. Christopher Mihm, Managing Director for Strategic Issues, at (202) 512-6806 or [email protected]. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony is based on a GAO report being released today--the second in response to a mandate under the American Recovery and Reinvestment Act of 2009 (Recovery Act). The report addresses: (1) selected states' and localities' uses of Recovery Act funds, (2) the approaches taken by the selected states and localities to ensure accountability for Recovery Act funds, and (3) states' plans to evaluate the impact of Recovery Act funds. GAO's work for the report is focused on 16 states and certain localities in those jurisdictions as well as the District of Columbia--representing about 65 percent of the U.S. population and two-thirds of the intergovernmental federal assistance available. GAO collected documents and interviewed state and local officials. GAO analyzed federal agency guidance and spoke with Office of Management and Budget (OMB) officials and with program officials at the Centers for Medicare and Medicaid Services, and the Departments of Education, Energy, Housing and Urban Development, Justice, Labor, and Transportation. Across the United States, as of June 19, 2009, Treasury had outlayed about $29 billion of the estimated $49 billion in Recovery Act funds projected for use in states and localities in fiscal year 2009. More than 90 percent of the $29 billion in federal outlays has been provided through the increased Medicaid Federal Medical Assistance Percentage (FMAP) and the State Fiscal Stabilization Fund (SFSF) administered by the Department of Education. GAO's work focused on nine federal programs that are estimated to account for approximately 87 percent of federal Recovery Act outlays in fiscal year 2009 for programs administered by states and localities. Increased Medicaid FMAP Funding All 16 states and the District have drawn down increased Medicaid FMAP grant awards of just over $15 billion for October 1, 2008, through June 29, 2009, which amounted to almost 86 percent of funds available. Medicaid enrollment increased for most of the selected states and the District, and several states noted that the increased FMAP funds were critical in their efforts to maintain coverage at current levels. States and the District reported they are planning to use the increased federal funds to cover their increased Medicaid caseload and to maintain current benefits and eligibility levels. Due to the increased federal share of Medicaid funding, most state officials also said they would use freed-up state funds to help cope with fiscal stresses. Highway Infrastructure Investment As of June 25, DOT had obligated about $9.2 billion for almost 2,600 highway infrastructure and other eligible projects in the 16 states and the District and had reimbursed about $96.4 million. Across the nation, almost half of the obligations have been for pavement improvement projects because they did not require extensive environmental clearances, were quick to design, obligate and bid on, could employ people quickly, and could be completed within 3 years. State Fiscal Stabilization Fund As of June 30, 2009, of the 16 states and the District, only Texas had not submitted an SFSF application. Pennsylvania recently submitted an application but had not yet received funding. The remaining 14 states and the District had been awarded a total of about $17 billion in initial funding from Education--of which about $4.3 billion has been drawn down. School districts said that they would use SFSF funds to maintain current levels of education funding, particularly for retaining staff and current education programs. 
They also said that SFSF funds would help offset state budget cuts. Accountability States have implemented various internal control programs; however, federal Single Audit guidance and reporting do not fully address Recovery Act risk. The Single Audit reporting deadline is too late to provide audit results in time for the audited entity to take action on deficiencies noted in Recovery Act programs. Moreover, current guidance does not achieve the level of accountability needed to effectively respond to Recovery Act risks. Finally, state auditors need additional flexibility and funding to undertake the added Single Audit responsibilities under the Recovery Act. Impact Direct recipients of Recovery Act funds, including states and localities, are expected to report quarterly on a number of measures, including the use of funds and estimates of the number of jobs created and the number of jobs retained. The first of these reports is due in October 2009. OMB--in consultation with a broad range of stakeholders--issued additional implementing guidance for recipient reporting on June 22, 2009, that clarifies some requirements and establishes a central reporting framework.
Background The December 26, 2004, earthquake and tsunami in the Indian Ocean near Indonesia left more than 200,000 dead and 40,000 reported missing and caused an estimated $10 billion in damages to property and infrastructure such as buildings, roads, and bridges. The Indonesian province of Aceh, about 150 miles from the epicenter of the earthquake, experienced the heaviest loss of lives and damage to property and infrastructure, largely along the west coast. Figure 1 shows the tsunami-affected countries; numbers of dead, missing, and displaced persons; and estimated damage. In Indonesia, the affected infrastructure included a major road—a key transportation artery in the region—and numerous bridges along the west coast; the road was destroyed in many locations and severely damaged in many others. Figure 2 shows two of the destroyed road sections in December 2005, 1 year after the tsunami. We began monitoring USAID's delivery of assistance to the tsunami-affected countries, including its reconstruction of the Indonesian coastal road, in May 2005, and issued reports on our work in 2006 and 2007. In April 2006, we reported that USAID planned to construct and rehabilitate 150 miles of paved road between Banda Aceh and Meulaboh at an estimated cost of $245 million, or $1.6 million per mile, with an estimated completion date of September 2009. We noted that the initial plans and cost estimates for rehabilitating and constructing the road were based on limited site information because much of the road's planned route was inaccessible. We also reported that costs and schedules for the road construction project might exceed initial estimates owing to several factors, including growing costs for materials and labor, long-standing civil conflict in the region, and difficulties that the Indonesian government might encounter in acquiring land parcels needed for the road right-of-way. (The Japanese government separately agreed to rehabilitate the coastal road from Calang to Meulaboh.) In 2005, USAID allocated $245 million for the coastal road, and in 2006 USAID increased its allocation to $254 million. USAID reported that the final estimated cost, as of July 2012, was $256 million, or $2.8 million per mile. Figure 3 shows USAID's initial and revised plans and completed results for the Indonesia coastal road. USAID Constructed Road under Several Major Contracts, with Numerous Factors Delaying Completion From August 2005 to September 2010, USAID awarded five contracts to reconstruct the coastal road in Aceh Province, Indonesia—three contracts for construction, one contract for design and supervision, and one contract for project management. Factors related to contractor performance as well as local conditions delayed USAID's progress in designing and constructing the road and led to increased costs. Road Construction, Design and Supervision, and Project Management Were Performed under Five Contracts Construction of the 91 miles of road completed in April 2012 took place under two contracts—designated by USAID as the "priority" and "prime" contracts—for a combined estimated 83 miles of the road, and under a third "8-mile" contract for the remaining 8 miles. USAID also awarded two additional contracts for, respectively, the design and supervision of the road construction and the management of the project. Priority construction contract. 
In August 2005, USAID awarded a contract to an Indonesian firm, PT Wijaya Karya (WIKA). Although construction was initially expected to take place from August 2005 to August 2006, WIKA's construction work did not begin until October 2006. To expedite construction on certain "priority" sections of the road, USAID modified the contract to include, among other things, expanding the scope from 3 miles to 26 miles and extending the completion date from August 2006 to December 2007 for the 26 miles. In May 2008, USAID partially terminated the priority contract, removing 8 miles from the contract's scope. Design and supervision contract. In November 2005, USAID awarded a contract to Parsons Global Services (Parsons), a U.S.-based multinational engineering firm, to design most of the road sections—about 88 miles—and supervise construction of 91 miles of road work. Project management contract. In April 2006, approximately 6 months after hiring its design and supervision contractor, USAID hired a U.S.-registered professional engineer as its overall Project Manager for the road construction project. Parsons reported directly to the Project Manager. The Project Manager served as the project's chief technical officer; was USAID's principal interface with Indonesian government officials; advised in the development of the project's design; and, to ensure the road's conformance with design specifications, inspected construction work and directed changes as required prior to road sections being turned over to the Indonesian government. Prime construction contract. In June 2007, USAID awarded the prime contract to SsangYong-Hutama Karya Joint Association (SsangYong-Hutama), a collaboration between a Korean firm and an Indonesian firm, for 65 miles of road construction in five noncontiguous sections. USAID had expected to award the prime construction contract in September 2006. However, its initial solicitation was restricted under USAID policy to U.S. firms, and USAID received only a single proposal, which the agency was unable to negotiate to an acceptable price. In December 2006, USAID issued a second solicitation that, in an approved deviation from USAID policy, was opened to international firms. According to USAID officials, the second solicitation attracted interest from several prospective offerors and included a revised estimated completion date of March 2010, 6 months later than originally planned. Eight-mile construction contract. In September 2010, USAID awarded a third construction contract to SsangYong Engineering & Construction (SsangYong) for the 8 contiguous miles removed from the priority contract; this work was completed in January 2012. Table 1 shows the five major contracts that USAID awarded for the construction, design and supervision, and overall project management of the 91-mile road, as well as the contractors' completed activities. USAID originally intended that the "prime" contract would be used to construct the majority of the originally planned 150-mile road. However, the inability to award the contract based on the initial solicitation caused USAID to both increase the scope of construction under the "priority" contract to 26 miles and reduce the scope of construction under the "prime" contract to 65 miles, reflecting the revised project goal of building a 91-mile road. (Table notes: N/A = not applicable. WIKA designed the initial 3-mile "priority" road section; Parsons designed the remaining 88-mile road sections.) 
Figure 4 shows road sections, key features, and funding levels for each of the three road construction contractors. Figure 5 shows a timeline of events related to the five contracts. Factors related to contractors' performance delayed USAID's progress in designing and constructing the road and led to increased costs. Priority construction contractor. Lack of acceptable progress by the priority contractor resulted in USAID's reducing the scope of the work, partially terminating the contract, and hiring a third construction contractor to complete the unfinished work. According to USAID, the mission determined that WIKA was not making satisfactory progress, owing in part to financial constraints and lack of equipment as well as WIKA's changing its project leader three times. WIKA's limited progress was primarily evident in one of the sections that comprised the priority contract. By May 2008, this 8-mile section was about 20 percent complete compared with other sections that were approximately 50 percent to 90 percent complete. Work in this section lagged in all areas of production including earthwork, concrete placement, and bridge and culvert construction. As a result, USAID eliminated this section from the scope of the priority contract through a termination action. USAID's decision to eliminate this section from the priority contract enabled WIKA to concentrate resources on remaining sections and continue making progress. USAID later awarded the 8-mile section to another contractor, SsangYong. However, because of the lengthy processes involved in terminating its contract with WIKA and procuring a new contract, USAID did not award its contract with SsangYong until September 2010. Prime construction contractor. Slower-than-expected progress, as well as the correction of work that did not comply with specifications, contributed to delays in the prime contractor's completion of the 65 miles of road construction. In addition, SsangYong-Hutama's project leader was changed five times, according to USAID, which may also have contributed to the contractor's repeatedly missing key schedule milestones. Beyond contractor performance, several local factors affected construction progress and costs. Delays in land acquisition. According to USAID, the Indonesian government had difficulty acquiring over 4,000 parcels of land needed for the new road alignment and right-of-way. The land was needed because, in many areas, the tsunami had changed the entire landscape such that parts of the road alignment, as it existed prior to the tsunami, were now underwater or otherwise inaccessible or unusable. In its efforts to acquire the land parcels, the Indonesian government experienced delays in determining ownership and locating owners because many owners had died in the tsunami. Also, in some instances, ownership documents, which were the only existing ownership records, were destroyed by the tsunami. Delays also occurred because, in some instances, the land parcels that were acquired were not contiguous and, as a result, construction contractors did not have a sufficient amount of land on which to initiate construction and store equipment and materials. For example, initiation of work to construct the priority 3-mile segment was delayed for approximately a year because the Indonesian government had acquired less than a quarter mile of right-of-way, significantly less land than USAID had expected to be available. Community opposition. Community opposition to the new road alignment resulted in delays, according to USAID. 
For example, construction was delayed because of disagreement between the Indonesian government and individuals and communities about the prices for land parcels. In addition, the proposed new alignment involved laying pavement over more than 600 gravesites. Upon learning that gravesites would be affected, some individuals and communities erected roadblocks and conducted demonstrations in opposition. To resolve the situation, Indonesian government officials, with USAID coordination assistance, had to negotiate settlements and identify and acquire new sites for the graves. Security problems. According to USAID, delays occurred because of security concerns and violence. For example, security threats caused delays in areas with a 30-year history of civil conflict between an insurgency group and the Indonesian government. Also, delays occurred when contractors received security threats, equipment was intentionally damaged, and workers were assaulted in land-value disputes. Figure 6 shows examples of community protests, which in some cases resulted in damage to contractor equipment. Flooding. During construction, delays resulted from flooding, caused by unusually heavy rains that destroyed temporary access to construction sites and to construction facilities where materials and equipment were located. In some instances, according to USAID, roads flooded even though drainage culverts had been built according to design specifications (see fig. 7). Subsequent to this flooding, the contractor corrected inaccurate design assumptions, caused by a lack of reliable historical climatological data for the area, and increased the capacities of some culverts. USAID Took Several Actions to Ensure Quality of Indonesia Road Design and Construction but Lacks Quality Assurance Mechanism for Sections Still under Warranty USAID's actions to ensure the Indonesia road's quality included, among others, hiring an experienced project manager and requiring 1-year warranties for completed road sections. USAID also required that the road's design adhere to established quality standards and required inspections of road sections during and after construction. However, as of July 2012, USAID had not arranged for final inspections to ensure the quality of around 50 miles—about 55 percent—of the completed road that are still under warranty. USAID Established Organizational and Operational Controls to Help Ensure Quality To help ensure the quality of the road's design and construction, USAID established organizational and operational controls by contracting with experienced personnel for key management positions and including a 1-year warranty in each construction contract. Project Manager. In April 2006, USAID hired a U.S.-registered professional engineer as its Project Manager for the entire road construction project. The Project Manager had previous experience managing several USAID infrastructure projects overseas as well as managing regional operations with a U.S. state's department of transportation. The Project Manager served as the project's chief technical officer and USAID's principal interface with design/construction supervision and construction contractors and Indonesian government officials. Design and supervision. Approximately 6 months before hiring its Project Manager, USAID contracted with Parsons as the project's construction management contractor to complete the design of most of the road and manage the supervision of its construction. 
According to a Parsons official, USAID's hiring of a single firm to complete the design and management of construction supervision facilitated communication between design engineers and construction supervision staff and promoted quality in construction. USAID required that key Parsons design personnel have appropriate qualifications. For example, geotechnical, pavement, and structural designers were all required to be registered professional engineers with a minimum of 5 years' experience on projects of a similar scope. In addition, USAID required that key Parsons staff in Indonesia have certain qualifications, such as skills and experience in contract administration, inspection, and quality monitoring, to help ensure that the work complied with specifications and conformed to standard construction practices. One-year warranty. USAID included a 1-year warranty period in all of its contracts with construction firms. Specifically, for a period of 1 year after each road section is completed, the contractor is required to correct any poor-quality or faulty work that USAID or Parsons finds during inspections. Road sections are not handed over to the Indonesian government until the contractor completes the corrective actions, and contractors are not released from their responsibilities until the Indonesian government formally accepts the section of road. USAID Required That Road Design Adhere to Established Quality Standards To promote quality in the road's design, USAID required that Parsons adhere to established engineering standards. These standards define, among other things, key parameters such as lane and right-of-way widths, pavement structure, curve geometry, and weight-carrying capacity. To design pavement and bridge structures that would be capable of carrying anticipated vehicle loads, for example, design engineers used the widely accepted U.S. standards of the American Association of State Highway and Transportation Officials (AASHTO) as well as regionally and locally applicable standards, such as Indonesian standards and those of the Association of Southeast Asian Nations. Use of AASHTO pavement design standards enabled engineers to determine the thickness of the road's layers (aggregate base layers covered with an asphalt surface) that would be needed to sustain anticipated traffic volumes and vehicle weights over its 10-year design life. In addition, use of AASHTO and Indonesian bridge standards allowed engineers to determine appropriate structural configurations for bridges in consideration of site-specific traffic, thermal, and seismic conditions. Parsons also included several safety features in the design—for example, guardrails, warning signs, pavement markings, and protected walkways on bridges—that contributed to the road's quality. Figure 8 illustrates these features. During construction, USAID's Project Manager and Parsons took actions to help ensure quality by observing ongoing work, witnessing tests by construction contractors, conducting their own inspections, and requiring that the contractor correct any deficiencies or substandard work. For example, after determining that use of improper materials had resulted in the deterioration of approximately 6 miles of paved lanes, USAID directed the prime contractor to remove and replace these sections of the road. Parsons' staff were involved in performing daily quality tests and conducting inspections. Parsons provided information to USAID's Project Manager through frequent communication, correspondence, joint site reviews, and periodic reporting. 
USAID’s Project Manager and Parsons inspected road sections during construction, when construction was completed, and when the completed sections were handed over to the Indonesian government following the 1- year warranty period. Key project stakeholders—USAID’s Project Manager, Parsons, the construction contractor, and Indonesian government officials—attended these inspections. When an inspection identified deficiencies, USAID and Parsons ensured that they were corrected before the road section was formally turned over to the Indonesian government. For example, during our March 2012 visit to Aceh Province, Indonesia, we observed one of the construction contractors repairing defective pavement that USAID’s Project Manager had identified during an inspection near the end of the 1-year warranty for the affected section. Figure 9 shows a contractor using a milling machine to remove pavement on this section in preparation for corrective repaving. USAID currently lacks the capacity to ensure the quality of several sections of recently completed road that are still within the 1-year warranty period. Several sections totaling approximately 50 miles in length, or about 55 percent of the recently completed road, are currently under warranty—about 25 miles with warranties expiring at various times through the end of 2012 and about 25 miles with warranties expiring from January 2013 through April 2013. Figure 10 shows the locations of road sections with unexpired warranties, as of June 2012, and section expiration dates. USAID officials in Jakarta told us in April 2012 that USAID was considering rehiring the former Project Manager on an intermittent basis to perform inspections of the approximately 50 miles of road sections prior to expiration of the sections’ 1-year warranties. However, as of July 2012, USAID had not yet reached an agreement with the Project Manager or made other arrangements to inspect the sections. USAID Took Actions to Enhance Indonesia Road’s Sustainability, but Several Factors Could Decrease Its Life Expectancy To enhance the Indonesia road’s sustainability, USAID designed and constructed it to withstand heavy weights, included in the design several features intended to minimize environmental impact, and provided assistance to the Indonesian Directorate General of Highways (the Directorate). However, several factors, such as the Directorate’s limited capacity and resources and failure to restrict overweight vehicles, could lessen the road’s sustainability during its intended 10-year life expectancy. USAID Took Numerous Actions Intended to Enhance Road’s Sustainability USAID took several actions intended to enhance the road’s sustainability for the intended 10-year life expectancy. For example, in designing and constructing the road, USAID anticipated the effects of heavy trucks—the most significant factor affecting the rate of pavement deterioration. USAID also included in the road’s design the following features intended to enhance sustainability and minimize environmental impact: rock placement, known as armoring, along the shoreline to protect road from storm surges; shaped slopes and rock-fall retaining walls in mountainous areas to protect the road from damage; retainer and drainage systems to protect the road from rock falls and prevent flooding; concrete lining of drainage channels to prevent erosion; slope stabilization using Gabion baskets; galvanized steel bridge structures that do not require periodic painting. 
Figure 11 illustrates these features intended to enhance the road’s sustainability. In addition, USAID’s Project Manager developed an operations and maintenance plan for the Directorate. The plan included recommended practices such as establishing road maintenance facilities and placing equipment at three locations to maintain the road, with specific maintenance responsibilities for each site. The plan also outlined necessary maintenance tasks, such as patching pavement, repairing guardrails, and cleaning culverts and drains, and it provided a checklist of equipment needed by the Directorate. Further, USAID’s Project Manager suggested, among other things, that the Directorate limit the width and size of vehicles permitted to pass through narrow mountainous road sections. Figure 12 shows a truck passing through a narrow mountainous section of road that was repaved. USAID also took other actions, such as employing local workers and donating used vehicles, that could enhance the road’s sustainability. USAID encouraged contractors to employ Indonesian workers from Aceh Province during the construction work to enhance local skills and experience. For example, construction contractors employed Indonesian heavy equipment operators and truck drivers, and Parsons trained Indonesians to fill several key positions such as Finance Manager and Public Information/Media Specialist. Also, according to USAID officials, after the construction work was completed, USAID and Parsons provided the Directorate with used vehicles for use in maintaining the road. Several Factors Could Affect Road’s Sustainability during Intended 10-Year Life Expectancy Several factors—the Directorate’s limited capacity and resources, failure to restrict overweight vehicles, construction in the road right-of-way, and unauthorized access roads—could lessen the road’s sustainability for its intended 10-year life expectancy. Limited capacity and resources. Although the Directorate has provided a 5-year funding plan for the road and taken some actions to replace missing guardrails, the Directorate lacks some equipment as well as a sufficient number of staff for maintenance and repairs, according to USAID and Directorate officials. For example, USAID recommended in its checklist that the Directorate keep a jackhammer, compressor, and four vibrator rollers at each of three proposed maintenance facilities, but as of May 2012, the Directorate had not established the maintenance facilities and had not provided the equipment. Also, according to Directorate officials in Banda Aceh, the Directorate has a limited number of staff to maintain existing roads throughout Aceh Province. Overweight vehicles. The Directorate has not taken action to restrict the use of overweight vehicles on the road, which could reduce the road’s life expectancy. USAID’s design of the road with a 10-year life expectancy is based on certain assumptions concerning the impact of the number and weight of vehicles on the road’s deterioration. For example, USAID’s design assumes that the heaviest trucks anticipated (100,000-pound, 3-axle trucks) on the road will comprise less than 1 percent of total traffic; however, this low volume of heavy truck traffic will cause more than 60 percent of the road’s deterioration over its 10-year design life. A greater than expected volume of heavy truck traffic, or use of the road by trucks that exceed 100,000 pounds, will lead to a higher amount of deterioration and reduced life for the road. 
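The disproportionate effect of heavy trucks follows from how pavement damage scales with axle load. A common rule of thumb in AASHTO-based design, the fourth-power approximation, treats damage per axle pass as growing roughly with the fourth power of the axle load relative to a standard 18,000-pound axle. The sketch below uses assumed axle loads and an assumed traffic mix, not figures from USAID's design documents, to show why a traffic stream with less than 1 percent heavy trucks can still owe most of its pavement wear to those trucks.

```python
# Illustrative only: the axle loads and traffic shares below are assumptions chosen
# to mirror the report's point, not values from USAID's design documents.
STANDARD_AXLE_LB = 18_000  # reference axle for equivalent single-axle loads (ESALs)

def esal_per_axle(axle_load_lb: float) -> float:
    # Fourth-power approximation: damage grows roughly with (load / standard load) ** 4.
    return (axle_load_lb / STANDARD_AXLE_LB) ** 4

# Assume a 100,000-pound, 3-axle truck carries roughly 33,000 lb per axle,
# and a typical light vehicle roughly 2,000 lb on each of 2 axles.
truck_damage_per_pass = 3 * esal_per_axle(33_000)
car_damage_per_pass = 2 * esal_per_axle(2_000)

# Assume 1,000 vehicles per day: 1 percent heavy trucks, 99 percent light vehicles.
daily_truck_damage = 10 * truck_damage_per_pass
daily_car_damage = 990 * car_damage_per_pass

share = daily_truck_damage / (daily_truck_damage + daily_car_damage)
print(f"Trucks' share of daily pavement damage: {share:.1%}")  # nearly 100% under these assumptions
```

Under these simplified assumptions the heavy trucks account for nearly all of the computed wear; the report's more detailed design traffic mix yields a lower, but still dominant, share of more than 60 percent.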
To prevent overweight vehicles from accelerating pavement damage on the road, USAID recommended that the Directorate use portable scales to weigh suspected overweight vehicles. However, as of May 2012, the Directorate had not taken any actions to weigh vehicles. During our March 2012 inspection of the road, we observed several heavily loaded trucks but saw no permanent weigh stations and saw no vehicles being weighed with portable scales. Construction in right-of-way. The road’s intended 10-year life expectancy is also based on keeping drainage culverts in the right-of-way clear of blockage, according to USAID’s Project Manager. However, the Directorate has not taken action to prevent the construction of buildings and has not removed existing buildings that have been constructed in the right-of-way. Such construction could obstruct drainage channels and cause erosion and flooding. According to USAID officials, USAID informed the Directorate of numerous instances where buildings had been or were being constructed in the right-of-way, but as of May 2012, the Indonesian government had not taken any actions to remove or prevent such construction. When we traveled the length of the road in March 2012, we observed a completed building that had been constructed in the right-of-way over the drainage channel. Unauthorized access roads. As of March 2012, according to USAID’s Project Manager, the Directorate had not taken action to prevent the creation of unauthorized access roads, which also can prevent proper drainage and cause erosion and flooding. These unauthorized access roads have been constructed by removing guardrails or moving soil, rocks, and other materials; in some instances, these access roads may obstruct drainage channels and cause erosion and flooding. During our March 2012 inspection of the road, we saw several access roads that had been built by removing guardrails and moving soil and other materials. Figure 13 shows a building that was constructed in the right-of-way and an unauthorized access road where the guardrail was removed. Conclusions USAID has completed the construction of a major 91-mile road in a coastal area of northern Indonesia that was heavily affected by the December 2004 tsunami and, in doing so, has helped to provide an opportunity for economic growth in the region. Although USAID completed the road construction more than 2 years later than planned, the agency finished the work by confronting and overcoming significant obstacles while working in a challenging environment. However, despite taking several actions to help ensure the road’s quality, USAID currently lacks the capacity to ensure that approximately 50 miles of recently completed road sections—55 percent of the entire road—still under warranty perform as intended through the 1-year warranty period, as stipulated in USAID’s contracts with construction firms. Specifically, although USAID officials in Jakarta told us that they are considering rehiring the Project Manager on an intermittent basis, the agency has not finalized this arrangement. Without qualified personnel inspecting these road sections before their warranties expire, USAID cannot ensure that the quality standards are met and that the contractor corrects any deficiencies as required within the 1-year warranty period.
To help ensure that the road will reach its intended 10-year life expectancy, USAID took actions such as designing the road in accordance with established standards and supporting the Indonesian government’s Directorate General of Highways. However, the road may not achieve its 10-year life expectancy unless the Indonesian government properly maintains the road, restricts usage by overweight vehicles, and prevents construction in the road right-of-way and creation of unauthorized access roads. Recommendations for Executive Action We recommend that the Administrator of USAID take the following two actions: To help ensure that recently completed sections of the Indonesia road meet quality standards as required during the 1-year warranty period, ensure that the road sections are inspected in a timely manner and, if deficiencies are found, require that the construction contractor repair the sections before they are formally turned over to the Indonesian government. To help ensure that the constructed road remains sustainable for 10 years as intended, direct the USAID Mission in Indonesia to work with the Indonesian government to develop and implement a process addressing factors that could affect the road’s sustainability. Agency Comments and Our Evaluation We provided a draft of this report, as well as a video of our March 2012 inspection of the road, to USAID and State for their review. USAID provided written comments about the draft report, which are reprinted in appendix II, and provided technical comments about the draft report and the video that we incorporated as appropriate. State did not provide comments on either the draft report or the video. In its written comments, USAID stated that our report presented an accurate assessment of its construction operations and its efforts to ensure the road’s quality and sustainability. In addition, USAID concurred with our recommendation that it ensure that road sections still under warranty are inspected in a timely manner and that it require the contractor to repair any defective sections. The agency stated that it will retain qualified personnel who can perform inspections before the warranties expire. USAID concurred with the intent of our recommendation that, to help ensure the road’s sustainability for the intended 10 years, it should work with the Indonesian government to develop and implement a process addressing factors that could affect the road’s sustainability. USAID noted that many of the factors that could affect the road’s sustainability are outside the agency’s managerial control and that, apart from the road sections still under warranty, the road is under the Indonesian government’s administration. USAID indicated that any additional technical assistance it might offer the Indonesian government would be contingent on the government’s receptiveness as well as the availability of USAID resources. We maintain that it is essential that USAID work proactively with the Indonesian government to develop and implement a process that addresses certain factors, such as the use of overweight vehicles, construction in the right-of-way, and the creation of unauthorized access roads, that could affect the road’s sustainability. We are sending copies of this report to interested congressional committees, the Secretary of State, and the USAID Administrator. We will also provide copies to others on request. In addition, the report is available at no charge on the GAO website at http://www.gao.gov.
If you or your staff have any questions about this report, please contact me at (202) 512-3149 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Appendix I: Scope and Methodology We reviewed the 91-mile road constructed by the U.S. Agency for International Development (USAID) in Aceh Province, Indonesia, following the December 2004 tsunami. This is the third report we have issued on USAID’s post-tsunami reconstruction efforts. Our objectives in this report were to (1) describe USAID’s road construction operations as well as factors that delayed the road’s completion, (2) assess USAID’s efforts to ensure the road’s quality, and (3) examine factors that could affect the road’s sustainability. To determine the status and funding of USAID’s road reconstruction activities in Aceh Province, Indonesia, we reviewed documents and interviewed officials from USAID’s Bureau for Asia and the Near East and the Department of State’s (State) Office of Foreign Assistance Resources in Washington, D.C.; and USAID’s Office of Financial Management at its mission in Jakarta, Indonesia. We obtained information from USAID officials on internal controls for collection of data, reviewed consolidated reports and mission-specific reports, and interviewed cognizant officials at USAID and State about data reliability. In addition, we interviewed knowledgeable USAID officials about the systems and methodology they use to verify the completeness and accuracy of the data. We determined that the data were sufficiently reliable for the purposes of our report. To assess the reliability of USAID and State funding and expenditure data, we reviewed USAID Office of the Inspector General and previous GAO reports on USAID disaster reconstruction programs and funding since 2002; we found that none of these sources noted any discrepancies or concerns about the reliability of USAID’s data. Based on our comparison of data generated from various USAID and State sources, we found that the sources generally corroborated each other. We determined that USAID and State funding and expenditure data were sufficiently reliable for our report. To assess the quality, construction features, and sustainability of the road construction, we traveled the full 91-mile length of the road from Banda Aceh to Calang, Indonesia, with USAID’s Project Manager for the road construction project. We also traveled the road from Calang to Meulaboh, Indonesia, for comparative purposes. During our trip between Banda Aceh and Calang, we photographed and recorded video of several road construction features and activities, and recorded testimony from USAID’s Project Manager on construction features, obstacles, challenges, quality, and potential impediments to sustainability. In Banda Aceh, we met with representatives from the Indonesian Directorate General of Highways (the Directorate), which is responsible for maintaining the road. To better understand construction, quality, potential obstacles to sustainability, and general construction challenges, we met with representatives of both SsangYong and Hutama, the two firms that constitute the SsangYong-Hutama Joint Association.
To identify obstacles that USAID encountered, we examined USAID Office of Inspector General and State reports, which provide information on construction status as well as summarize major construction accomplishments and challenges. In Jakarta, Indonesia, we met with representatives from the Directorate. We also reviewed USAID road construction files to better understand the obstacles that led to delays and cost increases; this included reviewing status reports, contracts, and correspondence. To examine the extent to which USAID ensured quality, we reviewed USAID road construction contracts and met with USAID’s Project Manager to discuss oversight procedures. We also discussed USAID’s road quality inspections and procedures for ensuring that construction contractors make repairs to damaged sections of road within the 1-year warranty period, as required. Members of our staff, including a U.S.-registered professional engineer, traveled the road and examined road quality through direct observation of road conditions, construction features, and repair work being performed. To examine the extent to which USAID helped ensure sustainability, we examined road operations and maintenance plans developed by USAID for the Directorate, which included practices that the USAID Project Manager recommended that the Directorate adopt and implement. We also examined the checklist of equipment developed by USAID’s Project Manager and needed by the Directorate to maintain the road. Members of our staff, including a U.S.-registered professional engineer, made direct observations of road features that were designed and constructed for road sustainability. We conducted this performance audit from January 2012 to July 2012 in accordance with generally accepted government auditing standards. These standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the U.S. Agency for International Development Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Emil Friberg, Jr. (Assistant Director), Michael Armes (Assistant Director, registered Professional Engineer), Ryan Barlow, Mason Calhoun, Reid Lowe, and George Taylor made key contributions to this report. Ashley Alley, Martin De Alteriis, Theresa Perkins, Jeremy Sebest, Jena Sinkfield, and Cynthia Taylor provided technical assistance. Related GAO Products Haiti Reconstruction: Factors Contributing to Delays in USAID Infrastructure Construction. GAO-12-68. Washington, D.C.: November 16, 2011. Haiti Reconstruction: U.S. Efforts Have Begun, Expanded Oversight Still to Be Implemented. GAO-11-415. Washington, D.C.: May 19, 2011. Afghanistan Reconstruction: Progress Made in Constructing Roads, but Assessments for Determining Impact and a Sustainable Maintenance Program Are Needed. GAO-08-689. Washington, D.C.: July 8, 2008. Compact of Free Association: Palau’s Use of and Accountability for U.S. Assistance and Prospects for Economic Self-Sufficiency. GAO-08-732. Washington, D.C.: June 10, 2008. Foreign Assistance: USAID Signature Tsunami Reconstruction Efforts in Indonesia and Sri Lanka Exceed Initial Cost and Schedule Estimates, and Face Further Risks. GAO-07-357. Washington, D.C.: February 27, 2007.
Foreign Assistance: USAID Completed Many Caribbean Disaster Recovery Activities, but Several Challenges Hampered Efforts. GAO-06-645. Washington, D.C.: May 26, 2006. Foreign Assistance: USAID Has Begun Tsunami Reconstruction in Indonesia and Sri Lanka, but Key Projects May Exceed Initial Cost and Schedule Estimates. GAO-06-488. Washington, D.C.: April 14, 2006. Foreign Assistance: USAID’s Earthquake Recovery Program in El Salvador Has Made Progress, but Key Activities Are Behind Schedule. GAO-03-656. Washington, D.C.: May 15, 2003. Foreign Assistance: Disaster Recovery Program Addressed Intended Purposes, but USAID Needs Greater Flexibility to Improve Response Capability. GAO-02-787. Washington, D.C.: July 24, 2002.
In December 2004, an earthquake in the Indian Ocean caused a major tsunami that devastated several countries, affecting Indonesia most severely. In May 2005, Congress appropriated $908 million for aid to the affected countries. USAID budgeted $245 million of this amount to rehabilitate and construct a 150-mile paved coastal road in Aceh Province, Indonesia, with a planned completion date of September 2009. After reducing the project’s scope, USAID completed a 91-mile road in April 2012 at an estimated cost of $256 million. GAO was asked to (1) describe USAID’s construction operations as well as factors that delayed the road’s completion, (2) assess USAID’s efforts to ensure the road’s quality, and (3) examine factors that could affect the road’s sustainability. GAO reviewed USAID documents, interviewed USAID and Indonesian officials, and traveled the entire length of the road. From August 2005 to September 2010, the U.S. Agency for International Development (USAID) awarded five contracts to reconstruct a major coastal road in Aceh Province, Indonesia. Three of the contracts were for construction, one contract was for design and supervision, and one contract was for project management. Several factors delayed the road’s completion and increased costs. For example, according to USAID, when one contractor did not make acceptable progress, the agency reduced the scope of work, terminated construction of an 8-mile road section, and hired another contractor to complete the section. Other factors included the Indonesian government’s difficulty in acquiring land for the road and local opposition to the new road alignment. USAID took several actions to ensure quality in the road’s design and construction. For example, USAID hired an experienced, U.S.-registered professional engineer as Project Manager and hired a U.S.-based engineering firm to design the road and supervise most construction. USAID also required contractors to remain liable for any quality defects for 1 year after completing road sections. In addition, USAID required the Project Manager and the engineering firm to perform routine inspections, including final inspections when the warranties ended. Some inspections revealed poor-quality work that the contractors corrected. However, the engineering firm’s and Project Manager’s contracts ended in March 2012 and April 2012, respectively, leaving no qualified staff to inspect around 50 miles—more than half of the completed road—still under warranty. USAID told GAO it is considering rehiring the Project Manager on an intermittent basis, but USAID has not finalized this arrangement and has no mechanism to ensure quality in these sections. USAID also took several actions to help ensure the road’s sustainability, such as designing it to withstand heavy weights and providing a maintenance plan and equipment to the Indonesian Directorate General of Highways. However, various factors could affect the road’s sustainability for its intended 10-year design life. For example, according to USAID and Indonesian officials, the Directorate lacks resources needed to maintain the road. Also, according to USAID, the Indonesian government has not taken certain actions, such as using portable scales to prevent overweight vehicles that could cause pavement failure and prohibiting construction in the road right-of-way that could obstruct drainage.
GAO_GAO-13-601
Background Exchanges are intended to allow eligible individuals to obtain health insurance, and all exchanges, whether state-based or established and operated by the federal government, will be required to perform certain functions. The federal government’s role with respect to an exchange for any given state is dependent on the decisions of that state. Overview of Exchanges PPACA required that exchanges be established in each state to allow consumers to compare health insurance options available in that state and enroll in coverage. Once exchanges are established, individual consumers will be able to access the exchange through a website, toll-free call centers, or in person. The exchanges will present qualified health plans (QHP) approved by the exchange and offered in the state by the participating issuers of coverage. The benefits, cost-sharing features, and premiums of each QHP are to be presented in a manner that facilitates comparison shopping of plans by individuals. Individuals who wish to select a QHP will complete an application—through the exchange website, over the phone, in person, or by mailing a paper form—that collects the information necessary to determine their eligibility to enroll in a QHP. On the basis of the application, the exchange will determine individuals’ eligibility for enrollment in a QHP, and also determine their eligibility for income-based financial subsidies—advance payment of premium tax credits and cost-sharing subsidies—to help pay for that coverage. Also at the time of the application, the exchange will determine individuals’ eligibility for Medicaid and CHIP. After an individual has been determined to be eligible for enrollment in a QHP, the individual will be able to use tools on the exchange website to compare plans and make a selection. For individuals applying for enrollment in a QHP and for income-based financial subsidies, eligibility determinations and enrollment should generally occur on a near real-time basis, to be accomplished through the electronic transfer of eligibility information between the exchange and federal and state agencies, and through the electronic transfer of enrollment data between the exchange and QHP issuers. Assistance with the enrollment process will be provided to individuals through the website, an established telephone call center, or in person. To undertake these functions, all exchanges, including those established and operated by the federal government, will be required to perform certain activities, many of which fall within the core functions of eligibility and enrollment, plan management, and consumer assistance. Eligibility and enrollment: All exchanges will be required to determine an individual’s eligibility for QHP enrollment, income-based financial subsidies, and enrollment in Medicaid and CHIP. Exchanges will be required to enroll eligible individuals into the selected QHP or transmit information for individuals eligible for Medicaid or CHIP to the appropriate state agency to facilitate enrollment in those programs. The exchange is to use a single, streamlined enrollment eligibility system to collect information from an application and verify that information. CMS is building the data hub to support these efforts. The data hub is intended to provide data needed by the exchanges’ enrollment eligibility systems to determine each applicant’s eligibility.
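The following sketch is purely illustrative of the "single connection" pattern described here: the class, service names, fields, and verification rules are hypothetical stand-ins, since the hub's actual interfaces and data sources are defined in CMS's own technical documents rather than in this report.

```python
# Hypothetical illustration only -- not CMS's actual data hub interface.
# The pattern shown: an exchange makes one call to a hub, which routes verification
# queries to stubbed trusted data sources and returns a consolidated response.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class Applicant:
    ssn: str
    name: str
    attested_income: float
    claims_citizenship: bool

# Stub "trusted data source" checks; in reality these would be queries routed to
# SSA, IRS, and DHS systems under the applicable data-sharing agreements.
def ssa_verify_ssn(app: Applicant) -> bool:
    return len(app.ssn) == 9 and app.ssn.isdigit()

def irs_verify_income(app: Applicant) -> bool:
    return app.attested_income >= 0  # placeholder consistency check

def dhs_verify_status(app: Applicant) -> bool:
    return app.claims_citizenship  # placeholder lawful-presence check

class VerificationHub:
    """Single connection point that fans one request out to multiple sources."""
    def __init__(self) -> None:
        self.sources: Dict[str, Callable[[Applicant], bool]] = {
            "ssn": ssa_verify_ssn,
            "income": irs_verify_income,
            "lawful_presence": dhs_verify_status,
        }

    def verify(self, app: Applicant) -> Dict[str, bool]:
        # One inbound request from an exchange; one consolidated response back.
        return {check: source(app) for check, source in self.sources.items()}

if __name__ == "__main__":
    hub = VerificationHub()
    applicant = Applicant(ssn="123456789", name="Test Applicant",
                          attested_income=30000.0, claims_citizenship=True)
    print(hub.verify(applicant))  # {'ssn': True, 'income': True, 'lawful_presence': True}
```

The point of the pattern is that an exchange submits one request and receives one consolidated, near real-time response, rather than maintaining separate connections to each federal or state data source.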
Specifically, the data hub will provide one electronic connection and near real-time access to the common federal data, as well as provide access to state and third party data sources needed to verify consumer application information. For example, the data hub is to verify an applicant’s Social Security number with the Social Security Administration (SSA), and to access the data from the Internal Revenue Service (IRS) and the Department of Homeland Security (DHS) that are needed to assess the applicant’s income, citizenship, and immigration status. The data hub is also expected to access information from the Veterans Health Administration (VHA), Department of Defense (DOD), Office of Personnel Management (OPM), and Peace Corps to enable exchanges to determine if an applicant is eligible for insurance coverage from other federal programs that would make them ineligible for income-based financial subsidies. In states in which an FFE will operate, the hub is also expected to access information from state Medicaid and CHIP agencies to identify whether FFE applicants are already enrolled in those programs. Plan management: Exchanges will be required to develop and implement processes and standards to certify health plans for inclusion as QHPs and recertify or decertify them, as needed. As part of these processes, the exchange must develop an application for issuers of health coverage that seek to offer a QHP. The exchange must review a particular plan’s data to ensure it meets certification standards for inclusion in the exchange as a QHP. The exchange must also conduct ongoing oversight and monitoring to ensure that the plans comply with all applicable regulations. Consumer assistance: All exchanges will be required to provide a call center, website, and in-person assistance to support consumers in filing an application, obtaining an eligibility determination, comparing coverage options, and enrolling in a QHP. Other consumer assistance function activities that exchanges must conduct are outreach and education to raise awareness of and promote enrollment in QHPs and income-based financial subsidies. One such form of consumer assistance required by PPACA is the establishment of Navigators—entities, such as community and consumer-focused nonprofit groups, to which exchanges award grants to provide fair and impartial public education regarding QHPs, facilitate selection of QHPs, and refer consumers as appropriate for further assistance. Federal and State Roles in Exchanges The role of the federal government with respect to an exchange for a state is dependent on whether that state seeks to operate a state-based exchange. States can choose to establish exchanges as directed by PPACA and seek approval from CMS to do so. States electing to establish and operate a state-based exchange in 2014 were required to submit to CMS, by December 14, 2012, a declaration of intent and the “Blueprint for Approval of Affordable State-based and State Partnership Insurance Exchange.” Through this Blueprint, the state attests to how its exchange meets, or will meet, all legal and operational requirements associated with a state-based exchange. For example, the state must demonstrate that it will establish the necessary legal authority and governance, oversight, financial-management processes, and the core exchange functions of eligibility and enrollment, plan management, and consumer assistance. 
Although a state assumes responsibility for the exchange when it elects to operate a state-based exchange, it can choose to rely on the federal government for certain exchange-related activities, including determining individuals’ eligibility for income-based financial subsidies and activities related to reinsurance and risk adjustment. In addition, CMS will make financial subsidy payments to issuers on behalf of enrollees in all exchanges. Under PPACA, if a state did not elect to establish a state-based exchange or is not approved by CMS to operate its own exchange, then CMS is required to establish and operate an FFE in that state. Although the federal government retains responsibility to establish and operate each FFE, CMS has identified possible ways that states may assist it in the day-to-day operation of these exchanges: CMS indicated that a state can choose to participate in an FFE through a partnership exchange by assisting CMS with the plan management function, consumer assistance function, or both. According to CMS, the overall goal of a partnership exchange is to enable the FFE to benefit from efficiencies to the extent states have regulatory authority and capability to assist with these functions, help tailor the FFE to that state, and provide a seamless experience for consumers. The agency also noted that a partnership exchange can serve as a path for states toward future implementation of a state-based exchange. Although the states would assist in carrying out the plan management function, consumer assistance function, or both on a day-to-day basis, CMS would retain responsibility for these and all other FFE functions. For example, for plan management, states would recommend QHPs for certification, and CMS would decide whether to approve the states’ recommendations and, if so, implement them. In the case of consumer assistance, states would manage an in-person assistance program and Navigators and may choose to conduct outreach and education activities. However, CMS would be responsible for awarding Navigator grants and training Navigators, and would operate the exchange’s call center and website. By February 15, 2013, states seeking to participate in a partnership exchange had to submit a declaration letter and Blueprint to CMS regarding expected completion dates for key activities related to their participation. CMS indicated in guidance issued on February 20, 2013, that an FFE state choosing not to submit a Blueprint application for a partnership exchange by the February 15, 2013, deadline could still choose to assist it in carrying out the plan management function on a day-to-day basis. CMS officials said that, operationally, the plan management functions performed by these states will be no different than the functions performed by partnership exchange states. Instead of a Blueprint application, states interested in participating in this alternative type of arrangement had to submit letters attesting that the state would perform all plan management activities in the Blueprint application. Even in states in which CMS will operate an FFE without a state’s assistance, CMS plans to rely on states for certain information. For example, it expects to rely on state licensure of health plans as one element of its certification of a QHP. After a state submits an application to operate a state-based exchange or participate in a partnership exchange, CMS may approve or conditionally approve the state for that status.
Conditional approval indicates that the state had not yet completed all steps necessary to carry out its responsibilities in a state-based exchange or partnership exchange, but its exchange is expected to be ready to accept enrollment on October 1, 2013. To measure progress towards completing these steps, CMS officials indicated that the agency created a set of typical dates for when specific activities would need to be completed in order for the exchanges to be ready for the initial enrollment period. The agency then adapted those dates for each state establishing a state-based exchange or participating in a partnership exchange. The agency officials said that if the state indicated in its Blueprint that it planned to complete an activity earlier than CMS’s typical targeted completion date, CMS accepted the state’s earlier date. If the state proposed a date that was later than CMS’s typical targeted completion date, the state had to explain the difference and CMS determined whether that date would allow the exchange to be ready for the initial enrollment period. The agency indicated that a state’s conditional approval continues as long as it conducts the activities by the target dates agreed to with the individual state and demonstrates its ability to perform all required exchange activities. CMS’s role in operating an exchange in a particular state may change for future years if states reassess and alter the roles they play in establishing and operating exchanges. For example, a state may be approved to participate in a partnership exchange in 2014 and then apply, and receive approval, to run a state-based exchange in 2015. Although the federal government would retain some oversight over the state-based exchange, the responsibility for operating the exchange would shift from the federal government to the state. Funding for FFEs and the Data Hub HHS indicated that it has drawn from several different appropriations to fund CMS activities to establish and operate FFEs and the data hub. These include the Health Insurance Reform Implementation Fund, HHS’s General Departmental Management Account, and CMS’s Program Management Account. HHS also indicated that it plans to use funds from the Prevention and Public Health Fund and the agency’s Nonrecurring Expenses Fund to pay for certain exchange activities in 2013. Specifically, the agency plans to use these funds for activities that will assist with eligibility determinations and activities to make people aware of insurance options and enrollment assistance available to them. For fiscal year 2014, CMS has estimated that it will need almost $2 billion to establish and operate the FFEs. Specifically, the President’s fiscal year 2014 budget requests $1.5 billion in appropriations for CMS’s Program Management Account for the implementation and operation of the exchanges. In addition to this amount, it estimated that $450 million in user fees will be collected from issuers of health coverage participating in the exchanges in fiscal year 2014 and credited to the Program Management Account. According to the agency, these funds will be used for activities related to operation of the exchanges, including eligibility and enrollment, consumer outreach, plan oversight, SHOP and employer support, information-technology systems, and financial management. In addition to these sources of funding, the agency also awarded grants with funds appropriated under section 1311 of PPACA to states in which an FFE will operate for activities related to the FFE.
These include the plan management and consumer assistance activities that certain states will undertake on behalf of the FFE, as well as the development of state data systems to coordinate with the FFE. CMS Expects to Operate an Exchange in Most States, but Planned CMS and State Exchange Activities Continue to Evolve CMS expects to operate an FFE in 34 states in 2014. States are expected to assist with certain day-to-day functions in 15 of these FFEs. However, the precise activities that CMS and the states will perform have not been finalized and may continue to evolve. CMS Expects to Operate FFEs, including Partnership Exchanges, in 34 States in 2014 For 2014, CMS will operate the exchange in 34 states, although it expects that states will assist in carrying out certain activities in almost half of those exchanges. As of May 2013, 17 states were conditionally approved by CMS to establish state-based exchanges. CMS granted conditional approval to these states in letters issued from December 2012 to January 2013. CMS is required to operate an FFE in the remaining 34 states. While CMS will retain full authority over each of these 34 FFEs, it plans to allow 15 of the states to assist it in carrying out certain exchange functions. Specifically, as of May 2013, CMS granted 7 FFE states conditional approval to participate in a partnership exchange. CMS issued these conditional approval letters from December 2012 to March 2013. Of the 7 partnership exchange states, 6 (Arkansas, Delaware, Illinois, Michigan, New Hampshire, and West Virginia) indicated that they planned to assist with both the plan management and consumer assistance functions of the exchange, and 1 (Iowa) indicated that it would only assist with the plan management function. In an alternate arrangement, CMS plans to allow the other 8 of these 15 FFE states (Kansas, Maine, Montana, Nebraska, Ohio, South Dakota, Utah, and Virginia) to assist with the plan management function. In the remaining 19 FFE states, CMS plans to operate all functions of an FFE without states’ assistance for plan year 2014. (See fig. 1 for a map of exchange arrangements for 2014.) Some states also informed CMS of whether or not they chose to carry out certain other activities related to the exchanges. First, CMS officials said that all states with an FFE are to notify CMS whether or not their relevant state agencies will determine Medicaid/CHIP eligibility for individuals who submit applications to the FFE or if the states will delegate this function to the FFE. As of May 2, 2013, CMS officials indicated that none of the 34 FFE states had notified CMS as to whether they would conduct Medicaid/CHIP eligibility determinations rather than delegate this responsibility to CMS. CMS officials indicated that states do not have a deadline for notifying CMS of their decisions in this area, but would have to do so before initial enrollment on October 1, 2013. Second, states notified CMS as to whether they would operate a transitional reinsurance program. CMS indicated that for plan year 2014, two state-based exchange states—Connecticut and Maryland—notified CMS that they would each operate a transitional reinsurance program, leaving CMS to operate programs in the remaining 49 states. Planned CMS and State Activities to Establish Exchanges Have Evolved Recently and May Continue to Change The activities that CMS and the states each plan to carry out to establish the exchanges have evolved recently.
CMS was required to certify or conditionally approve any 2014 state-based exchanges by January 1, 2013. CMS extended application deadlines leading up to that date to provide states with additional time to determine whether they would operate a state-based exchange. On November 9, 2012, CMS indicated that in response to state requests for additional time, it would extend the deadline for submission of the Blueprint application for states that wished to operate state-based exchanges in 2014 by a month to December 14, 2012. The agency noted that this extension would provide states with additional time for technical support in completing the application. At the same time, the agency extended the application deadline for states interested in participating in a partnership exchange by about 3 months to February 15, 2013. In addition, the option for FFE states to participate in an alternative arrangement to provide plan management assistance to the FFE was made available to states by CMS in late February. CMS did not provide states with an explicit deadline for them to indicate their intent to participate in this arrangement, but CMS officials said April 1, 2013, was a natural deadline because issuers of health coverage had to know by then to which entity—CMS or the state—to submit health plan data for QHP certification. The specific activities CMS will undertake in each of the state-based and partnership exchanges may continue to change if states do not make adequate progress toward completion of their required activities. When CMS granted conditional approval to states, it was contingent on states meeting several conditions, such as obtaining authority to undertake exchange activities and completing several required activities by specified target dates. For example, in April 2013, CMS officials indicated that Michigan—a state that had been conditionally approved by CMS in March to participate in a partnership exchange—had not been able to obtain passage of legislation allowing the state to use federal grant funds to pay for exchange activities, which had been a requirement of its conditional approval. As of May 2, 2013, CMS officials expected that Michigan would remain a partnership exchange state, but indicated that Michigan may not be able to conduct consumer assistance without funding authority. They noted, however, that a final decision about Michigan’s responsibilities had not been determined. In addition, on May 10, 2013, CMS indicated that it intended to allow Utah’s exchange, which was conditionally approved as a state-based exchange in January 2013, to now be an FFE. Officials indicated that final approval for state-based and partnership exchanges will not be granted until the states have succeeded in completing required activities, and that some of these exchanges may still be under conditional approval when enrollment begins on October 1, 2013. Agency officials indicated that they are working with each state to develop mitigation strategies to ensure that all applicable exchange functions are operating in each state on October 1, 2013. CMS officials said that they are assessing the readiness of each state as interim deadlines approach. For example, issuers began submitting applications to exchanges for QHP certification on April 1, 2013. Therefore, CMS officials said that they began assessing state readiness for this activity in March 2013. 
They also indicated that CMS is doing this kind of assessment for each state as deadlines approach for other functions—such as eligibility and enrollment, and consumer assistance. If a state is not ready to carry out a specific responsibility, CMS officials said the agency will support the state in this area. As of May 2, 2013, CMS had not granted final approval to any state to operate a state-based exchange or participate in a partnership exchange. If any state conditionally approved to operate a state-based exchange or to participate in a partnership exchange does not make adequate progress toward implementing all required activities, CMS has indicated that it would carry out more exchange functions in that state. CMS officials indicated that exchanges receiving this assistance would retain their status as a state-based or partnership exchange. CMS Completed Many Activities to Establish FFEs and the Data Hub by the Beginning of Open Enrollment, Although Completion of Some Activities Was Behind Schedule CMS has completed many activities necessary to establish FFEs and the data hub. The agency established targeted completion dates for the many activities that remain to be completed by the beginning of initial enrollment on October 1, 2013, and certain activities were behind schedule. CMS Issued Numerous Regulations and Guidance Necessary to Establish FFEs, and Made Progress in Each of the Core Exchange Functions and in Developing the Data Hub CMS issued numerous regulations and guidance that it has said are necessary to set a framework within which the federal government, states, issuers of health coverage, and others can participate in the exchanges. For example, in March 2012, the agency issued a final rule regarding implementation of exchanges under PPACA, and in February 2013, it issued a final rule setting forth minimum standards that all health insurance issuers, including QHPs seeking certification on a state-based exchange or FFE, have to meet. The March 2012 rule, among other things, sets forth the minimum federal standards that state-based exchanges and FFEs must meet and outlines the process a state must follow to transition between types of exchanges. The February 2013 rule specifies benefit design standards that QHPs must meet to obtain certification. That rule also established a timeline for QHPs to be accredited in FFEs. CMS also issued a proposed rule related to the Navigator program on April 5, 2013. This rule proposes conflict-of-interest, training, and certification standards that will apply to Navigators in FFEs. CMS officials expected to issue this final rule prior to initial enrollment. CMS officials indicated that before initial enrollment begins in October 2013, they would propose an additional rule that would set forth exchange oversight and records retention requirements, among other things. On June 14, 2013, CMS released this proposed rule, which will be published in the Federal Register on June 19, 2013. CMS also issued guidance specifically related to the establishment of FFEs and partnership exchanges to assist states seeking to participate in a partnership exchange and issuers seeking to offer QHPs in an FFE, including a partnership exchange. For example, the agency issued general guidance on FFEs and partnership exchanges in May 2012 and January 2013, respectively. On April 5, 2013, the agency issued guidance to issuers of health coverage seeking to offer QHPs through FFEs or partnership exchanges.
In addition to establishing the basic exchange framework for state-based exchanges and FFEs, including partnership exchanges, CMS also completed activities needed to establish the core FFE functions—eligibility and enrollment, including the data hub; plan management; and consumer assistance. (See table 1.) CMS established timelines to track its completion of the remaining activities necessary to establish FFEs. CMS has many key activities remaining to be completed across the core exchange functions—eligibility and enrollment, including development and implementation of the data hub; plan management; and consumer assistance. In addition, the agency established targeted completion dates for the required activities that states must perform in order for CMS to establish partnership exchanges in those states. However, the completion of certain activities was behind schedule. CMS expects to complete development and testing of the information technology systems necessary for FFEs to determine eligibility for enrollment into a QHP and to enroll individuals by October 1, 2013, when enrollment is scheduled to begin for the 2014 plan year. As of April 2013, CMS indicated that it still needed to complete some steps to enable FFEs to be ready to test development of key eligibility and enrollment functions, including calculation of advance payments of the premium tax credits and cost-sharing subsidies, verification of consumer income, and verification of citizenship or lawful presence. CMS indicated that these steps will be completed in July 2013. For one activity—the capacity to process applications and updates from applicants and enrollees through all channels, including in person, online, mail, and phone—CMS estimated that the system will be ready by October 1, 2013. CMS officials said that redeterminations of consumer eligibility for coverage will not occur until the middle of 2014. Effective use of the FFEs’ eligibility and enrollment systems is dependent upon CMS’s ability to provide the data needed to carry out eligibility determination and enrollment activities through the implementation of the data hub. According to program officials, CMS established milestones for completing the development of required data hub functionality by July 2013, and for full implementation and operational readiness by September 2013. Project schedules reflect the agency’s plans to provide users access to the hub for near real-time data verification services by October 1, 2013. Agency officials stated that ongoing development and testing activities are expected to be completed to meet the October 1, 2013, milestone. Additionally, CMS has begun to establish technical, security, and data sharing agreements with federal partner agencies and states, as required by department-level system development processes. These include Business Service Definitions (BSDs), which describe the activities, data elements, message formats, and other technical requirements that must be met to develop, test, and implement capabilities for electronically sharing the data needed to provide various services, such as income and Social Security number verification.
Computer Matching Agreements, which establish approval for data exchanges between various agencies’ systems and define any personally identifiable information the connecting entity may access through its connection to the data hub; and Data Use Agreements, which establish the legal and program authority that governs the conditions, safeguards, and procedures under which federal or state agencies agree to use data. For example, CMS officials stated that they established Data Use Agreements with OPM and the Peace Corps in April 2013 and completed BSDs by mid-June. Additionally, these officials plan to obtain final approval of Computer Matching Agreements with IRS, SSA, DHS, VHA, and DOD by July 2013. CMS began conducting both internal and external testing for the data hub in October 2012, as planned. The internal testing includes software development and integration tests of the agency’s systems, and the external testing begun in October included secured communication and functionality testing between CMS and IRS. These testing activities were scheduled to be completed in May 2013. CMS has also begun to test capabilities to establish connection and exchange data with other federal agencies and the state agencies that provide information needed to determine applicants’ eligibility to enroll in a QHP or for income-based financial subsidies, such as advance premium tax credits and cost-sharing assistance, Medicaid, or CHIP. For example, CMS officials stated that testing with 11 states began on March 20, 2013, and with five more states in April. They also stated that, although originally scheduled to begin in April, testing with SSA, DHS, VHA, and Peace Corps started early in May 2013 and that testing with OPM and DOD was scheduled to begin in July 2013. Additionally, CMS recently completed risk assessments and plans for mitigating identified risks that, if they materialize, could negatively affect the successful development and implementation of the data hub. While CMS stated that the agency has thus far met project schedules and milestones for establishing agreements and developing the data hub, several critical tasks remain to be completed before the October 1, 2013, implementation milestone. (See fig. 2.) According to CMS officials and the testing timeline: Service Level Agreements (SLA) between CMS and the states, which define characteristics of the system once it is operational, such as transaction response time and days and hours of availability, are planned to be completed in July 2013; SLAs between CMS and its federal partner agencies that provide verification data are expected to be completed in July 2013; and external testing with all federal partner agencies and all states is to be completed by the beginning of September 2013. The activities that remain for CMS to implement the plan management function primarily relate to the review and certification of the QHPs that will be offered in the FFEs. CMS has set time frames that it anticipates will allow it to certify and upload QHP information to the exchange website in time for initial enrollment. CMS indicated that its system for issuers of health coverage to submit applications for QHP certification was available by April 1, 2013, and issuers were to submit their applications by May 3, 2013. Once received, CMS, with the assistance of its contractor, expects to evaluate and certify health plans as QHPs by July 31, 2013.
CMS will then allow issuers to preview and approve QHP information that will be presented on the exchange website by August 26, 2013. CMS then expects to finalize the QHP information and load it into the exchange website by September 15, 2013. For those 15 FFEs for which states will assist with the plan management function, CMS will rely on the states to ensure the exchanges are ready by October 2013. In contrast to other FFE states in which CMS manages all aspects of the QHP application and certification process, these 15 states were to evaluate health issuer plan applications to offer a QHP in the exchange and submit recommendations to CMS regarding the plans to be certified as QHPs. CMS indicated that the states are expected to submit their recommendations by July 31, 2013, which is also when CMS expects to complete its evaluation of QHPs for the other FFE states. (See fig. 3.) CMS has yet to complete many activities related to consumer assistance and outreach, and some initial steps were behind schedule. Specifically, several steps necessary for the implementation of the Navigator program in FFEs have been delayed by about 2 months. CMS had planned to issue the funding announcement for the Navigator program in February 2013 and have two rounds of awards, in June and September 2013. However, the announcement was delayed until April 9, 2013, and CMS officials indicated that there would be one round of awards, with an anticipated award date of August 15, 2013. CMS did not indicate the number of awards it expected to make, but noted that it expects that at least two types of applicants will receive awards in each of the 34 FFE states, and at least one will be a community or consumer-focused nonprofit organization. CMS officials indicated that, despite these delays, they planned to have Navigator programs operating in each FFE state by October 1, 2013. Before any federally funded in-person assisters, including Navigators, can begin their activities, they will have to be trained and certified. For example, these individuals are required to complete an HHS-approved training program and receive a passing score on all HHS-approved certification exams before they are able to assist with enrollment activities. CMS officials said that the required training for Navigators will be web-based, and it is under development. According to CMS, the Navigator training will be based on the training content that is being developed for agents and brokers in the FFEs and partnership exchanges, which CMS indicates is near completion. In addition, CMS is developing similar web-based training for the state partnership exchange in-person assistance programs. While CMS had planned to begin Navigator training in July 2013, under its current plan, the agency will not have awarded Navigator grants by this date. CMS indicated that it plans to complete development of the training curriculum and certification exam in July or August 2013. CMS officials expected that the training would begin in the summer of 2013, following completion of the curriculum and exam. Each of the six partnership exchange states that CMS conditionally approved to assist with certain consumer assistance responsibilities plans to establish other in-person assistance programs that will operate in addition to Navigator programs in these states. The dates by which the states planned to release applications and select in-person assisters varied. (See fig. 4.) 
For example, according to the conditional approval letters, one partnership exchange state planned to select in-person assisters by March 1, 2013, to begin work by May 15, 2013, while another planned to make that selection by August 1, 2013, to begin work by September 1, 2013. The required activities for five of the states indicated that the states planned to add state-specific modules to the required federal training for Navigators and in-person assisters. As of April 24, 2013, CMS indicated that these six partnership exchange states had made progress, but the completion of some activities was behind schedule. For example, three states that had planned to release the applications to select in-person assisters by April 2013 had done so. While the deadline for most states to select in-person assisters had not passed as of April 24, 2013, there were delays for two states. One state that planned to select in-person assisters by March 15, 2013, delayed that deadline to May 30, 2013, while the other delayed it to June 15. CMS indicated that these delays are not expected to affect the implementation of these programs. However, the state now planning to complete selection by May 30, 2013, had originally planned to begin training assisters in March and begin work May 15, 2013. The second state had planned that in-person assisters would begin work August 1, 2013. CMS and states with partnership exchanges have also begun, and established time frames for, undertaking other outreach and consumer assistance activities that are necessary to implement FFEs. CMS recommended that in-person outreach activities begin in the summer of 2013 to educate consumers in advance of the open enrollment period. Examples of key activities that remain to be completed include the federal call center, healthcare.gov website, media outreach, and the consumer complaint tracking system for the FFEs. While states with partnership exchanges will utilize the federal call center and website, they have established plans for undertaking other outreach and consumer assistance activities. (See table 2.) CMS Spent Almost $394 Million through Contracts to Support Establishment of the FFEs and Data Hub and to Carry Out Certain Other Exchange-Related Activities as of March 2013 CMS data indicated that the agency spent almost $394 million from fiscal year 2010 through March 31, 2013, through contracts to complete activities to establish the FFEs and the data hub and carry out certain other exchange-related activities. CMS officials said that these totals did not include CMS salaries and other administrative costs, but rather reflected the amounts obligated for contract activities. The majority of these obligations, about $248 million (63 percent), were incurred in fiscal year 2012. The sources of the $394 million in funding were three appropriation accounts: HHS’s General Departmental Management Account, CMS’s Program Management Account, and the Health Insurance Reform Implementation Fund. The majority of the funding came from the CMS Program Management Account (66 percent), followed by the Health Insurance Reform Implementation Fund (28 percent). (See fig. 5.) CMS reported that the almost $394 million supported 64 different types of projects through March 31, 2013. The highest volume of obligations related to the development of information technology systems for the FFEs. The 10 largest project types in terms of obligations made through March 31, 2013, accounted for $242.6 million, or 62 percent of the total obligations. (See table 3.)
These activities were carried out by 55 different contractors. Ten of these contractors accounted for $303.4 million (77 percent of total obligations) for activities to support establishment of FFEs and the data hub and carry out certain other exchange-related activities. (See table 4.) Their contracts were for projects related to information technology, the healthcare.gov website, the call center, and technical assistance for the FFEs. For one contract, with CGI Federal, CMS obligated about $88 million for activities to support establishment of the FFEs, such as information technology and technical assistance. For another contract, with Quality Software Services, Inc., CMS obligated about $55 million for related activities, including supporting development of the data hub. (See app. I for each contract by the contractor, the amount obligated, the fiscal year in which funds were obligated, and the source of funding.) Concluding Observations The FFEs, along with the data services hub, are central to the goal under PPACA of having health insurance exchanges operating in each state by 2014, and of providing a single point of access to the health insurance market for individuals. Their development has been a complex undertaking, involving the coordinated actions of multiple federal, state, and private stakeholders, and the creation of an information system to support connectivity and near real-time data sharing between health insurance exchanges and multiple federal and state agencies. Much progress has been made in establishing the regulatory framework and guidance required for this undertaking, and CMS is currently taking steps to implement key activities of the FFEs and to develop, test, and implement the data hub. Nevertheless, much remains to be accomplished within a relatively short amount of time. CMS’s timelines and targeted completion dates provide a roadmap to completion of the required activities by the start of enrollment on October 1, 2013. However, certain factors, such as the still-unknown and evolving scope of the exchange activities CMS will be required to perform in each state, and the large number of activities remaining to be performed—some close to the start of enrollment—suggest a potential for implementation challenges going forward. And while the missed interim deadlines may not affect implementation, additional missed deadlines closer to the start of enrollment could do so. CMS recently completed risk assessments and plans for mitigating identified risks associated with the data hub, and is also working on strategies to address state preparedness contingencies. Whether CMS’s contingency planning will assure the timely and smooth implementation of the exchanges by October 2013 cannot yet be determined. Agency Comments We received comments from HHS on a draft of this report (see app. II). HHS emphasized the progress it has made in establishing exchanges since PPACA became law, and expressed its confidence that on October 1, 2013, exchanges will be open and functioning in every state. HHS also provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the Secretary of Health and Human Services and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have questions about this report, please contact John E. Dicken at (202) 512-7114 or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Appendix I: Contractors Supporting the Federally Facilitated Exchanges and Data Hub and Amounts Obligated Table 5 provides information on the amounts the Department of Health and Human Services’ (HHS) Centers for Medicare & Medicaid Services (CMS) obligated for contract activities to support the establishment of the federally facilitated exchanges (FFE) and the data hub and carry out certain other exchange-related activities by individual contractors. The funds were obligated from fiscal year 2010 through March 31, 2013. The information presented in this table was obtained from CMS. Due to the large number of contractors, we did not edit the information to correct typographical or grammatical errors, or clarify the information provided. We reprinted the abbreviations and acronyms provided by CMS. Appendix II: Comments from the Department of Health and Human Services Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact name above, Randy Dirosa and Teresa Tucker, Assistant Directors; Tonia Brown; Sandra George; Jawaria Gilani; William Hadley; Thomas Murphy; and Laurie Pachter made key contributions to this report.
The Patient Protection and Affordable Care Act required the establishment in all states of exchanges—marketplaces where eligible individuals can compare and select health insurance plans. CMS must oversee the establishment of exchanges, including approving states to operate one or establishing and operating one itself in states that will not do so. CMS will approve states to assist it in carrying out certain FFE functions. CMS will also operate an electronic data hub to provide eligibility information to the exchanges and state agencies. Enrollment begins on October 1, 2013, with coverage effective January 1, 2014. GAO was asked to examine CMS’s role and preparedness to establish FFEs and the data hub. In this report, GAO describes (1) the federal government’s role in establishing FFEs for operation in 2014 and state participation in that effort; and (2) the status of federal and state actions taken and planned for FFEs and the data hub. GAO reviewed regulations and guidance issued by CMS and documents indicating the activities that the federal government and states are expected to carry out for these exchanges. GAO also reviewed planning documents CMS used to track the implementation of federal and state activities, including documents describing the development and implementation of the data hub. GAO also interviewed CMS officials responsible for establishment of the exchanges. GAO relied largely on documentation provided by CMS—including information CMS developed based on its contacts with the states—regarding the status of the exchanges and did not interview or collect information directly from states. The Centers for Medicare & Medicaid Services (CMS) will operate a health insurance exchange in the 34 states that will not operate a state-based exchange for 2014. Of these 34 federally facilitated exchanges (FFE), 15 are in states expected to assist CMS in carrying out certain FFE functions. However, the activities that CMS plans to carry out in these 15 exchanges, as well as in the state-based exchanges, have evolved and may continue to change. For example, CMS approved states' exchange arrangements on the condition that they ultimately complete activities necessary for exchange implementation. CMS indicated that it would carry out more exchange functions if any state did not adequately progress towards implementation of all required activities. CMS completed many activities necessary to establish FFEs by October 1, 2013, although many remain to be completed and some were behind schedule. CMS issued numerous regulations and guidance and took steps to establish processes and data systems necessary to operate the exchanges. The activities remaining cross the core exchange functional areas of eligibility and enrollment, plan management, and consumer assistance. To support consumer-eligibility determinations, for example, CMS is developing a data hub that will provide electronic, near real-time access to federal data, as well as provide access to state and third party data sources needed to verify consumer-eligibility information. While CMS has met project schedules, several critical tasks, such as final testing with federal and state partners, remain to be completed. For plan management, CMS must review and certify the qualified health plans (QHP) that will be offered in the FFEs. 
Though the system used to submit applications for QHP certification was operational during the anticipated time frame, several key tasks regarding plan management, including certification of QHPs and inclusion of QHP information on the exchange websites, remain to be completed. In the case of consumer assistance, for example, funding awards for Navigators--a key consumer assistance program--have been delayed by about 2 months, which has delayed training and other activities. CMS is also depending on the states to implement specific FFE exchange functions, and CMS data show that many state activities remained to be completed and some were behind schedule. Much progress has been made, but much remains to be accomplished within a relatively short amount of time. CMS's timelines provide a roadmap to completion; however, factors such as the still-evolving scope of CMS's required activities in each state and the many activities yet to be performed--some close to the start of enrollment--suggest a potential for challenges going forward. And while the missed interim deadlines may not affect implementation, additional missed deadlines closer to the start of enrollment could do so. CMS recently completed risk assessments and plans for mitigating risks associated with the data hub, and is also working on strategies to address state preparedness contingencies. Whether these efforts will assure the timely and smooth implementation of the exchanges by October 2013 cannot yet be determined. In commenting on a draft of this report, the Department of Health and Human Services emphasized the progress it has made in establishing exchanges, and expressed its confidence that exchanges will be open and functioning in every state by October 1, 2013.
GAO_GAO-10-822T
Opportunities for Strengthening Interagency Collaboration National security threats have evolved and require involvement beyond the traditional agencies of DOD, the Department of State, and USAID. The Departments of Homeland Security, Energy, Justice, the Treasury, Agriculture, Commerce, and Health and Human Services are now a bigger part of the equation. What has not yet evolved are the mechanisms that agencies use to coordinate national security activities such as developing overarching strategies to guide planning and execution of missions, or sharing and integrating national security information across agencies. The absence of effective mechanisms can be a hindrance to achieving national security objectives. Within the following key areas, a number of challenges exist that limit the ability of U.S. government agencies to work collaboratively in responding to national security issues. Our work has also identified actions that agencies can take to enhance collaboration. Developing and Implementing Overarching, Integrated Strategies to Achieve National Security Objectives Although some agencies have developed or updated overarching strategies on national security-related issues, our work has identified cases where U.S. efforts have been hindered by the lack of information on roles and responsibilities of organizations involved or the lack of mechanisms to coordinate their efforts. National security challenges covering a broad array of areas, ranging from preparedness for an influenza pandemic to Iraqi governance and reconstruction, have necessitated using all elements of national power—including diplomatic, military, intelligence, development assistance, economic, and law enforcement support. These elements fall under the authority of numerous U.S. government agencies, requiring overarching strategies and plans to enhance agencies’ abilities to collaborate with each other. Strategies can help agencies develop mutually reinforcing plans and determine activities, resources, processes, and performance measures for implementing those strategies. The Government Performance and Results Act (GPRA) provides a strategic planning and reporting framework intended to improve federal agencies’ performance and hold them accountable for achieving results. Effective implementation of GPRA’s results-oriented framework requires, among other things, that agencies clearly establish performance goals for which they will be held accountable, measure progress towards those goals, and determine strategies and resources to effectively accomplish the goals. Furthermore, defining organizational roles and responsibilities and mechanisms for coordination in these strategies can help agencies clarify who will lead or participate in which activities and how decisions will be made. It can also help them organize their individual and joint efforts, and address how conflicts would be resolved. Our prior work, as well as that by national security experts, has found that strategic direction is required as a foundation for collaboration toward national security goals. We have found that, for example, in the past, multiple agencies, including the State Department, USAID, and DOD, led separate efforts to improve the capacity of Iraq’s ministries to govern, without overarching direction from a lead entity to integrate their efforts. Since 2007, we have testified and reported that the lack of an overarching strategy contributed to U.S. 
efforts not meeting the goal for key Iraqi ministries to develop the capacity to effectively govern and assume increasing responsibility for operating, maintaining, and further investing in reconstruction projects. We recommended that the Department of State, in consultation with the Iraqi government, complete an overall strategy for U.S. efforts to develop the capacity of the Iraqi government. State recognized the value of such a strategy but expressed concern about conditioning further capacity development investment on completion of such a strategy. Moreover, our work on the federal government’s pandemic influenza preparedness efforts found that the Departments of Homeland Security and Health and Human Services share most federal leadership roles in implementing the pandemic influenza strategy and supporting plans; however, we reported that it was not clear how this would work in practice because their roles are unclear. The National Strategy for Pandemic Influenza and its supporting implementation plan describes the Secretary of Health and Human Services as being responsible for leading the medical response in a pandemic, while the Secretary of Homeland Security would be responsible for overall domestic incident management and federal coordination. However, since a pandemic extends well beyond health and medical boundaries—to include sustaining critical infrastructure, private-sector activities, the movement of goods and services across the nation and the globe, and economic and security considerations—it is not clear when, in a pandemic, the Secretary of Health and Human Services would be in the lead and when the Secretary of Homeland Security would lead. This lack of clarity on roles and responsibilities could lead to confusion or disagreements among implementing agencies that could hinder interagency collaboration. Furthermore, a federal response could be slowed as agencies resolve their roles and responsibilities following the onset of a significant outbreak. We have also issued reports recommending that U.S. government agencies, including DOD, the State Department, and others, develop or revise strategies to incorporate desirable characteristics for strategies for a range of programs and activities. These include humanitarian and development efforts in Somalia, the Trans-Sahara Counterterrorism Partnership, foreign assistance strategy, law enforcement agencies’ role in assisting foreign nations in combating terrorism, and meeting U.S. national security goals in Pakistan’s Federally Administered Tribal Areas. In commenting on drafts of those reports, agencies generally concurred with our recommendations. Officials from one organization—the National Counterterrorism Center—noted that at the time of our May 2007 report on law enforcement agencies’ role in assisting foreign nations in combating terrorism, it had already begun to implement our recommendations. Creating Collaborative Organizations That Facilitate Integrated National Security Approaches Organizational differences—including differences in agencies’ structures, planning processes, and funding sources—can hinder interagency collaboration. Agencies lack adequate coordination mechanisms to facilitate this collaboration during planning and execution of programs and activities. U.S. government agencies, such as the Department of State, USAID, and DOD, among others, spend billions of dollars annually on various diplomatic, development, and defense missions in support of national security. 
Achieving meaningful results in many national security-related interagency efforts requires coordination among various actors across federal agencies; foreign, state, and local governments; nongovernment organizations; and the private sector. Given the number of agencies involved in U.S. government national security efforts, it is important that there be mechanisms to coordinate across agencies. Without such mechanisms, the results can be a patchwork of activities that waste scarce funds and limit the overall effectiveness of federal efforts. DOD’s regional combatant commands and the State Department’s regional bureaus provide a good example of how agencies involved in national security activities define and organize their regions differently. The two are aligned differently in terms of the geographic areas they cover, as shown in figure 1. As a result of differing structures and areas of coverage, coordination becomes more challenging and the potential for gaps and overlaps in policy implementation is greater. Moreover, funding for national security activities is budgeted for and appropriated by agency, rather than by functional area (such as national security), resulting in budget requests and congressional appropriations that tend to reflect individual agency concerns. In addition to regional bureaus, the State Department is organized to interact through U.S. embassies located within other countries. As a result of these differing structures, our prior work and that of national security experts has found that agencies must coordinate with a large number of organizations in their regional planning efforts, potentially creating gaps and overlaps in policy implementation and leading to challenges in coordinating efforts among agencies. Given the differences among U.S. government agencies, developing adequate coordination mechanisms is critical to achieving integrated approaches. In some cases, agencies have established effective mechanisms. For example, DOD’s U.S. Africa Command had undertaken efforts to integrate personnel from other U.S. government agencies into its command structure because the command is primarily focused on strengthening security cooperation with African nations and creating opportunities to bolster the capabilities of African partners, which are activities that traditionally require coordination with other agencies. However, in other cases, challenges remain. For example, we reported in May 2007 that DOD had not established adequate mechanisms to facilitate and encourage interagency participation in the development of military plans developed by the combatant commanders. Furthermore, we noted that inviting interagency participation only after plans have been formulated is a significant obstacle to achieving a unified government approach in the planning effort. In that report, we suggested that Congress require DOD to develop an action plan and report annually on steps being taken to achieve greater interagency participation in the development of military plans. Moreover, we reported in March 2010 that DOD has many strategy, policy, and guidance documents on interagency coordination of its homeland defense and civil support mission; however, DOD entities do not have fully or clearly defined roles and responsibilities because key documents are outdated, are not integrated, or are not comprehensive.
More specifically, conflicting directives assigned overlapping law enforcement support responsibilities to three different DOD entities, creating confusion as to which DOD office is actually responsible for coordinating with law enforcement agencies. DOD’s approach to identifying roles and responsibilities and day-to-day coordination processes could also be improved by providing relevant information in a single, readily accessible source. Such a source could take a variety of formats, such as a handbook or a Web-based tool, and could provide both DOD and other agencies a better understanding of each other as federal partners and enable a unified and institutionalized approach to interagency coordination. We recommended, and DOD agreed, that the department update and integrate its strategy, policy, and guidance; develop a partner guide; and implement key practices for management of homeland defense and civil support liaisons. We have reported other instances in which mechanisms are not formalized or fully utilized. For example, we found that collaboration between DOD’s Northern Command and an interagency planning team on the development of the command’s homeland defense plan was largely based on the dedicated personalities involved and informal meetings. We concluded that, without formalizing and institutionalizing the interagency planning structure, efforts to coordinate may not continue when personnel move on to their next assignments. We made several recommendations, with which DOD generally concurred, that the department take actions to address the challenges it faces in its planning and interagency coordination efforts. In recent years we have issued reports recommending that the Secretaries of Defense, State, and Homeland Security and the Attorney General take a variety of actions to address the creation of collaborative organizations, including actions to provide implementation guidance to facilitate interagency participation and develop clear guidance and procedures for interagency efforts, develop an approach to overcome differences in planning processes, create coordinating mechanisms, and clarify roles and responsibilities. In commenting on drafts of those reports, agencies generally concurred with our recommendations. In some cases, agencies identified planned actions to address the recommendations. For example, in our April 2008 report on U.S. Northern Command’s plans, we recommended that clear guidance be developed for interagency planning efforts, and DOD stated that it had begun to incorporate such direction in its major planning documents and would continue to expand on this guidance in the future. Developing a Well-Trained Workforce Federal agencies do not always have the right people with the right skills in the right jobs at the right time to meet the challenges they face, including having a workforce that is able to quickly address crises. As the threats to national security have evolved over the past decades, so have the skills needed to prepare for and respond to those threats. To effectively and efficiently address today’s national security challenges, federal agencies need a qualified, well-trained workforce with the skills and experience that can enable them to integrate the diverse capabilities and resources of the U.S. government. Our work has found that personnel often lack knowledge of the processes and cultures of the agencies with which they must collaborate.
Some federal government agencies lack the personnel capacity to fully participate in interagency activities and some agencies do not have the necessary capabilities to support their national security roles and responsibilities. For example, in June 2009, we reported that DOD lacks a comprehensive strategic plan for addressing its language skills and regional proficiency capabilities. Moreover, as of September 2009, we found that 31 percent of the State Department’s generalists and specialists in language-designated positions did not meet the language requirements for their positions, an increase from 29 percent in 2005. Similarly, we reported in September 2008 that USAID officials at some overseas missions told us that they did not receive adequate and timely acquisition and assistance support at times, in part because the numbers of USAID staff were insufficient or because the USAID staff lacked necessary competencies. We also reported in February 2009 that U.S. Africa Command has faced difficulties integrating interagency personnel into its command. According to DOD and Africa Command officials, integrating personnel from other U.S. government agencies is essential to achieving Africa Command’s mission because it will help the command develop plans and activities that are more compatible with those agencies. However, the State Department, which faced a 25 percent shortfall in midlevel personnel, told Africa Command that it likely would not be able to fill the command’s positions due to personnel shortages. DOD has a significantly larger workforce than other key agencies involved in national security activities as shown in figure 2. Furthermore, agencies’ personnel systems often do not recognize or reward interagency collaboration, which could diminish agency employees’ interest in serving in interagency efforts. In June 2009 we reviewed compensation policies for six agencies that deployed civilian personnel to Iraq and Afghanistan, and reported that variations in policies for such areas as overtime rate, premium pay eligibility, and deployment status could result in monetary differences of tens of thousands of dollars per year. The Office of Personnel Management acknowledged that laws and agency policy could result in federal government agencies paying different amounts of compensation to deployed civilians at equivalent pay grades who are working under the same conditions and facing the same risks. In another instance, we reported in April 2009 that officials from the Departments of Commerce, Energy, Health and Human Services, and the Treasury stated that providing support for State Department foreign assistance program processes creates an additional workload that is neither recognized by their agencies nor included as a factor in their performance ratings. Various tools can be useful in helping agencies to improve their ability to more fully participate in collaboration activities. For example, increasing training opportunities can help personnel develop the skills and understanding of other agencies’ capabilities. We have previously testified that agencies need to have effective training and development programs to address gaps in the skills and competencies that they identified in their workforces. Moreover, we issued a report in April 2010 on DOD’s Horn of Africa task force, which found that DOD personnel did not always understand U.S. embassy procedures in carrying out their activities. 
This resulted in a number of cultural missteps in Africa because personnel did not understand local religious customs and may have unintentionally burdened embassies that must continuously train new staff on procedures. We recommended, and DOD agreed, that the department develop comprehensive training guidance or a program that augments personnel’s understanding of African cultural awareness and working with interagency partners. Training and developing personnel to fill new and different roles will play a crucial part in the federal government’s endeavors to meet its transformation challenges. Also, focusing on strategic workforce planning can support agencies’ efforts to secure the personnel resources needed to collaborate in interagency missions. We have found that tools like strategic workforce planning and human capital strategies are integral to managing resources as they enable an agency to define staffing levels, identify critical skills needed to achieve its mission, and eliminate or mitigate gaps between current and future skills and competencies. In recent years we have recommended that the Secretaries of State and Defense, the Administrator of USAID, and the U.S. Trade Representative take a variety of actions to address the human capital issues discussed above, such as staffing shortfalls, training, and strategic planning. Specifically, we have made recommendations to develop strategic human capital management systems and undertake strategic human capital planning, include measurable goals in strategic plans, identify the appropriate mix of contractor and government employees needed and develop plans to fill those needs, seek formal commitments from contributing agencies to provide personnel to meet interagency personnel requirements, develop alternative ways to obtain interagency perspectives in the event that interagency personnel cannot be provided due to resource limitations, develop and implement long-term workforce management plans, and implement a training program to ensure employees develop and maintain needed skills. In commenting on drafts of those reports, agencies generally concurred with our recommendations. In some cases, agencies identified planned actions to address the recommendations. For example, in our April 2009 report on foreign aid reform, we recommended that the State Department develop a long-term workforce management plan to periodically assess its workforce capacity to manage foreign assistance. The State Department noted in its comments that it concurred with the idea of further improving employee skill sets and would work to encourage and implement further training. Sharing and Integrating National Security Information Across Agencies U.S. government agencies do not always share relevant information with their national security partners due to a lack of clear guidelines for sharing information and security clearance issues. The timely dissemination of information is critical for maintaining national security. Federal, state, and local governments and private-sector partners are making progress in sharing terrorism-related information. For example, we reported in October 2007 that most states and many local governments had established fusion centers—collaborative efforts to detect, prevent, investigate, and respond to criminal and terrorist activity—to address gaps in information sharing. 
However, we found that non-DOD personnel could not access some DOD planning documents or participate in planning sessions because they may not have had the proper security clearances. Moreover, because of concerns about agencies’ ability to protect shared information or use that information properly, other agencies and private-sector partners may be hesitant to share information. For example, we have reported that Department of Homeland Security officials expressed concerns about sharing terrorism-related information with state and local partners because such information had occasionally been posted on public Internet sites or otherwise compromised. To facilitate information sharing, it is important to establish clear guidelines, agreements, and procedures that govern key aspects, such as how information will be communicated, who will participate in interagency information-sharing efforts, and how information will be protected. When agencies do share information, managing and integrating information from multiple sources presents challenges regarding redundancies in information sharing, unclear roles and responsibilities, and data comparability. For example, we reported in December 2008 that in Louisiana, reconstruction project information had to be repeatedly resubmitted separately to state and Federal Emergency Management Agency officials during post-Hurricane Katrina reconstruction efforts because the system used to track project information did not facilitate the exchange of documents. Information was sometimes lost during this exchange, requiring state officials to resubmit the information, creating redundancies and duplication of effort. As a result, reconstruction efforts in Louisiana were delayed. In another instance, we reported in October 2008 that biometric data, such as fingerprints and iris images, collected in DOD field activities such as those in Iraq and Afghanistan, were not comparable with data collected by other units or with large federal databases that store biometric data, such as the Department of Homeland Security biometric database or the Federal Bureau of Investigation (FBI) fingerprint database. A lack of comparable data, especially for use in DOD field activities, prevents agencies from determining whether the individuals they encounter are friend, foe, or neutral, and may put forces at risk. Since 2005, we have recommended that the Secretaries of Defense, Homeland Security, and State establish or clarify guidelines, agreements, or procedures for sharing a wide range of national security information, such as planning information, terrorism-related information, and reconstruction project information. We have recommended that such guidelines, agreements, and procedures define and communicate how shared information will be protected; include provisions to involve and obtain information from nonfederal partners in the planning process; ensure that agencies fully participate in interagency information-sharing efforts; identify and disseminate practices to facilitate more effective communication among federal, state, and local agencies; clarify roles and responsibilities in the information-sharing process; and establish baseline standards for data collection to ensure comparability across agencies. In commenting on drafts of those reports, agencies generally concurred with our recommendations. In some cases, agencies identified planned actions to address the recommendations.
For example, in our December 2008 report on the Federal Emergency Management Agency’s public assistance grant program, we recommended that the Federal Emergency Management Agency improve information sharing within the public assistance process by identifying and disseminating practices that facilitate more effective communication among federal, state, and local entities. In comments on a draft of the report, the Federal Emergency Management Agency generally concurred with the recommendation and noted that it was making a concerted effort to improve collaboration and information sharing within the public assistance process. Moreover, agencies have implemented some of our past recommendations. For example, in our April 2006 report on protecting and sharing critical infrastructure information, we recommended that the Department of Homeland Security define and communicate to the private sector what information is needed and how the information would be used. The Department of Homeland Security concurred with our recommendation and, in response, has made available, through its public Web site, answers to frequently asked questions that define the type of information collected and what it is used for, as well as how the information will be accessed, handled, and used by federal, state, and local government employees and their contractors. Importance of Sustained Leadership Underlying the success of these key areas for enhancing interagency collaboration for national security-related activities is committed and effective leadership. Our prior work has shown that implementing large-scale change management initiatives or transformational change—which is what these key areas should be considered—is not a simple endeavor and requires the concentrated efforts of leadership and employees to realize intended synergies and to accomplish new goals. Leadership must set the direction, pace, and tone and provide a clear, consistent rationale for the transformation. Sustained and inspired attention is needed to overcome the many barriers to working across agency boundaries. For example, leadership is important in establishing incentives to promote employees’ interest in serving in interagency efforts. The 2010 National Security Strategy calls for a renewed emphasis on building a stronger leadership foundation for the long term to more effectively advance our interests in the 21st century. Moreover, the strategy identifies key steps for improving interagency collaboration. These steps include more effectively ensuring alignment of resources with our national security strategy, adapting the education and training of national security professionals to equip them to meet modern challenges, and reviewing authorities and mechanisms to implement and coordinate assistance programs and other policies and programs that strengthen coordination. National security experts also note the importance of and need for effective leadership for national security issues. For example, a 2008 report by the Project on National Security Reform notes that the national security system requires skilled leadership at all levels and, to enhance interagency coordination, these leaders must be adept at forging links and fostering partnerships at all levels. Strengthening interagency collaboration—with leadership as the foundation—can help transform U.S. government agencies and create a more unified, comprehensive approach to national security issues at home and abroad. Mr. Chairman, this concludes my prepared remarks.
I would be pleased to respond to any questions you or other Members of the Subcommittee may have. For further information regarding this statement, please contact John H. Pendleton at (202) 512-3489 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs can be found on the last page of this statement. Key contributors to this statement are listed in appendix II. Appendix I: Related GAO Products Defense Management: DOD Needs to Determine the Future of Its Horn of Africa Task Force. GAO-10-504. Washington, D.C.: Apr. 15, 2010. Homeland Defense: DOD Needs to Take Actions to Enhance Interagency Coordination for Its Homeland Defense and Civil Support Missions. GAO-10-364. Washington, D.C.: Mar. 30, 2010. Interagency Collaboration: Key Issues for Congressional Oversight of National Security Strategies, Organizations, Workforce, and Information Sharing. GAO-09-904SP. Washington, D.C.: Sept. 25, 2009. Military Training: DOD Needs a Strategic Plan and Better Inventory and Requirements Data to Guide Development of Language Skills and Regional Proficiency. GAO-09-568. Washington, D.C.: June 19, 2009. Influenza Pandemic: Continued Focus on the Nation’s Planning and Preparedness Efforts Remains Essential. GAO-09-760T. Washington, D.C.: June 3, 2009. U.S. Public Diplomacy: Key Issues for Congressional Oversight. GAO-09-679SP. Washington, D.C.: May 27, 2009. Military Operations: Actions Needed to Improve Oversight and Interagency Coordination for the Commander’s Emergency Response Program in Afghanistan. GAO-09-61. Washington, D.C.: May 18, 2009. Foreign Aid Reform: Comprehensive Strategy, Interagency Coordination, and Operational Improvements Would Bolster Current Efforts. GAO-09-192. Washington, D.C.: Apr. 17, 2009. Iraq and Afghanistan: Security, Economic, and Governance Challenges to Rebuilding Efforts Should Be Addressed in U.S. Strategies. GAO-09-476T. Washington, D.C.: Mar. 25, 2009. Drug Control: Better Coordination with the Department of Homeland Security and an Updated Accountability Framework Can Further Enhance DEA’s Efforts to Meet Post-9/11 Responsibilities. GAO-09-63. Washington, D.C.: Mar. 20, 2009. Defense Management: Actions Needed to Address Stakeholder Concerns, Improve Interagency Collaboration, and Determine Full Costs Associated with the U.S. Africa Command. GAO-09-181. Washington, D.C.: Feb. 20, 2009. Combating Terrorism: Actions Needed to Enhance Implementation of Trans-Sahara Counterterrorism Partnership. GAO-08-860. Washington, D.C.: July 31, 2008. Information Sharing: Definition of the Results to Be Achieved in Terrorism-Related Information Sharing Is Needed to Guide Implementation and Assess Progress. GAO-08-637T. Washington, D.C.: July 23, 2008. Highlights of a GAO Forum: Enhancing U.S. Partnerships in Countering Transnational Terrorism. GAO-08-887SP. Washington, D.C.: July 2008. Stabilization and Reconstruction: Actions Are Needed to Develop a Planning and Coordination Framework and Establish the Civilian Reserve Corps. GAO-08-39. Washington, D.C.: Nov. 6, 2007. Homeland Security: Federal Efforts Are Helping to Alleviate Some Challenges Encountered by State and Local Information Fusion Centers. GAO-08-35. Washington, D.C.: Oct. 30, 2007. Military Operations: Actions Needed to Improve DOD’s Stability Operations Approach and Enhance Interagency Planning. GAO-07-549. Washington, D.C.: May 31, 2007. Combating Terrorism: Law Enforcement Agencies Lack Directives to Assist Foreign Nations to Identify, Disrupt, and Prosecute Terrorists.
GAO-07-697. Washington, D.C.: May 25, 2007. Results-Oriented Government: Practices That Can Help Enhance and Sustain Collaboration among Federal Agencies. GAO-06-15. Washington, D.C.: Oct. 21, 2005. Appendix II: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact name above, Marie Mak, Assistant Director; Laurie Choi; Alissa Czyz; Rebecca Guerrero; and Jodie Sandel made key contributions to this testimony. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Recent terrorist events such as the attempted bomb attacks in New York's Times Square and aboard an airliner on Christmas Day 2009 are reminders that national security challenges have expanded beyond the traditional threats of the Cold War Era to include unconventional threats from nonstate actors. Today's threats are diffuse and ambiguous, making it difficult--if not impossible--for any single federal agency to address them alone. Effective collaboration among multiple agencies and across federal, state, and local governments is critical. This testimony highlights opportunities to strengthen interagency collaboration by focusing on four key areas: (1) developing overarching strategies, (2) creating collaborative organizations, (3) developing a well-trained workforce, and (4) improving information sharing. It is based on GAO's body of work on interagency collaboration. Federal agencies have an opportunity to enhance collaboration by addressing long-standing problems and better positioning the U.S. government to respond to changing conditions and future uncertainties. Progress has been made in enhancing interagency collaboration, but success will require leadership commitment, sound plans that set clear priorities, and measurable goals. The agencies involved in national security will need to make concerted efforts to forge strong and collaborative partnerships, and seek coordinated solutions that leverage expertise and capabilities across communities. Today, challenges exist in four key areas: 1) Developing and implementing overarching strategies. Although some agencies have developed or updated overarching strategies on national security-related issues, GAO's work has identified cases where U.S. efforts have been hindered by the lack of information on roles and responsibilities of organizations involved or coordination mechanisms. 2) Creating collaborative organizations. Organizational differences--including differences in agencies' structures, planning processes, and funding sources--can hinder interagency collaboration. Agencies lack adequate coordination mechanisms to facilitate this collaboration during planning and execution of programs and activities. 3) Developing a well-trained workforce. Agencies do not always have the right people with the right skills in the right jobs at the right time to meet the challenges they face--including having a workforce that is able to quickly address crises. Moreover, agency performance management systems often do not recognize or reward interagency collaboration, and training is needed to understand other agencies' processes or cultures. 4) Sharing and integrating national security information across agencies. U.S. government agencies do not always share relevant information with their national security partners due to a lack of clear guidelines for sharing information and security clearance issues. Additionally, incorporating information drawn from multiple sources poses challenges to managing and integrating that information. Strengthening interagency collaboration--with leadership as the foundation--can help transform U.S. government agencies and create a more unified, comprehensive approach to national security issues at home and abroad.
GAO_GAO-12-69
Background Election Authority The basic goal of the election system in the United States is that all eligible voters have the opportunity to cast their vote and have their valid ballot counted accurately. Election authority is shared by federal, state, and local officials, and the election system is highly decentralized. States are responsible for the administration of their own elections as well as federal elections, and states regulate various aspects of elections including registration procedures, absentee voting requirements, alternative voting methods, establishment of polling places, provision of Election Day workers, testing and certification of voting equipment, and counting and certification of the vote. As the U.S. election system is highly decentralized, primary responsibility for managing, planning, and conducting elections resides locally with about 10,500 local election jurisdictions nationwide. In most states, election responsibility resides at the county level, although some states have delegated election responsibility to subcounty governmental units, such as cities, villages, and townships. These jurisdictions vary widely in size and complexity, ranging from small New England townships to Los Angeles County, where the number of registered voters exceeds that of 42 states. Some states have mandated statewide election administration guidelines and procedures that foster uniformity in the way local jurisdictions conduct elections. Others have guidelines that generally permit local election jurisdictions considerable autonomy and discretion in the way they run elections. Although some states bear some election costs, it is local jurisdictions that pay for elections. According to the Executive Director of the EAC, costs are not tracked in uniform ways because of the decentralized nature of elections and the variation in state and jurisdiction size and funding structures. States can be divided into two groups according to how election responsibilities are delegated. The first group contains 41 states that delegate election responsibilities primarily to the county level, with a few of these states delegating election responsibilities to some cities, and 1 state that delegates these responsibilities to election regions. The second group contains 9 states that delegate election responsibility principally to subcounty governmental units. Federal authority to regulate elections derives from various constitutional sources, depending upon the type of election. Federal legislation has been enacted in major functional areas of the voting process, such as voter registration, absentee voting requirements, accessibility provisions for the elderly and voters with disabilities, and prohibitions against discriminatory voting practices. With regard to the administration of federal elections, Congress has constitutional authority over both presidential and congressional elections, including the timing of federal elections. Under federal statute, the Tuesday after the first Monday in November in an even-numbered year is established as the day for federal congressional elections. Federal statute also sets this same day for the selection of presidential electors—the Tuesday after the first Monday in November in every 4th year succeeding every election of a President and Vice President. In general, these are the federal statutes that the previously pending weekend voting bills would have amended to move the November Tuesday federal Election Day to Saturday and Sunday.
Such a change in federal law would, in effect, likely require states to change their laws and regulations governing the implementation of federal elections to mirror the day(s) established in federal law. Current federal law does not dictate the hours that polling places are required to be open on Election Day. The timing of state and local elections is not mandated by the federal election calendar. Nevertheless, many state and local government officials are also elected on federal Election Day as a matter of convenience and to save costs. According to the EAC, some states and local jurisdictions have held nonfederal elections or primaries on Saturdays, believing that it might be more convenient for voters and, in turn, might increase voter turnout. For example, in Louisiana, all nonfederal elections take place on Saturdays and, in Texas, some nonfederal elections such as general elections for cities and schools take place on Saturdays. From 1978 through 2006, Delaware held local elections, including primaries, on Saturdays. It held its first Saturday presidential primary in 1996. However, according to the EAC, because the Jewish Sabbath is on Saturday and, additionally, the state’s 2002 primary fell on the Jewish New Year, Delaware moved the presidential primary to Tuesday in 2004 and the state primary to Tuesday in 2006. Election System Elements The U.S. election system is based on a complex interaction of people (voters, election officials, and poll workers), process, and technology that must work effectively together to achieve a successful election, as shown in figure 1. The election process is dependent on the citizens who cast ballots; however, election officials and poll workers are also essential to making the system work. State and local election officials are either elected or appointed and are responsible for carrying out federal and state election requirements. This can be a year-round effort. Among other things, election officials register eligible voters and maintain voter registration lists; design ballots; educate voters on how to vote; arrange for polling places; recruit, train, organize, and mobilize poll workers; prepare and test voting equipment for use; count ballots; and certify the final vote count. However, elections also depend on an army of poll workers—about 2 million for a federal election—who are willing to staff the polls on Election Day. Some poll workers are elected, some are appointed by political parties, and some are volunteers. Compensation varies by the level of responsibility of the poll worker and the state or jurisdiction in which they work. As we reported in 2006, increasingly, poll workers are needed with different skills, such as computer or technical skills, and across the country jurisdictions have faced challenges finding poll workers. Voting methods and related technology also play a critical part in the success of an election. Voting methods are tools for accommodating the millions of voters in our nation’s approximately 10,500 local election jurisdictions. Since the 1980s, ballots in the United States, to varying degrees, have been cast and counted using five methods: paper ballots, lever machines, punch cards, optical scan, and direct recording electronic (DRE) machines. Four of these methods involve technology; only the paper ballot system does not. For example, many DREs use computers to present the ballot to the voter, and optical scan and DRE systems depend on computers to tally votes. 
The way voting systems are designed, developed, tested, installed, and operated can lead to a variety of situations where misunderstanding, confusion, error, or deliberate actions by voters or election workers can, in turn, affect the equipment’s performance in terms of accuracy, ease of use, security, reliability, and efficiency. Each of the 50 states and the District has its own election system with a somewhat distinct approach. While election systems vary from one local jurisdiction to another, all involve people, process, and technology, and most have the following elements: Voter registration. Voter registration is not a federal requirement. However, except for North Dakota, all states and the District generally require citizens to register before voting. The deadline for registering and what is required to register varies. At a minimum, state eligibility provisions typically require a person to be a U.S. citizen, at least 18 years of age, and a resident of the state, with some states requiring a minimum residency period. Citizens apply to register to vote in various ways, such as at motor vehicle agencies, by mail, or at local voter registrar offices. Some states allow citizens to register at a polling place on Election Day. Election officials process registration applications and compile and maintain the list of registered voters to be used throughout the administration of an election. Absentee and early voting. Absentee voting is a process that allows citizens the opportunity to vote when they are unable to vote at their precinct on Election Day and is generally conducted by mail. All states and the District have provisions allowing voters to cast their ballot before Election Day by voting absentee with variations on who may vote absentee, whether the voter needs an excuse, and the time frames for applying and submitting absentee ballots. In addition, some states also allow in-person early voting, as discussed later in the report. In general, early voting allows voters from any precinct in the jurisdiction to cast their vote in person without an excuse before Election Day either at one specific location or at one of several locations. Early voting locations have a registration list for the jurisdiction and ballots specific to each precinct. The voter is provided with and casts a ballot designed for his or her assigned precinct. As with absentee voting, the specific circumstances for in-person early voting—such as the dates, times, and locations—are based on state and local requirements. Planning and conducting Election Day activities. Election officials perform a range of activities in preparation for and on Election Day itself. Prior to an election, officials recruit and train poll workers to have the skills needed to perform their Election Day duties, such as opening and closing the polls and operating polling place equipment. Where needed and required, election officials must also recruit poll workers who speak languages other than English. Officials also locate polling places that are to meet basic standards for accessibility and have an infrastructure to support voting machines as well as voter and poll worker needs. They design and produce ballots to meet state requirements and voter language needs, and that identify all election races, candidates, and issues on which voters in each precinct in their jurisdiction will vote. 
Election officials seek to educate voters on topics such as what the ballot looks like, how to use a voting machine, and where their particular polling place is located. Finally, election officials seek to ensure that voting equipment, ballots, and supplies are delivered to polling places. On Election Day, poll workers set up and open the polling places. This can include setting up the voting machines or voting booths, testing equipment, posting required signs and voter education information, and completing paperwork such as confirming that the ballot is correct for the precinct. Before a voter receives a ballot or is directed to a voting machine, poll workers typically are to verify his or her eligibility. Provisional voting. Federal law requires that an individual who asserts that he or she is registered in the jurisdiction in which he or she desires to vote and is eligible to vote in a federal election—but whose name does not appear on the official list of eligible voters for the polling place—be provided a provisional ballot. In addition, provisional ballots are to be provided in elections for federal office to individuals whom an election official asserts to be ineligible to vote, and for court-ordered voting in a federal election after the polls have closed. If individuals are determined to be eligible voters, their provisional ballots are to be counted as votes in accordance with state law, along with other types of ballots, and included in the total election results. Vote counting and certification. Following the close of the polls, election officials and poll workers complete steps to count the votes and determine the outcome of the election. Equipment and ballots are to be secured, and votes are to be tallied or transferred to a central location for counting. The processes used to count or to recount election votes vary with the type of voting equipment used in a jurisdiction, state statutes, and local jurisdiction policies. Votes from Election Day, absentee ballots, early votes (where applicable), and provisional ballots are to be counted and consolidated for each race to determine the outcome. While preliminary results are usually available by the evening of Election Day, the certified results are generally not available until days later. Most States Provided Early or No-Excuse Absentee Voting as Alternatives to Voting on Tuesday in the 2010 General Election For the November 2010 general election, 35 states and the District provided voters at least one alternative to casting their ballot on Election Day through in-person early voting, no-excuse absentee voting, or voting by mail. As shown in figure 2, 33 states and the District provided in-person early voting, 29 states and the District provided no-excuse absentee voting, and 2 states provided voting by mail to all or most voters. In addition, eight of the states and the District with no-excuse absentee voting permitted registered voters to apply for an absentee ballot on a permanent basis so those voters automatically receive an absentee ballot in the mail prior to every election without providing an excuse or reason for voting absentee. Furthermore, the number of states providing these alternatives has increased in recent elections. We previously reported that for the 2004 general election, 24 states and the District required or allowed in-person early voting, 21 states required or allowed no-excuse absentee voting, and 1 state—Oregon—required all voters to vote by mail. Appendix III
compares the alternative voting methods for the 2004 and 2010 general elections, by state. (The figures we previously reported for the 2004 general election are from GAO-06-450 and were based on the results of web-based surveys of the 50 states and the District that we conducted in 2005; see GAO-06-451SP for additional survey results.) Of the nine states and the District where we conducted interviews, all but two states provided voters the option of in-person early voting in the November 2010 general election. Five of the seven states and the District offered both early voting and no-excuse absentee voting. Appendix IV provides additional details of how these seven states and the District implemented these two alternative voting methods for the 2010 general election. The two other states where we conducted interviews—Delaware and New Hampshire—did not provide voters with either of these alternatives, although they allowed voters to vote by absentee ballot if they provided a reason. Not all of the seven states that provided in-person early voting characterized their process as early voting. Five states—California, Illinois, Louisiana, Maryland, and Texas—as well as the District called their process “early voting,” but North Carolina called it “one-stop absentee voting” and Wisconsin called it “in-person absentee voting.” Moreover, the implementation and characteristics of early voting varied among the seven states and, in some cases, among the jurisdictions within a state. Method of voting. In three of the seven states (California, North Carolina, and Wisconsin) where we conducted interviews, voters were allowed to cast their vote in person by using vote-by-mail or absentee ballots during a specified period prior to Election Day. In these states, voters applied for an absentee or vote-by-mail ballot when they went to vote early, received a ballot on the spot, and could then cast their ballot. In contrast, in the other four states and the District, voters cast their ballots using the method voters generally use on Election Day (i.e., DRE or optical scan). Days of early voting. Although the length of the early voting periods ranged from 7 to 30 days in the states we contacted, five of the seven states and the District required local jurisdictions to include at least one Saturday in their early voting period, and two states allowed for some jurisdiction discretion to include weekend days. Of the 14 jurisdictions we contacted that offered an early voting period, 12 included an option for voters to vote on at least one Saturday, and 6 of those jurisdictions also included at least one Sunday. For example, jurisdictions in Maryland offered a 7-day early voting period that ended 4 days before Election Day and included Saturday, but not Sunday. On the other hand, California and Wisconsin allowed voters to cast ballots in person from about 1 month before Election Day through Election Day, and it was up to local discretion whether to include weekends. Hours of early voting. Although seven of the nine states where we conducted interviews included at least 1 day of the weekend in their early voting period, in some jurisdictions the hours available to vote were the same for weekdays and weekends, whereas in others weekend hours were fewer. Sometimes the hours varied by the week of the month. For example, Louisiana, Maryland, and the District required all of their early voting sites to be open the same hours each day—9.5, 10, and 10.5 hours, respectively—Monday through Saturday.
Four states—California, Illinois, North Carolina, and Wisconsin—allowed local jurisdiction discretion to determine the hours of operation for some or all of their early voting sites. Texas used a formula based on county population to determine the number of hours, in addition to the specific days, during which early voting sites must be open. In the two Texas jurisdictions where we conducted interviews, early voting sites were open Monday through Friday for 9 or 10 hours (depending on the county) during the first week of early voting; 12 hours the second week; 12 hours on Saturday; and 5 or 6 hours on Sunday (depending on the county). Number of early voting sites. The number of sites where voters could cast their ballots early, in person, also varied among the states and local jurisdictions where we conducted interviews. For example, in North Carolina there were 297 early voting sites across 100 counties, whereas in Illinois there were 180 early voting sites across 110 counties. Half of the 14 local jurisdictions we contacted that offered early voting provided voters with a single early voting site, with the size of these jurisdictions varying in terms of both registered voter population and square miles. In the 7 jurisdictions that offered more than one early voting site, voters from any precinct in the jurisdiction could cast their ballot at any of that jurisdiction's early voting sites. Types of early voting sites. The 14 local jurisdictions we contacted also used a variety of facilities as early voting sites. In 7 of these jurisdictions, early voting locations included county clerk or election offices, schools, libraries, and community centers, as well as mobile locations. For example, in an effort to make early voting convenient, one county in Illinois provided 30 of the 180 total early voting sites used in the state, consisting of 2 permanent sites and 28 temporary sites. The 2 permanent early voting sites were county clerk offices, and the remaining 28 temporary sites included community centers, libraries, senior living communities, and grocery stores, some of which were serviced by “vote mobiles”—mobile units on wheels that moved from one location to another every few days. In contrast, in each of the 5 local jurisdictions we contacted in California and Wisconsin, the sole early voting site was the local election office. See appendix V for additional details on how the local jurisdictions we contacted implemented in-person early voting for the November 2010 general election. Most Election Officials We Interviewed Expect Greater Difficulty and Costs Associated with a Weekend Election State and local election officials we interviewed about implementing a weekend election most often identified challenges they would anticipate facing in planning and conducting Election Day activities—specifically, finding poll workers and polling places and securing ballots and voting equipment. Election officials told us that they expected few changes to how they register voters, conduct early voting, and provide voting with provisional ballots, but they did identify other challenges with implementing federal elections on a weekend. Most Election Officials Anticipate Finding Poll Workers for a Weekend Election Would Be Difficult and Costly Election officials we interviewed in all nine states, the District, and all 17 local jurisdictions said they would expect that more poll workers would be needed for a 2-day weekend election than for a Tuesday election and that related costs would increase.
Further, officials in 13 of those jurisdictions and the District expected it would be more difficult to recruit a sufficient number of poll workers for a weekend election. We reported in 2006 that even though the number of poll workers needed varies by jurisdiction, having enough qualified poll workers on Election Day is crucial to ensuring that voters are able to successfully cast a vote. Nationwide, the majority of jurisdictions rely on poll workers from past elections to meet their needs, but for each election, officials also recruit new poll workers from other sources such as high schools and colleges, local businesses and organizations, and government agencies. Election officials in three jurisdictions described how changing the day for federal elections to a weekend would negatively affect their ability to draw from the poll workers and sources they have relied on in the past. For example, election officials in one local jurisdiction said that about one-fourth of their approximately 23,000 poll workers for the 2010 general election were county employees and students. A weekend election would essentially end the incentives—paying county employees their salary and excusing students from classes—that the jurisdiction successfully used in the past to attract them to work at the polls on a Tuesday when they would normally be at work or at school. Similarly, election officials from two other jurisdictions that are required by law to provide language assistance to certain groups of voters said that they rely on younger volunteers, such as high school students, to make up the majority of their bilingual poll workers. These officials were concerned that these poll workers would be less likely to volunteer during a weekend election because the incentives used to attract them in the past—exemption from classes—would no longer be viable. Election officials from the other 14 local jurisdictions we interviewed did not express views or provide information specifically on how moving the date of federal elections might affect their ability to recruit from the poll workers and sources they have relied on in the past. (Although we asked election officials in nine states, the District, and 17 local jurisdictions about whether various aspects of the election process might be affected by changing Election Day to a weekend, not all expressed views or provided information on every specific issue discussed throughout this report.) Election officials also identified other factors that they expected might discourage poll workers from volunteering to work during a weekend election. Officials from one jurisdiction said that, based on their past experience with conducting an election on a Saturday, poll worker volunteers are less likely to report to work on the morning of a weekend election than for a Tuesday Election Day. Further, officials from 12 jurisdictions and the District said they would expect poll workers to be less willing or able to work 2 consecutive days of a weekend election due to fatigue, noting that many poll workers are elderly. Officials from one of these jurisdictions stated that many of the 2,350 poll workers who volunteered during the 2010 general election were elderly and unlikely to have the stamina to work 2 consecutive days that could each be 14 or 15 hours long. These officials further voiced concern that poll worker fatigue can lead to increased mistakes. In contrast, election officials we interviewed in 4 local jurisdictions did not anticipate difficulties finding the poll workers that would be needed for a weekend election.
According to election officials in 3 of these jurisdictions, it might be easier to recruit poll workers for a weekend than for a Tuesday because a larger pool of volunteers who work Monday through Friday might be available. In a fourth jurisdiction with experience conducting state and local elections on Saturdays, officials said that while they may need to replace some poll workers who are only able or willing to work 1 day of a weekend election, they would expect that the compensation they offer would be sufficient to attract the number of poll workers needed to work over a weekend. However, election officials from all 17 jurisdictions and the District stated that the costs associated with poll worker pay would increase for a 2-day election, and in all but one jurisdiction, officials anticipated such costs would at least double what they spent in the 2010 general election. In that one jurisdiction, the election official anticipated poll worker costs might increase by about half—but not double—because she expected voter activity would be spread over the course of Saturday and Sunday and, thus, she would need fewer poll workers each day than for a single-day election. Moreover, election officials from 10 of these jurisdictions noted that poll worker costs represented their greatest cost in administering the 2010 general election. For example, officials from one local jurisdiction expected the number of needed poll workers and the related costs to double for a weekend election. They added that poll worker costs were already their greatest election expense, and that such an increase would significantly affect their overall election budget. Furthermore, election officials in this state said that a weekend election would at least double the $2.6 million the state incurred to help jurisdictions pay for nearly 54,000 poll workers statewide in the 2010 general election. Given its financial constraints, these officials questioned whether the state would be able to provide these payments to jurisdictions for the second day of a weekend election. In addition, election officials in three states and 4 jurisdictions noted that they might have to increase the compensation they provide poll workers or consider paying overtime to attract a sufficient number to work during a weekend election. For example, officials from a jurisdiction with fewer than 20 poll workers in the 2010 general election said that their costs for poll worker pay might double or triple for a weekend election because they would expect to need more poll workers and to increase compensation to recruit them successfully. Most Election Officials Expect Difficulty and Some Increased Costs Finding Polling Places for a Weekend Election Election officials we interviewed in 14 of the 17 local jurisdictions—including 5 jurisdictions with experience conducting elections on a Saturday—and the District expected that at least some of the polling places they used in past elections would not be available for a weekend election, and officials in all of those jurisdictions and the District anticipated difficulty finding replacements. Local election officials are responsible for selecting and securing a sufficient number of polling places that meet basic requirements and standards, such as being easily accessible to all voters, including voters with disabilities.
Polling places should also have a basic infrastructure capable of supporting voting machines and be comfortable for voters and poll workers, with sufficient indoor space and parking. The types of facilities used as polling places varied in the jurisdictions where we conducted interviews and included public and private facilities such as places of worship, schools, government buildings, fire departments, community centers, libraries, and residential facilities. Election officials noted potential challenges associated with relying on commonly used polling places on the weekend. Of the 12 jurisdictions and the District that relied on churches or synagogues for at least some of their polling places, election officials in all but one said they would need to find other locations for a weekend election because the places of worship they have relied on as polling places for Tuesday elections are used for religious services or activities on the weekend and, thus, would not be available. For example, in 2 jurisdictions where about half of the 3,067 and 200 polling places, respectively, were churches and synagogues, election officials said that they would not expect those facilities to be available on a weekend and that it would be difficult to find replacements. In contrast, in one jurisdiction with experience conducting state and local elections on a Saturday where about 15 percent of its 127 polling places were churches, election officials said they would expect the majority of those churches to remain available as polling places for a weekend election by using areas of the church not needed for religious services. However, they anticipated that churches would need to make special parking arrangements, as churchgoers and voters would be competing for parking spaces. Officials from 9 jurisdictions and the District explained that other polling places, such as schools and community centers, would also be more difficult to use on the weekend because of scheduled events, such as athletic events, dances, or fairs. For example, officials from one jurisdiction with past experience conducting federal elections on a Saturday stated that they had a harder time finding enough polling places for Saturday voting because fewer locations, such as community centers, were available. Officials stated that, due to conflicts that prevented the use of some facilities, some polling place locations had to change between the presidential primary and the general election in the same election year. They added that, as a result, voters had to be assigned to a different polling place for the general election, which caused problems on Election Day when some of those voters went to the wrong location. In another jurisdiction where almost 70 percent of the 249 polling places in the 2010 general election were schools, officials said they would anticipate problems using schools as weekend polling places because of activities, such as athletic events, that might compete with a weekend election for space and parking. Furthermore, they found it difficult to think of any facilities that they might be able to use as replacements. In contrast, election officials from 5 jurisdictions with past experience conducting state or local elections on Saturdays noted that they might find it easier to use schools as polling places on a weekend than on a Tuesday because students would not be attending classes; having students present on Election Day, when campuses are open to the public, has raised security concerns for some schools and jurisdictions.
Officials from 2 of these jurisdictions acknowledged that schools would still have competing activities on the weekend, but anticipated they could use a different part of the school and employ additional staff to assist with parking and traffic. Regardless of the type of facility that might be unavailable as a weekend polling place, officials in 14 jurisdictions and the District said that finding alternatives would be challenging, if not impossible. In all but one of these jurisdictions, officials pointed out the difficulty in locating alternative polling places that would be accessible to voters with disabilities. For example, according to one local election official, in some precincts the only building that is accessible to voters with disabilities is a church that is already used as a polling place for Tuesday elections, but would not be available on a weekend. Officials in 4 jurisdictions and the District said that, in order to provide a sufficient number of polling places, they might need to consolidate precincts, in which case some voters would likely need to travel farther to vote. However, in the three smallest jurisdictions in which we held interviews, election officials said they would expect the same polling places they used in past elections to still be available if the day of federal elections were moved to a weekend. In two cases, the jurisdictions had a single polling place—a municipal building—and officials would expect to use that building for a weekend election. Officials from the third jurisdiction, which had experience conducting state and local elections on Saturdays, similarly stated that a weekend election would not present a challenge with respect to polling places and that they would expect to use the same 10 facilities—mostly public buildings—as polling places regardless of the day of the week the election is held. Election officials from 13 jurisdictions—including 5 jurisdictions with experience conducting elections on a Saturday—said they would expect costs associated with polling places to increase with a weekend election. Officials in 8 jurisdictions that pay for at least some of the facilities they use as polling places anticipated that rental fees would double because a weekend election would span 2 days. Other officials said they would expect at least some of the facilities that are available at no cost for a Tuesday election to charge a rental fee on the weekend to compensate for potential revenue losses from, for example, not being able to rent their spaces for weddings or other private events. For example, officials from one jurisdiction said that to replace many of their 249 polling places that would be unavailable for a weekend election, they might need to offer higher compensation to attract private facilities that have not previously served as polling places. Furthermore, officials in 11 jurisdictions stated that other costs might increase with a weekend election if facilities that are normally closed on a weekend were opened for a weekend election. This might include charges for electricity or for custodial and maintenance staff, who would need to be available or on the premises. In 6 of these jurisdictions, officials stated that paying for custodial or maintenance personnel might further entail overtime pay because they would be working on a weekend.
Most Election Officials Said Ensuring Overnight Security of Ballots and Equipment Would Also Be Challenging and Costly According to election officials we interviewed in all nine states, the District, and 15 of the 17 local jurisdictions, ensuring the security of ballots and voting equipment over the Saturday night of a weekend election would be both challenging and expensive. We have previously reported that secure voting systems are essential to maintaining public confidence in the election process. EAC election management guidelines further state that physical security safeguards are required for all voting equipment and ballots while stored, transported, and in place at polling places on Election Day, and until the time the vote is certified. Officials we interviewed in 5 of the 7 states and the District that conducted early voting and provided security over multiple days explained that the planning needed to provide overnight security for a weekend election, and the challenges involved, would far surpass those of early voting because of the greater number and variety of polling places used on Election Day. For example, election officials in one state observed that for the 2010 general election, the entire state had fewer than 300 early voting sites compared to more than 2,750 polling places on Election Day, and the early voting sites were selected with the need for overnight security in mind. In contrast, Election Day polling places are precinct-based and generally selected based on factors that include availability and proximity to voters rather than overnight security. In 15 of the local jurisdictions and the District, election officials said they anticipated challenges regarding the overnight security aspect of a weekend election and described the following approaches they would envision taking to ensure the security of ballots and voting equipment: Transporting and securing ballots at another location. Election officials in 8 jurisdictions said that to ensure the security and the integrity of the election results, they would likely have ballots transported from polling places to a secure location on the Saturday night of a weekend election and back again on Sunday morning. An election official from one jurisdiction stated that municipal law requires that deputy sheriffs pick up ballots at the polling places and bring them to the clerk's office to secure them overnight during the jurisdiction's early voting period. This official stated that the jurisdiction's elections office currently employs approximately 120 deputy sheriffs to do this on the Tuesday night of Election Day, and that they would likely be required to do the same on both Saturday and Sunday nights of a weekend election. Safeguarding voting equipment at polling places. Officials from 10 jurisdictions and the District said that to ensure overnight security during a weekend election, they would likely hire security personnel for each polling place to safeguard voting equipment from the close of polls on Saturday night until they reopen on Sunday morning. For example, an election official in one jurisdiction explained that because some of the jurisdiction's 27 polling places are located up to 100 miles from the election office, there is not enough time between polls closing Saturday night and reopening Sunday morning to transport the voting equipment to and from each polling place and the secure county office.
Thus, this official said hiring security personnel and posting them at each polling place overnight would be the only viable option to ensure the security of the equipment. Officials in 3 other jurisdictions explained that two security personnel would likely be needed at each polling place not only to secure the equipment but also to provide a check and balance and safeguard the integrity of the election results. Although these officials believed that on-site security personnel would be needed, some questioned whether a sufficient number would be available. For example, officials in one jurisdiction said that even if they were to hire every off-duty police officer in their jurisdiction, they did not think they would have enough officers to secure all of their 249 polling places over the Saturday night of a weekend election. Officials from another jurisdiction anticipated that, rather than hiring security personnel, they would likely secure the voting machines on-site in a locked room to prevent tampering, vandalism, or theft, but they would need to change the locks at all of their 23 polling places. We have previously reported that larger, diverse jurisdictions can face more challenges than smaller jurisdictions, as the complexity of administering an election and the potential for challenges increase with the number of people and places involved and the scope of activities and processes that must be conducted. This might be the case with respect to ensuring overnight security during a weekend election. For example, at one extreme, election officials in the largest jurisdiction where we held interviews said they would likely employ some combination of on-site security and transporting of ballots to ensure overnight security if elections were held over 2 days. Officials explained that in their jurisdiction, which had more than 3,000 polling places on Election Day for the 2010 general election, ensuring the chain of custody of ballots on election night involved a complex logistical operation that included transporting ballots by helicopter to an estimated 70 to 80 secure locations. Given the size of their jurisdiction and the enormity of the task, these officials said they would need to assemble a task force and devote considerable resources to determine how to address Saturday night security during a weekend election, since it would involve a completely new model for them and a fundamental change in procedures. In contrast, election officials in the two smallest jurisdictions where we held interviews did not anticipate overnight security would be a challenge during a weekend election, as they use a single polling place—a municipal building—on Election Day. These officials said they would expect that ballot boxes would be secured in a safe located in the county office over the Saturday night of a weekend election, just as they are at the end of a Tuesday Election Day. They added that they might consider implementing additional security measures for a weekend election, such as having police patrol the building during the weekend, but they did not anticipate this would present a challenge or represent additional costs. In addition to presenting planning and logistical challenges, election officials in all nine states, the District, and 15 of the 17 local jurisdictions where we conducted interviews said they expected that implementing these overnight security measures would increase the cost of a weekend election.
For example, in the jurisdiction that would employ deputy sheriffs to transport the ballots to the clerk's office both nights of a weekend election, the election official said this would double the more than $210,000 in security-related costs incurred for the 2010 general election. In one of the jurisdictions where officials anticipated posting two overnight security guards at each polling place, officials estimated this would add about $100,000 to their cost of administering an election. Election Officials Expected Other Challenges with Implementing Federal Elections on a Weekend In all 17 local jurisdictions and the District, election officials reported that they would expect few changes to how they register voters, conduct early voting, and provide voting with provisional ballots. However, election officials with whom we spoke identified other challenges related to operating voting systems and reconciling ballots in preparation for counting and certifying the total number of ballots cast over a 2-day election, as well as concerns with the effect of a weekend election on workload and the election calendar. Voting technology challenges and related costs. Election officials we interviewed in 7 of the 17 local jurisdictions discussed technology-related challenges they foresaw with using their voting systems for a 2-day weekend election, and officials from 4 of these jurisdictions said they would expect that addressing these challenges would result in significantly higher costs than for a Tuesday election. According to officials, their voting systems are designed for all voting to take place in a single day and for equipment to be closed when polling places close that night. Officials explained that, to preserve the integrity of the vote in a weekend election, they could not leave voting machines open overnight after the polls closed on Saturday; however, the equipment also could not simply be suspended Saturday night and started up again Sunday morning for a second day of voting. Rather, once closed, the equipment would, in effect, consider the election to be over and could not record additional votes. According to officials, to conduct a second day of voting, their equipment would either need to be (1) reprogrammed by the vendor in advance of the election and recertified or (2) reprogrammed Saturday night and retested before Sunday morning, which involves a lengthy process that cannot be completed in a single night. Alternatively, they could purchase additional memory cards or even a second set of voting machines. Election officials in the City and County of San Francisco anticipated facing such a challenge in planning for a November 2011 municipal election that was to take place on 2 days—a Saturday and the following Tuesday. In consultation with the California Secretary of State's office, they determined that their voting equipment could not be closed on Saturday night and restarted on Tuesday morning. Therefore, to address this issue, they intended to borrow voting machines from other jurisdictions and use different machines each day. However, they explained that borrowing voting equipment would not be an option if the day of general elections were moved to a weekend, since every jurisdiction in the country would be using its own voting equipment on the same days. Thus, they stated that if federal elections were moved to a weekend, they would likely have to purchase a second set of voting equipment to use on Sunday at over 550 polling places, at an estimated cost of over $5.9 million.
This alone would represent about 88 percent of the total costs the county incurred in administering the November 2010 general election. Officials from another jurisdiction said they anticipated that their voting machines would need significant changes, including changes to software, to suspend the election Saturday night and resume it on Sunday morning—changes that the officials expected would require EAC recertification. They estimated that the recertification process could take as long as 1 year and cost the manufacturer of their voting system hundreds of thousands of dollars, some of which might be passed on to them in the form of required software upgrades. Election officials in another state that used different voting equipment said they thought their equipment could suspend voting Saturday night and resume on Sunday morning if careful steps were taken by trained poll workers or technical staff to temporarily turn off voting machines without closing them and ending the vote. However, they would need technical staff or poll workers with more technical skills than those they have used in the past to accomplish this without ending the entire voting process by mistake. In addition, election officials in all nine states expected that other related costs, such as for technology support—either in-house or contracted—would be greater for a weekend election. They stated that cost increases would primarily be due to securing these services for a second day and potentially having to pay overtime or premium pay on a weekend. For example, based on their experience conducting nonfederal elections on a Saturday, officials from Louisiana said that they would expect to incur significant additional costs because they would need to hire more part-time election staff to load and reprogram a second set of memory cards into their electronic voting machines on Sunday morning at approximately 3,000 polling places statewide. Moreover, the state normally pays to have technology vendors on call to troubleshoot equipment-related problems at polling places on Election Day, and officials would anticipate that these costs would at least double with a 2-day election, as premium pay might be involved for a weekend. Ballot reconciliation on Saturday and Sunday nights. Election officials from six states, the District, and 12 of the 17 local jurisdictions said that they would likely need to reconcile ballots—the process of accounting for the numbers of ballots issued, unused, and spoiled and ensuring that the number of ballots cast matches the number of voters who cast ballots—on both Saturday and Sunday nights of a weekend election. Officials in three of these states and 2 of these jurisdictions anticipated challenges with having to do this on 2 consecutive nights. For example, officials from one state said that in jurisdictions that use paper ballots, reconciling them on Saturday night might be difficult because it takes more time to reconcile paper ballots than ballots cast using other voting methods, and there might not be sufficient time to complete the process before opening the polls again on Sunday morning. Election officials from another state and 2 local jurisdictions added that the work associated with reconciling ballots both nights would lengthen what is already a long day for poll workers, contribute to their fatigue, and might result in more errors in the reconciliation process.
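As a concrete illustration of the bookkeeping involved, a nightly reconciliation for a single polling place reduces to two checks: the ballots a precinct received must be fully accounted for as cast, spoiled, or unused, and the number of ballots cast must match the number of voters checked in. The sketch below is a hypothetical illustration of those two identities only; the field names and figures are invented and do not describe any jurisdiction's actual forms or procedures.

```python
from dataclasses import dataclass

@dataclass
class PrecinctNight:
    ballots_received: int   # ballots delivered to the precinct for the day
    ballots_cast: int       # ballots deposited in the ballot box or scanner
    ballots_spoiled: int    # ballots exchanged by voters for a replacement
    ballots_unused: int     # blank ballots left over at closing
    voters_checked_in: int  # signatures or check-ins recorded in the poll book

def reconcile(p: PrecinctNight) -> list[str]:
    """Return a list of discrepancies; an empty list means the precinct balances."""
    problems = []
    accounted = p.ballots_cast + p.ballots_spoiled + p.ballots_unused
    if accounted != p.ballots_received:
        problems.append(f"ballot count off by {p.ballots_received - accounted}")
    if p.ballots_cast != p.voters_checked_in:
        problems.append(
            f"cast ballots ({p.ballots_cast}) != voters checked in ({p.voters_checked_in})"
        )
    return problems

# Example: Saturday night of a hypothetical 2-day election at one precinct.
saturday = PrecinctNight(ballots_received=1200, ballots_cast=734,
                         ballots_spoiled=6, ballots_unused=460,
                         voters_checked_in=734)
print(reconcile(saturday) or "precinct balances")
```

Under a 2-day election, the same checks would have to balance on Saturday night and again on Sunday night, which is the added workload officials described.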
Increased election and temporary staff workload and costs. Officials from all 17 jurisdictions and the District said that the workload of local election staff would increase with a 2-day weekend election and, in all but one of the jurisdictions, said this would significantly increase personnel costs. For example, officials from one jurisdiction that employs eight full-time and one part-time election staff said that a 2-day weekend election would require the staff to work an additional 24 hours or more compared with a Tuesday election. Further, because staff are paid a premium for weekend overtime, the $10,500 incurred in overtime costs in the November 2010 general election would at least double. Election officials in 12 of the 13 jurisdictions and the District that used temporary workers for the 2010 general election anticipated they would either need to hire more temporary workers for a weekend election or have their temporary staff work more hours, which would also result in increased costs. Effect on election calendar. Election officials in three states, the District, and all 17 jurisdictions also noted that moving the day of federal elections to a weekend could affect certain aspects of their entire election calendar—that is, dates associated with administering elections (e.g., candidates' declarations, printing ballots, voter registration, absentee ballot deadlines, and certification of the vote). Officials in 12 jurisdictions did not anticipate this would create a particular problem in administering elections in their jurisdiction. However, a state election official in New Hampshire was concerned that a weekend election might, in effect, compel his state to move its congressional primaries earlier in the year. New Hampshire's congressional primaries take place in September—relatively late in the primary season. According to the state official, if a weekend election resulted in congressional elections being scheduled earlier than the Tuesday Election Day, the amount of time between the state's congressional primary and Election Day would not be sufficient for election officials to create the Election Day ballot. Also, officials in 3 jurisdictions and the District noted the effect that existing absentee ballot deadlines might have on voters if the day of federal elections were changed to a weekend. These officials explained that limited weekend post office hours, and concerns that the U.S. Postal Service might further reduce weekend days or hours, could result in some voters—more than with a weekday election—not mailing their absentee ballots in time to be counted. For example, election officials in the District said they would expect that mailed absentee ballots would need to be postmarked no later than the Saturday of a weekend election, since post offices are closed on Sunday. They anticipated that under this scenario, some ballots mailed on the weekend might not be postmarked until after the election, resulting in rejected ballots. Weekend Elections Have Not Been Studied, but Studies of Other Voting Alternatives Suggest That Voter Turnout May Not Be Strongly Affected Limited U.S. Experience with Weekend Elections Makes Evaluating Effect on Voter Turnout Challenging Because nationwide federal elections have never been held on a weekend and we could identify few U.S. jurisdictions that have held weekend elections for state or local offices, it is difficult to draw valid conclusions about how moving federal elections to a weekend would affect voter turnout.
In principle, a persuasive analysis of weekend elections would involve comparing voter turnout in jurisdictions that had moved their elections to a weekend to turnout in similar jurisdictions that continued to hold the same type of election on a Tuesday. However, since federal law requires that federal elections in the United States be held on a specific Tuesday, it is not possible to use national data to estimate whether voter turnout would be different if voting took place on a weekday or weekend without making assumptions that cannot be verified. The experiences of certain state and local jurisdictions with weekend elections, as well as the experiences of other countries, might lead to speculation about how voter turnout in a weekend election in the United States would compare to turnout elsewhere. In fact, the experiences of state, local, and foreign jurisdictions do not provide good proxies for the likely U.S. experience with weekend elections for the following reasons: State and local elections. According to the EAC, the states of Delaware, Louisiana, and Texas have had experience holding nonfederal elections or federal primaries on Saturday. However, these states' experiences do not allow for an expedient and persuasive evaluation. Historical data on state and local elections in Delaware and Texas were not easily accessible in a reliable, electronic format for the periods before, during, and after weekend elections occurred. In addition, comparing the experiences of these three states with other states would risk confusing differences in election schedules with other unobserved differences, such as state culture or campaign mobilization efforts. Further, the many unique features of each election jurisdiction limit the usefulness of this type of analysis for predicting the national effect of weekend elections. Elections in other countries. Although other countries have had experience conducting national elections on weekends, comparisons between the United States and these countries have limited value because of differences in election laws, requirements, and civic responsibilities. For example, Australia and Brazil, which have held federal elections during the weekend in the past 5 years, generally require all eligible citizens to participate in the election process, whereas the United States makes voting optional. Differences in turnout between U.S. elections and elections in these countries may reflect different civic responsibilities in addition to different election schedules; however, it is difficult to assess which factor is actually responsible. Several other methodological challenges exist in evaluating the effect of alternative voting methods (e.g., in-person early voting, no-excuse absentee voting, and vote by mail), including weekend voting, on voter turnout. Voting alternatives cannot easily be evaluated using randomized controlled trials, which often provide the most persuasive evidence of program effects. Jurisdictions likely would not randomly assign citizens to one set of election laws without first examining potential equal-protection-type issues. Political representatives and voters choose to adopt voting alternatives for various reasons, which might include increasing low turnout or maintaining high turnout. Consequently, the difference in turnout between jurisdictions that have or have not adopted a particular alternative could be caused by the alternative itself or by the reasons that led the jurisdiction to adopt it.
The limited number of jurisdictions that have used a particular voting alternative, or the limited length of time an alternative has been in use, restricts evaluations to the elections in which these alternatives have been tried. For example, researchers have evaluated vote by mail in Oregon, Washington, and selected precincts in California, because these jurisdictions have regularly used vote by mail in recent years. Distinguishing the effect of a voting alternative from other factors that affect turnout can be challenging. These other factors include demographic, social, and psychological differences across voters; other election practices, such as registration closing dates and distance to polling places; the intensity or closeness of a campaign; and the activities of political campaigns and the news media. For example, jurisdictions with highly educated, older citizens might have higher turnout and a higher propensity to use voting alternatives designed to increase turnout. Turnout might be higher in these jurisdictions, but it is unclear whether the difference is caused by the voting alternative or by the citizen characteristics that are associated with a greater motivation to vote. Further, it is difficult to assess the effect of a specific change in election practices when more than one change is made at the same time. Thus, should states make several new changes concurrently, such as implementing voter identification requirements and allowing citizens to vote in early voting periods, it would be difficult to assess the unique effect of any one change on voter turnout. Research Finds Effects of Alternative Voting Methods on Turnout Are Small and Citizen Demographics Are More Consequential Our review of 24 studies found that alternative voting methods have small and inconsistent effects on voter turnout, as compared to demographic differences among citizens. With the exception of vote by mail, each of the alternative voting methods we reviewed was estimated to increase or decrease turnout by no more than 4 percentage points. The studies disagreed about whether the methods would increase or decrease turnout, however, as the estimates for all methods except vote by mail varied from an increase of 2.8 percentage points to a decrease of 4 percentage points, depending on the voting method and the study, as shown in table 1. The maximum estimated increase suggests that alternative voting methods other than vote by mail do not increase turnout by large amounts, contrary to the goals of these policy reforms. In contrast, the estimated effects of vote by mail were larger and less consistent, ranging from a 2.7 percentage point decrease to a 10.2 percentage point increase. The maximum effect of vote by mail decreased to 6.8 percentage points when we excluded one study whose results were challenged by another study. We were unable to identify any study that directly estimated the effect of weekend elections on voter turnout in United States elections. The 24 studies showed that citizen demographics—age, education, race, income, and residential mobility—had stronger and more consistent associations with turnout than jurisdictions' use of alternative voting methods. More specifically, the studies showed the following: A 10 percentage point increase in the percentage of a jurisdiction's population between the ages of 35 and 54 (in one study) and 45 to 64 (in another study) increased turnout by 1 to 10 percentage points.
A 10 percentage point increase in a jurisdiction's population with 4-year college degrees increased turnout by 1 to 6 percentage points. A 10 percentage point increase in a jurisdiction's nonwhite population decreased turnout by 2 to 11 percentage points. A $40,000 increase in a jurisdiction's median income increased turnout by 0 to 4 percentage points. A 10 percentage point increase in a jurisdiction's renter population—a measure of residential mobility—decreased turnout by 8 percentage points. The broader academic research on voter turnout has drawn conclusions that are consistent with those of the studies we reviewed. These studies have concluded that individual differences among citizens and electoral competition are more strongly and consistently associated with the decision to vote than interventions that seek to make voting more convenient for registered voters. As a representative example, one study concluded that the association between voter age and turnout in presidential elections from 1956 through 1988 was more than five times larger than the association between voter registration closing dates prior to Election Day and turnout. Our review found that alternative voting methods have not mobilized groups of citizens who are typically less likely to vote. Five of the 24 studies examined how the effect of alternative voting methods varied across particular groups of citizens. Four of those studies showed either that the methods did not increase turnout for citizens who were typically less likely to vote or that the methods increased turnout for citizens who were already more likely to vote. For example, one study concluded that longer poll hours did not disproportionately benefit any demographic group, including farmers and employed people working more than 40 hours per week. Another study concluded that vote by mail methods increased turnout among citizens who were well educated, older, and more interested in political campaigns. These findings suggest that alternative voting methods are more effective at retaining existing voters than at mobilizing citizens who do not vote. Similarly, our review showed that citizens who were typically more likely to vote were also more likely to take advantage of early voting when it was an option. Six of the 24 studies assessed which demographic groups were more likely to vote early. These studies showed that early voters are more likely to be older, better educated, more interested in politics, and more strongly identified with a political party, as compared to voters who used other voting methods. Because these groups of citizens are typically more likely to vote, the research suggests that alternative voting methods have been more popular among citizens who need less encouragement to vote. Election officials in the nine states and the District where we conducted interviews said that they expected moving Election Day from a Tuesday to a Saturday and Sunday would have little to no effect on total voter turnout. In four of the states, officials said that a weekend election might lead to more voters voting early or absentee, but they did not think total turnout would be affected. This view was shared by officials in states that had experience in early voting, including weekend early voting, as well as states with considerable experience in holding local elections on Saturday.
Their comments are generally consistent with the studies we reviewed, which assessed the effects of alternative voting methods on turnout using larger, more-representative samples of elections, jurisdictions, and time periods. Turnout Did Not Increase during the Weekend Early Voting Period in Maryland's 2010 General Election Our analysis of voter turnout data from the early voting period during the 2010 general election in Maryland showed that voters were not very likely to vote on the weekend days provided. Maryland offered early voting for the first time in the 2010 primary and general elections. Of the voters we analyzed, 1.1 percent cast ballots on the weekend during the primary election's early voting period, and 1.5 percent did so during the general election's early voting period. The turnout rate for the general election did not increase during weekend periods of early voting, as compared to weekday periods and Election Day. About 81 percent of voters voted in person on Election Day and about 6 percent voted by absentee ballot. A total of about 11.8 percent of voters voted in person on a weekday during the state's 7-day early voting period (the second Friday through the first Thursday prior to Election Day), and about 1.5 percent voted on the Saturday of that period. Those who voted early on Saturday were generally more likely to be members of demographic groups who, according to academic research, are typically more likely to vote—that is, those who are older, less mobile, and more politically engaged. The length of registration and prior voting experience approximate a voter's residential mobility and long-term level of political engagement, respectively. However, the youngest and least experienced voters were relatively more likely to vote on Saturday, compared to voters who were slightly older and more experienced. Apart from this group, the likelihood of voting on Saturday generally increased with age, length of registration, and prior voting experience. As shown in table 2, voters who were older than 40, had been registered for at least 10 years, and voted in at least 6 of the past 10 primary and general elections were more likely to vote on Saturday in Maryland's 2010 general election than voters in other subgroups. For example, 1.4 percent of the registrants older than 65 who voted did so on Saturday, compared with 1 percent of the voting registrants between the ages of 25 and 39. Although this difference is small on an absolute scale, it is larger when expressed as a ratio of the two groups' Saturday turnout rates—a proportional difference of 45 percent. In addition to these differences, registered Democrats were 0.4 percentage points more likely than registered Republicans to have voted on the weekend—a proportional difference of 33 percent—but 6.3 percentage points less likely to have voted at all.
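Read in the report's terms, a proportional difference is the absolute gap between two subgroups' Saturday turnout rates divided by the lower rate. The worked example below is an illustration only, using the rounded rates quoted above; the percentages reported in the text were presumably computed from unrounded rates, which is why they differ slightly.

\[
\text{proportional difference} = \frac{p_{65+} - p_{25\text{--}39}}{p_{25\text{--}39}} \approx \frac{1.4\% - 1.0\%}{1.0\%} = 40\% \quad (\text{reported: } 45\%)
\]

The same reading applies to the party comparison: a 0.4 percentage point gap amounts to a 33 percent proportional difference only if the lower (Republican) Saturday turnout rate is roughly 1.2 percent, a value implied by the reported figures rather than stated in the text.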
Saturday turnout was slightly higher among the youngest and least-experienced subgroups of voters, as compared to voters in the subgroups immediately above them, and the most recently registered had the highest Saturday turnout of all registration groups. Because academic research has generally found that older, less mobile, and more politically engaged citizens are more likely to vote, early weekend voting appears to have been slightly more popular among Maryland citizens who need the most encouragement to vote in the first place. However, the small size of this increase suggests that Saturday poll hours did not meaningfully increase overall turnout or draw a large number of new or infrequent voters to the polls. Appendix II describes our more-detailed statistical analysis of voter turnout in Maryland. We are sending copies of this report to interested congressional committees and the EAC. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-8777 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs can be found on the last page of this report. Key contributors to this report are listed in appendix VI. Appendix I: Selected Characteristics of States and Local Election Jurisdictions We Contacted States are responsible for the administration of their own elections as well as federal elections, and states regulate various aspects of elections, including registration procedures, absentee voting requirements, alternative voting methods, establishment of polling places, provision of Election Day workers, testing and certification of voting equipment, and counting and certification of the vote. However, local election jurisdictions—counties and subcounty governmental units, such as cities, villages, and townships—have primary responsibility for managing, planning, and conducting elections. We conducted interviews with election officials in a nonprobability sample of nine states and the District of Columbia (District), and a nonprobability sample of 17 local jurisdictions within those states, about whether and how they implemented alternative voting methods and their views on how election administration and voter turnout would likely be affected in their state or jurisdiction if the day for regularly scheduled federal elections were moved to a weekend. To obtain a range of perspectives, we selected states that varied according to, among other things, geographic region, alternative voting methods provided in federal elections, experience with voting on weekends, and the level of local government responsible for administering elections (e.g., county or township), as shown in table 3. In addition, we conducted interviews with election officials in a nonprobability sample of 17 local election jurisdictions within the nine states. We selected jurisdictions to reflect variation in factors including demographics, applicable bilingual voting assistance requirements, and voting methods used, as shown in table 4. In addition, we considered other factors specific to the jurisdiction—such as for Los Angeles County, which is the largest election jurisdiction in the United States, or for San Francisco, which had developed an implementation plan for a Saturday voting pilot program for a November 2011 municipal election—in making our selections. Appendix II: Analysis of Weekend Voting in the 2010 Maryland State Elections The state of Maryland provided its citizens the option of in-person early voting for the first time in the 2010 primary and general elections. Polls were open for early voting on a total of 6 days, beginning the second Friday prior to Election Day (September 14 for the primary and November 2 for the general election) and extending through the first Thursday prior to Election Day. Early voting hours were provided on Saturday, but not on Sunday, of each 7-day early voting period.
State statute required counties to establish early voting centers, with the number of early voting locations based on the county's number of registered voters. Each county had at least one location, plus three to five additional locations if it had more than 150,000 registered voters. Early voting hours were the same across counties, beginning at 10:00 a.m. and ending at 8:00 p.m. each day. Maryland's experience with early voting allowed us to analyze how voters used weekend poll hours when they were available. Voter registration and turnout data in Maryland are sufficiently detailed and reliable to allow for statistical analysis of citizens who were registered for the 2010 general election. This appendix presents our analysis of (1) whether the turnout rate during the early voting period was higher or lower on Saturday as compared to weekdays and (2) which groups of citizens used weekend poll hours in the 2010 general election. Specifically, we assessed whether citizens who belonged to groups that typically vote less frequently, such as younger and more-recently registered voters, were more likely to use weekend poll hours. While our analysis describes the use of weekend poll hours, it does not seek to estimate the causal effect of providing these voting methods or holding Election Day on Saturday and Sunday. Turnout Did Not Substantially Increase during Weekend Poll Hours Our analysis of voter turnout data showed that only 1.5 percent of voters used Saturday poll hours during the early voting period of the 2010 general election. To further examine how the turnout rate changed between the weekend and weekday periods, we analyzed the voting times for early voters. According to state officials, all counties in Maryland used the same computerized voter registration and election administration system in 2010, which recorded the date and time when each voter received a ballot. By estimating the turnout rate within small intervals during the early voting period, we assessed whether turnout meaningfully changed between the weekday and weekend periods. As shown in figure 3, the proportion of Maryland voters—categorized into groups by age, length of registration, and participation in prior elections—who cast ballots on a certain “poll day” during the early voting period did not substantially increase on Saturday. In our analysis, a poll day is a 24-hour period when the polls were open during the early voting period. It equals the calendar days prior to Election Day when citizens were able to vote minus the subsequent time when the polls were closed. For example, figure 3 shows that the first citizen to receive a ballot when the polls opened on Saturday of the early voting period voted 2.9 poll days prior to Election Day, even though Saturday, October 23, was the 10th calendar day prior to Election Day on Tuesday, November 2. We rescaled calendar time to poll days to avoid analyzing periods when the polls were closed. In effect, this adjusts the voting duration times for the time “at risk” of voting. While Maryland standardized early voting poll hours across counties, we included voting times outside of the official poll hours, which may have represented citizens who were in line to vote when the polls closed. As a result, we defined the start and end of each poll day as the earliest and latest recorded voting time on a particular calendar day of early voting.
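One way to implement this kind of rescaling is to accumulate only the hours during which the polls were open between the moment a ballot was cast and the end of voting, and then express that total in units of one poll day. The sketch below is a minimal illustration of that idea under stated assumptions; it is not the code used for the analysis, the session times are stand-ins for the earliest and latest recorded voting times, and the exact convention (for example, whether Election Day hours enter the scale and how long a poll day is) would need to follow the report's definition to reproduce figures such as the 2.9 poll days cited above.

```python
from datetime import datetime

# Illustrative only: hypothetical open and close times for each early voting day.
sessions = [
    (datetime(2010, 10, 22, 10, 0), datetime(2010, 10, 22, 20, 0)),  # Friday
    (datetime(2010, 10, 23, 10, 0), datetime(2010, 10, 23, 20, 0)),  # Saturday
    (datetime(2010, 10, 25, 10, 0), datetime(2010, 10, 25, 20, 0)),  # Monday (closed Sunday)
    (datetime(2010, 10, 26, 10, 0), datetime(2010, 10, 26, 20, 0)),  # Tuesday
    (datetime(2010, 10, 27, 10, 0), datetime(2010, 10, 27, 20, 0)),  # Wednesday
    (datetime(2010, 10, 28, 10, 0), datetime(2010, 10, 28, 20, 0)),  # Thursday
]

POLL_DAY_HOURS = 24.0  # assumed length of one "poll day"

def poll_days_remaining(ballot_time: datetime) -> float:
    """Poll-open time remaining after a ballot is cast, expressed in poll days.
    Hours when the polls were closed contribute nothing to the scale."""
    remaining_hours = 0.0
    for open_t, close_t in sessions:
        if close_t <= ballot_time:
            continue                      # this session had already ended
        start = max(open_t, ballot_time)  # ignore open time before the ballot was cast
        remaining_hours += (close_t - start).total_seconds() / 3600.0
    return remaining_hours / POLL_DAY_HOURS

# Example: a ballot cast the moment the Saturday session opened.
print(round(poll_days_remaining(datetime(2010, 10, 23, 10, 0)), 2))
```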
Multivariate Statistical Analysis of Saturday Voting across Groups Produces Similar Results

In order to describe the patterns in figure 3 more precisely, we used several statistical methods to estimate how turnout and the use of Saturday voting varied across groups of citizens with different characteristics. These methods allowed us to estimate the association between a certain characteristic and outcomes of interest, such as age and prior turnout, while holding constant other characteristics, such as the length of registration. Specifically, we estimated logistic regression models of the form Pr(Y = 1) = Λ(β0 + Age·β_Age + Tenure·β_Tenure + Sex·β_Sex + Party·β_Party + County·β_County), where Λ denotes the logistic cumulative distribution function; in the model of weekend voting, the outcome (Y = 1) indicates whether a voter voted on Saturday; and the remaining terms are vectors of parameters and indicator covariates as specified in table 5. (County is a vector of indicators for each county.) To assess marginal effects, we estimated the in-sample mean predicted probabilities for each level of each covariate (though table 5 includes estimates only for the covariates of interest). We estimated robust standard errors of the parameters and predicted probabilities but do not report them here for simplicity. The standard errors were no more than 5 percent of the estimated probabilities, which partially reflects sample sizes of 1,857,675 for the model of turnout and 927,774 for the model of weekend voting. For ease of computation, we estimated the models on a 50 percent simple random sample of the population of registrants. The model estimates support the patterns in the raw data. Relatively fewer young citizens chose to vote, and most of those who did were not more likely to have voted on Saturday. Similarly, the most recently registered voters were also less likely to vote; however, in contrast, they were more likely to vote on Saturday, holding constant differences associated with age. On an absolute scale, however, few voters used Saturday poll hours, and a far greater proportion of less-experienced voters either did not vote, voted late in the early voting period, or waited until Election Day. Specifically, although our model estimates that no more than 2.2 percent of any subgroup of voters cast their ballots on Saturday, holding constant other group memberships, older voters were relatively more likely to do so than younger voters. The adjusted probability of voting on Saturday for voters who were between the ages of 40 and 64 was 1.8 percentage points, as compared to 1.2 percentage points for voters who were younger than 25—a difference of 50 percent expressed as a ratio. The analogous probabilities for voters registered less than 2 years ago and between 2 and 9 years ago were 2.2 and 1.5 percentage points, respectively, or a difference of 47 percent. The probability of voting on Saturday was slightly lower among citizens at least 65 years old, as compared to citizens between the ages of 40 and 64. Less-experienced citizens were much less likely to have voted in the first place. Citizens younger than 25 were 37 percentage points less likely to vote than citizens 65 and older. Similarly, citizens who first registered within the past 2 years were 39 percentage points less likely to vote than citizens who had been registered for 30 years or more. The national experience with holding regular elections on Saturday and Sunday might differ in meaningful ways from Maryland's experience with allowing early voting on the weekend. Maryland citizens are not necessarily representative of the nation, and in 2010 the state's early voting program was in its first year of operation.
Voters may use weekend poll hours differently as they continue to learn about this option. Moreover, early voter behavior may not resemble voter behavior in elections where Election Day falls on Saturday and Sunday. In the latter system, political campaigns and the news media may increase voter awareness of weekend poll hours, and voters would not be forced to choose between voting on the weekend and voting before the political campaigns have ended. Despite these limitations, our analysis suggests that relatively few voters used weekend poll hours when they were offered in the 2010 Maryland general election, and that most of the citizens in subgroups typically less likely to vote did not turn out at vastly higher rates during this period. If voters’ behavior can accurately reveal their preferences for different voting methods, the demand for weekend poll hours appeared to be modest in this election. Appendix III: Alternative Voting Methods Provided in 50 States and the District for the 2004 and 2010 November General Elections The number of states providing alternative voting methods—that is, in- person early voting and no-excuse absentee voting—has increased, as shown in figure 4. Specifically, in 2006, on the basis of results from a survey of 50 states and the District of Columbia (District), we reported that 24 states and the District required or allowed in-person early voting and 21 states allowed or required no-excuse absentee voting by mail in the November 2004 general election. For the November 2010 general election, 33 states and the District provided in-person early voting and 29 states and the District provided no-excuse absentee voting by mail. Appendix IV: Selected Details of Early and No-Excuse Absentee Voting for the 2010 General Election in States We Contacted Of the nine states and the District of Columbia (District) we contacted, seven states and the District provided early voting. Of those seven states, five states and the District provided both early voting and no-excuse absentee voting. Two of the nine states where we conducted interviews— Delaware and New Hampshire—did not provide voters with either of these alternatives, although they allowed voters to vote by absentee ballot if they provided a reason. Table 6 provides selected details on how early and no-excuse absentee voting were implemented during the November 2010 general election. Appendix V: Selected Details of Early Voting for the 2010 General Election in Local Jurisdictions We Contacted Of the 17 local jurisdictions and the District of Columbia (District) we contacted, 14 jurisdictions and the District provided in-person early voting. Table 7 provides selected details regarding how early voting was implemented during the November 2010 general election. Appendix VI: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Mary Catherine Hult, Assistant Director; David Alexander; Josh Diosomito; Geoffrey Hamilton; Lara Miklozek; Hugh Paquette; Jeff Tessin; and Lori Weiss made key contributions to this report. Bibliography Bergman, Elizabeth, Philip Yates, and Elaine Ginnold. “How Does Vote by Mail Affect Voters? A natural experiment examining individual-level turnout.” The PEW Center on the States, Make Voting Work project. Accessed May 19, 2011. http://www.pewcenteronthestates.org/report_detail.aspx?id=58252 Berinsky, Adam J., Nancy Burns, and Michael W. Traugott. “Who Votes By Mail? 
A Dynamic Model of the Individual-Level Consequences of Voting-By-Mail Systems.” The Public Opinion Quarterly, vol. 65 (2001): 178-197. Burden, Barry C., David T. Canon, Kenneth R. Mayer, and Donald P. Moynihan. “Election Laws, Mobilization, and Turnout: The Unanticipated Consequences of Election Reform,” April 12, 2011. Social Science Research Network eLibrary. Accessed May 19, 2011. http://ssrn.com/abstract=1690723 Fitzgerald, Mary. “Greater Convenience but Not Greater Turnout—the Impact of Alternative Voting Methods on Electoral Participation in the United States.” American Politics Research, vol. 33 (2005): 842-867. Giammo, Joseph D., and Brian J. Brox. “Reducing the Costs of Participation: Are States Getting a Return on Early Voting?” Political Research Quarterly, vol. 63 (2010): 295-303. Gronke, Paul, and Daniel Krantz Toffey, “The Psychological and Institutional Determinants of Early Voting.” Journal of Social Issues, vol. 64 (2008): 503-524. Gronke, Paul, Eva Galanes-Rosenbaum, and Peter A. Miller. “Early Voting and Turnout.” PS: Political Science and Politics, vol. 40 (2007): 639-645. Gronke, Paul, Eva Galanes-Rosenbaum, and Peter A. Miller. “Early Voting and Voter Turnout.” In Democracy in the States: Experiments in Election Reform. Ed. Bruce E. Cain, Todd Donovan, and Caroline J. Tolbert. Washington, D.C.: Brookings Institution, 2008. Gronke, Paul, and Peter A.M. Miller. “Voting by Mail and Turnout: A Replication and Extension.” Paper presented at the 2007 Annual Meeting of the American Political Science Association, Chicago, Ill. Hanmer, Michael J., and Michael W. Traugott. “The Impact of Voting by Mail on Voter Behavior.” American Politics Research, vol. 32 (2004): 375- 405. Hill, David, and Michael D. Martinez. “The Interactive Effects of Electoral Reform, Competitiveness and Age on Turnout.” Paper presented at the 2008 Annual Meeting of the American Political Science Association, Boston, Mass. Juenke, Eric Gonzalez, and Juliemarie Shepherd. “Vote Centers and Voter Turnout.” In Democracy in the States: Experiments in Election Reform. Ed. Bruce E. Cain, Todd Donovan, and Caroline J. Tolbert. Washington, D.C.: Brookings Institution, 2008. Karp, Jeffrey A., and Susan A. Banducci. “Going Postal: How All-Mail Elections Influence Turnout.” Political Behavior, vol. 22 (2000): 223-239. Kousser, Thad and Megan Mullin. “Does Voting by Mail Increase Participation? Using Matching to Analyze a Natural Experiment.” Political Analysis, vol. 15 (2007): 428-445. Leighley, Jan E., and Jonathan Nagler. “Electoral Laws and Turnout, 1972-2008.” Paper presented at the 4th Annual Conference on Empirical Legal Studies, Los Angeles, Calif., November 2009. Magleby, David B. “Participation in Mail Ballot Elections.” The Western Political Quarterly, vol. 40 (1987): 79-91. Miller, Peter A., and Neilan S. Chaturvedi. “Get Out the Early Vote: Minority Use of Convenience Voting in 2008.” Paper presented at the 2010 Annual Meeting of the American Political Science Association, Washington, D.C. Miller, Peter A.M., and Paul Gronke. “The Effect of Voting by Mail in Washington: 1960-2008.” Portland, Ore.: The Early Voting Information Center, Reed College. Accessed May 19, 2011. http://www.earlyvoting.net/research Patterson, Samuel C. and Gregory A. Caldeira. “Mailing in the Vote: Correlates and Consequences of Absentee Voting.” American Journal of Political Science, vol. 29 (1985): 766-788. Southwell, Priscilla L., and Justin I. Burchett. 
“The Effect of All-Mail Elections on Voter Turnout.” American Politics Quarterly, vol. 28 (2000): 72-79. Stein, Robert M. “Early Voting.” The Public Opinion Quarterly, vol. 62 (1998): 57-69. Stein, Robert M., and Greg Vonnahme. “Engaging the Unengaged Voter: Voter Centers and Voter Turnout.” The Journal of Politics, vol. 70 (2008): 487-497. Stein, Robert M., and Patricia A. Garcia-Monet. “Voting Early but Not Often.” Social Science Quarterly, vol. 78 (1997): 657-671. Wolfinger, Raymond E., Benjamin Highton, and Megan Mullin. “How Postregistration Laws Affect the Turnout of Citizens Registered to Vote.” State Politics and Policy Quarterly, vol. 5 (2005): 1-23. Related GAO Products Elections: DOD Can Strengthen Evaluation of Its Absentee Voting Assistance Program. GAO-10-476. Washington, D.C.: June 17, 2010. Elderly Voters: Information on Promising Practices Could Strengthen the Integrity of the Voting Process in Long-term Care Facilities. GAO-10-6. Washington, D.C.: November 30, 2009. Voters with Disabilities: Additional Monitoring of Polling Places Could Further Improve Accessibility. GAO-09-941. Washington, D.C.: September 30, 2009. Voters With Disabilities: More Polling Places Had No Potential Impediments Than in 2000, but Challenges Remain. GAO-09-685. Washington, D.C.: June 10, 2009. Elections: States, Territories, and the District Are Taking a Range of Important Steps to Manage Their Varied Voting System Environments. GAO-08-874. Washington, D.C.: September 25, 2008. Elections: Federal Program for Certifying Voting Systems Needs to Be Further Defined, Fully Implemented, and Expanded. GAO-08-814. Washington, D.C.: September 16, 2008. Bilingual Voting Assistance: Selected Jurisdictions’ Strategies for Identifying Needs and Providing Assistance. GAO-08-182. Washington, D.C.: January 18, 2008. Elections: Action Plans Needed to Fully Address Challenges in Electronic Absentee Voting Initiatives for Military and Overseas Citizens. GAO-07-774. Washington, D.C.: June 14, 2007. Elections: The Nation’s Evolving Election System as Reflected in the November 2004 General Election. GAO-06-450. Washington, D.C.: June 6, 2006. Elections: Absentee Voting Assistance to Military and Overseas Citizens Increased for the 2004 General Election, but Challenges Remain. GAO-06-521. Washington, D.C.: April 7, 2006. Elections: Views of Selected Local Election Officials on Managing Voter Registration and Ensuring Eligible Citizens Can Vote. GAO-05-997. Washington, D.C.: September 27, 2005. Elections: Additional Data Could Help State and Local Elections Officials Maintain Accurate Voter Registration Lists. GAO-05-478. Washington, D.C.: June 10, 2005. Elections: Perspectives on Activities and Challenges Across the Nation. GAO-02-3. Washington, D.C.: October 15, 2001. Voters With Disabilities: Access to Polling Places and Alternative Voting Methods. GAO-02-107. Washington, D.C.: October 15, 2001. Elections: Voting Assistance to Military and Overseas Citizens Should Be Improved. GAO-01-1026. Washington, D.C.: September 28, 2001. Elections: The Scope of Congressional Authority in Election Administration. GAO-01-470. Washington, D.C.: March 13, 2001.
Many U.S. citizens who are eligible to vote in federal elections do not do so. For instance, in the 2008 general election, about 62 percent of eligible citizens voted. To increase voter turnout by enhancing convenience, some states have implemented alternative voting methods, such as in-person early voting—casting a ballot in person prior to Election Day without providing a reason—and no-excuse absentee voting—casting an absentee ballot, usually by mail, without providing a reason. In general, since 1845, federal law has required that federal elections be held on Tuesday. The committees on appropriations directed GAO to study and report on costs and benefits of implementing H.R. 254—the Weekend Voting Act—including issues associated with conducting a weekend election. Specifically, this report addresses: (1) alternatives to voting on Tuesday that states provided for the November 2010 general election, (2) how election officials anticipate election administration and costs would be affected if the day for federal elections were moved to a weekend, and (3) what research and available data suggest about the potential effect of a weekend election on voter turnout. GAO reviewed H.R. 254 and analyzed state statutes and early voting turnout in the 2010 Maryland elections, which had early voting over weekdays and weekends. GAO interviewed election officials in nine states, the District of Columbia (District), and 17 local jurisdictions that were selected on the basis of geographic dispersion and experience with weekend voting, among other things. Though not generalizable, the interviews provide insights. For the 2010 general election, 35 states and the District provided voters at least one alternative to casting their ballot on Election Day through in-person early voting, no-excuse absentee voting, or voting by mail. Specifically, 33 states and the District provided in-person early voting, 29 states and the District provided no-excuse absentee voting, and 2 states provided voting by mail to all or most voters. Of the 9 states and the District where GAO conducted interviews, all but 2 states provided voters the option of in-person early voting in the 2010 general election, and 5 states and the District offered both early voting and no-excuse absentee voting. Implementation and characteristics of in-person early voting varied among the 7 states and, in some cases, among the jurisdictions within a state. For example, 5 states and the District required local jurisdictions to include at least one Saturday, and 2 states allowed for some jurisdiction discretion to include weekend days. State and local election officials GAO interviewed identified challenges they would anticipate facing in planning and conducting Election Day activities on weekends—specifically, finding poll workers and polling places, and securing ballots and voting equipment—and expected cost increases. Officials in all 17 jurisdictions and the District we contacted said they expected the number of poll workers needed for a 2-day weekend election would increase. Further, officials in 13 jurisdictions said that some poll workers would be less willing to work on the weekend because of other priorities, such as family obligations or attending religious services. Officials in 14 of the 17 jurisdictions and the District expected that at least some of the polling places they used in past elections—such as churches—would not be available for a weekend election, and anticipated difficulty finding replacements. 
Officials in all 9 states, the District, and 15 of the 17 local jurisdictions said ensuring the security of ballots and voting equipment over the Saturday night of a weekend election would be both challenging and expensive. Officials in 5 of the 7 states and the District that conducted early voting and provided security over multiple days explained that the level of planning needed for overnight security for a weekend election would far surpass that of early voting due to the greater number and variety of Election Day polling places. For example, officials in one state said that for the 2010 general election, the state had fewer than 300 early voting sites—which were selected to ensure security—compared to more than 2,750 polling places on Election Day, which are generally selected based on availability and proximity to voters. In addition, officials in all 9 states, the District, and 15 of the 17 local jurisdictions said they expected overnight security costs to increase. Weekend elections have not been studied, but studies of other voting alternatives determined that voter turnout is not strongly affected by them. Since nationwide federal elections have never been held on a weekend, it is difficult to draw valid conclusions about how moving federal elections to a weekend would affect voter turnout. GAO’s review of 24 studies found that, with the exception of vote by mail, each of the alternative voting methods was estimated to change turnout by no more than 4 percentage points. GAO’s analysis of early voter turnout data in Maryland found that 1.5 percent of voters we analyzed cast ballots on the weekend during the 2010 general election.
Background

Medicare is a federal program that provides health insurance coverage for individuals aged 65 and older and for certain disabled persons. It is funded by general revenues, payroll taxes paid by most employees, employers, and individuals who are self-employed, and beneficiary premiums. Medicare consists of four parts. Medicare Part A provides payment for inpatient hospital, skilled nursing facility, some home health, and hospice services, while Part B pays for hospital outpatient, physician, some home health, durable medical equipment, and preventive services. In addition, Medicare beneficiaries have an option to participate in Medicare Advantage, also known as Part C, which pays private health plans to provide the services covered by Medicare Parts A and B. Further, all Medicare beneficiaries may purchase coverage for outpatient prescription drugs under Medicare Part D, and some Medicare Advantage plans also include Part D coverage. In 2010, Medicare covered 47 million elderly and disabled beneficiaries and had estimated outlays of about $509 billion. CMS uses contractors to help administer the claims processing and payment systems for Medicare. These administrative contractors are responsible for processing approximately 4.5 million claims per workday. The contractors review the claims submitted by providers to ensure payment is made only for medically necessary services covered by Medicare for eligible individuals. Medicaid is the federal-state program that provides health coverage for acute and long-term care services for over 65 million low-income people. It consists of more than 50 distinct state-based programs that each define eligibility requirements and administer payment for health care services for low-income individuals, including children, families, the aged, and the disabled. Within broad federal requirements, each state operates its Medicaid program according to a state plan. Low-income Americans who meet their state's Medicaid eligibility criteria are entitled to have payments made on their behalf for covered services. States are entitled to federal matching funds, which differ from state to state but can be up to three-fourths of their costs of this coverage. The amount paid with federal funds is determined by a formula established in law. CMS oversees the Medicaid program at the federal level, while the states administer their respective programs' day-to-day operations, such as enrolling eligible individuals, establishing payment amounts for covered benefits, establishing standards for providers and managed care plans, processing and paying for claims and managed care, and ensuring that state and federal health care funds are not spent improperly or diverted by fraudulent providers. The estimated outlays for Medicaid for both the federal and state governments were $408 billion in 2010. Of this cost, approximately $275 billion was incurred by the federal government and $133 billion by the states.

CMS Program Integrity Initiatives

The Health Insurance Portability and Accountability Act (HIPAA) of 1996 established the Medicare Integrity Program to increase and stabilize federal funding for health care antifraud activities. The act appropriated funds for the program as well as amounts for HHS and the Department of Justice to carry out the health care fraud and abuse control program. Subsequent legislation further outlined responsibilities under the Medicare Integrity Program.
Under the Medicare Integrity Program, CMS staff and several types of contractors perform functions to help detect cases of fraud, waste, and abuse, and other payment errors, which include reviews of paid claims to identify patterns of aberrant billing. Among these program integrity contractors are program safeguard contractors, zone program integrity contractors, and Medicare drug integrity contractors. The program safeguard and zone program integrity contractors are responsible for ensuring the integrity of benefit payments for Medicare Parts A and B (including durable medical equipment), as well as the Medi-Medi data match program. Medicare drug integrity contractors are responsible for monitoring fraud, waste, or abuse in the Medicare prescription drug program (i.e., Part D). These contractors work with the HHS Office of the Inspector General (OIG) and law enforcement organizations, such as the Department of Justice, to help law enforcement pursue criminal or civil penalties when fraudulent claims are detected. Table 1 summarizes the origin and responsibilities of the program integrity contractors who help CMS to detect fraud, waste, and abuse. In addition to provisions of HIPAA and other legislation intended to strengthen Medicare program integrity functions, in 2006 Congress created the Medicaid Integrity Program through the Deficit Reduction Act of 2005. Its goals are to strengthen the national Medicaid audit program and to enhance federal oversight of and support and assistance to state Medicaid programs. The program provides states with technical assistance and support to enhance the federal-state partnership as well as to expand activities that involve data analysis, sharing algorithms of known improper billings, and fraud awareness through education and outreach. Individual states are responsible for ensuring the accuracy of Medicaid payments within their state programs, which can involve using their own staff or contractors to analyze claims to detect improper payments. In addition to the states' efforts, CMS employs Medicaid program integrity contractors to perform specific activities as part of its efforts to detect fraud, waste, and abuse in the Medicaid program, such as reviewing provider claims payments that have been processed by the states. Generally, each state Medicaid program integrity unit works independently, using its own data models, data warehouses, and approach to analysis. As a result, Medicaid data are stored in multiple disparate systems and databases throughout the country. Because of the volumes of work, states often augment their in-house capabilities by contracting with companies that specialize in Medicaid claims and utilization reviews. State Medicaid program integrity units target their activities to those providers that pose the greatest financial risk to their Medicaid programs. However, the states have limited methods of identifying Medicaid fraud in neighboring jurisdictions or by providers who move from state to state. As stated in a July 2007 report by the HHS OIG, the agency intends for program integrity contractors to perform a significant amount of self-initiated, exploratory analysis to seek patterns or instances of fraud and abuse. One of the specific activities undertaken by these contractors is the analysis of claims data to identify improper billing that may indicate fraud or abuse.
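To illustrate the kind of claims analysis described above, the sketch below (in Python) shows one deliberately simplified way paid-claims data might be screened for aberrant billing patterns. It is not the methodology used by CMS or its program integrity contractors; the column names (provider_id, specialty, paid_amount) and the peer-comparison rule are assumptions chosen only for illustration.

import pandas as pd

def flag_aberrant_billing(claims, z_cutoff=3.0):
    """Flag providers whose total paid claims are far outside their peer group.

    claims: one row per paid claim, with hypothetical columns
            'provider_id', 'specialty', and 'paid_amount'.
    """
    # Total payments per provider.
    totals = (claims.groupby(["specialty", "provider_id"])["paid_amount"]
                    .sum()
                    .reset_index(name="total_paid"))

    # Compare each provider with peers in the same specialty.
    stats = totals.groupby("specialty")["total_paid"].agg(["mean", "std"])
    totals = totals.join(stats, on="specialty")
    totals["z_score"] = (totals["total_paid"] - totals["mean"]) / totals["std"]

    # Providers far above their peers are leads for further review,
    # not findings of fraud by themselves.
    return (totals[totals["z_score"] > z_cutoff]
            .sort_values("z_score", ascending=False))

An actual review would use more refined peer groups and measures (for example, claims per beneficiary or utilization rates), but the basic pattern is the same: summarize billing, compare it with peers, and flag outliers as leads for manual review.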
If the billing appears to be potentially fraudulent or abusive, the contractors take further actions, which can include requesting and reviewing medical records associated with the claims and referring the case to law enforcement. In 2010, CMS created the Center for Program Integrity to serve as its focal point for all national Medicare and Medicaid program integrity fraud and abuse issues. The new center is responsible for, among other things, collaborating with other CMS components to develop and implement a comprehensive strategic plan, objectives, and measures to carry out the agency’s program integrity mission and goals, and ensure program vulnerabilities are identified and resolved. According to agency documentation describing the program, the center was designed to promote the integrity of the Medicare and Medicaid programs through provider and contractor audits and policy reviews, identification and monitoring of program vulnerabilities, and support and assistance to states; collaborate on the development and advancement of new legislative initiatives and improvements to deter, reduce, and eliminate fraud, waste and abuse; oversee all CMS interactions and collaboration with key stakeholders related to program integrity (e.g., the Department of Justice, HHS OIG, and state law enforcement agencies) for the purposes of detecting, deterring, monitoring, and combating fraud and abuse; and take action against those who commit or participate in fraudulent or other unlawful activities. CMS’s Use of IT to Help Detect Fraud, Waste, and Abuse Like financial institutions, credit card companies, telecommunications firms, and other private sector companies that take steps to protect customers’ accounts, CMS uses automated software tools to help predict or detect cases of improper claims and payments. For more than a decade, CMS and its contractors have applied such tools to access data from various sources to analyze patterns of unusual activities or financial transactions that may indicate fraudulent charges or other types of improper payments. For example, to identify unusual billing patterns and to support referrals for prosecution or other action, CMS and program integrity contractor analysts and investigators need, among other things, access to information about key actions taken to process claims as they are filed and specific details about claims already paid. This includes information on claims as they are billed, adjusted, and paid or denied; check numbers on payments of claims; and other specific information that could help establish provider intent. These data, along with data on regional or national trends on claims billing and payment, support the investigation and potential prosecution of fraud cases. Upon completing investigations, the contractors determine whether to refer the investigations as cases to law enforcement officials. CMS and its program integrity contractors currently use many different means to store and manipulate data and, since the establishment of the agency’s program integrity initiatives in the 1990s, have built multiple databases and developed analytical software tools to meet their individual and unique needs. However, according to CMS, these geographically distributed, regional approaches to data analysis result in duplicate data and limit the agency’s ability to conduct analyses of data on a nationwide basis. 
Additionally, data on Medicaid claims are scattered among the states in multiple disparate systems and data stores, and are not readily available to CMS. Thus, CMS has been working for most of the past decade to consolidate program integrity data and analytical tools for detecting fraud, waste, and abuse. The agency's efforts led to the initiation of the IDR program and, subsequently, the One PI program, which are intended to provide CMS and its program integrity contractors with a centralized source that consolidates Medicare and Medicaid data from the many disparate and dispersed legacy systems and databases and a Web-based portal and set of analytical tools by which these data can be accessed and analyzed to help detect cases of fraud, waste, and abuse.

CMS's Initiative to Develop a Centralized Source of Medicare and Medicaid Data

The CMS Office of Information Services is responsible for agencywide IT management. Its initiative to develop a centralized data warehouse began in 2003 as an element of the agency's Enterprise Data Modernization strategy. According to agency documentation, the strategy was designed to meet the increasing demand for higher quality and more timely data to support decision making throughout the agency, including identifying trends and discovering patterns of fraud, waste, and abuse. As part of the strategy, the agency established the Data Warehouse Modernization project to develop and implement the technology needed to store long-term data for analytical purposes, such as summary reports and statistical analyses. CMS initially planned for the data warehouse project to be complete by September 30, 2008. However, in 2006 CMS expanded the scope of the project to not only modernize data storage technology but also to integrate Medicare and Medicaid data into a centralized repository. At that time, program officials also changed the name to IDR, which reflected the expanded scope. The Office of Information Services' Enterprise Data Group manages the IDR program and is responsible for the design and implementation of the system. The program's overall goal is to integrate Medicare and Medicaid data so that CMS and its partners may access the data from a single source. Specific goals for the program are to transition from stove-piped, disparate sets of databases to a highly integrated data environment for the enterprise; transition from a claim-centric orientation to a multi-view orientation that includes beneficiaries, providers, health plans, claims, drug code data, clinical data, and other data as needed; provide uniform privacy and security controls; provide database scalability to meet current and expanding volumes of data; and provide users the capability to analyze the data in place instead of relying on data extracts. According to IDR program officials, CMS envisioned that IDR would become the single repository for the agency's data and enable data analysis within and across programs. Specifically, IDR was to establish the infrastructure for storing data for Medicare Parts A, B, and D, as well as a variety of other CMS functions, such as program management, research, analytics, and business intelligence. CMS envisioned an incremental approach to incorporating data into IDR. Specifically, program plans provided to the Office of Management and Budget (OMB) by the Office of Information Services in 2006 stated that all Medicare Part D data would be incorporated into IDR by the end of that fiscal year.
CMS’s 2007 plans added the incorporation of Medicare Parts A and B data by the end of fiscal year 2007, and Medicaid data for 5 states by the end of fiscal year 2009, 20 states by 2010, 35 by 2011, and all 50 states by the end of fiscal year 2012. Initial program plans and schedules also included the incorporation of additional data from legacy CMS claims-processing systems that store and process data related to the entry, correction, and adjustment of claims as they are being processed, along with detailed financial data related to paid claims. According to program officials, these data, called “shared systems” data, are needed to support the agency’s plans to incorporate tools to conduct predictive analysis of claims as they are being processed, helping to prevent improper payments. Shared systems data, such as check numbers and amounts related to claims that have been paid, are also needed by law enforcement agencies to help with fraud investigations. CMS initially planned to include all the shared systems data in IDR by July 2008. Figure 1 shows a timeline of initial plans for incorporating data into IDR. In 2006, CMS’s Office of Financial Management initiated the One PI program with the intention of developing and implementing a portal and software tools that would enable access to and analysis of claims, provider, and beneficiary data from a centralized source. CMS’s goal for One PI was to support the needs of a broad program integrity user community, including agency program integrity personnel and contractors who analyze Medicare claims data, along with state agencies that monitor Medicaid claims. To achieve its goal, agency officials planned to implement a tool set that would provide a single source of information to enable consistent, reliable, and timely analyses and improve the agency’s ability to detect fraud, waste, and abuse. These tools were to be used to gather data about beneficiaries, providers, and procedures and, combined with other data, find billing aberrancies or outliers. For example, as envisioned, an analyst could use software tools to identify potentially fraudulent trends in ambulance services. He or she could gather data about claims for ambulance services and medical treatments, and then use other software to determine associations between the two types of services. If the analyst found claims for ambulance travel costs but no corresponding claims for medical treatment, the analyst may conclude that the billings for those services were possibly fraudulent. According to agency program planning documentation, the One PI system was to be developed incrementally to provide access to data, analytical tools, and portal functionality in three phases after an initial proof of concept phase. The proof of concept phase was reportedly begun in early 2007 and focused on integrating Medicare and Medicaid data into the portal environment. After its completion, the first development phase focused on establishing a development environment in CMS’s Baltimore, Maryland, data center and, according to program officials, was completed in April 2009. The second and third phases of development were planned in January 2009 to run concurrently and to focus on the technical and analytical aspects of the project, such as building the environment to integrate the analytical tools using data retrieved from IDR, sourcing claims data from the shared systems, conducting data analyses in production, and training analysts who were intended users of the system. 
CMS planned to complete these two phases and implement the One PI portal and two analytical tools for use by program integrity analysts on a widespread basis by the end of fiscal year 2009. CMS's Office of Financial Management engaged contractors to develop the system. Responsibility for and management of the One PI program moved from the Office of Financial Management to the Center for Program Integrity in 2010. Figure 2 illustrates initial plans for One PI.

Prior GAO Reports on Fraud, Waste, and Abuse in the Medicare and Medicaid Programs

In our prior work, we have reported on CMS's efforts to detect and prevent fraudulent and improper payments in the Medicare and Medicaid programs and on its management of IT to support its mission. For example, as early as 1995, we reviewed IT systems used in the Medicare program to detect and prevent fraud and discussed the availability of other technologies to assist in combating fraudulent billing. We found it was too early to fully document the cost-effectiveness of such systems, although several potential fraud cases were detected by this technology, indicating that these types of systems could provide net benefits in combating fraud. We observed that such technology could ultimately be utilized in the claims-processing environment to delay or even prevent the payment of questionable claims submitted by suspect providers. We have also reported on weaknesses in CMS's processes for managing IT investments based upon key practices established in our Information Technology Investment Management framework. Specifically, in 2005, we evaluated CMS's capabilities for managing its internal investments, described plans the agency had for improving these capabilities, and examined the agency's process for approving and monitoring state Medicaid Management Information Systems. We found that CMS had not established certain key practices for managing individual IT investments and recommended that the CMS Administrator develop and implement a plan to address the IT investment management weaknesses identified in the report. We also recommended that at a minimum, the agency should update its investment management guide to reflect current investment management processes. CMS subsequently took actions to implement each of our recommendations. Additionally, our 2007 study of the Medicare durable medical equipment, prosthetics, orthotics, and supplies benefit found that it was vulnerable to fraud and improper payments. We recommended that CMS direct its contractors to develop automated prepayment controls to identify potentially improper claims and consider adopting the most cost-effective controls of other contractors. CMS concurred with the recommendation, but has not yet implemented the prepayment controls that we recommended. In 2009, we examined the administration of the Medicare home health benefit, which we found left the benefit vulnerable to fraud and improper payments. We made several recommendations to the Administrator of CMS, including directing contractors to conduct post-payment medical reviews on claims submitted by home health agencies with high rates of improper billing identified through prepayment review. CMS stated it would consider two of our four recommendations—to amend regulations to expand the types of improper billing practices that are grounds for revocation of billing privileges, and to provide physicians who certify or recertify plans of care with a statement of services received by beneficiaries.
CMS neither agreed nor disagreed with our other two recommendations. Finally, in testifying on Medicare and Medicaid fraud, waste, and abuse in March 2011, we described steps that CMS could take to reduce improper payments and the agency's recent solicitation for proposals of contracts for the development and implementation of automated tools that support reviews of claims before they are paid. These predictive modeling tools are intended to provide new capabilities to help prevent improper payments of Medicare claims.

IDR and One PI Have Been Developed and Implemented but Without All Planned Data and Widespread Use

CMS has developed and implemented IDR and One PI for use by its program integrity analysts, but IDR does not include all the data the agency planned to have incorporated by the end of 2010, and One PI is being used by a limited number of analysts. While CMS has developed and begun using IDR, the repository does not include all the planned data, such as Medicaid and shared systems data. Program officials attribute this lack of data to insufficient planning, which did not consider unexpected obstacles or allow time for contingencies. In addition, the agency has developed and deployed One PI, but the system is being used by less than 7 percent of the intended user community and does not yet provide as many tools as planned. According to agency officials, plans to train and deploy the system to a broad community of users were disrupted when resources dedicated to these activities were redirected to address a need to improve the user training program. Further, plans and schedules for completing the remaining work have not been finalized, and CMS has not identified risks and obstacles to project schedules that may affect its ability to ensure broad use and full implementation of the systems. Until program officials finalize plans and develop reliable schedules for providing all planned data and capabilities and ensuring that One PI gains broader use throughout the program integrity community, CMS will remain at risk of experiencing additional delays in reaching widespread use and full implementation of the systems. Consequently, the agency may miss an opportunity to effectively use these IT solutions to enhance its ability to detect fraud, waste, and abuse in the Medicare and Medicaid programs.

IDR Has Been Developed and Is in Use, but Does Not Yet Include All Data Needed to Enhance Program Integrity Efforts

IDR has been in use by CMS and contractor program integrity analysts since September 2006 and currently incorporates data related to claims for reimbursement of services under Medicare Parts A, B, and D. Specifically, CMS incorporated Part D data into IDR in September 2006, as planned, and incorporated Parts A and B data by the end of fiscal year 2008. The primary source of these data is CMS's National Claims History database, from which data are extracted on a weekly basis. Other supplemental data were incorporated into IDR that are used to conduct program integrity analyses, including drug code data that are obtained from daily and weekly updates of data from CMS's Drug Data Processing System, and claims-related data about physicians that are retrieved from National Provider Index databases on a daily basis. Additionally, IDR contains data about beneficiaries that are extracted daily from the Medicare Beneficiary Database and health plan contract and benefit data that are obtained on a weekly basis from CMS's Health Plan Management Systems.
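The extract cadences just described can be summarized in a small configuration structure, sketched below in Python. The source-system names are taken from this report; the dictionary layout and the helper function are illustrative assumptions, not CMS artifacts.

# Illustrative mapping of IDR source systems to their reported extract cadences.
IDR_REFRESH_SCHEDULE = {
    "National Claims History (Parts A, B, and D claims)": "weekly",
    "Drug Data Processing System (drug code data)": "daily and weekly",
    "National Provider Index databases (physician data)": "daily",
    "Medicare Beneficiary Database": "daily",
    "Health Plan Management Systems (contract and benefit data)": "weekly",
}

def extracts_due(frequency):
    """Return the source systems whose reported cadence includes the given frequency."""
    return [source for source, cadence in IDR_REFRESH_SCHEDULE.items()
            if frequency in cadence]

# Example: extracts_due("daily") returns the three sources refreshed daily.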
According to IDR program officials with the Office of Information Services, the integration of these data into IDR established a centralized source of data previously accessed from multiple disparate system files. CMS reported to OMB in 2010 that the agency had spent almost $48 million to establish IDR and incorporate the existing data since the program was initiated. Table 2 provides the actual costs of developing and implementing IDR for each year since fiscal year 2006, as reported to us by CMS officials. Although the agency has been incorporating data from various data sources since 2006, IDR does not yet include all the data that were planned to be incorporated by the end of 2010 and that are needed to support enhanced program integrity initiatives. Specifically, the shared systems data that are needed to allow predictive analyses of claims are not incorporated. Without this capability, program integrity analysts are not able to access data from IDR that would help them identify and prevent payment of fraudulent claims. Additionally, IDR does not yet include the Medicaid data that are critical to analysts’ ability to detect fraud, waste, and abuse in the Medicaid program. According to IDR program officials, the shared systems data were not incorporated into IDR because, although initial program integrity requirements included the incorporation of these data by July 2008, funding for the development of the software and acquisition of the hardware needed to meet this requirement was not approved until the summer of 2010. Since then, IDR program officials have developed project plans and identified users’ requirements, and plan to incorporate shared systems data by November 2011. With respect to Medicaid data, program officials stated that the agency has not incorporated these data into IDR because the original plans and schedules for obtaining Medicaid data did not account for the lack of a mandate or funding for states to provide Medicaid data to CMS, or the variations in the types and formats of data stored in disparate state Medicaid systems. In this regard, program officials did not consider risks to the program’s ability to collect the data and did not include additional time to allow for contingencies. Consequently, the IDR program officials were not able to collect the data from the states as easily as they expected and, therefore, did not complete this activity as originally planned. In addition to the IDR program, in December 2009, CMS initiated another agencywide program intended to, among other things, identify ways to collect Medicaid data from the many disparate state systems and incorporate the data into a single data store. As envisioned by CMS, this program, the Medicaid and Children’s Health Insurance Program Business Information and Solutions program, or MACBIS, is to include activities in addition to providing expedited access to current data from state Medicaid programs. For example, the MACBIS initiative is also intended to result in the development of a national system to address the needs of federal and state Medicaid partners, along with technical assistance and training for states on the use of the system. Once established, the MACBIS system data would then be incorporated into IDR and made accessible to program integrity analysts. According to program planning documentation, this enterprisewide initiative is expected to cost about $400 million through fiscal year 2016. 
However, plans for this program are not final, and funds for integrating Medicaid data into IDR have not yet been requested. According to agency planning documentation, as a result of efforts to be initiated under the MACBIS program, CMS intends to incorporate Medicaid data for all 50 states into IDR by the end of fiscal year 2014. Program integrity officials stated that they plan to work with three states during 2011 to test the transfer and use of Medicaid data to help CMS determine the data that are available in those states’ systems. The Center for Program Integrity is also working with Medicaid officials to establish a test environment to begin integrating state Medicaid data into IDR. Despite establishing these high-level milestones, the agency has not finalized detailed plans for incorporating the Medicaid data that include reliable schedules that identify all the necessary activities and resources for completing these efforts or the risks associated with efforts to collect and standardize data from 50 independent systems that differ in design, technology, and other characteristics dictated by state policies. Table 3 shows the original planned dates for incorporating the various types of data and the data that were incorporated into IDR as of the end of fiscal year 2010. While CMS has identified target dates for incorporating the remaining data, best practices, such as those described in our cost estimation guide, emphasize the importance of establishing reliable program schedules that include all activities to be performed, assign resources (labor, materials, etc.) to those activities, and identify risks and their probability and build appropriate reserve time into the schedule. However, the IDR schedule we reviewed did not identify all activities and necessary resources or include a schedule risk analysis. Such an analysis could have helped CMS identify and prepare for obstacles, such as those previously encountered in trying to incorporate Medicaid data into IDR and expected to be encountered as CMS initiates efforts to collect and standardize data from 50 state systems. Without establishing a reliable schedule for future efforts to incorporate new data sources, the agency will be at greater risk of schedule slippages, which could result in additional delays in CMS’s efforts to incorporate all the data sources into IDR that are needed to support enhanced program integrity efforts. One PI Has Been Developed but Deployed to Few Users and With Less Functionality Than Planned According to program officials, user acceptance testing of the One PI system was completed in February 2009, and the system was deployed in September 2009 as originally planned. This initial deployment of One PI consisted of a portal that provided Web-based access to analytical tools used by program integrity analysts to retrieve and analyze data stored in IDR. CMS reported to OMB that the agency had spent almost $114 million to develop the existing features and functionality of the One PI system by the end of fiscal year 2010. Table 4 provides information on the actual costs of developing One PI since fiscal year 2006, as reported to us by CMS officials. As currently implemented, the system provides access to two analytical tools—Advantage Suite and Business Objects. 
Documented specifications of the One PI system described Advantage Suite as a commercial, off-the-shelf decision support tool that is used to perform data analysis to, for example, detect patterns of activities that may identify or confirm suspected cases of fraud, waste, or abuse. According to program officials and the One PI users to whom we spoke, program integrity analysts use Advantage Suite to analyze claims data retrieved from IDR and create standard and custom reports that combine data about costs and quality of services, providers, and beneficiaries. The results of this level of analysis may be used to generate leads for further analysis with Business Objects, which provides users extended capabilities to perform more complex analyses of data by allowing customized queries of claims data across the three Medicare plan types. It also allows the user to create ad hoc queries and reports for nonroutine analysis. For example, an analyst could use Advantage Suite to identify potentially fraudulent trends in ambulance services. He or she could use the tool to gather data about claims for ambulance services and medical treatments, and then use Business Objects to conduct further analysis to determine associations between the two types of services. If the analyst found claims for ambulance travel costs but no corresponding claims for medical treatment, the analyst may conclude that the billings for those services were possibly fraudulent. Figure 3 provides a simplified view of the IDR and One PI environment as currently implemented. While program officials deployed the One PI portal and two analytical tools to CMS and contractor program integrity analysts, the system was not being used as widely as planned. Program planning documentation from August 2009 indicated that One PI program officials planned for 639 program integrity staff and analysts to be trained and using the system by the end of fiscal year 2010; however, CMS confirmed that by the end of October 2010 only 42 of those intended users were trained to use One PI, and 41 were actively using the portal and tools. These users represent less than 7 percent of the original intended users. Of these, 31 were contractors and 10 were CMS staff who performed analyses of claims to detect potential cases of fraud, waste, and abuse. Table 5 describes the analysts planned to be and actually using One PI at the end of fiscal year 2010. According to One PI program officials, the system was not being used by the intended number of program integrity analysts because the office had not trained a sufficient number of analysts to use the system. Similarly, although CMS contractually requires Medicare program integrity contractors to use the system, officials stated that they could not enforce this requirement because they also had not trained enough of their program integrity contractors. Although One PI program plans emphasized the importance of effective training and communications, program officials responsible for implementing the system acknowledged that their initial training plans and efforts were insufficient. According to the officials, they initially provided training for all the components of the system—the portal, tools, and use of IDR data—in a 3-and-a-half-day course. However, they realized that the trainees did not effectively use One PI after completing the training.
Consequently, program officials initiated activities and redirected resources to redesign the One PI training plan in April 2010, and began to implement the new training program in July of that year. The redesigned program includes courses on each of the system components and allows trainees to use the components to reinforce learning before taking additional courses. For example, the redesigned plan includes a One PI portal overview and data training webinars that users must complete before attending instructor-led training on Advantage Suite and Business Objects. The new plans also incorporate the use of “data coaches” who provide hands-on help to analysts, such as assistance with designing queries. Additionally, the plans require users to complete surveys to evaluate the quality of the training and their ability to use the tools after they complete each course. As program officials took the initiative and time to redesign the training program, this effort caused delays in CMS’s plans to train the intended number of users. Since the new training program was implemented, the number of users has not yet significantly increased, but the number of contractor analysts requesting training has increased. Specifically, One PI officials told us that 62 individuals had signed up to be trained in 2011, and that the number of training classes for One PI was increased from two to four per month. The officials also stated that they planned to reach out to and train more contractors and staff from the HHS OIG and the Department of Justice to promote One PI. They anticipated that 12 inspectors general and 12 law enforcement officials would be trained and using One PI by the end of May 2011. Nonetheless, while these activities indicate some progress toward increasing the number of One PI users, the number of users expected to be trained and to begin using the system represents a small fraction of the population of 639 intended users. Additionally, One PI program officials had not yet made detailed plans and developed schedules for completing training of all the intended users. Further, although program officials had scheduled more training classes, they have not established deadlines for contractor analysts to attend training so that they are able to fulfill the contractual requirement to use One PI. Unless the agency takes more aggressive steps to ensure that its program integrity community is trained, it will not be able to require the use of the system by its contractors, and the use of One PI may remain limited to a much smaller group of users than the agency intended. As a result, CMS will continue to face obstacles in its efforts to deploy One PI to the intended number of program integrity users as the agency continues to develop and implement additional features and functionalities in the system. Additionally, although efforts to develop and implement One PI were initiated in 2006 and the Advantage Suite and Business Objects tools are fully developed, implemented, and in use, the One PI system does not yet include additional analytical functionality that CMS initially planned to implement by the end of 2010. Program documentation for the system includes plans for future phases of One PI development to incrementally add new analytical tools, additional sources of data, and expanded portal functionality, such as enhanced communications support, and specifically included the integration of a third tool by the end of fiscal year 2010. 
However, program officials have not yet identified users’ needs for functionality that could be provided by another tool, such as the capability to access and analyze more data from IDR than the current implementation of the system provides. According to program officials, they intend to determine users’ needs for additional functionality when the system becomes more widely used by agency and contractor analysts who are able to identify deficiencies and define additional features and functionality needed to improve its effectiveness. Additionally, as with IDR, in developing the One PI schedule estimate that was provided to OMB in 2010, program officials did not complete a risk assessment for the schedule that identified potential obstacles to the program. As a result, they lacked information needed to plan for additional time to address contingencies when obstacles arose. As the program office makes plans for deploying the system to the wide community of program integrity analysts and implementing additional tools, it is crucial that officials identify potential obstacles to the schedules and the risks they may introduce to the completion of related activities. For example, an analysis that identified the risk that resources would need to be redirected to other elevated priorities, such as user training, could have informed managers of the need to include additional time and resources in the schedule to help keep the development and deployment of One PI on track. Unless program officials complete a risk assessment of schedules for ongoing and future activities, CMS faces risks of perpetuating delays in establishing widespread use of One PI and achieving full implementation of the system for increased rates of fraud, waste, and abuse detection. CMS Is Not Yet Positioned to Fully Meet Goals and Objectives for Detecting Fraud, Waste, and Abuse through the Use of IDR and One PI Our prior work emphasized agencies’ need to ensure that IT investments actually produce improvements in mission performance. As we have reported, agencies should forecast expected benefits and then measure actual financial benefits accrued through the implementation of IT programs. Further, OMB requires agencies to report progress against performance measures and targets for meeting them that reflect the goals and objectives of the programs. To do this, performance measures should be outcome-based, developed with stakeholder input, and monitored and compared to planned results. Additionally, industry experts describe the need for performance measures to be developed with stakeholders’ input early in a project’s planning process to provide a central management and planning tool and to monitor the performance of the project against plans and stakeholders’ needs. CMS Has Made Limited Progress toward Meeting Program Integrity Goals and Objectives through the Use of IDR As stated in program planning documentation, IDR’s overall goal is to integrate Medicare and Medicaid data so that CMS and its partners may access the data from a single source. Specifically, the implementation of IDR was expected to result in financial benefits associated with the program’s goal to transition from a data environment of stove-piped, disparate databases and systems to an integrated data environment. Officials with the Office of Information Services stated that they developed estimates of financial benefits expected to be realized through the use of IDR. 
In 2006, program officials projected financial benefits from IDR of $152 million at an estimated cost of $82 million, or a net benefit of about $70 million. In 2007, these officials revised their projection of total financial benefits to $187 million based on their estimates of the improper payments they expected to recover as a result of analyzing data provided by IDR. The resulting net benefit expected from implementing IDR was estimated to be $97 million in 2010 because of changes in program cost estimates. Table 6 includes CMS's estimated financial benefits, costs, and net benefits reported to OMB for the lifecycle of the program from fiscal year 2006 to 2010. However, as of March 2011, program officials had not identified actual financial benefits of implementing IDR based on the recovery of improper payments. In our discussions with the Office of Information Services, program officials stated that deploying IDR allowed the agency to avoid IT costs by enabling the retirement of several legacy systems. However, they had not quantified these or any other financial benefits. Until officials measure and track financial benefits related to program goals, CMS cannot be assured that the use of the system is helping the agency prevent or recover funds lost as a result of improper payments of Medicare and Medicaid claims.

Additionally, while program officials defined and reported to OMB performance targets for IDR related to some of the program's goals, these targets do not reflect the program's goal to provide a single source of Medicare and Medicaid data for program integrity efforts. Although progress made to date in implementing IDR supports the program's goal to transition CMS to an integrated data environment, program officials have not defined and reported to OMB performance measures to gauge the extent to which the program is meeting this goal. Specifically, IDR officials defined performance measures for technical indicators, such as incorporating Medicare data into the repository, making the data available for analysis, and reducing the number of databases CMS must support. However, they have not defined measures and targets that reflect the extent to which all the data needed to support program integrity initiatives—including the Medicaid and shared systems data that have not yet been incorporated into IDR—are brought together in a single source. Further, the IDR performance measures do not include indicators of the program's contribution to achieving the financial benefits defined by the agency's program integrity initiatives. In discussing this matter, IDR officials stated that the performance measures for the program are intended only to track progress toward implementing technical capabilities of the system, such as the amount of data from specific sources incorporated into the repository and made available to analysts through software tools. They do not define performance indicators, measures, and targets for incorporating data from future sources until plans are made and funds are provided by the agency's business offices to begin activities to implement new functionality in IDR. IDR program officials also stated that they do not define or track business-related performance indicators for achieving specific program integrity goals; rather, they depend upon business owners to measure and track these indicators based upon the use of IDR data to achieve business goals. 
However, without performance measures that reflect business owners’ and other stakeholders’ needs for the program to deliver a single source of all Medicare and Medicaid data needed to conduct analyses, and lacking measures that reflect the success of the program toward achieving financial benefits projected for program integrity initiatives, program officials lack key management information needed to ensure that the data and infrastructure components provided by IDR enhance CMS’s ability to meet its program integrity goals and objectives. Without this assurance, the effectiveness of the system’s capability to increase rates of fraud, waste, and abuse detection and, consequently, decrease the amount of money lost to improper payments of claims will remain unknown. CMS Is Not Yet Positioned to Demonstrate Improvements in Its Ability to Meet Goals and Objectives for Detecting Fraud, Waste, and Abuse through the Use of One PI The Center for Program Integrity’s overall goal for One PI was to provide robust tools for accessing a single source of information to enable consistent, reliable, and timely analyses to improve the agency’s ability to detect fraud, waste, and abuse. Achieving this goal was intended to result in the recovery of significant funds lost each year from improper payments of Medicare and Medicaid claims. In September 2007, program officials projected financial benefits from implementing One PI—nearly $13 billion over the 10-year lifecycle of the project. According to program officials, these benefits were expected to accrue from the recovery of improper payments of Medicare and Medicaid claims and reduced program integrity contractor expenditures for supporting IT required to maintain separate databases. In September 2007, One PI officials projected and reported to OMB benefits of nearly $13 billion. They subsequently revised this estimate to approximately $21 billion. Program officials told us that increases in the projected financial benefits were made based on assumptions that accelerated plans to integrate Medicare and Medicaid data into a central data repository would enable One PI users to identify increasing numbers of improper payments sooner than previously estimated, thus allowing the agency to recover more funds lost due to payment errors. Table 7 provides data CMS reported to OMB on estimated benefits and costs, actual costs as of the end of fiscal year 2010, and net benefits projected to be realized as a result of implementing One PI from fiscal year 2007 through 2010. However, the current implementation of One PI has not yet produced outcomes that position the agency to identify or measure financial benefits. Therefore, the net financial benefit of developing and implementing One PI remains unknown. Center for Program Integrity officials stated that at the end of fiscal year 2010—over a year after deploying One PI—it was too early to determine whether the program has provided any financial benefits because, since the program had not met its goal for widespread use of One PI, there were not enough data available to quantify financial benefits attributable to the use of the system. These officials anticipated that as the user community is expanded, they will be able to begin to identify and measure financial and other benefits of using the system. However, the officials also indicated that they had not yet defined mechanisms for determining the amount of money recovered as a result of detecting improper payments through the use of One PI. 
As with IDR, until the agency quantifies and tracks the progress it is making in delivering the benefits intended to be realized through widespread use of One PI, CMS officials cannot be assured of the cost-effectiveness of implementing One PI to help the agency meet its goal of enabling consistent, reliable, and timely analyses of data to improve its ability to detect fraud, waste, and abuse. Additionally, in discussion groups held with active One PI users, program integrity analysts identified several issues that confirmed the agency's limited progress toward meeting the goals of the program. For example, while several users told us that the One PI system can support their work, they recognized limited progress toward the establishment of a single source of information and analysis tools for all fraud, waste, and abuse activities. Further, One PI users stated that the system enabled analysts to access national data not otherwise accessible to them and supported analysis across different Medicare programs. They also noted that the tools offered by One PI provided more functionality than other tools they use. However, most of the analysts in the discussion groups did not use One PI as their only source of information and analysis for detecting improper payments. Rather, to help conduct their work, they relied on other analysis tools provided by CMS or their companies, along with data from CMS claims processing contractors or from private databases created by other contractors. One PI users in the discussion groups also told us that they use other tools because they are more familiar with those tools. Additionally, they stated that other databases sometimes provide data that are not currently accessible through One PI and IDR, such as demographic data about providers. Program integrity analysts further stated that they use One PI only as a cross-check of data and analysis from their own systems because they are not yet convinced that One PI can serve as a replacement for, or adjunct to, those data sources and tools. Further, although CMS officials defined and reported to OMB performance measures and targets related to the program's goals for enabling timely analyses of data to detect cases of fraud, waste, and abuse, they have not yet been able to quantify these measures. For example, performance measures and targets for One PI include increases in the detection of improper payments for Medicare Parts A and B claims. However, according to program integrity officials, these measures had not yet been quantified because officials had not identified ways to determine the extent to which increases in the detection of errors could be attributed to the use of One PI. Additionally, the limited use of the system has not generated enough data to quantify the amount of funds recovered from improper payments. Moreover, measures of One PI's program performance do not accurately reflect the current state of the program. Specifically, indicators to be measured for the program include the number of states using One PI (for Medicaid integrity purposes) and decreases in the Medicaid payment error rate, but One PI does not have access to those data because they are not yet incorporated into IDR. Therefore, these performance indicators are not relevant to the current implementation of the system. 
Finally, CMS officials did not consult external system users (e.g., program integrity contractors) in developing measures of One PI's effectiveness. According to industry experts, developing performance measures with stakeholder input early in the planning process can provide a mechanism for gauging, as a program progresses, how effectively its outcomes are meeting business needs and achieving program goals. According to program officials, program integrity stakeholders within CMS were involved in the development of the performance measures; however, external users of the system were not asked to provide input at points where it could have helped establish an effective performance tracking tool, such as when defining ways to determine whether One PI meets stakeholders' needs. For example, program officials told us that they intend to determine user satisfaction, a performance measure reported to OMB, by conducting surveys at the end of training sessions. However, these surveys were conducted before the analysts actually used the system in their work and were focused on satisfaction with the training itself. In this case, involving external stakeholders in defining the measure could have led to more effective ways to determine user satisfaction, such as surveying analysts about their experiences using One PI after a period of time defined by stakeholders. Until they define measurable performance indicators and targets that reflect the goals and objectives of CMS's program integrity initiatives, agency officials will continue to lack the information needed to ensure that the implementation of One PI helps improve the agency's ability to identify improper payments and to detect cases of fraud, waste, and abuse. Additionally, without stakeholders' input into the process for determining measures of successful performance, One PI program officials may miss an opportunity to obtain the information needed to define meaningful measures that reflect the program's success in meeting users' and the agency's needs. Because it lacks meaningful outcome-based performance measures and effective methods for tracking progress toward meeting performance targets, CMS does not have the information needed to ensure that the benefits realized from the implementation of One PI are helping the agency meet its program integrity goals. Conclusions IDR and One PI program officials have made progress in developing and implementing the two systems to support CMS's program integrity initiatives, but the systems do not yet provide all the data and functionality initially planned. Additionally, CMS program integrity officials have not yet taken appropriate actions to ensure the widespread use of IDR and One PI for program integrity purposes. Further, program officials have not defined plans and reliable schedules for incorporating into IDR the additional data needed to support the agency's program integrity goals. Until the agency takes these steps, it cannot ensure that ongoing development, implementation, and deployment efforts will provide the data and technical capabilities needed to improve program integrity analysts' ability to detect potential cases of fraud, waste, and abuse. 
Furthermore, because the systems are not being used as planned, CMS program integrity officials are not yet in a position to determine the extent to which the systems are providing financial benefits or supporting the agency's initiatives to meet its program integrity goals and objectives. Until they are able to do so, CMS officials will lack the means to determine whether the use of the systems contributes to the agency's goal of reducing the number and amounts of improper payments made as a result of fraudulent, wasteful, or abusive claims for Medicare and Medicaid services. Moreover, the contribution of IDR and One PI to the agency's efforts to save the billions of dollars lost each year to improper payments made due to fraud, waste, and abuse in the Medicare and Medicaid programs will remain unknown. Recommendations for Executive Action To help ensure that the development and implementation of IDR and One PI are successful in helping the agency meet the goals and objectives of its program integrity initiatives, we are recommending that the Administrator of CMS take the following seven actions: finalize plans and develop schedules for incorporating additional data into IDR that identify all resources and activities needed to complete tasks and that consider risks and obstacles to the IDR program; implement and manage plans for incorporating data in IDR to meet schedule milestones; establish plans and reliable schedules for training all program integrity analysts intended to use One PI; establish and communicate deadlines for program integrity contractors to complete training and use One PI in their work; conduct training in accordance with plans and established deadlines to ensure that schedules are met and that program integrity contractors are trained and able to meet requirements for using One PI; define any measurable financial benefits expected from the implementation of IDR and One PI; and, with stakeholder input, establish measurable, outcome-based performance measures for IDR and One PI that gauge progress toward meeting program goals. Agency Comments and Our Evaluation In written comments on a draft of this report, signed by HHS's Assistant Secretary for Legislation and reprinted in appendix II, CMS stated that it concurred with all of our recommendations and identified steps agency officials were taking to implement them. Among these were actions to further refine training plans to better ensure that program integrity contractors are trained and able to meet requirements to use One PI, along with efforts to define the measurable financial benefits expected from augmenting the data in IDR. If these and other identified actions are implemented in accordance with our recommendations, CMS will be better positioned to meet the goals and objectives of its program integrity initiatives. The agency also provided technical comments, which were incorporated as appropriate. As we agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. At that time, we will send copies of this report to appropriate congressional committees, the Administrator of CMS, and other interested parties. The report will also be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have questions about this report, please contact me at (202) 512-6304 or [email protected]. 
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology The objectives of our review were to (1) assess the extent to which the Centers for Medicare and Medicaid Services (CMS) has developed and implemented the Integrated Data Repository (IDR) and One Program Integrity (One PI) systems and (2) determine the agency’s progress toward achieving defined goals and objectives for using the systems to help detect fraud, waste, and abuse in the Medicare and Medicaid programs. To assess the extent to which IDR and One PI have been developed and implemented, we collected and analyzed agency documentation that described planning and management activities. Specifically, we assessed project management plans and artifacts that described the status of the systems, such as program management review briefings to technical review boards, and memoranda approving continued development and implementation of the systems at key decision points in the systems’ lifecycles. We observed the operation of CMS’s data center where IDR is installed and viewed a demonstration of the One PI portal and analytical tools. We also discussed with officials from CMS’s Office of Information Services and Center for Program Integrity plans for and progress made toward developing and implementing the systems. We focused our analysis on the extent to which the development and implementation of IDR and One PI met system and business requirements and plans for deploying the systems to CMS’s program integrity analysts. To assess the agency’s processes for defining system requirements, we reviewed IDR and One PI requirements management plans, system requirements, and documentation that traces requirements to functionality provided by the systems at different stages of implementation. Program documents we reviewed include the 2007 IDR Medicare Program Integrity Requirements, the 2006 One PI Startup Findings Draft, the 2010 One PI Requirements Management Plan, and detailed software requirements specifications for One PI. In addition, we discussed with IDR and One PI program officials their requirements development and management processes and procedures. We then assessed the department’s current approach to requirements development and management against best practices identified in the Software Engineering Institute’s Capability Maturity Model Integration. To assess schedule estimates of the IDR and One PI programs, we used criteria defined in GAO’s cost estimating and assessment guide to determine the extent to which relevant schedules were prepared in accordance with best practices that are fundamental to estimating reliable schedules. We identified information reported to the Office of Management and Budget (OMB) by CMS in fiscal year 2010 that defined program schedule estimates for the remaining lifecycles of the programs through 2016. We collected and analyzed program documentation that supported these estimates, such as work breakdown structures and staffing estimates. To assess each program’s schedule estimates, we rated the IDR and One PI program management offices’ implementation of nine scheduling best practices defined in our guidance. Based on these criteria, we analyzed the One PI integrated master schedule and the IDR validation, along with supporting documentation, and used commercially available software tools to assess the schedules. 
Specifically, we determined whether each schedule was developed by identifying and including critical elements of reliable scheduling best practices, such as identifying all resources needed to conduct activities, and whether risk assessment and contingency plans had been conducted for the schedules. We shared our guidance, the criteria against which we evaluated the program’s schedule estimates, as well as our preliminary findings with program officials. We then discussed our preliminary assessment results with the program management officials. When warranted, we updated our analyses based on the agency response and additional documentation provided to us. We also analyzed changes to the program schedules over time. To determine the reliability of the data used to assess schedule estimates, we used a scheduling analysis software tool that identified missing logic and constraints, and checked for specific problems that could hinder the schedule’s ability to dynamically respond to changes. We examined the schedule data to identify any open-ended activities (i.e., activities with no predecessor or successors), and searched for activities with poor logic, such as activities with constraints that keep the schedule rigid (e.g., start no earlier than, finish no later than, etc.). We found the data sufficiently reliable for the purposes of this review. To determine the number of system end users for One PI, we identified the universe of analysts trained to use One PI by examining documentation provided by CMS. Specifically, we obtained a list of trained users from the Center for Program Integrity. From that list, we selected program integrity analysts whom CMS identified as using the system to conduct analyses of IDR data to identify potential cases of fraud, waste, and abuse. We then compared this selection of analysts to data generated by the One PI system that recorded user login data from January 3, 2010, through October 16, 2010, to identify the current population of One PI users. Through this analysis, we identified 41 trained program integrity analysts who had used the system during the designated time period, including 8 Medicare drug integrity contractors, 23 zone program integrity and program safeguard contractors, and 10 CMS program integrity analysts. To ensure that the data that we used to identify One PI users were reliable, we held discussions with CMS officials who were knowledgeable of the user community and mechanisms for accessing the system. We discussed with them the list of trained end users and the computer-generated login information provided by the system. We also discussed the reliability of the computer-generated system login information. Specifically, agency officials confirmed that the data reported by the system were complete and accurate and that the method we used to identify active users—an analysis of system login data—was valid. To determine the extent to which the IDR and One PI programs have achieved defined goals and objectives for using the systems to help detect fraud, waste, and abuse, we collected CMS’s analyses of projected costs and benefits for IDR and One PI. We also collected and assessed data reported on the costs and benefits realized through the current implementation of the systems. To do so, we compared (1) actual costs and benefits attributed to each system through fiscal year 2010 and (2) current estimated total lifecycle costs and benefits for each system. 
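The cross-matching step described above—comparing the list of trained users with system-generated login records for January 3 through October 16, 2010—can be illustrated with a short script. The following is only a minimal sketch of that kind of analysis under assumed inputs: the file names, column names, and CSV layout are hypothetical, not the actual CMS data format or the tool we used.

```python
import csv
from datetime import date

# Hypothetical file names and column layouts -- illustrative only, not the actual CMS data.
TRAINED_USERS_FILE = "trained_users.csv"    # columns: user_id, organization
LOGIN_RECORDS_FILE = "one_pi_logins.csv"    # columns: user_id, login_date (YYYY-MM-DD)

REVIEW_START = date(2010, 1, 3)
REVIEW_END = date(2010, 10, 16)


def load_trained_users(path):
    """Return the set of user IDs reported as trained to use One PI."""
    with open(path, newline="") as f:
        return {row["user_id"] for row in csv.DictReader(f)}


def load_users_with_logins(path, start, end):
    """Return the set of user IDs with at least one system login in the review window."""
    users = set()
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if start <= date.fromisoformat(row["login_date"]) <= end:
                users.add(row["user_id"])
    return users


if __name__ == "__main__":
    trained = load_trained_users(TRAINED_USERS_FILE)
    logged_in = load_users_with_logins(LOGIN_RECORDS_FILE, REVIEW_START, REVIEW_END)
    active_users = trained & logged_in  # trained analysts who actually used the system
    print(f"{len(active_users)} of {len(trained)} trained users were active in the review period")
```

In this review, an analysis of this kind identified the 41 active users discussed above.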
We calculated the expected net benefit by subtracting estimated and actual system costs from estimated and actual system benefits for each system. To understand how costs and benefits for each system were derived, we met with officials from the Office of Information Services and from the Center for Program Integrity and discussed CMS’s processes for estimating and tracking costs and benefits of both IDR and One PI. We also obtained from agency officials documentation about and descriptions of qualitative benefits provided by both systems. Additionally, we reviewed planning documents that described the goals and objectives of both programs, along with other documentation that described actions taken to address program goals and objectives. We reviewed and assessed supporting documentation for the measures, which the agency reported to OMB as having been met. To determine if CMS’s approach to developing performance measures for IDR and One PI was consistent with federal guidance, we examined documents describing CMS’s approach and held discussions with program officials about practices they followed when defining performance measures and targets. We compared program officials’ practices to guidance defined by OMB. We also compared the performance measures defined for the two programs to CMS’s goals and objectives for program integrity initiatives to determine if the IDR and One PI measures supported intended outcomes of agencywide efforts to better detect fraud, waste, and abuse. We supplemented our documentation review with interviews of officials from the Center for Program Integrity and the Office of Information Services to obtain additional information about the development of current and future performance measures for IDR and One PI. During our interviews, we discussed performance measures and strategic goals and initiatives for One PI and IDR, and the extent to which the agency involved internal and external stakeholders in the development of performance measures. To obtain information about the extent to which One PI has been deployed and is being used by a broad community of program integrity analysts to meet CMS’s goals and objectives, we invited the 41 users we identified in addressing the first objective of this engagement to participate in facilitated discussions about the data and tools needed to support fraud, waste, and abuse detection. Thirty-two of those 41 users attended the discussion group meetings. During those meetings, we discussed the following topics: usage of One PI tools and data from IDR, comparison and contrasting of One PI and IDR with other tools and data sets, and benefits and challenges of using One PI and IDR for detecting fraud, waste, and abuse. We also discussed users’ needs for analytical tools and data and for systems training. After those discussions, we sent written questions to all 32 discussion group participants to obtain more detailed information about their use of analytical tools and data sources. Thirty-one participants responded and provided additional supplementary information about their use of One PI and IDR. For each of the objectives, we assessed the reliability of the data we analyzed through interviews with agency officials knowledgeable of the user community and training program, mechanisms for accessing the systems, and the methods for tracking and reporting costs and schedules of the IDR and One PI programs. We found the data sufficiently reliable for the purposes of this review. 
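The expected net benefit computation described at the start of this discussion is a straightforward subtraction of costs from benefits. As a simple worked illustration, using the 2006 IDR projection cited earlier in this report ($152 million in estimated benefits against an $82 million cost estimate), the calculation can be expressed as follows; the same subtraction applies to any of the other estimated or actual figures.

```python
def net_benefit(benefits_millions: float, costs_millions: float) -> float:
    """Expected net benefit = estimated (or actual) benefits minus costs, in millions of dollars."""
    return benefits_millions - costs_millions

# 2006 IDR projection cited in this report: $152 million in benefits at an $82 million estimated cost.
print(net_benefit(152, 82))  # 70 -> a net benefit of about $70 million
```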
We conducted this performance audit from June 2010 through June 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Comments from the Department of Health and Human Services Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Teresa F. Tucker (Assistant Director), Sheila K. Avruch (Assistant Director), April W. Brantley, Clayton Brisson, Neil J. Doherty, Amanda C. Gill, Kendrick M. Johnson, Lee A. McCracken, Terry L. Richardson, Karen A. Richey, and Stacey L. Steele made key contributions to this report.
GAO has designated Medicare and Medicaid as high-risk programs, in part due to their susceptibility to improper payments--estimated to be about $70 billion in fiscal year 2010. Improper payments have many causes, such as submissions of duplicate claims or fraud, waste, and abuse. As the administrator of these programs, the Centers for Medicare and Medicaid Services (CMS) is responsible for safeguarding them from loss. To integrate claims information and improve its ability to detect fraud, waste, and abuse in these programs, CMS initiated two information technology system programs: the Integrated Data Repository (IDR) and One Program Integrity (One PI). GAO was asked to (1) assess the extent to which IDR and One PI have been developed and implemented and (2) determine CMS's progress toward achieving its goals and objectives for using these systems to help detect fraud, waste, and abuse. To do so, GAO reviewed system and program management plans and other documents and compared them to key practices. GAO also interviewed program officials, analyzed system data, and reviewed reported costs and benefits. CMS has developed and begun using both IDR and One PI, but has not incorporated into IDR all data as planned and has not taken steps to ensure widespread use of One PI to enhance efforts to detect fraud, waste, and abuse. IDR is intended to be the central repository of Medicare and Medicaid data needed to help CMS program integrity staff and contractors prevent and detect improper payments of Medicare and Medicaid claims. Program integrity analysts use these data to identify patterns of unusual activities or transactions that may indicate fraudulent charges or other types of improper payments. IDR has been operational and in use since September 2006. However, it does not include all the data that were planned to be incorporated by fiscal year 2010. For example, IDR includes most types of Medicare claims data, but not the Medicaid data needed to help analysts detect improper payments of Medicaid claims. IDR also does not include data from other CMS systems that are needed to help analysts prevent improper payments, such as information about claims at the time they are filed and being processed. According to program officials, these data were not incorporated because of obstacles introduced by technical issues and delays in funding. Further, the agency has not finalized plans or developed reliable schedules for efforts to incorporate these data. Until it does so, CMS may face additional delays in making available all the data that are needed to support enhanced program integrity efforts. One PI is a Web-based portal that is to provide CMS staff and contractors with a single source of access to data contained in IDR, as well as tools for analyzing those data. While One PI has been developed and deployed to users, few program integrity analysts were trained and using the system. Specifically, One PI program officials planned for 639 program integrity analysts to be using the system by the end of fiscal year 2010; however, as of October 2010, only 41--less than 7 percent--were actively using the portal and tools. According to program officials, the agency's initial training plans were insufficient and, as a result, they were not able to train the intended community of users. 
Until program officials finalize plans and develop reliable schedules for training users and expanding the use of One PI, the agency may continue to experience delays in reaching widespread use and determining additional needs for full implementation of the system. While CMS has made progress toward its goals to provide a single repository of data and enhanced analytical capabilities for program integrity efforts, the agency is not yet positioned to identify, measure, and track benefits realized from its efforts. As a result, it is unknown whether IDR and One PI as currently implemented have provided financial benefits. According to IDR officials, they do not measure benefits realized from increases in the detection rate for improper payments because they rely on business owners to do so, and One PI officials stated that, because of the limited use of the system, there are not enough data to measure and gauge the program's success toward achieving the $21 billion in financial benefits that the agency projected.
Background State and local governments are primarily responsible for carrying out evacuations. However, if these governments become overwhelmed by a catastrophic disaster, the federal government can provide essential support, such as evacuation assistance for transportation-disadvantaged and other populations. Such support would require adequate preparation on the part of the federal government. The Stafford Act outlines the framework for state and local governments to obtain federal support in response to a disaster. First, a governor must submit a request to the President in order for the President to declare a federal disaster. Once the declaration is granted, the state can request specific assistance from FEMA (part of DHS), such as physical assets, personnel, funding, and technical assistance, among others. While the President can declare a disaster without a request from a governor, this does not frequently occur. The Post-Katrina Emergency Management Reform Act of 2006 amended sections of the Stafford Act whereby the President can provide accelerated federal assistance and support where necessary to save lives absent a specific request from a governor and can direct any federal agency to provide assistance to state and local governments in support of “precautionary evacuations.” DHS’s role is to coordinate federal resources used in disaster response, including evacuations. DHS created the National Response Plan in 2004 to create a comprehensive “all-hazards” approach to enhance the ability of the United States to manage domestic incidents. Under the National Response Plan, DOT is the lead and coordinating federal agency for transportation in a disaster. DOT is primarily responsible for coordinating the provision of federal and civil transportation services, and the recovery, restoration, safety, and security of the transportation infrastructure. However, with respect to evacuations, DOT is only responsible for providing technical assistance in evacuation planning to other federal agencies as well as state and local governments. The Post-Katrina Emergency Management Reform Act of 2006 also included numerous provisions to help strengthen federal, state, and local evacuation preparedness for some transportation-disadvantaged populations. Among these provisions are: the establishment of the National Advisory Council to advise FEMA on all aspects of emergency management that will include disability and other special needs representatives; the institution of a DHS disability coordinator to assist in emergency preparedness for persons with disabilities; the creation of the National Training Program and the National Exercise Program which are designed to address the unique requirements of special needs populations; and a requirement that federal agencies develop operational plans to respond effectively to disasters, which must address support of state and local governments in conducting mass evacuations, including transportation and provisions for populations with special needs. To facilitate evacuation preparedness, state and local entities not traditionally involved in emergency management can provide assistance— such as information or vehicles—that would be helpful in state and local evacuation-preparedness efforts for transportation-disadvantaged populations. Some such entities receive DOT grants to provide transportation for the elderly, low-income individuals, persons with disabilities, and other transportation-disadvantaged populations. 
These include social service agencies, nonprofit organizations, and public and private sector transportation providers that coordinate the daily transportation of the elderly, low-income individuals, and persons with disabilities, to provide meals or transportation to and from jobs, medical appointments, and other activities. Finally, as a condition for spending federal highway or transit funds in urbanized areas, federal highway and transit statutes require metropolitan planning organizations to plan, program, and coordinate federal highway and transit investments. To carry out these activities, metropolitan planning organizations collect transportation and transit data. In March 2006, DOT issued guidance that recommends increased interaction between some of its grant recipients and emergency management agencies, among other entities. To assess state and local evacuation preparedness, DHS’s Nationwide Plan Review examined the emergency plans of all 50 states and 75 of the largest urban areas, including evacuation plans and annexes. DOT’s report to the Congress, entitled Catastrophic Hurricane Evacuation Plan Evaluation: A Report to Congress also reviewed the evacuation plans of many of the Gulf Coast region’s counties and parishes. Both of these federal reports also recommend that additional actions be taken to address this issue. There are many relevant federal entities and other entities that have served as advocates for all or subsets of transportation-disadvantaged populations. In the federal government, these include the National Council on Disability; and interagency councils such as the Coordinating Council on Access and Mobility, the Interagency Coordinating Council on Emergency Preparedness and Individuals with Disabilities, and the Interagency Council on Homelessness. Outside of the federal government, relevant entities that have advocated for these populations include the National Organization on Disability and the American Association of Retired Persons, as well as transportation groups such as the American Public Transportation Association, the Community Transportation Association of America, and the Association of Metropolitan Planning Organizations. Challenges and Barriers Exist in Evacuation Preparedness for Transportation- Disadvantaged Populations State and local emergency management officials face several challenges in preparing for the evacuation of transportation-disadvantaged populations. For example, state and local officials face difficulties in obtaining information about where transportation-disadvantaged populations are located. These state and local officials also face challenges in determining transportation-disadvantaged populations’ needs and providing for their transportation, such as arranging for the use of appropriate equipment—buses and vans, for example—to evacuate these populations. Additionally, officials confront legal and social barriers in addressing these challenges, such as concerns about being unable to obtain client medical information from public or private sector transportation providers for use in evacuation preparedness efforts because of privacy issues. 
State and Local Governments Face Challenges in Identifying and Locating Transportation-Disadvantaged Populations, Determining Their Evacuation Needs, and Providing for Their Transportation According to experts and officials, the challenges state and local governments face in preparing for the evacuation of transportation-disadvantaged populations include identifying and locating these populations, determining their evacuation needs, and providing for their transportation. It is difficult for state and local officials to acquire the information necessary to both identify and locate transportation-disadvantaged populations. Identifying these populations is difficult because they represent large, diverse, and constantly changing groups and because information about them is not always readily available. Transportation-disadvantaged populations can include numerous categories of people without personal vehicles, such as the following: the elderly and persons with disabilities who have mobility impairments that preclude them from driving, or who need medical equipment in order to travel; low-income, homeless, or transient persons who do not have a permanent residence or who do not own or have access to a personal vehicle; children without an adult present during a disaster; tourists and commuters who are frequent users of public transportation; those with limited English proficiency, who tend to rely on public transit more than English speakers; or those who, for any other reason, do not own or have access to a personal vehicle. These populations can also include those who could be placed in, or qualify for, more than one category among transportation-disadvantaged populations, such as a person who has disabilities, is homeless, and speaks limited English. Both the large size of these populations and the potential for double counting can make identification difficult for state and local officials. For example, although 52 percent of the Gulf Coast jurisdictions evaluated in DOT's Catastrophic Hurricane Evacuation Plan Evaluation had identified and located certain transportation-disadvantaged populations, DOT reported that only three jurisdictions had satisfactorily included provisions for schools and day care centers, trailer parks and campgrounds, incarcerated and transient individuals, and people with limited English proficiency in their evacuation plans. Twenty-six percent of respondents to a question in DHS's Nationwide Plan Review stated that they needed to improve their identification of these populations. Fifteen percent of respondents to this question indicated that a standard federal definition of "transportation-disadvantaged" would facilitate their planning. 
Additionally, data on the location of transportation-disadvantaged populations are not readily available because such data: have not previously been collected; cannot be collected because of the amount of time, staff, and other resources required, or cannot be shared because some transportation-disadvantaged individuals prefer not to provide their information—for example, the established registration system in one of the five major cities we visited had registered only 1,400 people, or 0.3 percent of the 462,000 people projected to need evacuation assistance; are not compiled in a central location, but reside in separate databases across numerous agencies, companies, or organizations, including social service agencies, departments of motor vehicles, and public and private sector transportation providers; are not traditionally shared with emergency management officials—for example, a local metropolitan planning organization may collect data on those who are transit-dependent but may not have shared that information with emergency management officials; or cannot be shared with emergency officials because of privacy restrictions—for example, social service agencies or nonprofit organizations that regularly transport people during non-emergency times have information on clients' needs but may not be able or willing to share those data because of privacy concerns. In addition to identifying and locating transportation-disadvantaged populations, state and local governments also face the challenge of determining the transportation needs of these populations and providing for their transportation in an evacuation. To adequately prepare for evacuating these populations, state and local officials need information on the medical and transportation needs of each person in addition to his or her location. These needs can vary widely, from those who can travel by themselves to a government-assisted evacuation pick-up point to those who: need to be transported to a government-assisted evacuation pick-up point, but do not require medical assistance or additional transportation; live in group homes for persons with mental disabilities and may require medical assistance, but not accessible transportation, in an evacuation; or are medically frail but not hospitalized, and require acute medical assistance as well as accessible transportation in an evacuation. However, similar to the location data discussed earlier, it is difficult for state and local officials to obtain information on the transportation needs of these populations. Another challenge that state and local officials face in preparing for the evacuation of transportation-disadvantaged populations is providing for the transportation of these populations. This challenge includes identifying the appropriate equipment and available modes of transport as well as drivers and other needed professionals, providing training to those drivers and other professionals, and communicating evacuation information to the public. When preparing for an emergency, it can be difficult for state and local officials to identify, arrange for the use of, and determine the proper positioning of the equipment needed to transport these populations. The transportation needs of such populations can range from persons who can be evacuated in school buses and charter buses to the mobility-impaired, who may require low-floor buses, wheelchair lift-equipped vans, and other accessible vehicles. 
Because of the limited number of vehicles (accessible, multi-passenger, or other) available among both public transportation providers (such as transit agencies) and private transportation providers (such as ambulance and bus companies), we found that emergency officials have to spend additional time and resources arranging for transportation and ensuring that those arrangements are coordinated before an evacuation order is issued. Further, state and local governments also need to have drivers and other professionals trained to operate the additional vehicles they have acquired or to move persons with disabilities in and out of vehicles; constraints already exist on the pool of potential drivers. One example of a constrained resource is school bus drivers. If an evacuation is ordered during the school day, the availability of these drivers is severely limited because such drivers must first transport the children home. In addition, drivers who provide transportation to these populations during non-emergency times are often not trained or contracted to provide emergency transportation for these populations. Further, DOT's Catastrophic Hurricane Evacuation Plan Evaluation reported that, even in urban areas where additional modes of transportation are available, few evacuation plans recognize the potential role of intercity buses, trains, airplanes, and ferries. These modes may be particularly important for persons who cannot evacuate in personal vehicles. In response to a question in DHS's Nationwide Plan Review on how well all available modes of transportation are incorporated into evacuation plans, 48 percent of respondents stated that plans needed to improve the use of available modes of transport in evacuation planning. For example, one jurisdiction is investigating using ferries and barges in evacuations. Legal and Social Barriers to Addressing Transportation-Disadvantaged Evacuation Challenges Confront State and Local Governments According to experts and officials, several legal and social barriers confront state and local governments in addressing the aforementioned challenges to evacuating transportation-disadvantaged populations. (See fig. 2.) To begin, state and local emergency management officials often face legal barriers in obtaining data on the identification, location, or transportation needs of these populations. For example, 11 percent of respondents to a DHS Nationwide Plan Review question on addressing the needs of transportation-disadvantaged individuals before, during, and after emergencies stated that they were concerned about privacy issues related to obtaining, from public or private sector transportation providers, medical information about their clients that would help officials in their evacuation preparedness. These providers could include those that provide paratransit services for persons with disabilities, "Meals on Wheels" programs for the elderly, and job access services for low-income individuals. DOT's Catastrophic Hurricane Evacuation Plan Evaluation also cited privacy as a legal barrier. Officials in three of the five major cities we visited, as well as several federal officials with whom we spoke, expressed concern about what impact the Health Insurance Portability and Accountability Act's Privacy Rule (the Privacy Rule) might have on their ability to acquire such data. 
The act’s Privacy Rule limits the disclosure of individually identifiable health information by certain entities or persons, but does not apply to transportation providers unless they are also covered entities. Covered entities include health care providers that conduct certain transactions in electronic form, health-care clearinghouses, or health plans. Therefore, transportation providers that are not covered entities would not be prohibited by the Privacy Rule from sharing such information. However, misunderstanding about the act’s Privacy Rule may still be discouraging some from sharing this information. Additionally, the general concerns that federal, state, and local officials have expressed may extend to other privacy issues beyond the Privacy Rule, such as potential contractual restrictions on Medicare and Medicaid transportation providers. Another example of a legal barrier is that some public or private sector transportation providers are hesitant to evacuate these populations because of concerns about reimbursement and liability. State and local officials must often broker arrangements with transportation providers in order to secure their services. However, although these providers may be willing to help state and local officials evacuate these populations, they will sometimes not do so without legal agreements (such as memoranda of understanding or contracts) that ensure reimbursement and that absolve the providers from, or reduce liability in case of, an accident or injury. Creating such an agreement usually requires legal representation as well as additional liability insurance to protect against potential damage or loss of property or life—all entailing monetary costs that state or local governments and transportation providers may not be willing or able to cover. Officials in one of the five major cities we visited told us that additional liability insurance would be cost prohibitive to obtain. We learned of a school district’s reluctance to provide vehicles for an evacuation without a legal agreement in one of the five major cities we visited. This was largely due to the fact that the school district had provided vehicles for an evacuation 12 years ago, but FEMA has not yet fully reimbursed it. In one of the five major cities and one of the four states we visited, we also learned of agreements that have been pending for months (or had fallen through) because of one party’s liability concerns; these concerns could not be adequately addressed by the state or local government. An additional legal barrier for state and local officials we identified relates to volunteers (such as nonprofit organizations or Good Samaritans) who may also be dissuaded from providing evacuation assistance in an emergency because of liability concerns. Liability concerns may be even more of a barrier after Hurricane Katrina, where volunteers saw that efforts to assist had unintentional consequences, some of which resulted in lawsuits. For example, Operation Brother’s Keeper is a Red Cross program that connects transportation-disadvantaged populations in local faith-based congregations with voluntary providers of transportation in those congregations. However, because of liability concerns in the provision of such transportation, voluntary participants of the program are now less willing to provide such transportation. 
Given that most state Good Samaritan laws apply only to voluntary assistance provided in circumstances that involve urgent medical care, transportation providers may be held liable unless they are responding to an accident scene or transporting a patient to a medical facility. Moreover, we found that in one state, an addendum introduced to modify an existing Good Samaritan law so that it would indemnify volunteers assisting in evacuations did not pass. The absence of protection from potential liability may also jeopardize efforts to enlist the assistance of volunteers in evacuating the transportation-disadvantaged. Furthermore, private transportation providers raise an additional legal barrier for emergency officials, as these providers are hesitant to offer evacuation assistance without formal sheltering arrangements already in place. Sheltering arrangements ensure that such transportation providers will not face unexpected complications once they arrive at an evacuation destination. The providers' requirement for sheltering arrangements highlights the fact that state and local governments face other significant evacuation barriers that extend beyond transportation. Experts who participated in an August 2006 panel we hosted on disaster housing assistance described sheltering challenges similar to the evacuation preparedness challenges for transportation-disadvantaged populations discussed earlier in this report. For example, some of the panelists discussed difficulty in obtaining information on those who require sheltering, where they are located, and what their sheltering needs are. Further, providing shelter for transient populations, persons with disabilities, undocumented workers, and those with limited English proficiency—many of whom are also transportation-disadvantaged—is a complex task. Finally, as we will discuss in the next section, sharing information to increase preparedness needs improvement. Social barriers that may affect evacuation efforts for all populations may pose another major obstacle for state and local officials in addressing challenges to evacuating these populations. While social barriers extend beyond transportation-disadvantaged populations to include many of those with access to a car, there are two reasons why such barriers are particularly pronounced when state and local officials prepare for the evacuation of such populations. First, as opposed to those who have access to a personal vehicle, state and local officials must be able to identify, locate, and determine the needs of transportation-disadvantaged populations in order to evacuate them. Second, the unwillingness to evacuate may be more widespread among the car-less than among other populations because of health, financial, or other personal reasons that are related to their transportation-disadvantaged status. Even if the identification, location, or transportation needs data are available for use by state and local officials, we learned that some people may not want to disclose their information to these officials because of concerns that sharing such data will adversely affect their medical situation, whereby the privacy of their personal medical information may be compromised; financial situation, such that their financial assets will be taken or reduced; and legal situation, such that they face consequences if, for example, the government learns that they are undocumented workers. 
This barrier may therefore prevent state and local governments from determining which populations require evacuation transportation, where they are located, and what their specific transportation needs are. In addition, even if state and local officials are able to prepare for the evacuation of transportation-disadvantaged populations, these officials still may confront the unwillingness of these populations to evacuate. State and local officials have the difficult task of making evacuation in advance of emergencies a better alternative for such populations than sheltering in place. Even when the local or state government issues a "mandatory" evacuation order, most state governments do not have the authority to forcibly remove people from their homes or other areas. Instead, residents must decide whether they can, or are willing to, voluntarily comply with the order. Further, even if emergency management officials provide transportation, these populations may not want to evacuate. One example of this unwillingness to evacuate is that transportation-disadvantaged populations may be concerned about being separated from family members or caregivers upon whom they may depend for mobility or the provision of medical services, or from pets upon which they may rely for companionship. In addition, shelters that receive evacuees may not be set up to receive pets. Health concerns may also cause these populations to be reluctant to evacuate. For example, some may be reluctant or unable to leave without the medication or medical equipment (e.g., oxygen tanks or dialysis machines) that are critical to their well-being, or may be concerned that riding on an evacuation vehicle would be extremely painful given their medical condition. In addition, some may feel anxiety concerning the lack of information about their destination, including whether they know someone there or whether the destination will meet their needs. These populations' unwillingness to evacuate can also stem from fear of losing physical or financial assets. For example, some transportation-disadvantaged populations have limited assets and do not feel safe leaving whatever assets they do have—such as their home or belongings—behind. This sentiment is exacerbated among those whose families have lived in their homes for generations. Further, as was observed during Hurricane Katrina, people may be unwilling to evacuate even if they do have a car; they may not have money to pay for gas or may be unwilling to move to a place where their financial situation is less certain. We found that, when officials attempt to address some of these social barriers by informing transportation-disadvantaged populations about the benefits of evacuating as opposed to sheltering in place, communicating with these populations can be difficult because they may lack access to a radio or television; may not trust emergency announcements; or may not be able to read or understand emergency materials or announcements because of a disability, such as a cognitive or vision impairment, or a lack of proficiency in English. State and Local Governments Are Generally Not Well Prepared to Evacuate Transportation-Disadvantaged Populations, but Some Have Taken Steps to Improve Preparedness Many state and local governments have gaps in their evacuation preparedness—including planning, training, and conducting exercises—for transportation-disadvantaged populations. 
Many of these governments generally have limited awareness or understanding of the need to plan for the evacuation of transportation-disadvantaged populations. These governments may believe that the risk of an evacuation is too low to warrant planning for these populations, or may have focused only on planning for self-evacuations. In addition, while some state and local governments may be aware of the need to prepare for evacuating these populations, some have made little progress because of insufficient planning details and little training for, and exercising of, plans to evacuate the transportation-disadvantaged. Although some state and local governments have taken steps to address challenges and related barriers, the outcomes of these actions remain uncertain.

Many State and Local Governments Are Generally Not Well Prepared to Evacuate Transportation-Disadvantaged Populations for Several Reasons

Many states and localities are generally not well prepared—including planning, training, and conducting exercises—to evacuate transportation-disadvantaged populations. DHS’s Nationwide Plan Review of emergency operation plans from all 50 states and 75 of the largest urban areas reported that only 10 percent of state and 12 percent of urban area evacuation planning documents sufficiently addressed assisting those who would not be able to evacuate on their own. The review also identified that such planning often consisted of little more than public information campaigns designed to encourage residents to evacuate by their own means. Even in hurricane-affected areas, most evacuation plans do not fully address the needs of transportation-disadvantaged populations. Most notably, DOT’s Catastrophic Hurricane Evacuation Plan Evaluation of 63 Gulf Coast jurisdictions (five states and 58 counties and parishes) reported that, although plans generally address the issue of evacuating those considered transportation-disadvantaged, most do not have detailed information on how to identify and locate these populations, determine their needs, and secure the transportation and other resources required to carry out an evacuation. The DHS review also reported that most state and urban area emergency plans do not address evacuation for persons with disabilities and overlook the availability of timely, accessible transportation (such as lift-equipped vehicles), emergency communication methods, and the need to keep people together with their family members, caregivers, or medical equipment. Limited awareness or understanding of the need to prepare for evacuating transportation-disadvantaged populations has contributed to inadequate preparedness on the part of state and local governments. The Nationwide Plan Review stated that some state and local officials believe they will never experience a catastrophic event. These officials also believe that the evacuation of an entire city or state is improbable and expressed concern that strengthening evacuation preparedness standards, such as those related to planning, training, and conducting exercises for the evacuation of transportation-disadvantaged populations, could place unrealistic expectations on communities with limited planning resources and few identified risks. Officials at two of the five major cities we visited also told us that the likelihood of disaster scenarios requiring mass evacuation is too low to warrant spending limited funds on evacuation preparedness for these populations.
However, officials at one of the five major cities we visited indicated that they are beginning to address evacuation preparedness for transportation-disadvantaged populations in smaller scale evacuations, which they thought would be more likely to occur. Three of the five major cities and one of the four states we visited have recognized, after Hurricane Katrina, the need to include provisions in their evacuation plans for those without access to their own transportation. Officials at one of these three major cities said that they had not planned, trained, or conducted exercises for these populations until late 2005, when DHS officials started to pose questions for the Nationwide Plan Review. A senior emergency management official in another one of those three major cities said that very few residents are without personal vehicles. Therefore, officials in that city focused plans, training, and exercises on evacuation by personal vehicle. However, 2000 U.S. Census data reported that 16.5 percent of households in that major city are car-less. DOT’s evaluation reported that most state and local evacuation plans focus on highway evacuations by personal vehicles. We found another example of this focus on personal vehicles in one of the four states we visited. This state spent approximately $100,000 to develop and distribute an evacuation pamphlet with self-preparedness information and a large evacuation map showing how those with access to a personal vehicle can use the highway system to evacuate. Yet the state did not conduct similar outreach for those who require transportation assistance in evacuations. DOT’s review of evacuation plans in the Gulf Coast reported that, although some jurisdictions have well-coordinated and tested plans, the plans of many other jurisdictions do not include sufficient detail—nor have staff been trained in or practiced with the plans to ensure effective implementation. We observed a similar phenomenon during our site visits. State and local governments vary in their level of preparedness, with many not well prepared to evacuate transportation-disadvantaged populations. For example, at the time of our review, evacuation plans from two of the five major cities and three of the four states we visited did not address the need to prepare for transportation-disadvantaged populations. Further, DOT reported that many Gulf Coast jurisdictions conduct disaster training and exercises without involving key players such as transit agencies, state departments of transportation, and school bus operators, even though some evacuation plans rely on the use of vehicles from these entities. In the past year, officials at three of the five major cities and three of the four states we visited had conducted training or exercises that addressed evacuating transportation-disadvantaged populations, or had included such populations in training or exercises. Government reports on Hurricane Katrina highlighted the vulnerability of transportation-disadvantaged populations, leading some emergency officials to reevaluate their level of preparedness to evacuate these populations. As a result, although state and local governments have generally overlooked transportation-disadvantaged populations in the past, some are now taking steps to overcome the challenges and barriers to evacuating them.
The lack of evacuation preparedness for transportation-disadvantaged populations may reflect a larger problem in emergency planning, as the DHS Nationwide Plan Review has highlighted. For example, DHS reported that its question on emergency planning actions being taken to address transportation-disadvantaged populations received the lowest percentage of sufficient responses from both states and urban areas. Some respondents to this question indicated that they were not sure how to proceed in planning for transportation-disadvantaged populations or what was expected of them. For example, one jurisdiction requested guidance to “understand what is expected of them and ideas on how they can achieve it.” Another respondent stated they “are wondering what areas should be covered to ensure that a response plan is adequate.” In addition, DHS found no state or urban area emergency plan annexes to be fully sufficient in addressing transportation-disadvantaged populations. Such annexes pertain to specific emergency functions, including evacuation, mass care, and communications, among others. DHS reported that emergency plans lack a consistency of approach, depth of planning, or evidence of safeguards and effective implementation. In addition, DHS reported that few plans demonstrate the in-depth planning and proactive thinking needed to meet the needs of these populations.

Some State and Local Governments Have Taken Steps to Address Evacuation Preparedness Challenges and Related Barriers

Although, in general, preparedness efforts to evacuate transportation-disadvantaged populations are lacking, state and local governments have taken steps to address challenges in identifying and locating these populations, determining their evacuation needs, and providing for their transportation. With regard to addressing the challenges of identifying and locating transportation-disadvantaged populations, some of the five major cities and four states we visited, as well as those reviewed as part of the DHS and DOT reports, have taken the following steps:

Conducting surveys and studies: Officials in all five major cities and one of the four states we visited told us that they have conducted surveys or collaborated with academic institutions to locate transportation-disadvantaged populations. For example, one major city conducted a disaster preparedness survey of transportation-disadvantaged populations. Another major city obtained survey data on transportation-disadvantaged populations through collaboration with a local university’s school of public health. In a third major city, emergency management officials have plans to collaborate with academics to create simulations of evacuation scenarios. These scenarios would be used for evacuation preparedness activities, such as calculating how many buses would be needed and which routes to take for an evacuation.

Collaborating with state and local entities: Two of the five major cities we visited have identified, or plan to identify, transportation-disadvantaged populations through faith-based or community outreach programs such as Operation Brother’s Keeper (a Red Cross program that matches those with access to a personal vehicle to those in their community without such access) and Neighborhood Watch (a crime-prevention program).
In another city, officials stated their intent to use Citizen Corps (which brings community and government leaders together to coordinate the involvement of community members and nongovernmental resources in emergency preparedness and response, and whose volunteers are trained, exercised, and managed at the local level) to help identify, locate, and evacuate transportation-disadvantaged populations. One respondent to DHS’s Nationwide Plan Review stated that their jurisdiction is looking at developing partnerships with nonprofit and local social service organizations and community groups that deal with transportation-disadvantaged populations in order to assist in identifying and locating these populations. In addition, two of the five major cities we visited had collaborated with their respective metropolitan planning organizations to collect evacuation-related data, and officials in one state we visited told us that cities and counties in their state need to better coordinate with metropolitan planning organizations to identify transportation-disadvantaged populations. Officials from all five of the metropolitan planning organizations we visited (which are also DOT grant recipients) told us that they had information that could be useful in evacuation preparedness. Because these organizations are required to conduct transportation planning as part of their federal funding agreements, they acquire data on transit-dependent populations that would be useful for emergency officials. Three of these organizations showed us data and maps illustrating the location of transportation-disadvantaged populations, but stated that emergency management officials in their communities had not yet reached out to them for information or assistance. The Association of Metropolitan Planning Organizations told us that although its 385 member organizations differ in capacity, many would be able to provide assistance to emergency management officials in identifying and locating transportation-disadvantaged populations.

Mapping transportation-disadvantaged populations: DOT’s evaluation of evacuation plans in the 63 Gulf Coast jurisdictions found that just over half (33) of those jurisdictions had identified the geographic location of certain transportation-disadvantaged populations, such as those in hospitals, nursing homes, and assisted care facilities. DHS’s Nationwide Plan Review found that some participants are employing modeling software to determine the size and location of transportation-disadvantaged populations. One of the five major cities we visited worked with academics to use computerized mapping technology—known as geographic information systems—to map the location of these populations. Another major city of the five we visited is working with the state’s department of motor vehicles to create a computerized map of households without personal vehicles.

With regard to determining the needs of these populations and providing for their transportation, state and local governments in some of the states we visited (as well as governments reviewed in the DHS and DOT reports) have taken the following steps:

Involving state and local entities that are not traditionally involved in emergency management as part of preparedness efforts: DHS’s Nationwide Plan Review stated that federal, state, and local governments should increase the participation of persons with disabilities and disability subject-matter experts in the development and execution of plans, training, and exercises.
Officials in two of the five major cities we visited have involved social service agencies, nonprofit or other organizations, and transportation providers—such as schools for the blind and deaf, and paratransit providers for the disabled—in emergency preparedness activities. Some of these state and local entities are DOT grant recipients. Several emergency preparedness experts with whom we spoke recommended involving state and local entities that represent or serve transportation-disadvantaged populations in evacuation preparedness. Such entities can assist emergency management officials in efficiently determining the needs of these populations.

Coordinating with state and local entities that are not traditionally involved in emergency management as part of preparedness efforts: DOT’s Catastrophic Hurricane Evacuation Plan Evaluation found that approximately two-thirds (or 43) of the 63 Gulf Coast evacuation plans included the use of public transit vehicles, school buses, and paratransit vehicles. The Nationwide Plan Review states that a critical but often overlooked component of the evacuation process is the availability of timely, accessible transportation (especially lift-equipped vehicles). In one of the five major cities we visited, transportation-disadvantaged populations are evacuated using social service transportation providers with ambulances, school buses, and other vehicles, including those with lift equipment.

Training state and local entities that are not traditionally involved in emergency management as part of preparedness efforts: Officials at two of the five major cities we visited have trained, or are planning to train, social service agencies to coordinate and communicate with emergency responders. One of the five major cities we visited found that, during hurricanes, community-based organizations that serve the elderly were operating on a limited basis or not at all. Therefore, this city’s government mandated that community-based organizations have continuity of operations plans in place to increase their ability to maintain essential services during a disaster. This city also provided training and technical assistance to help organizations develop such plans. In another major city, the paratransit providers that are DOT grant recipients received emergency response training and have identification that informs law enforcement officials that these providers are authorized to assist in emergency evacuations.

Training emergency responders to operate multi-passenger vehicles: Two of the five major cities we visited are considering training police officers and fire fighters to obtain a type of commercial driver’s license that would allow them to operate multi-passenger vehicles. This would provide a greater number of available drivers and more flexibility for evacuation assistance.

Incorporating transportation-disadvantaged populations in exercises: DHS recommended in its Nationwide Plan Review that jurisdictions increase the participation of persons with disabilities and disability subject-matter experts in training and exercises. Several experts we interviewed also emphasized the importance of including transportation-disadvantaged populations in exercises, and one explained that the level of understanding of these populations’ needs among emergency management and public safety officials is very low. Three of the five major cities we visited incorporate transportation-disadvantaged populations into their evacuation exercises.
State and local governments in some of the states we visited, as well as those reviewed in the DHS and DOT reports, have taken steps to address legal and social barriers that could prevent them from successfully evacuating transportation-disadvantaged populations:

Establishing memoranda of understanding and mutual aid agreements: Memoranda of understanding are legal arrangements that allow jurisdictions to borrow vehicles, drivers, or other resources in the event of an emergency. Mutual aid agreements are contracts between jurisdictions in which the jurisdictions agree to help each other by providing resources to respond to an emergency. These agreements often identify resources, coordination steps, and procedures to request and employ potential resources, and may also address liability concerns. DHS’s Nationwide Plan Review reported that few emergency operations plans considered the practical implementation of mutual aid, resource management, and other logistical aspects of mutual aid requests. DHS found that 23 percent of urban areas needed to augment or initiate memoranda of understanding to improve their use of available modes of transportation in evacuation planning. DOT’s Catastrophic Hurricane Evacuation Plan Evaluation report stated that Gulf Coast evacuation plans have limited information addressing the use of mutual aid agreements or memoranda of understanding with private motor coach companies, paratransit providers, ambulance companies, railroad companies, and air carriers. However, three of the five major cities we visited have established formal arrangements, such as memoranda of understanding and mutual aid agreements, with neighboring jurisdictions.

Establishing plans to evacuate and shelter pets: DHS’s Nationwide Plan Review found that 23 percent of the 50 states and 9 percent of the 75 largest urban areas satisfactorily address evacuation, sheltering, and care of pets and service animals at the same evacuation destination as their owners. This is important to encourage the evacuation not only of transportation-disadvantaged populations but also of those with personal vehicles. DOT’s Catastrophic Hurricane Evacuation Plan Evaluation found that about one-fifth (19 percent) of the 63 Gulf Coast jurisdictions were prepared to evacuate and shelter pets and service animals. One of the five major cities we visited worked with the Society for the Prevention of Cruelty to Animals to arrange a tracking and sheltering system for pets. Because officials at this major city have encountered difficulties in providing shelter space for pets and their owners together, they arranged for a pet shelter and a shuttle service for owners to care for their pets.

Ensuring that evacuees can bring assistance devices or service animals: Transportation-disadvantaged individuals may be unwilling or unable to evacuate if they are unsure that they will be able to bring assistance devices such as wheelchairs, life-support systems, and communications equipment, as well as service animals. DOT’s Catastrophic Hurricane Evacuation Plan Evaluation found that only one-third (32 percent) of the 63 Gulf Coast jurisdictions had made satisfactory provisions for transporting these items along with evacuees.

Providing extensive information about evacuations and sheltering: In an effort to encourage citizens to evacuate, one of the five major cities we visited provided detailed information about evacuation and sheltering procedures.
Despite extensive public education campaigns to raise awareness about evacuations, officials in two of the five major cities we visited stated that some people will still choose not to evacuate. In the officials’ experience, when an evacuation vehicle arrived at the homes of transportation-disadvantaged individuals who had registered for evacuation assistance, some refused to evacuate. These individuals cited multiple reasons, such as disbelief in the danger presented by the storm, discomfort in evacuating, and the absence of a caregiver or necessary medication.

Emphasizing self-preparedness: Officials from three of the five major cities and two of the four states we visited emphasized citizen self-preparedness, such as developing an evacuation preparedness kit that includes medications, food, water, and clothing.

While the Federal Government Provides Some Evacuation Assistance, Gaps Remain

Although the federal government has provided some assistance to state and local governments in preparing for the evacuation of transportation-disadvantaged populations, gaps in this assistance remain. For example, federal guidance provided to state and local emergency officials does not address preparedness challenges and barriers for transportation-disadvantaged populations. Gaps also exist in the federal government’s role in and responsibilities for providing evacuation assistance when state and local governments are overwhelmed in a catastrophic disaster. For example, the National Response Plan does not clearly assign the lead, coordinating, and supporting agencies to provide evacuation assistance or outline these agencies’ responsibilities. Reports by the White House and others suggest that this lack of clarity slowed the federal response in evacuating disaster victims, especially transportation-disadvantaged populations, during Hurricane Katrina. Amendments to the Stafford Act in October 2006 have further clarified that FEMA, within DHS, is the single federal agency responsible for leading and coordinating evacuation assistance.

The Federal Government Provides Some Evacuation Preparedness Assistance to State and Local Governments

The federal government provides some assistance to state and local governments in preparing for the evacuation of transportation-disadvantaged populations by establishing requirements, funding, and guidance and technical assistance for evacuation preparedness. Examples include:

Requirements: Federal law requires that local emergency planning officials develop emergency plans, including an evacuation plan that contains provisions for a precautionary evacuation and alternative traffic routes. In any program that receives federal funding, additional federal protections clearly exist for persons with disabilities, who, depending on the nature of the disability, potentially could be transportation-disadvantaged. An executive order addresses emergency preparedness for persons with disabilities, and the Americans with Disabilities Act and the Rehabilitation Act require consideration of persons with disabilities.
According to Executive Order 13347, in the context of emergency preparedness, executive departments and federal agencies must consider the unique needs of their employees with disabilities and those persons with disabilities whom the agency serves; encourage this consideration for those served by state and local governments and others; and facilitate cooperation among federal, state, local, and other governments in the implementation of the portions of emergency plans relating to persons with disabilities. Since October 2006, federal law also requires federal agencies to develop operational plans that address, as appropriate, support of state and local governments in conducting mass evacuations, including provisions for populations with special needs, among others. Executive Order 13347 also created the Interagency Coordinating Council on Emergency Preparedness and Individuals with Disabilities to focus on disability issues in emergency preparedness. Additionally, as noted by DHS, the Americans with Disabilities Act requires states and urban areas to include accessibility for persons with disabilities in their emergency preparedness process. Within DHS, the Office of Civil Rights and Civil Liberties reviews and assesses allegations of civil rights and civil liberties abuses. Other civil rights laws might also apply to transportation-disadvantaged populations, depending on how such populations are identified. Federal laws prohibit discrimination on the basis of race, color, religion, sex, and national origin. National origin discrimination includes discrimination on the basis of limited English proficiency, and states and localities are required to take reasonable steps to ensure that people with limited English proficiency have meaningful access to their programs. Recipients of DHS grants are allowed to use a reasonable portion of their funding to ensure that they are providing the meaningful access required by law. DHS also has ongoing work to foster a culture of preparedness and promote individual and community preparedness, such as through information available on its Ready.gov Web site and through its Citizen Corps program. Changes in federal law enacted in October 2006 further protect some transportation-disadvantaged populations. These changes include:

the establishment of a National Advisory Council to ensure effective and ongoing coordination of federal preparedness, protection, response, recovery, and mitigation for natural disasters, acts of terrorism, and other man-made disasters, with a cross-section of members, including representatives of individuals with disabilities and other populations with special needs;

the appointment of a Disability Coordinator to ensure that the needs of individuals with disabilities are properly addressed in emergency preparedness and disaster relief;

the establishment of an exercise program to test the National Response Plan, which must be designed to address the unique requirements of populations with special needs and provide assistance to state and local governments with the design, implementation, and evaluation of exercises; and

a requirement that federal agencies develop operational plans to respond effectively to disasters, which must address support of state and local governments in conducting mass evacuations, including transportation and provisions for populations with special needs.
Funding: DHS grants are the primary federal vehicle for funding state and local evacuation preparedness efforts, and these grants can be used to plan evacuations for transportation-disadvantaged populations. DHS’s 2006 Homeland Security Grant Program encourages state and local governments to increase their emergency preparedness by focusing on 37 target capabilities that DHS considers integral to nationwide preparedness for all types of hazards. State and local governments choose the subset of those capabilities that best fits their preparedness needs. One of these target capabilities addresses evacuations. If a state determines that it needs to plan for the evacuation of transportation-disadvantaged populations, it can use funds from its DHS grant for such planning activities. Changes in federal law in October 2006 require states with mass evacuation plans funded through Urban Area Security Initiative and Homeland Security Grant Program grants to “develop procedures for informing the public of evacuation plans before and during an evacuation, including individuals with disabilities or other special needs, with limited English proficiency, or who might otherwise have difficulty in obtaining such information.” Under this section, FEMA can establish guidelines, standards, or requirements for ensuring effective mass evacuation planning for state and local governments that choose to apply for grant funding for a mass evacuation plan.

Guidance and Technical Assistance: The federal government provides evacuation preparedness guidance—including planning considerations, studies, and lessons learned—for state and local governments. We found that the primary source of such guidance for state and local officials is FEMA’s State and Local Guide 101, which includes a section on evacuation preparedness considerations. This guidance recommends preparing to evacuate transportation-disadvantaged populations. Additionally, DHS has a Lessons Learned Information Sharing online portal for state and local emergency management and public safety officials where the aforementioned federal guidance can be found. The federal government also provides voluntary technical evacuation assistance—such as planning consultants and modeling software—to state and local officials. For example, FEMA, the United States Army Corps of Engineers, and the National Weather Service conduct hurricane evacuation studies from which they provide technical assistance on several preparedness issues (such as analyses on storm modeling, sheltering, and transportation) for state and local officials. Another example is the evacuation liaison team—composed of FEMA, DOT, and the National Hurricane Center—that works with state and local governments to coordinate interstate transportation during hurricane evacuations. The federal government has also undertaken several smaller efforts to address evacuation preparedness for transportation-disadvantaged populations. (See app. V.)
Despite Some Federal Assistance to State and Local Governments, Gaps Remain in Evacuation Preparedness for Transportation-Disadvantaged Populations

Although the federal government provides some assistance to state and local governments for preparing to evacuate transportation-disadvantaged populations, gaps in this assistance remain, including the following:

Requirements: Until October 2006, while federal law required that emergency plans include an evacuation plan, there was no specific requirement that the evacuation plan address how to transport those who could not self-evacuate. Federal law now requires that state and local governments with mass evacuation plans incorporate special needs populations into their plans. However, this requirement does not necessarily ensure the incorporation of all transportation-disadvantaged populations, because state and local governments do not share a consistent definition of special needs populations. In the course of our review, we found that state and local governments interpreted the term in a much narrower fashion that did not encompass all transportation-disadvantaged populations that are important to evacuation preparedness. In addition, even though civil rights laws require that no person be excluded on the basis of age, sex, race, color, religion, national origin, or disability, federal laws may not provide protection for transportation-disadvantaged populations during federally funded emergency preparedness efforts (including evacuation planning) because some of these populations do not clearly fall into one of these protected classes. For example, federal laws do not require state and local governments to plan for the evacuation of tourists or the homeless. Furthermore, although the Americans with Disabilities Act requires states and urban areas to include accessibility for persons with disabilities in their emergency preparedness process, an April 2005 report from the National Council on Disability found little evidence that DHS has encouraged state or local grant recipients to incorporate disability and access issues into their emergency preparedness efforts. Additionally, in four of the five major cities we visited, advocacy groups representing persons with disabilities told us that persons with disabilities were often not involved in, or could be better integrated into, emergency management training and exercises. The National Council on Disability and the Interagency Coordinating Council on Emergency Preparedness and Individuals with Disabilities are working, respectively, to strengthen relevant legislation and to ensure that federal agencies consider transportation-disadvantaged populations in federally funded planning, training, and exercises. For example, the National Council on Disability is recommending that the Congress amend the Stafford Act to encourage federal agencies to link a recipient’s emergency preparedness grants to compliance with civil rights laws. Similarly, the Interagency Coordinating Council on Emergency Preparedness and Individuals with Disabilities added disability subject-matter experts to DHS’s Nationwide Plan Review and worked with DHS’s Preparedness Directorate to add transportation-disadvantaged components to Top Officials Four, a federal, state, and local government training exercise held in June 2006 that involved senior agency officials from across the federal government.
Funding: While DHS’s grant programs provide funding that can be applied toward evacuation planning, training, and exercises for transportation-disadvantaged populations (as affirmed by language in the Post-Katrina Emergency Management Reform Act of 2006), only two of the five major cities and none of the four states we visited requested DHS grants for activities related to the evacuation of transportation-disadvantaged populations. In addition, we could not determine the amount of funds spent on evacuation planning nationwide because, although DHS is in the process of developing a grant tracking system, it does not currently know how much of its grant funds have been used or are being used by state and local governments to prepare for the evacuation of transportation-disadvantaged populations. Officials at two of the five major cities and two of the four states we visited told us that DHS’s grant programs have a continued emphasis on funding the procurement of equipment rather than planning, and on preparedness for terrorist acts rather than for other disasters. For example, an official from one of the four states we visited told us that an evacuation preparedness activity was denied by DHS because it did not closely intersect with terrorism preparedness, one of DHS’s grant requirements prior to fiscal year 2006. As a result, emergency management officials believe they were discouraged from using DHS funding to plan for natural disasters, such as hurricanes. The Office of Civil Rights and Civil Liberties at DHS—responsible for reviewing and assessing civil rights and civil liberties abuse allegations and, as part of the Nationwide Plan Review, participating in the assessment of how plans address persons with disabilities—is currently involved in the grant-guidance development process for fiscal year 2007. DHS has indicated that the office’s involvement in the grant process is a priority.

Guidance and Technical Assistance: Although it acknowledges the need to prepare for the evacuation of transportation-disadvantaged populations, the most widely used FEMA guidance does not provide details about how to plan, train, and conduct exercises for evacuating these populations or how to overcome the challenges and barriers discussed earlier. Officials from three of the five major cities we visited said that additional guidance from DHS would assist their evacuation planning efforts. Further, one-third of the respondents to a DHS Nationwide Plan Review question on emergency planning for transportation-disadvantaged populations requested additional guidance, lessons learned, and best practices from DHS. DHS officials told us that they intend to release new emergency preparedness planning guidance in early calendar year 2007. In addition, although DHS has an online portal—its Lessons Learned Information Sharing portal—that includes the aforementioned guidance and other emergency preparedness information, officials from two of the five major cities and two of the four states we visited told us that specific information is not easy to find, in part because the portal is difficult to navigate. Upon using the portal, we also found this to be true. For example, the search results appeared to be in no particular order and were not sorted by date or relevant key terms, and search terms were not highlighted or shown anywhere in the abstracts of listed documents.
In addition, some studies were not available through the portal, including studies from some of the experts with whom we spoke and who provided us with useful information on evacuation preparedness for transportation-disadvantaged populations. In commenting on a draft of this report, DHS officials told us that they had improved the overall functionality of DHS’s Lessons Learned Information Sharing portal. We revisited the portal as of December 7, 2006, and it appears to have improved some of its search and organizational functions. We found, however, that some of the issues we previously identified still remain; for example, the portal’s search function provides no direct link to key evacuation preparedness documents, such as DHS’s Nationwide Plan Review Phase I and II reports. Aside from the portal, federal evacuation studies of, and lessons learned from, the chemical stockpile and radiological emergency preparedness programs could also help state and local officials prepare for these populations. Because chemical stockpile and radiological emergency preparedness programs work with communities that include transportation-disadvantaged populations, some of the studies and lessons learned about these programs address evacuation challenges for these populations. For example, a Department of Energy National Laboratory study on emergency preparedness in Alabama includes information on how to address the needs of transportation-disadvantaged populations in evacuations. However, officials from the chemical stockpile and radiological emergency preparedness programs told us that DHS has not widely disseminated these studies and lessons learned or made them easily available to state and local officials. The federal government has provided technical assistance primarily focused on self-evacuations. For example, while Louisiana and surrounding states received technical assistance from FEMA, DOT, and the National Hurricane Center to help manage evacuation traffic prior to Hurricane Katrina, federal officials with whom we spoke were unaware of any similar technical assistance provided for the evacuation of transportation-disadvantaged and other populations. In preparation for the 2006 hurricane season, DHS officials reported to us that DHS, along with DOT, provided some technical assistance to three Gulf Coast states on evacuating persons with disabilities and those with functional and medical limitations.

Gaps Also Remain in Federal Agencies’ Roles and Responsibilities for Providing Evacuation Assistance When State and Local Governments Are Overwhelmed

Although the Stafford Act gives the federal government the authority to assist state and local governments with an evacuation, we found that the National Response Plan—the federal government’s plan for disaster response—does not clearly define the lead, coordinating, and supporting agencies to provide evacuation assistance for transportation-disadvantaged and other populations or outline these agencies’ responsibilities when state and local governments are overwhelmed by a catastrophic disaster. In our conversations with DHS officials prior to October 2006, officials did not agree that FEMA (an agency within DHS) was the single federal agency responsible for leading and coordinating evacuation assistance. However, after amendments to the Stafford Act in October 2006, DHS officials have agreed that this is DHS’s responsibility.
The absence of designated lead, coordinating, and supporting agencies to provide evacuation assistance in the National Response Plan was evident in the federal response for New Orleans during Hurricane Katrina. As both the White House Homeland Security Council report and the Senate Homeland Security and Governmental Affairs Committee report noted, the federal government was not prepared to evacuate transportation-disadvantaged populations, and this severely complicated and hampered the federal response. Specifically, the Senate report stated that “the federal government played no role in providing transportation for pre-landfall evacuation” prior to the disaster, despite federal officials’ awareness that as many as 100,000 people in New Orleans would lack the means to evacuate. The Senate report also stated that DHS officials did not ask state and local officials about the steps being taken to evacuate the 100,000 people without transportation, whether they should deploy buses and drivers to the area, or whether the federal government could help secure multimodal transportation (e.g., buses, trains, and airlines) for the pre-landfall evacuation. The White House report stated that, as a result of actions not taken, the federal government’s evacuation response suffered after Hurricane Katrina made landfall. For example, communication problems created difficulty in providing buses, and limited situational awareness contributed to difficulties in guiding response efforts and to poor coordination with state and local officials in receiving evacuees. This contributed to delayed requests for vehicles, the delayed arrival of vehicles to transport disaster victims, and confusion over where vehicles should be staged, where disaster victims would be picked up, and where disaster victims should be taken. We found that there is no entity under the National Response Plan that is responsible for the dispatch and control of such evacuation vehicles. Given the problems experienced during the evacuation of New Orleans, the White House and Senate reports concluded that the federal government must be prepared to carry out mass evacuations when disasters overwhelm state and local governments. To achieve that goal, the White House report recommended that DOT be designated as the agency responsible for developing the federal government’s capability to carry out mass evacuations when state and local governments are overwhelmed. In the aftermath of Hurricane Katrina, the federal government has taken several steps to improve its ability to respond to a catastrophic disaster and, for the 2006 hurricane season, provide additional evacuation support to state and local governments. First, in May 2006, DHS made several changes to the National Response Plan, including one related to evacuations. Consistent with a previous recommendation we made, DHS revised the catastrophic incident annex of the National Response Plan to include disasters that may evolve or mature to catastrophic magnitude (such as an approaching hurricane). Therefore, in future disasters, if the federal government has time to assess the requirements and plans, it will tailor its proactive federal response and pre-positioning of assets, such as vehicles, to address the specific situation.
Second, for the 2006 hurricane season, DOT was prepared to assist the Gulf Coast states of Alabama, Louisiana, and Mississippi in providing evacuation assistance, clarified command and control by identifying key federal contacts, and worked with the states to finalize plans for pre-positioning federal assets and commodities in the region. In addition, a DOT official responsible for overseeing DOT’s emergency activities told us that, while the agency was providing transportation services or technical assistance to some of the Gulf Coast states for the 2006 hurricane season, it had not taken the role of lead or coordinating federal agency responsible for providing evacuation assistance. This official also stated that if additional federal evacuation assistance beyond transportation services and technical assistance were needed, DHS would need to delegate such support to other agencies. Further, this official told us that DOT does not yet have any specific plans to provide similar evacuation support in catastrophic disasters after the 2006 hurricane season. In addition, because of the damage caused by Hurricane Katrina and the continuing vulnerabilities of southeastern Louisiana, DOT, in cooperation with DHS, has provided additional support to Louisiana. This additional support included working with the state to identify those who could not evacuate on their own; establishing an interagency transportation management unit to coordinate the routing of buses; entering into contracts to provide transportation by bus, rail, and air; and providing transportation from state and local pre-established collection points to shelters, rail sites, or air transportation sites. DHS and DOT planned to assist Louisiana in evacuating the estimated 96,000 persons who could not evacuate by their own means if the state ordered an evacuation. Finally, amendments to the Stafford Act in October 2006 have further clarified that FEMA, within DHS, is the single federal agency responsible for leading and coordinating evacuation assistance. DHS officials have since agreed that this is DHS’s responsibility. However, despite these improvements, DHS has not yet clarified, in the National Response Plan, the lead, coordinating, and supporting federal agencies to provide evacuation assistance when state and local governments are overwhelmed, or what their responsibilities are. In commenting on a draft of this report, DHS told us that, as part of its National Response Plan review and revision process, it plans to make several key revisions regarding evacuations, including clarifying the roles and responsibilities of federal agencies as well as of private sector and nongovernmental agencies.

Conclusions

The experience of Hurricane Katrina illustrated that when state, local, and federal governments are not well prepared to evacuate transportation-disadvantaged populations during a disaster, thousands of people may not have the ability to evacuate on their own and may be left in extremely hazardous circumstances. While state and local governments have primary responsibility for planning, training, and conducting exercises for the evacuation of these populations, gaps in federal assistance have hindered the ability of many state and local governments to sufficiently prepare to address the complex challenges and barriers of evacuating transportation-disadvantaged populations.
This includes the lack of any requirement to plan, train, and conduct exercises for the evacuation of transportation-disadvantaged populations, as well as gaps in guidance and technical assistance, such as problems with DHS’s Lessons Learned Information Sharing online portal. In addition, information held by DOT grantees and stakeholders could be useful in evacuation preparedness efforts but is not being fully shared with emergency management officials. As a result, it is uncertain whether state and local governments will be better positioned to evacuate transportation-disadvantaged populations in the future. Furthermore, the experience of Hurricane Katrina reinforced the fact that some disasters are likely to overwhelm the ability of state and local governments to respond, and that the federal government needs to be prepared in these instances to carry out an evacuation of transportation-disadvantaged populations. Because DHS has not yet clarified in the National Response Plan the lead, coordinating, and supporting federal agencies to provide evacuation support for transportation-disadvantaged and other populations, nor outlined these agencies’ responsibilities, the federal government cannot ensure that it is taking the necessary steps to prepare for evacuating such populations; this could contribute to leaving behind some of society’s most vulnerable populations in a future catastrophic disaster. The National Response Plan review and revision process provides DHS with the opportunity to clarify the lead, coordinating, and supporting agencies to provide evacuation assistance and outline these agencies’ responsibilities in order to strengthen the federal government’s evacuation preparedness.

Recommendations for Executive Action

To improve federal, state, and local preparedness for the evacuation of transportation-disadvantaged populations, we are making three recommendations to the Secretary of Homeland Security:

Clarify, in the National Response Plan, that FEMA is the lead and coordinating agency to provide evacuation assistance when state and local governments are overwhelmed, and also clarify the supporting federal agencies and their responsibilities.

Require that, as part of its grant programs, all state and local governments plan, train, and conduct exercises for the evacuation of transportation-disadvantaged populations.

Improve technical assistance by (1) working with DOT to provide more detailed guidance and technical assistance on how to plan, train, and conduct exercises for evacuating transportation-disadvantaged populations; and (2) continuing to improve the organization of and search functions for its Lessons Learned Information Sharing online portal to better facilitate access to information on evacuations of transportation-disadvantaged populations for federal, state, and local officials.

In addition, to encourage state and local information sharing as part of their evacuation preparedness for transportation-disadvantaged populations, we are making one recommendation to the Secretary of Transportation:

Encourage DOT’s grant recipients and stakeholders, through guidance and outreach, to share information that would assist emergency management and transportation officials in identifying and locating transportation-disadvantaged populations, determining their evacuation needs, and providing transportation for them.

Agency Comments and Our Evaluation

We received written comments on a draft of this report from DHS. (See app. II.) DHS also offered additional technical and clarifying comments, which we incorporated as appropriate.
DHS’s letter stated that the draft adequately identified the pertinent issues that have troubled state and local emergency management officials and that it would consider our recommendations. DHS’s letter also stated that some recommendations in our draft report have been partly implemented, including improvements to the overall functionality of the Lessons Learned Information Sharing portal. We revisited DHS’s Lessons Learned Information Sharing portal as of December 7, 2006, and it appears to have improved some of its search and organizational functions. We found, however, that some of the issues we previously identified still remain. Therefore, we revised our recommendation to reflect the need for continued improvement of this portal. DHS’s letter raised concerns that our discussion of a single federal agency to lead and coordinate evacuations reflected a misunderstanding of the federal response process because, for large and complex disasters, no single federal agency can provide the entire response support required. We did not intend to suggest that a single federal agency can provide such support for an evacuation. Rather, we stated that the lead, coordinating, and supporting federal agencies to provide evacuation assistance when state and local governments are overwhelmed were not clear in the National Response Plan. DHS’s letter notes, in contrast to an earlier discussion we had with DHS officials, that DHS is the single agency responsible for leading and coordinating evacuation support to the states, and that this responsibility was emphasized by the amendments to the Stafford Act in October 2006. We modified our draft as appropriate to reflect DHS’s role in response to these amendments, but we retained our recommendation related to this issue because agency roles and responsibilities to provide evacuation assistance still need to be clarified in the National Response Plan. DHS’s letter stated that many issues related to evacuations are being considered in ongoing revisions to the National Response Plan, including the roles and responsibilities of federal agencies as well as private sector and nongovernmental agencies. We are encouraged to learn that these issues are part of the National Response Plan review and revision process. DHS also commented that our draft report implied that the events of Hurricane Katrina were a “typical occurrence.” This is not an accurate characterization of our findings. Rather, our report emphasizes that there has been a heightened awareness of evacuation preparedness for transportation-disadvantaged populations as a result of Hurricane Katrina, and that we and others remain concerned about the level of preparedness among federal, state, and local governments. We received oral comments on a draft of this report from DOT officials, including the National Response Program Manager, Office of Intelligence, Security, and Emergency Response, Office of the Secretary. DOT officials generally agreed with the information contained in the report and stated that they would consider our recommendation. DOT officials offered additional technical and clarifying comments, which we incorporated as appropriate. We are sending copies of this report to congressional committees and subcommittees with responsibilities for DHS and DOT. We will also make copies available to others upon request. This report will be available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions about this report, please contact me at (202) 512-2834 or [email protected].
Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Staff who made key contributions to this report are listed in appendix V.

Appendix I: Scope and Methodology

Our review focuses on the evacuation of transportation-disadvantaged populations. Because we issued a report in July 2006 on the evacuation of hospitals and nursing homes, we did not include them in the scope of this review. To assess the challenges state and local governments face in evacuating transportation-disadvantaged populations, we reviewed the Department of Homeland Security’s (DHS) Nationwide Plan Review and the Department of Transportation’s (DOT) Catastrophic Hurricane Evacuation Plan Evaluation. These reports describe many more states, urban areas, counties, and parishes than we were able to visit, providing a broader context for our findings. To assess the experience of transportation-disadvantaged populations during Hurricane Katrina, we reviewed the White House report, The Federal Response to Hurricane Katrina: Lessons Learned; the House of Representatives’ report, A Failure of Initiative: Final Report of the Select Bipartisan Committee to Investigate the Preparation for and Response to Hurricane Katrina; the Senate report, Hurricane Katrina: A Nation Still Unprepared; the DHS Inspector General’s report, A Performance Review of the Federal Emergency Management Agency’s Disaster Management Activities in Response to Hurricane Katrina; the National Organization on Disability’s Report on Special Needs Assessment for Katrina Evacuees Project; and the American Highway Users Alliance Emergency Evacuation Report 2006. We also held a panel organized in cooperation with, and held at, the National Academies. The panelists are experts in the field of disaster housing and were selected from a list of 20 provided by the National Academies. We asked for a mix of academics and practitioners with knowledge of sheltering issues related to hurricanes Katrina and Rita as well as previous disasters. These panelists were Pamela Dashiell (Holy Cross Neighborhood Association), Buddy Grantham (Joint Hurricane Housing Task Force), Robert Olshansky (University of Illinois), Jae Park (Mississippi Governor’s Office of Recovery and Renewal), Walter Peacock (Texas A&M University), Lori Peek (Colorado State University), Brenda Phillips (Oklahoma State University), and Debra Washington (Louisiana Housing Finance Agency). To identify challenges and barriers, we reviewed selected reports on evacuations. Studies and papers from Argonne National Laboratory, the National Consortium on the Coordination of Human Services Transportation, and the Congressional Research Service contributed to our identification of challenges to evacuating transportation-disadvantaged populations. To obtain perspectives from officials involved in preparing for the evacuation of these populations, we reviewed the aforementioned federal reports. We also conducted interviews with state and local emergency management, transit and transportation, and public safety agency officials, as well as local metropolitan planning and advocacy organizations, at five major cities and four state capitals: Buffalo and Albany, New York; Los Angeles and Sacramento, California; Miami and Tallahassee, Florida; New Orleans and Baton Rouge, Louisiana; and the District of Columbia. Because these sites were selected as part of a non-probability sample, the results cannot be generalized.
We conducted site visits to these locations between March 2006 and June 2006. In selecting these major cities, we applied the following criteria: regional diversity; major city with a population of over 250,000; high percentage of population without personal vehicles; high or medium overall vulnerability to hazards; high percentage of the total population that is elderly, low-income, or has a disability; and varied public transit ridership levels. In making our site selections, we used data from the 2000 U.S. Census on the percentage of occupied housing units with no vehicle available, city populations aged 65 and older, civilian non-institutionalized disabled persons aged five and older, and persons below the poverty level. To determine overall vulnerability, we applied Dr. Susan Cutter’s “Overall Vulnerability Index” from her presentation “Preparedness and Response: Learning from Natural Disasters” to DHS on February 14, 2006. Dr. Cutter is a professor of geography at the University of South Carolina and is part of the National Consortium for the Study of Terrorism and Responses to Terrorism, which is funded by DHS. The Overall Vulnerability Index incorporates three indices measuring social, environmental, and all-hazards vulnerability. The social vulnerability index incorporates social demographic factors such as race and income, but also includes factors such as distance from hospitals. The environmental index includes the proximity of dangerous facilities (such as chemical and nuclear plants) and the condition of roadways, among other factors. The all-hazards vulnerability index analyzes all disasters recorded in the last 60 years and rates urban areas for the frequency of hazards and the resulting financial impact. Public transit ridership data were taken from the Federal Transit Administration’s National Transit Database. We determined that all the data we used were sufficiently reliable for use as criteria in our site selection process. To better understand issues related to emergency management and evacuations, particularly of transportation-disadvantaged populations, we interviewed several academics and experts who presented at the 2006 Transportation Research Board conference and the 2006 Working Conference on Emergency Management and Individuals with Disabilities and the Elderly; we also interviewed other academics and experts who were recommended to us by officials, associations, organizations, and others. These academics and experts were Madhu Beriwal (Innovative Emergency Management); Susan Cutter (University of South Carolina); Elizabeth Davis (EAD and Associates); Jay Goodwill and Amber Reep (University of South Florida); John Renne (University of New Orleans); William Metz and Edward Tanzman (Argonne National Laboratory); Brenda Phillips (Oklahoma State University); Tom Sanchez (Virginia Tech); and Kathleen Tierney (University of Colorado at Denver). To determine what actions state and local governments have taken to address challenges in evacuating transportation-disadvantaged populations, we interviewed, at the four states and five major cities we visited, state and local emergency management agency officials (who prepare for and coordinate evacuations), transit and transportation agency officials (who provide and manage transportation during evacuations), and public safety (fire and police) agency officials (who assist with transportation-disadvantaged populations during an evacuation). We also interviewed advocacy organizations.
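Neither our criteria list nor the index as described above specifies how the individual measures were weighted or combined into a single screening judgment. The following is a minimal, purely illustrative sketch, assuming equal weights, hypothetical field names, and arbitrary reference scales (none of which are drawn from Dr. Cutter’s index or from our actual selection process), of how such screening criteria could be rolled into a single comparative score:

# Illustrative only: weights, field names, and reference scales below are
# hypothetical; the report lists the criteria and data sources but not a formula.
from dataclasses import dataclass

@dataclass
class CityProfile:
    name: str
    pct_no_vehicle: float        # occupied housing units with no vehicle available (%)
    pct_elderly: float           # population aged 65 and older (%)
    pct_disabled: float          # civilian non-institutionalized disabled persons (%)
    pct_poverty: float           # persons below the poverty level (%)
    overall_vulnerability: float # composite hazard vulnerability on an assumed 0-1 scale
    transit_trips_per_capita: float  # annual unlinked transit trips per resident

def site_screening_score(city: CityProfile) -> float:
    """Combine the screening criteria into one comparative score (higher = stronger candidate)."""
    # Average the four Census-based percentages and rescale to 0-1 (equal weights assumed).
    social_need = (city.pct_no_vehicle + city.pct_elderly +
                   city.pct_disabled + city.pct_poverty) / 400.0
    # Cap transit ridership against an arbitrary reference of 100 trips per capita.
    transit_factor = min(city.transit_trips_per_capita / 100.0, 1.0)
    # Equal-weight average of the three components (also an assumption).
    return round((social_need + city.overall_vulnerability + transit_factor) / 3.0, 3)

if __name__ == "__main__":
    candidates = [
        CityProfile("City A", 27.3, 11.7, 19.6, 23.2, 0.8, 92.0),
        CityProfile("City B", 8.9, 10.5, 14.8, 12.1, 0.4, 21.0),
    ]
    for city in sorted(candidates, key=site_screening_score, reverse=True):
        print(city.name, site_screening_score(city))

The equal-weight average is a placeholder only; any real application would require weights and normalization grounded in the underlying data and hazard analysis.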
Much of the work that state and local governments are conducting to address these challenges is ongoing. In assessing how federal assistance has aided the state and local governments we visited in addressing these challenges and what further assistance the federal government is proposing, we reviewed the Stafford Act; the Homeland Security Act of 2002; the Post-Katrina Emergency Management Reform Act of 2006; the National Response Plan (including the Catastrophic Incident Annex and the Catastrophic Incident Supplement); DHS’s Nationwide Plan Review and DOT’s Catastrophic Hurricane Evacuation Plan Evaluation; and various studies and reports on Hurricane Katrina such as those prepared by the White House, House of Representatives, and Senate. We interviewed officials from DHS, DOT, and DOD to obtain their perspectives on the federal role in evacuations. To obtain the perspective of federal agencies and councils focused on issues specifically related to transportation-disadvantaged populations, we interviewed representatives from the Administration on Aging, the Federal Interagency Coordinating Council on Access and Mobility, the Interagency Coordinating Council on Emergency Preparedness and Individuals with Disabilities, the National Council on Disability, and the Interagency Council on Homelessness. We also interviewed representatives from several national organizations and associations to help evaluate how federal programs and policies on evacuations have affected transportation-disadvantaged populations. These organizations and associations include the National Organization on Disability, the American Association of Retired Persons, the American Public Transportation Association, the Association of Metropolitan Planning Organizations, and the Community Transportation Association of America. Appendix II: Comments from the Department of Homeland Security GAO Comments 1. DHS commented that it partially implemented one of our recommendations by improving the overall functionality of the lessons learned information sharing portal. We revisited DHS’s Lessons Learned Information Sharing portal as of December 7, 2006, and found that it appears to have improved some of its search and organizational functions. We have found, however, that some of the issues we previously identified still remain. For example, when using the portal’s search function, there was no direct link to key evacuation preparedness documents, such as DHS’s Nationwide Plan Review reports. Therefore, we revised our recommendation to reflect the need for continued improvement of this portal. 2. DHS commented that grant programs have administrative requirements that stress the importance of focusing on special needs populations. These requirements, while encouraging, do not ensure that state and local governments plan, train, and conduct exercises for the evacuation of transportation-disadvantaged populations. During the course of our review, we found that state and local officials do not share a consistent definition of special needs and had interpreted the term in a manner that does not encompass all transportation-disadvantaged populations that should be included in evacuation preparedness. We define transportation-disadvantaged populations to include individuals who, by choice or for other reasons, do not have access to a personal vehicle.
These can include persons with disabilities; low-income, homeless, or transient persons; children without an adult present at home; tourists and commuters who are frequent users of public transportation; and those with limited English proficiency, who tend to rely on public transit more than English speakers. 3. DHS commented that our draft report did not adequately address the need to determine how to identify and actively evacuate all special needs populations, including those who are transportation-disadvantaged. We recognize, in our report, the difficulty that state and local emergency management officials face in identifying and locating transportation-disadvantaged populations, determining their transportation needs, and providing for their transportation. Two of our report’s three sections address this very issue. 4. DHS commented that our draft report did not recognize that transportation of special needs populations is primarily a local responsibility. Our report recognizes this fact and clearly states that state and local governments are primarily responsible for managing responses to disasters, including the evacuation of transportation-disadvantaged populations. 5. DHS commented that its National Response Plan Review and Revision process is currently being conducted and that it will address clarification of the roles and responsibilities of key structures, positions, and levels of government, the private sector, and nongovernmental agencies, among other issues related to evacuations. We are encouraged by DHS’s efforts in this regard. 6. DHS commented that, for large and complex disasters, no single federal agency can provide the entire response support required. We agree that disaster response is a coordinated interagency effort, but believe that clarification of the lead, coordinating, and supporting agencies for evacuation support is needed in the National Response Plan to ensure a successful response. DHS also commented that it is responsible for managing that interagency effort and is, in fact, the single federal agency responsible for leading and coordinating evacuation support to states. Implementation of the Stafford Act legislative changes enacted in October 2006 will help address the federal role in providing evacuation assistance for transportation-disadvantaged populations. We agree that DHS, more specifically FEMA, is responsible for leading and coordinating evacuation support to states. 7. DHS commented that our definition of transportation-disadvantaged populations was a disservice to the disabled population. While we recognize that evacuation is a complex issue and believe that persons with disabilities are faced with significant evacuation challenges in the event of a disaster and should be a focus of evacuation preparedness, it is important that federal, state, and local government emergency preparedness efforts address planning for all transportation-disadvantaged populations. 8. DHS commented that our draft report implies that the situation that occurred during Katrina was a “typical occurrence.” It is not our intent to imply this. However, the events of Hurricane Katrina raised significant awareness about federal, state, and local preparedness to evacuate transportation-disadvantaged populations, and reports, such as DHS’s Nationwide Plan Review and DOT’s Catastrophic Hurricane Evacuation Plan Evaluation, have further highlighted the need for increased evacuation preparedness by these governments.
Appendix III: GAO’s Observations on Federal Proposed Recommendations and Initial Conclusions In 2006, the White House and several federal agencies released reports that reviewed federal, state, and local evacuation preparedness and response to Hurricane Katrina. Many of these reports include recommendations or initial conclusions for federal, state, and local governments. We have included a list of recommendations—including some already referenced in our report—that address the evacuation of transportation-disadvantaged populations. Our observations about each recommendation, based on our review, are also listed. (See table 1.) Appendix IV: Other Federal Initiatives Related to Evacuating Transportation-Disadvantaged Populations The following is a list of initiatives we identified during our review that the federal government has undertaken to address the evacuation of transportation-disadvantaged populations. The Federal Transit Administration has awarded the American Public Transportation Association a $300,000 grant to establish and administer a transit mutual aid program. The goal of the program is to provide immediate assistance to a community in need of emergency transit services, with a focus on evacuation and business continuity support. The American Public Transportation Association will obtain formal commitments from willing transit agencies and, with committed resources, develop and maintain a database of transit vehicles, personnel, and equipment. The target for the database is to have between 250 and 500 buses nationwide, as well as support equipment and personnel, ready to respond at any time. Moreover, the American Public Transportation Association will reach out to federal, state, and regional agencies to ensure that, during an emergency, these agencies can provide a coordinated and effective response. The Community Transportation Association of America conducted an expert panel discussion—sponsored by the National Consortium on the Coordination of Human Services Transportation—on the role of public and community transportation services during an emergency. The resulting white paper (which outlines community strategies for evacuating transportation-disadvantaged populations and the challenges these populations face during emergencies) and emergency preparedness checklist are intended as guidance for transportation providers and their partner organizations. This panel was conducted in cooperation with the Federal Interagency Coordinating Council on Access and Mobility and DHS’s Interagency Coordinating Council on Emergency Preparedness and Individuals with Disabilities. The Federal Transit Administration has awarded a grant to the University of New Orleans to develop a manual and professional development course for transit agencies to enhance their emergency preparedness. The Federal Transit Administration, along with the Federal Interagency Coordinating Council on Access and Mobility, has created a pamphlet entitled “Disaster Response and Recovery Resource for Transit Agencies” to provide local transit agencies and transportation providers with useful information and best practices in emergency preparedness and disaster response and recovery. The resource provides summary information for general background and includes best practices and links to more specific resources and more detailed information for local agencies concerning critical disaster-related elements such as emergency preparedness, disaster response, and disaster recovery.
The Federal Interagency Coordinating Council on Access and Mobility—which awards grants to states for human service transportation coordination between state agencies—added an emergency preparedness priority to its grant guidelines, thereby encouraging states to consider emergency preparedness among their grant priorities. As of July 2006, nine states had addressed emergency preparedness as a priority. The Federal Highway Administration is producing a series of primers for state and local emergency managers and transportation officials to aid them in developing evacuation plans for incidents that occur with or without notice. A special primer is under development to aid state and local officials in designing evacuation plans that include transportation-disadvantaged populations. This primer will be released no later than March 2007. The Transportation Research Board has convened a committee to examine the role of public transportation in emergency evacuation. The committee will evaluate the role that the public transportation systems serving the 38 largest urbanized areas in the United States could play in the evacuation, egress, and ingress of people to or from critical locations in times of emergency. The committee is expected to issue a report by April 20, 2008. Appendix V: GAO Contact and Staff Acknowledgments Katherine Siggerud, (202) 512-2834 or [email protected]. Staff Acknowledgments In addition to the contact named above, Steve Cohen, Assistant Director; Ashley Alley; Elizabeth Eisenstadt; Colin Fallon; Deborah Landis; Christopher Lyons; SaraAnn Moessbauer; Laina Poon; Tina Won Sherman; and Alwynne Wilbur made key contributions to this report. Related GAO Products Catastrophic Disasters: Enhanced Leadership, Capabilities, and Accountability Will Improve the Effectiveness of the Nation’s Preparedness, Response, and Recovery System. GAO-06-618. Washington, D.C.: September 6, 2006. Disaster Preparedness: Limitations in Federal Evacuation Assistance for Health Facilities Should Be Addressed. GAO-06-826. Washington, D.C.: July 20, 2006. Disaster Preparedness: Preliminary Observations on the Evacuation of Vulnerable Populations due to Hurricanes and Other Disasters. GAO-06-790T. Washington, D.C.: May 18, 2006. Hurricane Katrina: GAO’s Preliminary Observations Regarding Preparedness, Response, and Recovery. GAO-06-442T. Washington, D.C.: March 8, 2006. Disaster Preparedness: Preliminary Observations on the Evacuation of Hospitals and Nursing Homes Due to Hurricanes. GAO-06-443R. Washington, D.C.: February 16, 2006. Statement by Comptroller General David M. Walker on GAO’s Preliminary Observations Regarding Preparedness and Response to Hurricanes Katrina and Rita. GAO-06-365R. Washington, D.C.: February 1, 2006. Transportation Services: Better Dissemination and Oversight of DOT’s Guidance Could Lead to Improved Access for Limited English-Proficient Populations. GAO-06-52. Washington, D.C.: November 2, 2005. Transportation Services: Better Dissemination and Oversight of DOT’s Guidance Could Lead to Improved Access for Limited English-Proficient Populations (Chinese Edition). GAO-06-186. Washington, D.C.: November 2, 2005. Transportation Services: Better Dissemination and Oversight of DOT’s Guidance Could Lead to Improved Access for Limited English-Proficient Populations (Korean Version). GAO-06-188. Washington, D.C.: November 2, 2005.
Transportation Services: Better Dissemination and Oversight of DOT’s Guidance Could Lead to Improved Access for Limited English-Proficient Populations (Spanish Version). GAO-06-185. Washington, D.C.: November 2, 2005. Transportation Services: Better Dissemination and Oversight of DOT’s Guidance Could Lead to Improved Access for Limited English-Proficient Populations (Vietnamese Version). GAO-06-187. Washington, D.C.: November 2, 2005. Transportation-Disadvantaged Seniors: Efforts to Enhance Senior Mobility Could Benefit from Additional Guidance and Information. GAO-04-971. Washington, D.C.: August 30, 2004. Transportation-Disadvantaged Populations: Federal Agencies Are Taking Steps to Assist States and Local Agencies in Coordinating Transportation Services. GAO-04-420R. Washington, D.C.: February 24, 2004. Transportation-Disadvantaged Populations: Some Coordination Efforts Among Programs Providing Transportation Services, but Obstacles Persist. GAO-03-697. Washington, D.C.: June 30, 2003. Transportation-Disadvantaged Populations: Many Federal Programs Fund Transportation Services, but Obstacles to Coordination Persist. GAO-03-698T. Washington, D.C.: May 1, 2003.
During the evacuation of New Orleans in response to Hurricane Katrina in 2005, many of those who did not own a vehicle and could not evacuate were among the over 1,300 people who died. This raised questions about how well state and local governments, primarily responsible for disaster planning, integrate transportation-disadvantaged populations into such planning. GAO assessed the challenges and barriers state and local officials face; how prepared these governments are and steps they are taking to address challenges and barriers; and federal efforts to provide evacuation assistance. GAO reviewed evacuation plans; Department of Homeland Security (DHS), Department of Transportation (DOT), and other studies; and interviewed officials in five major city and four state governments. State and local governments face evacuation challenges in identifying and locating transportation-disadvantaged populations, determining their needs, and providing for their transportation. These populations are diverse and constantly changing, and information on their location is often not readily available. In addition, these populations' evacuation needs vary widely; some require basic transportation while others need accessible equipment, such as buses with chair lifts. Legal and social barriers impede addressing these evacuation challenges. For example, transportation providers may be unwilling to provide evacuation assistance because of liability concerns. State and local governments are generally not well prepared--in terms of planning, training, and conducting exercises--to evacuate transportation-disadvantaged populations, but some have begun to address challenges and barriers. For example, DHS reported in June 2006 that only about 10 percent of state and about 12 percent of urban area emergency plans it reviewed adequately addressed evacuating these populations. Furthermore, in one of the five major cities GAO visited, officials believed that few residents would require evacuation assistance, even though U.S. Census data show that 16.5 percent of households in that city do not have a car. DHS also found that most states and urban areas significantly underestimated the advance planning and coordination required to effectively address the needs of persons with disabilities. Steps being taken by some of these governments include collaboration with social service and transportation providers and transportation planning organizations--some of which are DOT grantees and stakeholders--to determine transportation needs and develop agreements for emergency use of drivers and vehicles. The federal government provides evacuation assistance to state and local governments, but gaps in this assistance have hindered many of these governments' ability to sufficiently prepare for evacuations. These gaps include the lack of any specific requirement to plan, train, and conduct exercises for the evacuation of transportation-disadvantaged populations, as well as gaps in the usefulness of DHS's guidance. Although federal law requires that state and local governments with mass evacuation plans incorporate special needs populations into their plans, this requirement does not necessarily ensure the incorporation of all transportation-disadvantaged populations. Additionally, while DHS has made improvements to an online portal for sharing related information, this information remains difficult to access because of poor search and organizational functions.
Moreover, although the federal government can provide evacuation assistance when state and local governments are overwhelmed, it is not prepared to do so. Amendments to the Stafford Act in October 2006 affirmed that the Federal Emergency Management Agency (FEMA), an agency within DHS, is responsible for leading and coordinating evacuation assistance. DHS has not yet clarified, in the National Response Plan, the lead, coordinating, or supporting agencies in such cases.
Background The ISS supports research projects with state-of-the-art facilities for Earth and space science, biology, human physiology, physical science, and materials research, and provides a platform to demonstrate new space-related technologies. The facilities include modular multipurpose payload racks and external platforms to store and support experiments, refrigerators and freezers for biological and life science samples, research lockers or incubators, and a combustion chamber to observe combustion patterns in microgravity, among other research equipment. The ISS currently has three crew members in the U.S. operating segment who, according to NASA officials, devote a total of approximately 35 hours per week to conduct research. The remaining crew time is used for operations and maintenance of the ISS, training, exercise, and sleep. NASA plans to increase the number of astronauts in the U.S. operating segment of the ISS from three to four once a U.S. capability to transport crew to and from the ISS is available. Cargo transportation to the ISS is provided through commercial resupply services contracts that were signed with Orbital Sciences Corporation (Orbital) and Space Exploration Technologies Corporation (SpaceX) in 2008. SpaceX currently has a capsule that can also return significant amounts of cargo to Earth; it is the only vehicle currently servicing the ISS that has this capability. Orbital and SpaceX are scheduled to provide 8 and 15 resupply flights, respectively, through December 2017. As of January 2015, SpaceX has launched five successful resupply missions and Orbital has launched two successful resupply missions. Orbital resupply flights to the ISS were deferred pending a review of a mishap that occurred during a resupply launch in October 2014, which resulted in the loss of that mission. According to NASA officials, a “return to flight” plan was submitted by Orbital and accepted by the ISS program in January 2015. Legislation on Management of the ISS National Laboratory Since 2005, Congress has directed several changes regarding the management and utilization of the ISS. The NASA Authorization Act of 2005 designated the U.S. segment of the ISS as a National Laboratory. The 2005 act directed the NASA Administrator to seek to increase ISS utilization by other federal entities and the private sector through partnerships, cost-sharing agreements, and other arrangements that would supplement NASA funding of the ISS. It also allowed the Administrator to enter into a contract with a nongovernment entity to operate the ISS National Laboratory. The NASA Authorization Act of 2008 further directed NASA to establish the ISS National Laboratory Advisory Committee, which was to be composed of individuals representing organizations that had formal agreements with NASA to utilize the U.S. portion of the ISS. The 2008 act stated that the committee shall monitor, assess, and make recommendations regarding effective utilization of the ISS as a national laboratory and platform for research, and submit a report containing these assessments and recommendations at least annually to the NASA Administrator (National Aeronautics and Space Administration Authorization Act of 2008, Pub. L. No. 110-422, § 602). The NASA Authorization Act of 2010 directed NASA to enter into a cooperative agreement with a not-for-profit entity to manage the ISS National Laboratory and to designate a NASA liaison, with whom the selected not-for-profit entity would cooperate and consult in carrying out its responsibilities under the agreement.
An individual in the Space Life and Physical Sciences Research and Applications Division of the Human Exploration and Operations Mission Directorate is currently serving as the NASA liaison. The 2010 act outlined seven management and research and development activities that NASA was required to provide funding for the not-for-profit entity to initiate. Those activities, stated briefly, are to: plan and coordinate ISS National Laboratory research activities; develop and implement guidelines, selection criteria, and flight support requirements for non-NASA utilization of the ISS research capabilities and available facilities; interact with the ISS National Laboratory Advisory Committee and review recommendations provided by that committee; coordinate transportation requirements in support of the ISS research; cooperate with NASA, other departments and agencies of the U.S. government, and commercial entities to sustain ground support facilities for the ISS; develop and implement scientific outreach and education activities designed to ensure effective utilization of ISS research capabilities; and address other matters relating to the utilization of the ISS National Laboratory for research and development as the Administrator may consider appropriate. The 2010 act also requires that ISS National Laboratory-managed experiments be guaranteed access to and use of at least 50 percent of the U.S. research capacity allocation, including power, facilities to keep experiments cold, and requisite crew time onboard the ISS, through September 30, 2020. The Administrator can allocate additional capacity to the ISS National Laboratory if this capacity is in excess of NASA research requirements. If any NASA research plan requires more than 50 percent of the U.S. research capacity allocation of ISS resources, the plan should be submitted for consideration as proposed research to be conducted within the ISS National Laboratory capacity of ISS resources. The person designated as the NASA liaison to the not-for-profit entity has the authority to provide those resources beyond its 50 percent allocation on an exception basis if a proposed experiment is considered essential for purposes of preparing for exploration beyond low-Earth orbit, based on a joint agreement between the NASA liaison and the not-for-profit entity. CASIS and NASA Responsibilities Outlined in Cooperative Agreement In August 2011, after a competitive process, NASA signed a cooperative agreement with CASIS, a not-for-profit entity, to manage the activities of the ISS National Laboratory through September 30, 2020. Cooperative agreements differ from contracts. Generally, cooperative agreements are used when the principal purpose of a transaction is to stimulate or support research and development for a public purpose, and substantial involvement is expected between the executive agency and the award recipient when carrying out the activity identified in the agreement. In contrast, contracts are used when the principal purpose is acquisition of property or services for the direct benefit or use of the federal government. CASIS is bound by the responsibilities outlined in the cooperative agreement, which tasks CASIS with maximizing the value of the ISS National Laboratory by stimulating interest in and use of the ISS for scientific research, both by directly soliciting potential users and by fostering a market to attract others.
CASIS is also charged with maximizing the use of the ISS for advancing science, technology, engineering, and mathematics (STEM) education. Pursuant to the cooperative agreement, NASA will provide CASIS $15 million annually through 2020, of which CASIS will seek to award at least $3 million in research grants. CASIS officials said that the remainder of NASA funding is used for infrastructure and direct costs such as labor and travel-related expenses. According to the cooperative agreement, CASIS will solicit non-NASA funding for research by targeting various sources—such as government grants, foundation funding, charitable contributions, private equity, venture financing, and private investors—and facilitate matching of projects that meet the research objectives with those qualified funding sources. Additionally, the cooperative agreement requires the development of an annual program plan, which details CASIS’s proposed activities for the following year and which CASIS must meet using its “best efforts,” as well as annual and quarterly performance metrics. The cooperative agreement outlines responsibilities for NASA such as providing resources and accommodations to CASIS to meet ISS National Laboratory requirements and performing the payload operations integration to ensure safe and effective flight readiness and vehicle integration. The Cooperative Agreement Technical Officer, a NASA employee within the ISS Program Office at Johnson Space Center, is charged with oversight of the cooperative agreement. The Cooperative Agreement Technical Officer is to coordinate the approval of the Annual Program Plan and track performance to the plan using the metrics reflected in CASIS’s quarterly reports. CASIS Implemented Most of the Required Activities, but Is Not Able to Interact with Advisory Committee CASIS has taken steps to carry out its responsibilities to manage and promote research activities on the ISS National Laboratory as outlined in its cooperative agreement. For example, CASIS identified key research areas and released seven requests for proposals to solicit interest in research projects. Our survey of a sample of researchers who had submitted proposals to CASIS revealed generally positive comments about CASIS’s management effort. For example, many respondents indicated that CASIS’s processes were clear and that it evaluated their proposals fairly. CASIS, however, has not been able to coordinate with the ISS National Laboratory Advisory Committee (INLAC), as required, because NASA has yet to staff the committee. CASIS Took Steps to Implement Management Activities CASIS has taken steps to fulfill its responsibilities contained in its cooperative agreement with NASA, and has initiated the activities required by the NASA Authorization Act of 2010. Table 1 summarizes the activities contained in the 2010 act as well as the corresponding responsibilities for CASIS and NASA outlined in the cooperative agreement. To determine its research and technology development objectives in accordance with the cooperative agreement, CASIS identified and prioritized the most promising research areas—which CASIS refers to as pathways—with guidance from the Science and Technology Advisory Panel, a CASIS committee composed of both academic and commercial experts. These pathways are identified by compiling a list of research categories and determining financial feasibility.
According to CASIS and NASA officials, research pathways are generated from various sources such as the Decadal surveys—studies conducted once every decade by the National Research Council that prioritize the research focus for the next 10 years in various scientific disciplines—and past NASA studies. To date, CASIS has identified protein crystal growth, stem cell research, materials science, enabling technology to support science in space, Earth imaging, and remote sensing as key research pathways and has developed a request for proposals (RFP) for each of these research pathways. CASIS has released seven RFPs since it was established, with the first occurring in June 2012, about 10 months after its establishment. CASIS also accepts unsolicited proposals from researchers and other sources such as partnership accelerators and competitions. As of January 2015, CASIS had received 206 proposals from all sources, awarded approximately $20 million in research grants to 77 projects, and paid almost $13 million toward the awarded grants. Table 2 shows information related to the types of proposals CASIS has received and the number of grants awarded. CASIS-sponsored research investigations awarded through its first RFP in 2012—involving protein crystal growth and microgravity—flew to the ISS National Laboratory in April 2014 and were returned to Earth in October 2014. These research investigations are currently in post-flight analysis. As of December 2014, there were 11 CASIS-sponsored research investigations being conducted aboard the ISS National Laboratory. According to NASA and CASIS officials, as CASIS increases the number of experiments for the ISS National Laboratory, the demand for crew time and certain research facilities aboard the ISS is expected to increase, and they project that the ISS National Laboratory will be challenged to meet that demand. NASA officials explained that while the demand for crew time is currently manageable, it remains allocated at or near 100 percent, as the three crew members on the U.S. segment of the ISS utilize most of the 35 hours scheduled per week to conduct research. Crew time is expected to double on the ISS National Laboratory once the crew increases from three to four astronauts in fiscal year 2018 because, according to NASA officials, the additional crew member would be able to devote most of his or her time to research. NASA officials stated they are also working with CASIS to build automation into research experiments to reduce the monitoring time required of crew members. Both CASIS and NASA expect increased demand for facility resources such as the Animal Enclosure Module used for rodent research and the remote sensing cameras used for Earth observation. Sharing of the ISS National Laboratory facilities requires considerable communication and agreement. NASA and CASIS officials said both organizations have ongoing discussions about how to share resources, coordinate research, and ensure all users are represented when meeting the demand for crew time and ISS National Laboratory facilities and hardware. NASA officials explained that they reprioritize as necessary to ensure resources are not overstressed. Develop and Implement Guidelines, Selection Criteria, and Flight Support Requirements To initiate the development of guidelines and selection criteria, CASIS implemented procedures for prioritizing research, guidelines for proposal development, and procedures for the evaluation and selection of research proposals, in accordance with the cooperative agreement.
Procedures for prioritizing research: CASIS has implemented a multi-layer review process to identify and develop the overall research portfolio and prioritize future research pathways. See figure 2 for the process CASIS follows to prioritize research pathways. Guidelines on proposal development and flight support requirements: CASIS established guidelines, incorporated in the applicable RFP, for researchers to follow as they develop their proposals. The RFPs include specific criteria for proposals that CASIS uses as a basis for initial acceptance or denial of proposal submissions. For example, one RFP issued in 2014 contained minimum eligibility criteria, such as the research being flight ready within 12 months of award and the research having secured funding, and included provisions that excluded the use of new sensors or instruments for remote sensing and required that selected proposals be completed by 6 months post-flight. Each RFP also has unique criteria that can depend upon the research pathway and the facilities available on the ISS National Laboratory in the proposed time line. CASIS has separate guidance for unsolicited proposal submissions. CASIS has documented the specific activities for meeting flight requirements, including the role of implementation partners and NASA in meeting these requirements. Implementation partners are subcontractors to CASIS and specialize in aerospace technologies and services. They have an integral role in providing hardware, flight integration services, and ground services to support CASIS-sponsored research. NASA performs the activities necessary to incorporate the research on a flight vehicle, such as providing the resources and accommodations to meet ISS National Laboratory requirements, and managing launch operations through payload return to Earth. Evaluation and selection of research proposals: CASIS implemented a policy that documents the submission and general review process for solicited and unsolicited proposals as well as proposals that it evaluates as part of agreements with outside organizations such as partnerships or subcontracts. The process begins with a two-step initial submission review for solicited proposals and a preliminary review for unsolicited proposals, followed by a five-step evaluation process. Figure 3 details the CASIS proposal evaluation process. We surveyed a random sample of 14 researchers who submitted proposals to CASIS from 2012 through 2014 to obtain their perspectives on CASIS’s performance in this and other areas. Although the results of this survey are non-generalizable because of our small sample size, overall the respondents were generally positive about their interaction with CASIS. For example, 11 of the 14 respondents indicated that CASIS’s evaluation criteria were clearly articulated, and 12 of 14 respondents believed their proposals were evaluated fairly. Of the 14 respondents, 13 said they were likely to submit future proposals to CASIS. In addition, all 14 respondents indicated that they were notified in a timely manner of the disposition of their proposal. CASIS declined proposals from 8 respondents. Of these 8, 7 said that they were provided feedback concerning why their proposal was declined. Several respondents, however, said that they were provided only a short bulleted response that fell short of addressing the scientific merit of the proposal.
One respondent said they received a letter summarizing reviewers’ comments that had several good points and was fair, but that it was not detailed, as it contained less information than other grantors provide. According to CASIS guidelines, researchers whose projects are not selected for award are provided feedback and are invited to revise and resubmit their projects as unsolicited proposals. Of those that we surveyed, only 1 of the 8 respondents who had a proposal declined had resubmitted the proposal, while another respondent said that it was not made clear that proposals could be resubmitted. Coordination of Transportation Requirements Under the cooperative agreement, NASA is required to provide the ISS National Laboratory research facilities and resources and to coordinate with CASIS when preparing CASIS-sponsored research for launch. CASIS has an integral role in the payload development and integration process during three distinct phases—pre-flight, operations, and post-flight. During the pre-flight phase, the CASIS operations team works with the researcher and implementation partners to understand project objectives and requirements such as power, crew and hardware compatibility needs, flight integration time frames, and design and integration support. CASIS submits the science objectives, requirements, and a development schedule to NASA. The NASA ISS National Laboratory Office also assigns staff to each CASIS-sponsored researcher to help coordinate and navigate the payload development and integration process and ensure that flight planning remains on track. During the operations and in-flight phase, CASIS provides operational support by collaborating with the implementation partner or the researcher to oversee NASA’s integration of the research project or hardware into the flight vehicle. The post-flight phase involves the return of payload samples or hardware from the flight vehicle to the researcher to begin post-processing activities, which CASIS monitors. The ability to secure transportation for selected research investigations to the ISS facility is outside of CASIS’s control and has presented challenges. NASA provides launch services to the ISS National Laboratory through its commercial resupply services contracts, and CASIS receives cargo allocations for its sponsored research. Launch failures and delays, however, have resulted in cost increases. For example, the October 2014 launch failure of a resupply mission to the ISS resulted in the loss of several CASIS-sponsored research investigations at a total cost of almost $175,000, which includes hardware and materials, labor, consulting, and grants. In addition, launch delays for another cargo resupply mission resulted in over $300,000 in cost increases for several researchers for additional materials and samples. CASIS officials explained that the majority of cost increases are related to biological research, which represents approximately 50 percent of CASIS-sponsored research. These biological payloads have a limited viability or very specific requirements associated with the timing of the payload flight and often require consumables such as gas, nutrients, and water that must be replenished when a launch is delayed. Absorbing the increased costs has been a challenge for CASIS, but it is addressing the costs of delays by asking researchers who have biological payloads to identify in their budgets the impact and associated costs of launch delays so that CASIS can plan for budget reserves, if necessary.
Cooperative Efforts and Partnerships The cooperative agreement requires CASIS to manage planning and coordination of research activities for both ground and on-orbit execution. According to CASIS officials, CASIS is addressing this requirement by leveraging the resources of companies that provide hardware, technical expertise, and ground support. Eleven implementation partners have received over $5.4 million in funding from CASIS or its sponsored researchers from the establishment of CASIS through September 2014 to provide hardware, flight integration, and ground services for 58 research investigations. CASIS officials reasoned that by leveraging existing companies that can provide specialized hardware and integration capabilities on an as-required basis, CASIS can effectively manage the ISS National Laboratory without having to maintain all the requisite skills or capabilities within its organization. CASIS-sponsored researchers are encouraged to select an implementation partner from a list of preferred partners during the proposal submission process. CASIS officials said that they assembled this list of implementation partners beginning with companies that had relationships with NASA for ISS-related operations and expanded the list through CASIS’s own business development operations. These partners can provide hardware, technical services, and consultation to researchers to address the project’s science requirements and research needs aboard the ISS National Laboratory. Although CASIS provides a list of implementation partners, the researchers are responsible for entering into formal business arrangements with these partners and including the costs of the implementation partner support in their proposed budgets. CASIS officials noted the cost can vary based on the amount of involvement required of the implementation partners and can range from $50,000 to $300,000 per flight. Scientific Outreach and Educational Initiatives In accordance with the cooperative agreement, CASIS is building a geographic network to facilitate outreach initiatives and cultivate new partnerships and has implemented educational initiatives that provide opportunities for educators and students to learn about and have access to the ISS National Laboratory. Specifically: Network outreach: CASIS has organized its outreach to scientific and academic communities in seven geographical areas. These areas are supported by more than 30 CASIS employees and consultants, and each area has a research emphasis. See figure 4 for the locations of CASIS’s networks and the research emphasis for each. The outreach efforts conducted through CASIS’s networks are primarily relationship-based and focused on engaging financial support, forging long-term partnerships, and ultimately generating potential research and technology development projects for flight on the ISS National Laboratory. According to CASIS officials, academic institutions, research-specific organizations, philanthropic entities, and industry partners that CASIS identified through this network can benefit from the CASIS-sponsored research and technology development aboard the ISS National Laboratory. For example, Boston was identified as one of the geographic areas because it has over 100 universities and over 300 biotech companies that can support the commercialization of life sciences research and CASIS’s mission. CASIS is working to expand its network. CASIS has developed 45 new partnerships to date and is leveraging a variety of new partnership opportunities.
For example, in 2014, CASIS initiated two strategic campaigns, Good Earth—an international collaboration seeking to maximize ISS Earth observation capabilities—and Good Health—an effort to capitalize on the unique benefits of the microgravity environment so interventions can be developed to preserve health on Earth. CASIS officials expect both campaigns to bring together large-scale collaborations to stimulate ISS utilization over the coming years. CASIS also supported the Rice Business Competition by providing a $25,000 grant during 2014 to a startup company that showed the most promise for developing a technology or business that would benefit from access to the ISS National Laboratory. This partnership also gives CASIS access to many forum events and panels. According to CASIS officials, because CASIS is a new non-profit entity, it has been challenging to raise additional funding from external sources to supplement the amount of funding provided by NASA to support and sustain its operations. Although CASIS’s business development team is actively identifying partnerships and funding opportunities with commercial and non-profit granting organizations, CASIS officials said that it takes time to identify, develop, and mature these partnerships. CASIS and NASA officials said that the value of doing research aboard the ISS National Laboratory has to be further demonstrated so that commercial industries can be convinced it is worth the high investment. Both NASA and CASIS officials said that demonstrating the value of research on the ISS as a substitute for ground-based research is a tremendous and important effort that is necessary to open a marketplace for space research. NASA officials stated that doing research aboard the ISS National Laboratory can take upwards of 2 to 3 years to plan and execute, time lines that are generally not acceptable to commercial companies that desire a more rapid return on their investments. Ten of the 14 respondents to our survey reported that CASIS was effective in reaching out to the research community. For example, several researchers were made aware of CASIS opportunities by attending presentations from CASIS staff at industry meetings or campus visits. Respondents also offered areas for improvement for CASIS to increase utilization of the ISS National Laboratory. For example, five respondents said that CASIS could increase its visibility by attending more conferences, using more print ads, and working more with NASA on joint RFPs. Education: CASIS established its education strategic plan, which included building education programs that promote the ISS as a STEM learning platform; partnering with existing education entities such as schools, universities, and other educational foundations and associations; and reaching out to underrepresented and nontraditional demographics. CASIS also implemented various educational initiatives that it developed both internally and externally in conjunction with its partners. For example, in fiscal year 2014, CASIS supported 12 educational initiatives. CASIS sponsored the Space Station Academy, a 4-week online program designed to take participants on a simulated mission to the ISS as “virtual astronauts.” This pilot program involved 25 students and 25 educators. In addition, CASIS supports its educational efforts through education grant funding and partnerships. See appendix III for more information on additional CASIS educational initiatives.
NASA Has Not Staffed Advisory Committee with Which CASIS Is Required to Interact The one required activity in the cooperative agreement that CASIS has been unable to address is its interaction with the ISS National Laboratory Advisory Committee (INLAC), because the committee has not been staffed by NASA. The NASA Authorization Act of 2008 required NASA to establish the INLAC under the Federal Advisory Committee Act. The INLAC was required to include membership from organizations that have formal agreements with NASA to utilize the U.S. portion of the ISS. As outlined in the 2008 act, this committee is required to exist for the lifespan of the ISS and is to function in an advisory capacity to the NASA Administrator by assessing and monitoring ISS National Laboratory resource utilization and reporting its assessments and recommendations at least annually. According to the cooperative agreement, CASIS will coordinate with the INLAC as established under section 602 of the NASA Authorization Act of 2008 and review recommendations provided by the INLAC. Although NASA formally established the committee in 2009, NASA has not fully implemented the 2008 act because the committee has yet to be staffed. NASA officials told us that, with CASIS in place, the great majority of non-NASA ISS users do not have an agreement with NASA because they work with CASIS. They added that there are exceptions where NASA works with other agencies, but those are typically for exploration technology or defense-related projects. In addition, NASA officials indicated that the INLAC has not been staffed because they believe that the structure and function of the current CASIS Board of Directors have proven to be a better alternative to a NASA advisory committee, since the CASIS board represents a broad experience base including military, medical research, strategic partnerships, and engineering, among others. Further, NASA officials said that the Research Subcommittee of the Human Exploration and Operations Committee to the NASA Advisory Council also provides research advisory oversight of the ISS National Laboratory. This subcommittee’s objectives, however, focus on human spaceflight, and the membership of this subcommittee is to consist of individuals from the research committee with a broad awareness of human spaceflight-related activities. CASIS officials also believe that their board is performing some of the INLAC’s advisory duties, but acknowledge that the board does not meet the section 602 requirements under the 2008 act—to monitor and report annually to the NASA Administrator its assessments and recommendations of ISS National Laboratory utilization—nor does its membership meet the criteria specified in the act. Without a staffed INLAC, NASA currently lacks a single advisory committee that represents all users of the ISS National Laboratory and provides ongoing monitoring, assessments, and recommendations regarding ISS National Laboratory resource utilization, as required by the charter. As a result, CASIS is not able to fulfill its responsibilities as outlined in the cooperative agreement and as established under section 602 of the NASA Authorization Act of 2008. CASIS and NASA Have Taken Steps to Measure and Assess CASIS Performance, but Measurable Targets Needed CASIS’s Metrics Consistent with Most Key Attributes, but Lack Quantifiable Goals CASIS has established metrics, but not targets against which its performance can be measured by NASA.
The metrics CASIS developed in collaboration with NASA for fiscal year 2015 meet most key attributes of successful performance measures. These metrics are based on CASIS responsibilities outlined in the cooperative agreement and are related to CASIS strategic goals and objectives. Metrics are included in an Annual Program Plan, which CASIS prepares with input from NASA. We have previously reported that successful performance measures as a whole should have four general characteristics: demonstrate results, be limited to a vital few, cover multiple priorities, and provide useful information for decision making. We cited specific attributes as key to successful performance measures, such as linkage, clarity, measurable targets, objectivity, and balance. The four characteristics are overarching; thus, they do not necessarily directly link to the attributes. Furthermore, the attributes may not be equal, and a noted weakness does not mean that a measure is not useful. Weaknesses identified should be considered areas for further refinement. Table 3 defines the key attributes of successful performance measures. We assessed CASIS’s fiscal year 2015 metrics and found that the metrics met almost all of these key attributes. The results of our assessment are shown in table 4. We also assessed the metrics CASIS had developed for fiscal year 2014, and similarly found that the metrics met most of the key attributes. The results of our assessment of CASIS’s fiscal year 2014 metrics for key attributes of successful performance measures can be found in appendix IV. Our analyses indicated that CASIS did not establish measurable targets or goals for either fiscal year 2014 or 2015 metrics, which limits its ability to use these metrics to assess performance. We have previously reported that performance metrics should have quantifiable, numerical targets or other measurable values, which help assess whether overall goals and objectives were achieved. Without defined measurable targets or goals, it is unclear how NASA objectively assesses CASIS’s performance. CASIS officials noted that operating as a new entity with no history made it difficult to establish performance targets, but this is beginning to change. CASIS officials initially told us in July 2014 that establishing targets would be arbitrary because CASIS processes and metrics are still evolving. Subsequently, in January 2015, they indicated that since CASIS now has some operating history, they will be able to do so. The Chairman of the CASIS Board of Directors told us that measurable targets should be developed and that this is a priority for the Board. However, CASIS has not established a date by which measurable targets will be developed. Further, CASIS officials indicated that not all metrics will have measurable targets initially because some metrics are subjective, such as those that attempt to measure the quality of research or a new technology generated by CASIS-sponsored research. The Chairman said that the CASIS Board of Directors is also working to develop targets for subjective measures, and they hope to have them in place in the next several years. Although the ability to objectively measure performance is limited without measurable targets, CASIS and NASA officials generally agreed about how long-term success for CASIS will be defined.
According to CASIS officials, success would ultimately be defined by demonstrating that the research and technology development performed aboard the ISS National Laboratory benefits Earth and that commercial markets can be sustained in low-Earth orbit. NASA officials similarly said that developing commercial markets in space and bringing products back to Earth will determine success.

NASA Assesses CASIS Annually, but Performance Assessment Is Not Documented

NASA performs an annual assessment of CASIS's performance consistent with its responsibilities in the cooperative agreement, but this assessment is not documented. The Cooperative Agreement Technical Officer (CATO) uses the metrics in CASIS's quarterly and annual reports to monitor CASIS's efforts. The cooperative agreement also requires CASIS to propose an adjustment to the metrics if expected performance is not going to be met. However, without performance targets, CASIS cannot determine whether the metrics need to be adjusted. Further, without these targets, NASA and CASIS cannot conduct assessments that are measurable or conclusive; the assessments are therefore subjective. According to the CATO, during the annual program review, he assesses CASIS metrics for trends, looking for improvements over time and questioning any perceived lack of progress. The CATO added that he discusses any issues identified during the annual review with CASIS officials, NASA management, and stakeholders. CASIS officials concurred and told us this discussion with NASA highlights areas for further refinement. For example, as a result of such discussion, CASIS is now more proactively engaging NASA technical expertise on available flight hardware and has broadened business development efforts aimed at attracting new commercial users of the ISS National Laboratory.

Both CASIS and NASA officials told us that NASA does not document its annual program review of CASIS performance. Federal standards for internal control call for information to be recorded and communicated to management and others who need it to carry out their responsibilities. This type of documented information is important to support decision making and conduct assessments. CASIS officials have not asked for a formal summary of the results of NASA's annual program review because CASIS receives informal feedback on quarterly reports provided to NASA. CASIS also maintains minutes of regularly scheduled meetings with NASA where any issues that need to be discussed between CASIS and NASA are addressed. While NASA does not document this annual assessment, NASA officials told us that they were generally satisfied with CASIS performance. CASIS officials, however, said that the results of the annual review should be reported in some formal manner to make the information more actionable.

Because CASIS is allocated at least 50 percent of ISS research capacity, the future success of the ISS as a research platform is partially dependent on the efforts CASIS has undertaken. However, without definitive and documented assessment factors, NASA will be challenged to take action in response to CASIS performance. For example, without documentation, NASA lacks support to terminate the cooperative agreement, if deemed necessary. Conversely, NASA also would have no record to justify extending the cooperative agreement to support a possible ISS life extension. The cooperative agreement will expire at the end of fiscal year 2020, but includes a provision for an extension.
Conclusions

The ISS offers the potential for scientific breakthroughs, a unique test bed for new technologies and applications, and a platform for increased commercial and academic research. Achieving greater utilization of the ISS and its unique capabilities, showing the benefit of commercial and academic research, and demonstrating success to generate increased interest from potential users could help NASA get a better return on its significant investment in the ISS. NASA currently lacks an advisory committee established under the Federal Advisory Committee Act that is composed of individuals representing organizations who have formal agreements with NASA to use the U.S. portion of the ISS. As a result, CASIS is not able to fulfill its responsibility as outlined in the cooperative agreement that requires it to coordinate with the INLAC as established under the NASA Authorization Act of 2008 and review recommendations originated by the INLAC. A fully staffed and operational INLAC could provide information to senior NASA management on how to better utilize the constrained resources of the ISS—which could affect how CASIS attracts new users and fulfills its responsibility to increase utilization of the ISS National Laboratory. In addition, clearly defined measurable targets are essential for CASIS to demonstrate results, allow NASA to objectively assess CASIS performance, and help stakeholders assess whether overall goals and objectives for the ISS National Laboratory are achieved. Finally, NASA's annual performance assessment of CASIS is not documented, and the results are provided to CASIS on an informal basis. Not documenting the results of the annual program assessment is a practice contrary to good internal controls, which call for information to be recorded and communicated to management and others who need it to carry out their responsibilities, including taking appropriate corrective actions. Without a clear, well-documented assessment of CASIS performance, NASA management and stakeholders could also be missing information important for decision making, for example, deciding to extend the cooperative agreement with CASIS beyond the September 2020 expiration if the service life of the ISS is extended or to terminate the agreement, if necessary.

Recommendations for Executive Action

We recommend that the NASA Administrator take the following three actions:

In order for NASA to fully implement the NASA Authorization Act of 2008 and for CASIS to fulfill its responsibility as outlined in the cooperative agreement, direct the Associate Administrator for the Human Exploration and Operations Mission Directorate to fully staff the INLAC.

In order to set clear goals to allow NASA to objectively assess CASIS performance, require that the ISS Program Manager work with CASIS to collectively develop and approve measurable targets for metrics for fiscal year 2016 and beyond.

In order to provide CASIS management actionable information to better fulfill its responsibilities and NASA management with additional information by which to make future decisions concerning the extension of the agreement with CASIS, require the ISS Program Manager to document the annual program assessment of CASIS performance.

Agency and Third-Party Comments and Our Evaluation

NASA and CASIS each provided written comments on a draft of this report, which are reprinted in appendix V and appendix VI, respectively. NASA and CASIS also provided technical comments, which have been incorporated into the report, as appropriate.
NASA partially concurred, and CASIS did not concur, with one of our recommendations; both NASA and CASIS concurred with the other two recommendations. NASA partially concurred and CASIS did not concur with our recommendation directing the Associate Administrator for the Human Exploration and Operations Mission Directorate within NASA to staff the INLAC. In response to this recommendation, both NASA and CASIS raised concerns that the current requirements for membership of the INLAC would create a conflict of interest. Specifically, NASA stated that the individuals who would make up the committee would likely have user agreements with CASIS and, in many cases, would be receiving funding from CASIS and NASA. Furthermore, because these entities would be competing for CASIS resource allocations, CASIS believes that they would not be sufficiently independent to perform the functions required of the committee. In response to these concerns, CASIS indicated that the composition of membership as defined in the NASA Authorization Act of 2008 should be amended. NASA also responded that while meeting statutory obligations and obtaining knowledgeable input and recommendations to achieve optimal utilization of the ISS is important, it is the agency's position that the CASIS Board of Directors serves the intent of the INLAC charter by providing recommendations regarding effective utilization of the ISS. As a result, NASA indicated that it plans to work with the Congress to adjust the INLAC requirement to address these concerns.

We continue to believe our recommendation is valid. We do not see that staffing the INLAC as directed in the 2008 act would necessarily result in a conflict of interest or that the entities would be competing for CASIS resource allocations. The act required an advisory committee that represents all users of the ISS National Laboratory and that provides ongoing monitoring and assessment and makes recommendations. According to the cooperative agreement between CASIS and NASA, CASIS is directed to coordinate with the INLAC and review the committee's recommendations. The INLAC, however, functions only in an advisory capacity; therefore, we do not see how a conflict of interest would be created by the membership of the INLAC. Furthermore, according to the NASA Authorization Act of 2010, CASIS shall be guaranteed access to not less than 50 percent of the United States research capacity allocation. Because CASIS would have to agree with NASA to any allocation of resources at a level below 50 percent, we do not see how the composition of the INLAC would create a competition for resource allocation with CASIS. In addition, it was not clear to us in our review that the existing mechanisms in place accomplish these requirements. If NASA were to seek relief or changes to this requirement, it should clearly outline how these requirements can be met through existing bodies and processes.

NASA and CASIS concurred with our recommendation directing the ISS Program Manager to work with CASIS to collectively develop and approve measurable targets for metrics in fiscal year 2016 and beyond. In response to this recommendation, NASA stated that fiscal year 2016 is a reasonable time to establish measurable targets with CASIS because the non-profit will be entering its fourth full year of operations. Similarly, CASIS responded that it is now in a position to develop targets for key metrics and plans to formalize the process in fiscal year 2016. NASA indicated that these targets should be established by December 31, 2015.
Once complete, this action should address our recommendation to develop and approve measurable targets for CASIS's metrics. NASA and CASIS also concurred with our recommendation directing the ISS Program Manager to document the annual program assessment of CASIS performance. In response to this recommendation, NASA said that it would begin documenting the agency's annual program assessment with its review of CASIS's 2015 annual report. Once complete, this action should address our recommendation to document NASA's annual assessment of CASIS's performance.

As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from the date of this letter. We will send copies of the report to NASA's Administrator and interested congressional committees. We will also make copies available to others upon request. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. Should you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VII.

Appendix I: Objectives, Scope, and Methodology

Our objectives were to assess the extent to which (1) the Center for the Advancement of Science in Space (CASIS) has initiated and implemented the required management activities for research aboard the International Space Station (ISS) National Laboratory, and (2) the National Aeronautics and Space Administration (NASA) and CASIS measure and assess CASIS's performance. To determine the extent to which CASIS is performing the required management activities for non-NASA research aboard the ISS National Laboratory, we obtained and reviewed relevant legislation and documentation, and interviewed ISS program and CASIS officials. We reviewed the NASA Authorization Act of 2005, which designated the U.S. Operating Segment of the ISS as a National Laboratory; the NASA Authorization Act of 2008, which directed NASA to establish an ISS National Laboratory Advisory Committee; and the NASA Authorization Act of 2010, which required NASA to enter into a cooperative agreement with a nonprofit organization to manage the activities of the ISS National Laboratory. We also reviewed the cooperative agreement between NASA and CASIS and the CASIS fiscal year 2014 and 2015 Annual Program Plans for CASIS responsibilities related to the required activities outlined in Section 504(c) of the NASA Authorization Act of 2010. We examined the CASIS portfolio management and research prioritization process and various market analyses and studies that CASIS considered in establishing research areas. We reviewed the CASIS proposal review and evaluation process for solicited and unsolicited proposals as well as the Requests for Proposals that CASIS had issued to solicit research proposals. We studied fiscal year 2014 quarterly and annual reports to gain insight into the activities CASIS had undertaken to meet its responsibilities. We reviewed CASIS business development efforts, including funding and marketing processes and outreach efforts.
We reviewed the partnerships CASIS has established with philanthropic institutions that could provide additional resources to sponsor research aboard the ISS National Laboratory and with implementation partners that provide logistical assistance to researchers. Additionally, we reviewed CASIS education efforts, particularly science, technology, engineering, and mathematics activities. We also reviewed GAO, NASA Inspector General, and NASA reports on sustaining the ISS. We interviewed several ISS program officials, including the ISS Program Director, ISS Program Manager, ISS Program Scientist, and the Cooperative Agreement Technical Officer, to gain their perspectives on the work CASIS was performing. We also interviewed officials in the Space Life and Physical Sciences Research and Applications division, including the NASA Liaison to CASIS, to gain perspective on the work NASA is sponsoring aboard the ISS. In addition, we interviewed the CASIS President and Executive Director, the CASIS Chief Operating Officer, the CASIS Chief Financial Officer, and the Chairman of the CASIS Board of Directors to better understand the processes and procedures being implemented, how proposals are evaluated, and the challenges that CASIS faces to further implement the responsibilities outlined in the cooperative agreement.

To obtain additional information on CASIS's performance and the effectiveness of its implementation of some of the required activities, we used information provided to us by CASIS to select a random sample of 20 principal investigators who had submitted either a solicited or unsolicited research proposal to CASIS. Of the 20 researchers selected, we conducted structured interviews with 14 researchers to obtain additional insights into CASIS's performance. Although the randomly selected researchers are, in part, representative of the population of 172 researchers who had submitted proposals to CASIS through July 2014, the descriptive nature of the responses and the relatively small sample size do not permit the development of reliable, quantitative estimates that are generalizable to the population. However, we believe our interview results provide us with valuable information about researchers' experiences and perspectives on CASIS's performance in the area of soliciting, reviewing, and providing feedback on proposals.

To determine whether CASIS, in collaboration with NASA, has established performance metrics, we reviewed CASIS metrics as presented in its fiscal years 2013 to 2015 Annual Program Plans. We concentrated on fiscal year 2014 and 2015 metrics, but examined the previous metrics to determine how performance measures evolved. We also reviewed CASIS quarterly reports for fiscal year 2014 and the first quarter of fiscal year 2015 and the fiscal year 2014 annual report to determine how performance was measured and reported to NASA. We analyzed CASIS's fiscal year 2014 and 2015 metrics to evaluate whether they adhered to GAO's key attributes of successful performance measures, which were identified in previous work. Judgment was required to determine which attributes were applicable and whether the performance measures met the definitions of the attributes selected. To determine how NASA assesses CASIS performance, we reviewed the cooperative agreement to determine relevant NASA responsibilities, including the roles of the Cooperative Agreement Technical Officer and NASA Liaison.
We also interviewed the NASA Liaison to CASIS, the Cooperative Agreement Technical Officer, and CASIS officials to gain their perspective on the evolution of metrics and how they are used to assess CASIS's performance. Our work was performed at NASA Headquarters in Washington, D.C., and Johnson Space Center in Houston, Texas. We also visited CASIS headquarters in Melbourne, Florida. We conducted our review from April 2014 to April 2015 in accordance with generally accepted government auditing standards. These standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings based on our audit objectives.

Appendix II: International Space Station National Laboratory Management Activities Required in the NASA Authorization Act of 2010

The National Aeronautics and Space Administration (NASA) Authorization Act of 2010 directed that the Administrator shall provide initial financial assistance to the organization with which the Administrator enters into a cooperative agreement to manage the International Space Station (ISS) National Laboratory. In August 2011, after a competitive process, NASA signed a cooperative agreement with the Center for the Advancement of Science in Space, Inc. (CASIS), a not-for-profit entity, to manage the activities of the ISS National Laboratory through September 30, 2020. The 2010 act outlined several management and research and development activities for CASIS, as the not-for-profit entity selected, to initiate, as follows:

1. Planning and coordination of the ISS national laboratory research activities.

2. Development and implementation of guidelines, selection criteria, and flight support requirements for non-NASA scientific utilization of ISS research capabilities and facilities available in United States-owned modules of the ISS or in partner-owned facilities of the ISS allocated to United States utilization by international agreement.

3. Interaction with and integration of the International Space Station National Laboratory Advisory Committee established under section 602 of the National Aeronautics and Space Administration Authorization Act of 2008 (42 U.S.C. 17752) with the governance of the organization, and review of recommendations provided by that Committee regarding agreements with non-NASA departments and agencies of the United States Government, academic institutions and consortia, and commercial entities leading to the utilization of the ISS national laboratory facilities.

4. Coordination of transportation requirements in support of the ISS national laboratory research and development objectives, including provision for delivery of instruments, logistics support, and related experiment materials, and provision for return to Earth of collected samples, materials, and scientific instruments in need of replacement or upgrade.

5. Cooperation with NASA, other departments and agencies of the United States Government, the States, and commercial entities in ensuring the enhancement and sustained operations of non-exploration-related research payload ground support facilities for the ISS, including the Space Life Sciences Laboratory, the Space Station Processing Facility, and Payload Operations Integration Center.
6. Development and implementation of scientific outreach and education activities designed to ensure effective utilization of ISS research capabilities, including the conduct of scientific assemblies, conferences, and other fora for the presentation of research findings, methods, and mechanisms for the dissemination of non-restricted research findings and the development of educational programs, course supplements, interaction with educational programs at all grade levels, including student-focused research opportunities for conduct of research in the ISS national laboratory facilities.

7. Such other matters relating to the utilization of the ISS national laboratory facilities for research and development as the Administrator may consider appropriate.

Appendix III: The Center for the Advancement of Science in Space (CASIS) Educational Activities

CASIS Developed Educational Initiatives

National Design Challenge Pilot Project – Objective: A national education campaign that provides educators and their students the opportunity to design and implement an authentic research experiment on the International Space Station (ISS). Status: Houston, Tex. – Six educators and 220 students completed experiments to fly to the ISS on Orb-3 in October 2014. Denver, Colo. – Three schools are currently developing experiments that will be sent to the ISS in spring of 2015. The pilot includes 105 middle and high school students.

Objective: Brings middle and high school students to the Kennedy Space Center Visitor Complex and the Space Life Science Lab to interact with an astronaut and research scientist and to send their experiment to the ISS. Status: Six CASIS Academy Live events have been held at the Space Life Sciences Lab and the Kennedy Space Center for 390 Central Florida middle and high school students.

Objective: Created to educate middle school students about the ISS. Status: There have been nearly 15,000 total views of the CASIS Academy student website, with a monthly average of 2,492 views. The educators' webpage has a total of 1,530 views, averaging 255 monthly.

Objective: Work with volunteers across the nation who communicate the CASIS mission and information about recent research conducted on board the ISS National Laboratory. Volunteers serve as pilot testers and focus groups and provide local training on CASIS education programs. Status: The program brought awareness of the ISS and CASIS science, technology, engineering, and mathematics (STEM) activities to 450 educators and students at various workshops and presentations.

Objective: Partnership with the Professional Golfers' Association of America Center for Golf Learning and Performance, Cobra Puma Golf, and St. Lucie County Schools to bring together science and golf by offering a 5-day golf summer camp to underprivileged middle school students, teaching them math and physics. Status: Sixty-three middle school students participated in the Professional Golfers' Association of America STEM Camp in summer 2014.

CASIS Educational Partnership Programs

Zero Robotics Middle School Program – Objective: Five-week summer program for middle school students to work in teams with program staff, mentors, and scientists to learn about programming, robotics, and space engineering while getting hands-on experience working with and programming Synchronized Position Hold, Engage, Reorient, Experimental Satellites. Status: There were 550 students and 110 teachers from 9 different states who participated in the program in summer 2014.
Objective: Offers students the ability to participate in near real-time life science research onboard the ISS to study foraging ant behavior. Status: The ant experiment was flown to the ISS in December of 2013. A total of 8,814 students in 32 states participated in the program in fiscal year 2014.

Story Time From Space – Objective: Videotapes of astronauts reading selected stories from the ISS. Status: The videotapes were downloaded in January 2014. Fundraising efforts continue for Phase 2 in parallel with the development of the demonstration kit of materials that will complement the science content in the books. There have been 6,500 students and educators participating in the program in 2014.

Objective: Students engage in the experiment design and proposal writing process that culminates in flying an experiment on the ISS. Status: CASIS is a national sponsor of Missions 5 and 6 in fiscal year 2014. This represents more than 8,000 students actively engaged in authentic research experiences. CASIS presented to 400 of these students and their parents at the Student Spaceflight Experiments Program national conference in Washington, D.C.

Objective: A 4-week online program designed to take participants on a simulated mission to the ISS as “virtual astronauts.” Offered to middle and high school students and to children and adults outside of the school system. Status: A total of 25 students and 25 educators participated in the prototype version of the Space Station Academy in July 2014.

Objective: The High School Students United with NASA to Create Hardware program is a partnership between high schools and NASA in which students design, build, and implement an experiment in microgravity. Status: The experiment is being developed by a team of students at Lakewood High School in Colorado.

Objective: CASIS entered into a partnership with National Geographic Learning/Cengage to help develop an online interactive science program for grades K-6.

Appendix IV: Assessment of the Center for the Advancement of Science in Space (CASIS) FY 2014 Metrics Against GAO's Key Attributes of Successful Performance Measures

Performance Metric

(18) Number of total flight projects manifested as a result of solicited proposals or investments
(20) Describe intended impacts/outcomes of ISS NL research and development to life on Earth
(21) Report scientific or technological breakthroughs related to use of the ISS NL
(22) Report transformational/translational science
(23) Report projects or activities contributing to national scientific, educational, or technology initiatives
(26) Report new initiatives to solicit interest in/engagement with CASIS toward broader utilization of the ISS
(27) Number of awards given to unsolicited proposals
(28) Dollar ($) amount given to unsolicited proposals
(29) Number and dollar ($) amount of awards by type of responding organization (other government agencies, academic, individual, commercial, other)
(30) Dollar ($) amount contributed to projects by non-CASIS sources, and their origins (including targeted giving, commercial entities, private investments)
(31) Dollar ($) amount and description of flight projects provided by other government agencies
(32) Describe actual impacts of ISS NL research and development to life on Earth (specific examples, as they occur)

Appendix V: Comments from the National Aeronautics and Space Administration

Appendix VI: Comments from the Center for the Advancement of Science in Space

Appendix VII: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Shelby S. Oakley, Assistant Director; Richard A. Cederholm; Virginia Chanley; Maria Durant; Laura Greifner; Ralph Roffo; Sylvia Schatz; and Roxanna T. Sun made key contributions to this report.
The U.S. has spent almost $43 billion to develop, assemble, and operate the ISS over the past two decades. The NASA Authorization Act of 2010 required NASA to enter into a cooperative agreement with a not-for-profit entity to manage the ISS National Laboratory, and NASA did so with CASIS in 2011. CASIS is charged with maximizing use of the ISS for scientific research by executing several required activities. Recently, questions have arisen about the progress being made to implement the required activities and the impact this progress has had on the ISS's return on investment. GAO was asked to report on the progress of CASIS's management of the ISS National Laboratory. GAO assessed the extent to which (1) CASIS has implemented the required management activities, and (2) NASA and CASIS measure and assess CASIS's performance. To perform this work, GAO reviewed the cooperative agreement between NASA and CASIS, CASIS's annual program plans, and other documentation, and interviewed ISS, CASIS, and NASA officials.

The Center for the Advancement of Science in Space (CASIS), manager of the International Space Station (ISS) National Laboratory, has taken steps to fulfill its management responsibilities contained in its cooperative agreement with the National Aeronautics and Space Administration (NASA), and has initiated the activities required by the NASA Authorization Act of 2010. GAO found that CASIS implemented procedures for prioritizing research; evaluated 206 proposals and awarded approximately $20 million in grants to 77 research projects through January 2015; and cultivated relationships with academic institutions, research-specific organizations, and other entities. CASIS, however, has not been able to fulfill its responsibility in the cooperative agreement to interact with the ISS National Laboratory Advisory Committee, which NASA was statutorily required to establish under the Federal Advisory Committee Act, because NASA has yet to staff the committee as required by the NASA Authorization Act of 2008. As a result, CASIS is not able to fulfill its responsibility in the cooperative agreement that requires it to coordinate with this committee and review any report or recommendations it originates.

CASIS has established fiscal year 2015 metrics that meet most of GAO's key attributes for successful performance measures; however, NASA and CASIS did not establish measurable targets for these performance metrics, and NASA's annual assessment of CASIS was not documented. GAO's work on best practices for measuring program performance has found that performance metrics should have quantifiable targets to help assess whether goals and objectives were achieved by easily comparing projected performance and actual results. CASIS officials told GAO in July 2014 that setting measurable targets would be arbitrary because CASIS processes and metrics are still evolving. In January 2015, however, the Chairman of the CASIS Board of Directors told GAO that setting measurable targets is a priority for the board. CASIS, however, has yet to establish a date by which measurable targets will be developed. Using the established metrics, NASA is required by the cooperative agreement to perform an annual program review of CASIS's performance. This review is informal and not documented, as ISS program officials provide the results to CASIS orally.
This approach is inconsistent with federal internal control standards, which call for information to be recorded and communicated to those who need it to manage programs, including monitoring performance and supporting future decision making. Although NASA officials reported that they were generally satisfied with CASIS's performance, CASIS officials said a formal summary of the results would make the information more actionable.
GAO_HEHS-98-6
Background The four social service programs included in our review—child care, child welfare services, child support enforcement, and the Temporary Assistance for Needy Families (TANF) block grant—provide a broad range of services and benefits for children and families. While each program is administered by HHS’ Administration for Children and Families, primary responsibility for operating these programs rests with state governments. Within many states, local governments operate social service programs with considerable autonomy. The major goals, services, and federal funding for the four programs are described below. Child Care Federally funded child care services consist primarily of subsidized care for children of low-income families while their parents are working, seeking work, or attending training or education. Other subsidized child care activities include providing information, referrals, and counseling to help families locate and select child care programs and training for child care providers. State child care agencies can provide child care directly, arrange for care with providers through contracts or vouchers, provide cash or vouchers in advance to families, reimburse families, or use other arrangements. Two settings for which states pay for care are family day care, under which care is provided for a small group of children in the caregiver’s home, and center care, under which establishments care for a group of children in a nonresidential setting, such as nonprofit centers sponsored by schools or religious organizations and for-profit centers that may be independent or members of a chain. The primary federal child care subsidy program is the Child Care Development Block Grant (CCDBG). In fiscal year 1996, about $2 billion was distributed to states to assist low-income families obtain child care so they could work or attend training or education. Under CCDBG, states are not required to provide state funds to match federal funding. Child Welfare Services Child welfare services aim to (1) improve the conditions of children and their families and (2) improve—or provide substitutes for—functions that parents have difficulty performing. 
Whether administered by a state or county government, the child welfare system is generally composed of the following service components: child protective services that entail responding to and investigating reports of child abuse and neglect, identifying services for the family, and determining whether to remove a child from the family’s home; family preservation and family support services that are designed to strengthen and support families who are at risk of abusing or neglecting their children or losing their children to foster care and that include family counseling, respite care for parents and caregivers, and services to improve parenting skills and support child development; foster care services that provide food and housing to meet the physical needs of children who are removed from their homes and placed with a foster family or in a group home or residential care facility until their family can be reunited, the child is adopted, or some other permanent placement is arranged; adoption services that include recruiting potential adoptive parents, placing children in adoptive homes, providing financial assistance to adoptive parents to assist in the support of special needs children, and initiating proceedings to relinquish or terminate parental rights for the care and custody of their children; and independent living services that are activities for older foster children—generally age 16 and older—to help them make the transition from foster care to living independently. Almost all states are also operating or developing an automated foster care and adoption data collection system. Federal funding for child welfare services totaled about $4 billion in fiscal year 1996. Nearly 75 percent of these funds were for foster care services. Depending on the source, the federal match of states’ program costs can range from 50 to 78 percent. Child Support Enforcement The child support enforcement program enforces parental child support obligations by locating noncustodial parents, establishing paternity and child support orders, and collecting support payments. These services, established under title IV-D of the Social Security Act, are available to both welfare and nonwelfare families. In addition, states are operating or developing automated management information systems to help locate noncustodial parents and monitor child support cases. The federal government pays two-thirds of the states’ costs to administer the child support enforcement program. The states can also receive incentive funds based on the cost-effectiveness of child support enforcement agencies in making collections. In 1996, federal funding for program administration and incentives totaled almost $3 billion. TANF Block Grant The Personal Responsibility and Work Opportunity Reconciliation Act of 1996 made major changes to the nation’s welfare system. In place of AFDC and the Job Opportunities and Basic Skills Training (JOBS) programs, the 1996 law created a block grant for states, or TANF, that has more stringent requirements than AFDC for welfare parents to obtain jobs in return for their benefits. In 1996, the federal government spent about $11 billion on AFDC benefit payments, and JOBS provided almost $1 billion to help families on welfare obtain education, training, and work experience to become self-sufficient. TANF provides states flexibility in, among other things, providing assistance to needy families and promoting job preparation and work. 
Federal spending through the TANF block grant is currently funded at $16.4 billion per year. States are not required to match federal funds but must maintain specified historic levels of state spending on behalf of families eligible for TANF. Social Service Privatization Has Expanded in Recent Years The federal, state, and local governments have for decades privatized a broad range of government activities in both nonsocial and social service programs. This trend is continuing. Since 1990, more than half of the state and local governments we contacted have increased their contracting for services, as indicated by the number and type of services privatized and the percentage of social service budgets paid to private contractors. Spurred by political leaders and top program managers, states and localities privatized social services in an attempt to reduce program costs and improve services by using the technology and management flexibility they believe private contractors offer. In addition, studies we examined and federal, state, and local government officials we interviewed expect privatization to increase with the enactment of recent federal welfare legislation and anticipated managed care initiatives in child welfare. State and local officials also anticipated increased contracting for services in the child care and child support enforcement programs. Privatization Is Not a New Tool Privatization is commonly defined as any process aimed at shifting functions and responsibilities, in whole or in part, from the government to the private sector. Privatization can take various forms, including divestiture, contracting out, vouchers, and public-private partnerships. Most common is contracting, which typically entails efforts to obtain competition among private bidders to perform government activities. With contracting, the government remains the financier and is responsible for managing and setting policies on the type and quality of services to be provided. Depending on the program, government agencies can contract with other government entities—often through cooperative agreements—and with for-profit and nonprofit agencies. Using a variety of strategies, the federal, state, and local governments have for decades relied on private entities to provide a wide range of services and program activities. Programs as diverse as corrections, transportation, health services, and information resource management have been privatized to varying degrees. As all levels of government attempt to meet existing or growing workloads with fewer resources, privatization has more frequently been considered a viable means of service delivery. Child care, child welfare, child support enforcement, and welfare-to-work programs have long used contractors to provide certain services. For example, most states and local governments have relied on an existing network of private day care centers to provide certain child care services. Foster care services in child welfare have also traditionally been provided by private providers. Finally, state and local governments have also generally relied on contractors to provide certain automated data processing and related support activities. State and Local Governments Increase Social Service Privatization In addition to state and local governments’ past use of contractors in social services, a national study has reported recent growth in state privatization of these programs. 
In its 1993 national study, the Council of State Governments reported that almost 80 percent of the state social service departments surveyed in the study indicated they had expanded their use of privatization of social services in the preceding 5 years. The council’s study reported that child care services and several child welfare services, such as adoption, foster care, and independent living support services, were among the services in which privatization increased the most. During our review, we found that privatization of social services has generally continued to expand, despite certain challenges confronting state and local governments seeking to privatize services, as discussed below. Representatives of several national associations told us that state and local social service privatization has increased throughout the country in the last several years, as indicated by the percentage of state and local social service budgets paid to contractors. Among the state and local governments we contacted, most officials said the percentage of program budgets paid to contractors has increased since 1990. While the percentage of funds paid to private contractors has generally increased in the states and programs we selected, we found that the proportion of state and local social service budgets paid to private contractors varies widely among the programs we reviewed. According to local program officials, for example, the Los Angeles County child support enforcement program spent less than 5 percent of its $100 million program budget on contracted services in 1996. In comparison, program officials said the child care component of San Francisco’s Greater Avenues for Independence (GAIN) program spent all its program funds, or $2.1 million, on privatized services in 1996. State and local government officials we interviewed generally said that, in addition to the increased and varied portion of program budgets spent on privatized services, the number of functions performed by private contractors has increased since 1990. In Virginia, for example, officials said that the state has recently begun to contract out case management and assessment functions in its welfare-to-work program, a function previously performed by government employees. State and local governments have also recently begun to privatize a broad array of child support enforcement services. While it is not uncommon for states to contract out certain child support enforcement activities, in 1996 we reported that 15 states had begun to privatize all the activities of selected child support enforcement offices in an effort to improve performance and handle growing caseloads. For most of the state and local governments we interviewed, privatized social services are now provided by nonprofit organizations, especially in child welfare. However, most of the state and local officials we contacted indicated that they also contract with for-profit organizations to deliver social services. The state and local officials we interviewed told us that among their programs the proportion of the budget for private contractors that is spent on for-profit organizations varied, ranging from as low as zero for child welfare to as high as 100 percent for child support enforcement. Within each program, the proportion of funds paid to for-profit organizations has remained about the same since 1990. States and Localities Privatize for Various Reasons A variety of reasons have prompted states and localities to contract out social services. 
The growth in privatization has most often been prompted by strong support from top government officials, an increasing demand for public services, and the belief that private contractors are able to provide higher-quality services more cost-effectively because of their management flexibility. In addition, state and local governments have chosen to contract out to compensate for the lack of government expertise in certain service areas, such as in the development of automated information systems. The following examples highlight common privatization scenarios: Several local child support offices in Virginia each contracted with a for-profit organization to provide a full range of program services such as locating absent parents, establishing paternity and support orders, and collecting support payments. The local offices undertook these contracts to improve program effectiveness and efficiency. Some California counties privatized job training and placement services in their GAIN program as a way to meet new state-legislated program requirements or avoid hiring additional government employees. Some state and local governments have expanded already privatized services in programs such as child care to respond to a greater public demand for services. Texas contracts to provide food stamp and other benefits electronically to use the technical expertise of private providers. Privatization Expansion Is Expected to Continue State and local government officials and other experts told us they expect the growth of privatization to continue. Increasingly, future trends in privatization may incorporate additional functions traditionally performed by state and local governments. For example, as a result of the recent welfare legislation, state and local governments now have greater flexibility in deciding how welfare programs will be administered, including an expanded authority that allows them to use private contractors to determine eligibility, an activity that has traditionally been conducted by government employees. Additionally, the Congress has shown greater interest in broadening the range of government activities that could be privatized in other social service programs. Such activities include eligibility and enrollment determination functions in the Medicaid and Food Stamps programs. The Clinton administration has opposed these proposals to expand privatization, stating that the certification of eligibility for benefits and related operations, such as verification of income and other eligibility factors, should remain public functions. In addition to the changes anticipated from the welfare legislation and more recent legislative proposals, state and local officials anticipate that privatization will continue to increase in the three other social service programs we examined. In child welfare services, according to a 1997 Child Welfare League of America survey, 31 states are planning or implementing certain management functions or use of managed care approaches to apply some combination of managed care principles—currently used in physical and behavioral health services—in the management, financing, and delivery of child welfare services. These principles include contracting to meet all the needs of a specific group of clients for a set fee rather than being paid for each service they provide. Also, in child care programs, states are increasingly privatizing the management of their voucher systems. 
In these cases, contractors manage the system that provides vouchers or cash certificates to families who purchase child care services from authorized providers. Finally, in child support enforcement, state program officials expect that more states will begin to contract out the full range of child support services. Privatized Social Services Decreased in Certain Locations In two California counties we contacted, county officials, after initially contracting out for certain services, decided to discontinue the practice and now have those services performed by county employees. Los Angeles County, for example, had contracted with a for-profit organization to perform the case management function in its GAIN program; however, following a change in the composition of the county’s board of supervisors, the board opposed privatizing these functions. Program officials did not renew the contract. In San Bernardino County’s GAIN program, a portion of the job search services was initially contracted out because the county did not itself have the capacity to provide all such services when the program was first implemented. Once the county hired and trained the necessary public workers, the contractor’s services were no longer needed and the contract was terminated. In both these cases, local program officials were satisfied with the contractors’ performance. Competition, Contract Development, and Monitoring Issues Could Undermine Privatization Goals Federal, state, and local government officials, union representatives, national associations, advocacy groups, contractors, and other experts in social service privatization identified several challenges that state and local governments most often encountered when they privatized social services. These challenges include obtaining a sufficient number of qualified bidders, developing sufficiently detailed contract specifications, and implementing effective methods of monitoring contractor performance. The challenges may make it difficult for state and local governments to reduce program costs and improve services. State and local government officials we contacted reported mixed results from their past and present efforts to privatize social services. However, few empirical studies compare the program costs and quality of publicly and privately provided services, and the few studies that do make such comparisons report mixed results overall. Competitive Market for Social Services Is Sometimes Insufficient Competition has long been held as a principle central to the efficient and effective working of businesses in a free-market economy. In a competitive market, multiple parties attempt to secure the business of a customer by offering the most favorable terms. Competition in relation to government activities can occur when private sector organizations compete among themselves or public sector organizations compete with the private sector to conduct public sector business. In either case, competition for government business attempts to bring the same advantages of a competitive market economy—lower prices and higher-quality goods or services—to the public sector. Competitive markets can help governments reduce program costs and improve service quality. In many cases, the benefits from competition have been established for nonsocial service programs, such as trash collection, traffic enforcement, and other functions intended to maintain or improve a government’s infrastructure. 
State and local governments that have contracted out public works programs competitively have documented cost savings, improved service delivery, or gained customer satisfaction. By contracting out, for example, the city of Indianapolis has already accrued cost savings and estimated that it would save a total of $65 million, or 42 percent, in its wastewater treatment operations between 1994 and 1998. The city also reported that the quality of the water it treated improved. In addition, New York State estimated that it saved $3 million annually by contracting out certain economic development and housing loan functions. However, not all experts agree on whether it is possible to achieve the same results with privatization of social service programs. Some experts believe that competition among social service providers can indeed reduce program costs and improve services for children and families since, in their view, private firms inherently deliver higher-quality services at lower costs than public firms. In contrast, other experts hold that social services are significantly different from services such as trash collection or grounds maintenance—so different, in fact, that one cannot assume that competition will be sufficient to increase effectiveness or reduce costs. Several factors make it difficult to establish and maintain competitive markets with contractors that can respond to the diverse and challenging needs of children and families. These factors include the lack of a large number of social service providers with sufficiently skilled labor, the high cost of entry into the social services field, and the need for continuity of care, particularly in services involving residential placement or long-term therapy. Some experts believe that these constraints reduce the likelihood of achieving the benefits anticipated from social service privatization. Appendix II contains a more detailed comparison of characteristics associated with privatizing social services and nonsocial services.

Many state and local program officials we contacted reported that they were satisfied with the number of qualified bidders in their state or locality. However, some of these officials expressed concern about the insufficient number of qualified bidders, especially in rural areas and when the contracted service calls for higher-skilled labor. For example, in certain less-urban locations, officials found only one or two contractors with the requisite skills and expertise to provide needed services. In Wisconsin, some county child welfare officials told us that their less-populous locations made them dependent on a single off-site contractor to provide needed services. As a result, program officials believed, the contractor was less responsive to local service needs than locally based public providers usually are. Similarly, officials in Virginia's welfare-to-work program said rural areas of the state have less-competitive markets for services, thereby minimizing benefits from contracting by raising contractor costs to levels higher than they would be in a more competitive market. State and local officials also encountered situations with few qualified bidders when they contracted for activities that required higher-skilled labor. In Texas, only one contractor bid to provide electronic benefit transfer services for recipients of cash assistance and other benefits, and the bid exceeded anticipated cost estimates. Faced with only one bidder, the state had to rebid the contract and cap the funds it was willing to pay.
Although state and local program officials reported instances of insufficient qualified bidders, we found few empirical studies of social service programs that examine the link between the level of competition and costs, or service quality, and these studies taken together were inconclusive. Given the uncertainties of the market, several state and local governments can use creative approaches to augment the competitive environment in order to reduce program costs and improve services. For example, under “managed competition” a government agency may prepare a work proposal and submit a bid to compete with private bidders. The government may award the contract to the bidding agency or to a private bidder. In Wisconsin, counties are competing against nongovernment providers to provide welfare-to-work services in the state’s Wisconsin Works program. Some state and local governments have configured their service delivery system to encourage ongoing competition between private and public providers. In some cases, a jurisdiction awards a contract to a private provider to serve part of its caseload and allows its public agency to continue to serve the rest. The competition fostered between public and private providers can lead to improved services, as in both the Orange County and San Bernardino County GAIN programs. In these counties, program officials concluded that when public agencies provide services side-by-side with private providers, both government personnel and private sector personnel were motivated to improve their performance. In Orange County, GAIN program job placements increased by 54 percent in 1995 when both the public agency and a private provider provided job placements to different groups of clients, compared with 1994, when only the public agency provided job placement services to all clients. While many state and local government officials advocate privatization, others believe that it is possible, through better management, to reduce the costs and improve the quality of services delivered by programs that government employees administer. Internal management techniques include basing performance on results, consolidating and coordinating human services, and reforming management systems. For example, the Oregon Option, a partnership between the federal government and the state, aims to, among other things, improve the delivery of social services by forging partnerships among all levels of government for the purpose of focusing on measurable results. Officials Cited Challenges in Developing and Monitoring Contracts Successful contracting requires devoting adequate attention and resources to contract development and monitoring. Even when contractors provide services, the government entity remains responsible for the use of the public resources and the quality of the services provided. Governments that privatize social services must oversee the contracts to fully protect the public interest. One of the most important, and often most difficult, tasks in privatizing government activities is writing clear contracts with specific goals against which contractors can be held accountable. Although some program officials told us that they had an ample number of staff who were experienced with these tasks, others said that they had an insufficient number of staff with the requisite skills to prepare and negotiate contracts. When contract requirements are vague, both the government and contractor are left uncertain as to what the contractor is expected to achieve. 
Monitoring for Results Is Difficult

Contract monitoring should assess the contractor's compliance with statutes, regulations, and the terms of the agreement, as well as evaluate the contractor's performance in delivering services, achieving desired program goals, and avoiding unintended negative results. In this and previous reviews of privatization efforts, we found that monitoring contractors' performance was the weakest link in the privatization process. Increasingly, governments at all levels are trying to hold agencies accountable for results, amid pressures to demonstrate improved performance while cutting costs. Privatization magnifies the importance of focusing on program results because contractor employees, unlike government employees, are not directly accountable to the public. However, monitoring the effectiveness of social service programs, whether the government or a contractor provides them, poses special challenges because program performance is often difficult to measure.

State and local governments have found it difficult to establish a framework for identifying the desired results of social service programs and to move beyond summarizing a program's activities to specifying the desired outcomes of those activities, such as improved well-being for children, families, or the community at large. For example, a caseworker can be held accountable for making a visit, following up with telephone calls, and performing other appropriate tasks; however, it is not as easy to know whether the worker's judgment was sound and the intervention ultimately effective. Without a framework for specifying program results, several state and local officials said, contracts for privatized social services tend to focus more on the day-to-day operations of the program than on service quality. For example, officials in San Francisco's child care program told us that their contracts were often written in a way that measured outputs rather than results, using specifications such as the number of clients served, the amount of payments disbursed, and the total number of hours of child care provided. In addition, monitoring efforts focused on compliance with the output numbers specified in the contracts rather than on service quality. These practices make it difficult to hold contractors accountable for achieving program results, such as providing children with a safe and nurturing environment so that they can grow and their parents can work.

Reliable and complete cost data on government activities are also needed to assess a contractor's overall performance and any realized cost savings. However, data on the costs of publicly provided services are not always adequate or available to provide a sound basis for comparing publicly and privately provided services. In some cases, preprivatization costs may not be discernible for a comparable public entity, or the number of cases available may be insufficient to compare the performance of public and privatized offices. In other cases, the privatized service may not have been provided by the public agency at all. To address many of the difficulties in monitoring contractor performance, government social service agencies are in the early stages of identifying and measuring desired results. For example, California's state child care agency is developing a desired-results evaluation system that will enable state workers to more effectively monitor the results of contractors' performance.
Many agencies may need years to develop a sound set of performance measures, since the process is iterative and contract management systems may need updating to establish clear performance standards and develop cost-effective monitoring systems. In the child support enforcement program, for example, performance measures developed jointly by HHS and the states provide the context for each state to assess the progress contractors make toward establishing paternities, obtaining support orders, and collecting support payments. Developing the agreed-upon program goals and performance measures was a 3-year process.

Views Differ on Whether Privatized Services Will Protect Recipient Rights

Some experts in social service privatization have expressed concern that contractors, especially when motivated by profit-making goals and priorities, may be less inclined to provide equal access to services for all eligible beneficiaries. These experts believe that contractors may first provide services to clients who are easiest to serve, a practice commonly referred to as "creaming," leaving the more difficult cases for the government to serve or leaving them unserved. Among the organizations we contacted—federal, state, and local governments, unions, public interest and advocacy groups, and contractors—we found differing views on whether all eligible individuals have the same access to privatized services as they had when such services were publicly provided. Generally, the federal, state, and local government officials we interviewed were as confident that contractors would grant all eligible citizens equal access to services as they were that the government would. For example, an official in Wisconsin said that after privatization of some county welfare-to-work services, she saw no decline in client access to services. In contrast, representatives of advocacy groups and unions were less confident that contractors, rather than the government, would provide equal access to services for all eligible citizens. We found no conclusive research evaluating whether privatization affects access to services.

Various groups have also raised concerns about recent changes that permit contractors to perform program activities that government employees traditionally conduct. Advocacy groups, unions, and some HHS officials expressed concern about privatizing activities that have traditionally been viewed as governmental, such as determining eligibility for program benefits or services, sanctioning beneficiaries for noncompliance with program requirements, and conducting investigations of child abuse and neglect for purposes of providing child protective services. Under federal and state requirements, certain activities in most of the programs we studied were to be performed only by government employees. Under TANF, however, contractors can determine program eligibility. Several union representatives and contractors told us they believe that certain functions, including policy-making responsibilities and eligibility determinations, which are often based on confidential information provided by the service recipient and require the judgment of the caseworker, should always be performed by government employees.

Strategies to Protect Recipient Rights Have Been Identified but Are Difficult to Implement

Officials from several of the organizations we interviewed believe that equal access to services and other recipient rights can be protected by making several practices an integral part of social service privatization.
Two contractor representatives said that carefully crafted contract language could help ensure that contracted services remain as accessible as publicly provided services. Other officials told us that remedies for dispute resolution should be provided to help beneficiaries resolve claims against contractors. Another suggested practice would require government agencies to approve contractor recommendations or decisions regarding clients in areas traditionally under government jurisdiction. In the Los Angeles County GAIN program, for example, county officials had to approve contractor recommendations to sanction certain clients for noncompliance with program requirements before those sanctions could be applied. While these options may provide certain protections, they may be difficult to implement. The limited experience of state and local governments in writing and monitoring contracts with clearly specified results could lead to difficulties in determining which clients are eligible for services and whether those clients received them. In addition, advocacy groups and unions said some remedies for dispute resolution might be difficult to implement because contractors do not always give beneficiaries the information they need to resolve their claims. Finally, others noted that any additional government review of contractor decisions can be costly and can reduce contractor flexibility.

State and Local Governments Report Mixed Results in Privatizing Social Services

While numerous experts believe that contracted social services can reduce costs and improve service quality, a limited number of studies and evaluations reveal mixed results, as illustrated by the following examples:

Our previous report on privatization of child support enforcement services found that privatized child support offices performed as well as or, in some instances, better than public programs in locating noncustodial parents, establishing paternity and support orders, and collecting support owed. The relative cost-effectiveness of the privatized versus public offices varied among the four sites examined: two privatized offices were more cost-effective, one was as cost-effective, and one was less cost-effective.

A California evaluation of two contracts in Orange County's GAIN employment and training program found that the contract for orientation services resulted in good service quality at less cost than when county employees performed the work. The other contract, for a portion of case management services, had more mixed results; the contractor did not perform as well as county staff on some measures but was comparable on others. For example, county workers placed participants in jobs at a higher rate and did so more cost-effectively than private workers, yet client satisfaction with contractor- and county-provided services was comparable.

A comparison of public and private service delivery in Milwaukee County, Wisconsin, found that the cost of foster care services was higher when provided by private agencies than when provided by county staff. Further, the private agencies did not improve the quality of services as measured by the time it took to place a child in a permanent home or by whether the child remained in that home.

State governments have contracted to upgrade automated data systems in the child support enforcement program. Since 1980, states have spent a combined $2.6 billion on automated systems—with $2 billion of the total being federally funded.
As we reported earlier, these systems appear to have improved caseworker productivity by helping track court actions relating to paternity and support orders and the amounts of collections and distributions. According to HHS, almost $11 billion in child support payments was collected in 1995—80 percent higher than in 1990. While it is too early to judge the potential of fully operational automated systems, at least 10 states are now discovering that their new systems will cost more to operate once they have been completed. One state estimated that its new system, once operational, would cost three to five times more to run than the old system and could exceed former operating costs by as much as $7 million annually.

Potential savings from privatizing social services can be offset by various factors, such as the costs associated with contractor start-up and government monitoring. While direct costs attributable to service delivery may be reduced, state and local agencies may incur additional costs for transition, contract management, and the monitoring of their privatization efforts. Despite the lack of empirical evidence, most state and local government officials told us they were satisfied with the quality of privatized services. Some officials said that efficiencies were realized as a result of contractors' expertise and management flexibility. In many cases, public agencies established collaborative relationships with private providers that helped them be more responsive to beneficiaries. Still other officials, however, said they saw no significant benefits resulting from privatization because outcomes for children and families were the same as when the government provided the service. For example, Milwaukee's privatization of foster care services had not improved the proportion of children who remained in permanent homes, a specified goal of the program.

HHS' Oversight of States and Localities May Need to Change in a New Environment

The increase in privatization, combined with the difficulties states are having in developing methods to monitor program results, raises questions about how HHS can ensure that broad program goals are achieved. It will be challenging for HHS to develop and implement approaches to help states assess the results of federally funded programs and track them over time so that state and local governments are better prepared to hold contractors accountable for the services they provide. Currently, monitoring program results poses a challenge throughout the government. Some state and local government officials we interviewed believed they should pay greater attention to program results, given the increased use of private contractors. Several officials mentioned that HHS could help the states and localities develop methods of assessing program results by clarifying program goals, providing more responsive technical assistance, and sharing best practices. The fact that officials in most of the states we contacted said they currently do not have methods in place to assess program results suggests that, unless HHS provides states with this help, it will have difficulty assessing the effectiveness of social service programs nationally. HHS' current focus on compliance with statutes and regulations poses a challenge in monitoring the effectiveness of state programs and in identifying the effects of privatization on these programs. HHS carries out its oversight function largely through audits conducted by the Office of the Inspector General, program staff, and other HHS auditors.
HHS officials told us that the department has focused its auditing of the states more on compliance with federal statutes and regulations than on other areas, such as the results achieved or client satisfaction. For example, HHS may conduct a compliance audit to verify that state programs spent federal money in ways that are permitted by federal regulations. The Government Performance and Results Act of 1993 may provide an impetus for HHS to place greater emphasis on monitoring the effectiveness of state programs. Under this act, federal agencies are required to develop a framework for reorienting program managers toward achieving better program results. As a federal agency, HHS must shift its focus from compliance toward developing and implementing methods to assess social service program results. However, this transition will not be easy, given the challenge that government agencies face when attempting to orient their priorities toward achieving better program results and the difficulty inherent in defining goals and measuring results for social service programs. Some agencies within HHS have made progress in including the assessment and tracking of program results within their oversight focus. For example, the Office of Child Support Enforcement has recently increased its emphasis on program results by establishing, in conjunction with the states, a strategic plan and a set of performance measures for assessing progress toward achieving national program goals. Child support enforcement auditors have also recently begun to assess the accuracy of state-reported data on program results. These initiatives may serve as models for HHS as it attempts to enhance accountability for results in social service programs supported with federal funds.

Conclusions

Our work suggests that privatization of social services has not only grown but is likely to continue to grow. Under the right conditions, contracting for social services may result in improved services and cost savings. Social service privatization is likely to work best at the state and local levels when competition is sufficient, contracts are effectively developed and monitored by government officials, and program results are assessed and tracked over time. The observed increase in social service privatization highlights the need for state and local governments to specify desired program results and monitor contracts effectively. At the same time, the federal government, through the Government Performance and Results Act of 1993, is focusing on achieving better program results. These concurrent developments should facilitate more effective privatized social services. More specifically, HHS, in responding to its Government Performance and Results Act requirements, could help states find better ways to manage contracts for results. This could, in turn, help state and local governments ensure that they are holding contractors accountable for the results they are expected to achieve, thus optimizing their gains from privatization.

Agency Comments and Our Evaluation

We provided draft copies of this report to HHS, the five states we selected for review, and other knowledgeable experts in social service privatization. HHS did not provide comments within the allotted 30-day comment period. We received comments from California, Texas, and Virginia. These states generally concurred with our findings and conclusions.
Specifically, officials from Texas and Virginia agreed that developing clear performance measures and monitoring contractor performance present special challenges requiring greater priority and improvement. These states also support a stronger federal-state partnership to help them address these special challenges. Comments received from other acknowledged experts in social service privatization also concurred with the report and cited the need to increase competition, develop effective contracts, and monitor contractor performance, thereby increasing the likelihood that state and local governments would achieve the results sought through social service privatization. The comments we received did not require substantive or technical changes to the report.

As agreed with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies of this report to the Secretary of HHS and HHS' Assistant Secretary for Children and Families. We will also make copies available to others on request. If you or your staff have any questions about this report, please contact Kay E. Brown, Assistant Director, or Mark E. Ward, Senior Evaluator, at (202) 512-7215. Other major contributors to this report are Gregory Curtis, Joel I. Grossman, Karen E. Lyons, and Sylvia L. Shanks.

Scope and Methodology

To meet the objectives of this study, we conducted a literature review and synthesis of articles, studies, and other documents on social service privatization selected from economic, social science, and business bibliographic files. We also considered articles and studies recommended by other organizations. As a result of these efforts, we selected 14 articles or studies on social service privatization in the United States; these articles are listed in the bibliography. We chose the four programs included in our study because they constitute an increasingly important component of the nation's welfare system in terms of both the diversity of services they provide and the magnitude of federal funding used to support state program administration. To select states for study, we reviewed GAO reports and other studies of privatization and decided to interview state and local government officials in California, Massachusetts, Texas, Virginia, and Wisconsin regarding their respective child care, child welfare, child support enforcement, and family assistance programs supported by TANF. We selected these states to learn how state and local governments have implemented privatized services in the four social service programs included in our review, because we were aware that they had some experience in the privatization of social services, and because they allowed us to examine a mix of state- and county-administered social service programs. To broaden our coverage of the diverse views on privatization, we also interviewed officials of HHS, national associations and advocacy groups, unions, and contractors. During our interviews, we obtained and reviewed agency documents.
For our interviews, we used semistructured guides containing both closed- and open-ended questions to obtain information on the extent of recent social service privatization, the types of program functions being privatized, the issues leading to the decision to privatize, issues in implementing social service privatization, the degree and type of monitoring and evaluation conducted, and the federal policy implications stemming from social service privatization. We conducted 36 interviews in total concerning the four social service programs we studied. In conducting our interviews, we asked the interviewees to respond from the perspective that seemed to us most consistent with their knowledge base and area of primary interest. For example, we asked state program officials to respond from the perspective of their entire state, whereas we asked local officials to base their responses solely on their experiences in their own locality. Similarly, we asked officials in HHS, national associations and advocacy groups, unions, and contractors to provide a national perspective on key issues surrounding privatization in each of the four social service programs. The interview responses that we report on reflect the views of only the officials we interviewed. The following list shows the federal, state, and local government, union, advocacy group, national association, and contractor contacts we made. The number of interviews conducted with representatives of each organization appears in parentheses.

Federal Government
Department of Health and Human Services, Administration for Children and Families (6)

State and Local Governments

California
Department of Education (1)
Department of Social Services (3)
Department of Public Social Services, Employment Program Bureau (1); District Attorney's Office, Bureau of Family Support Operations (1)
Jobs and Employment Services Department (1)
San Francisco City and County Department of Human Services, Employment and Training Services (1); Department of Human Services, Family and Children's Services Division (1)
Social Service Agency, Family and Children Services Division (1)

Massachusetts
Department of Social Services (1)
Department of Transitional Assistance (1)

Virginia
Department of Social Services (2)

Texas
Department of Human Services (1)
Department of Protective and Regulatory Services (1)
State Attorney General's Office (1)

Wisconsin
Department of Health and Family Services (1)
Department of Workforce Development (1)
Department of Human Services (1)
Department of Child Services (1)

Unions (1)

Advocacy Groups
American Public Welfare Association (1)
Center for Law and Social Policy (1)
Child Welfare League of America (1)

National Associations
National Association of Counties (1)
National Conference of State Legislatures (1)
National Governors Association (1)

Contractors
Maximus, Government Operations Division (1)
Lockheed Martin IMS (1)

We conducted our study between October 1996 and July 1997 in accordance with generally accepted government auditing standards.

Characteristics Associated With Privatization of Nonsocial and Social Services

Appendix II presents a table comparing characteristics associated with privatizing nonsocial and social services. For social services, performance is difficult to measure because most services cannot be judged on the basis of client outcomes; treatment approaches cannot be standardized, nor can the appropriateness of workers' decisions be effectively assessed.

Bibliography
Chi, K.S. "Privatization in State Government: Trends and Options." Prepared for the 55th National Training Conference of the American Society for Public Administration, Kansas City, Missouri, July 23-27, 1994.
Donahue, J.D. "Organizational Form and Function." The Privatization Decision: Public Ends, Private Means. New York: Basic Books, 1989. Pp. 37-56.
Drucker, P.F. "The Sickness of Government." The Age of Discontinuity: Guidelines to Changing Our Society. New York: Harper and Row, 1969. Pp. 212-42.
Eggers, W.D., and R. Ng. Social and Health Service Privatization: A Survey of County and State Governments, Policy Study 168. Los Angeles, Calif.: Reason Foundation, Oct. 1993. Pp. 1-18.
Gronbjerg, K.A., T.H. Chen, and M.A. Stagner. "Child Welfare Contracting: Market Forces and Leverage." Social Service Review (Dec. 1995), pp. 583-613.
Leaman, L.M., and others. Evaluation of Contracts to Privatize GAIN Services. County of Orange, Social Services Agency, December 1995.
Matusiewicz, D.E. "Privatizing Child Support Enforcement in El Paso County." Commentator, Vol. 6, No. 32 (Sept.-Oct. 1995), p. 16.
Miranda, R. "Privatization and the Budget-Maximizing Bureaucrat." Public Productivity and Management Review, Vol. 17, No. 4 (Summer 1994), pp. 355-69.
Nelson, J.I. "Social Welfare and the Market Economy." Social Science Quarterly, Vol. 73, No. 4 (Dec. 1992), pp. 815-28.
O'Looney, J. "Beyond Privatization and Service Integration: Organizational Models for Service Delivery." Social Service Review (Dec. 1993), pp. 501-34.
Smith, S.R., and M. Lipsky. "Privatization of Human Services: A Critique." Nonprofits for Hire: The Welfare State in the Age of Contracting. Cambridge, Mass.: Harvard University Press, 1994. Pp. 188-205.
Smith, S.R., and D.A. Stone. "The Unexpected Consequences of Privatization." Remaking the Welfare State: Retrenchment and Social Policy in America and Europe, Michael K. Brown (ed.). Philadelphia, Pa.: Temple University Press, 1988. Pp. 232-52.
VanCleave, R.W. "Privatization: A Partner in the Integrated Process." Commentator, Vol. 6, No. 32 (Sept.-Oct. 1995), pp. 14-17.
Weld, W.F., and others. An Action Agenda to Redesign State Government. Washington, D.C.: National Governors' Association, 1993. Pp. 42-63.

Related GAO Products

The Results Act: Observations on the Department of Health and Human Services' April 1997 Draft Strategic Plan (GAO/HEHS-97-173R, July 11, 1997).
Child Support Enforcement: Strong Leadership Required to Maximize Benefits of Automated Systems (GAO/AIMD-97-72, June 30, 1997).
Privatization and Competition: Comments on S. 314, the Freedom From Government Competition Act (GAO/T-GGD-97-134, June 18, 1997).
The Government Performance and Results Act: 1997 Governmentwide Implementation Will Be Uneven (GAO/GGD-97-109, June 2, 1997).
Managing for Results: Analytic Challenges in Measuring Performance (GAO/HEHS/GGD-97-138, May 30, 1997).
Welfare Reform: Three States' Approaches Show Promise of Increasing Work Participation (GAO/HEHS-97-80, May 30, 1997).
Welfare Reform: Implications of Increased Work Participation for Child Care (GAO/HEHS-97-75, May 29, 1997).
Foster Care: State Efforts to Improve the Permanency Planning Process Show Some Promise (GAO/HEHS-97-73, May 7, 1997).
Privatization: Lessons Learned by State and Local Governments (GAO/GGD-97-48, Mar. 14, 1997).
Child Welfare: States' Progress in Implementing Family Preservation and Support Activities (GAO/HEHS-97-34, Feb. 18, 1997).
Child Support Enforcement: Early Results on Comparability of Privatized and Public Offices (GAO/HEHS-97-4, Dec. 16, 1996).
Child Support Enforcement: Reorienting Management Toward Achieving Better Program Results (GAO/HEHS/GGD-97-14, Oct. 25, 1996).
Executive Guide: Effectively Implementing the Government Performance and Results Act (GAO/GGD-96-118, June 1996).
District of Columbia: City and State Privatization Initiatives and Impediments (GAO/GGD-95-194, June 28, 1995).
Pursuant to a congressional request, GAO examined issues related to social service privatization, focusing on the: (1) recent history of state and local government efforts to privatize federally funded social services; (2) key issues surrounding state and local privatized social services; and (3) federal policy implications of state and local social service privatization. GAO found that: (1) since 1990, more than half of the state and local governments GAO contacted have increased their contracting for services, as indicated by the number and type of services privatized and the percentage of social service budgets paid to private contractors; (2) many experts GAO consulted expect privatization to expand further; (3) GAO's research found that the recent increases in privatization were most often prompted by political leaders and top program managers, who were responding to an increasing demand for public services and a belief that contractors can provide higher-quality services more cost-effectively than can public agencies; (4) in attempts to provide more cost-effective services, more states are contracting out larger portions of their child support enforcement programs; (5) state and local governments are turning to contractors to provide some services and support activities in which they lack experience or technical expertise; (6) state and local governments face several key challenges as they plan and implement strategies to privatize their social services; (7) first is the challenge to obtain sufficient competition to realize the benefits of privatization; (8) second, state and local governments often have little experience in developing contracts that specify program results in sufficient detail to effectively hold contractors accountable; (9) third, it can be difficult for states to monitor performance in some social service programs; (10) increased privatization raises questions about how the Department of Health and Human Services (HHS) will fulfill its obligation to ensure that broad program goals are achieved; (11) assessing program results presents a significant challenge throughout the government, yet it is an important component of an effective system for holding service providers accountable; (12) the difficulties the states have in monitoring privatized social services focus attention on the need to improve accountability for results; (13) some of the state and local officials GAO interviewed believe that HHS should clarify its program goals and develop performance measures states can use to monitor and evaluate contractor efforts; (14) the Government Performance and Results Act of 1993 requires federal agencies like HHS to focus their efforts on achieving better program results; (15) HHS' practice of holding states accountable primarily for compliance with statutes and regulations may make the transition particularly difficult; and (16) however, promising approaches are available within HHS in moving to a program results orientation.
GAO-05-572T
Background

Since the 1940s, VA has provided vocational rehabilitation assistance to veterans with service-connected disabilities to help them find meaningful work and achieve maximum independence in daily living. In 1980, the Congress enacted the Veterans' Rehabilitation and Education Amendments, which changed the focus of VA's vocational rehabilitation program from providing primarily training aimed at improving the employability of disabled veterans to helping them find and maintain suitable jobs. VA estimates that in fiscal year 2004 it spent more than $670 million on its VR&E program to serve about 73,000 participants. This amount represents about 2 percent of VA's $37 billion budget for nonmedical benefits, most of which involves cash compensation for service-connected disabilities. VR&E services include vocational counseling, evaluation, and training, which can include payment for tuition and other education expenses, as well as job placement assistance. Interested veterans generally apply for VR&E services after they have applied and qualified for disability compensation based on a rating of their service-connected disability. This disability rating—ranging from 0 to 100 percent in 10 percent increments—entitles veterans to monthly cash payments based on their average loss in earning capacity resulting from a service-connected injury or combination of injuries. To be entitled to VR&E services, veterans with disabilities generally must have a 20 percent disability rating and an employment handicap as determined by a vocational rehabilitation counselor. Although cash compensation is not available to servicemembers until after they separate from the military, they can receive VR&E services prior to separation under certain circumstances. To make these services available prior to discharge, VA expedites the determination of eligibility for VR&E by granting a preliminary rating, known as a memorandum rating.

Implementing Task Force Recommendations Should Improve VR&E Services

We generally agree with the Task Force's key findings, which broadly address three areas of VR&E's operations. (See table 1.) First, the Task Force found that VR&E has not been a priority in terms of returning veterans with service-connected disabilities to the workforce. Between 1984 and 1998, we issued three reports, all of which found that the VR&E program had not emphasized its mandate to find jobs for disabled veterans. In 1992, we found that over 90 percent of eligible veterans went directly into education programs, while less than 3 percent went into the employment services phase. We also found that VA placed few veterans in suitable jobs. We reported in 1996 that VA rehabilitated less than 10 percent of veterans found eligible for vocational rehabilitation services and recommended switching the focus to obtaining suitable employment for disabled veterans. VA program officials told us that staff focused on providing training services because, among other reasons, they were not prepared to provide employment services and lacked adequate training and expertise in job placement. Years later, the Task Force similarly reported that top VR&E management had not demonstrated a commitment to providing employment services and lacked the staffing and skill resources at the regional offices to provide these services. The Task Force also found that VR&E has a limited capacity to manage its growing workload.
The Task Force had concerns about, among other things, VR&E's organizational, program, and fiscal accountability; workforce and workload management; information and systems technology; and performance measures. In our report on the Task Force, we stated that, although we have not specifically reviewed VR&E's capacity to manage its workload, we agree that many of the VR&E management systems identified by the Task Force as needing improvement are fundamental to the proper functioning of federal programs, regardless of workload. In addition, the Task Force found that the VR&E system must be redesigned for the 21st century employment environment. The Task Force reported that the VR&E program does not reflect the dynamic nature of the economic environment and constant changes in the labor market. The report suggested that, as a result, only about 10 percent of veterans participating in the VR&E program had obtained employment. We agree with the Task Force finding that the VR&E system needs to be modernized. Our high-risk report emphasized that the outmoded criteria used to establish eligibility need to be updated.

The Task Force made 105 recommendations, which we grouped into six categories. (See table 2.) The first category of recommendations was directed at streamlining VR&E program eligibility and entitlement for veterans in the most critical need, including (1) servicemembers who have been medically discharged or are pending medical discharge; (2) veterans with a combined service-connected disability rating of 50 percent or greater; and (3) veterans receiving compensation for the loss, or loss of the use, of a limb. In our report, we commented that, among other things, VA's outmoded disability criteria raise questions about the validity of its disability decisions because medical conditions alone are generally poor predictors of work incapacity. For example, advances in prosthetics and technology for workplace accommodations can enhance work capacity by compensating for impairments. As a result, the Task Force recommendation to focus on severity of disability rather than on employability may not ensure that veterans with the most severe employment handicaps receive priority services from VR&E.

Second, the Task Force sought to replace the current VR&E process with a 5-track, employment-driven service delivery system. The five tracks are rapid access to employment for veterans with skills, self-employment, reemployment at a job held before military service, traditional vocational rehabilitation services, and, when employment is not a viable option, independent living services. We commented that the 5-track process could help VR&E focus on employment while permitting the agency to assist veterans less likely to obtain gainful employment on their own. We added, however, that the new system would require a cultural shift from the program's current emphasis on long-term education to more rapid employment. We also observed that, as long as the education benefits available under VR&E provide more financial assistance than those available through other VA educational benefits programs, eligible veterans will have strong incentives to continue to use VR&E to pursue their education goals.

Third, the Task Force recommended that VR&E expand counseling benefits to provide VR&E services to servicemembers before they are discharged and to veterans who have already transitioned out of the military.
We agreed that providing vocational and employment counseling prior to military discharge is essential to enable disabled servicemembers to access VR&E services as quickly as possible after they are discharged. In prior reports, we highlighted the importance of early intervention efforts to promote and facilitate return to the workplace. In 1996, for example, we reported research findings that rehabilitation offered as close as possible to the onset of disabling impairments has the greatest likelihood of success. In addition, receptiveness to participating in rehabilitation and job placement activities can decline after an extended absence from work.

Fourth, the Task Force made several recommendations directed at redesigning the VR&E central office to provide greater oversight of regional office operations and to increase staff and skill sets to reflect the new focus on employment. We agreed that program accountability could be enhanced through more central office oversight. We pointed out that, over the past 3 years, VA Inspector General reports had identified VR&E programs at regional offices that did not adhere to policies and procedures and sometimes circumvented accountability mechanisms, such as those for managing and monitoring veterans' cases and those requiring the development of sound plans prior to approving purchases for veterans seeking self-employment.

Fifth, the Task Force recommended that VR&E improve the capacity of its information technology systems. Many of the Task Force's recommendations in this area are consistent with GAO's governmentwide work reporting that agencies need to strengthen strategic planning and investment management in information technology. In addition, we recognized that VR&E would benefit from a more systematic analysis of its current information technology systems before making further investment in them.

Finally, the Task Force recommended that VR&E strengthen coordination within VA between VR&E and the Veterans Health Administration, and between VR&E and the Departments of Defense (DOD) and Labor. Improving coordination with agencies that have a role in helping disabled veterans make the transition to civilian employment should help these agencies use federal resources more efficiently to enhance the employment prospects of disabled veterans.

VA Continues to Face Significant Challenges in Improving Its VR&E Program

While VR&E responds to the Task Force recommendations, it faces immediate challenges associated with providing vocational rehabilitation and employment services to injured servicemembers returning from Afghanistan and Iraq. As we reported in January 2005, VR&E is challenged by the need to provide services on an early intervention basis, that is, expedited assistance provided on a high-priority basis. VR&E also lacks the information technology systems needed to manage the provision of services to these servicemembers and to veterans. In addition, VR&E is only now beginning to use results-based criteria for measuring its success in helping veterans achieve sustained employment.

VR&E Challenged to Provide Services as Early as Possible

VR&E faces significant challenges in expediting services to disabled servicemembers. An inherent challenge is that individual differences and uncertainties in the recovery process make it difficult to determine when a seriously injured servicemember will be able to consider VR&E services.
Additionally, as we reported in January 2005, because VA is conducting outreach to servicemembers whose discharge from military service is not yet certain, VA is challenged by DOD's concerns that VA's outreach about benefits, including early intervention with VR&E services, could adversely affect the military's retention goals. Finally, VA is currently challenged by a lack of access to DOD data that would, at a minimum, allow the agency to readily identify and locate all seriously injured servicemembers. VA officials we interviewed both in the regional offices and at central office reported that this information would provide them with a more reliable way to identify and monitor the progress of those servicemembers with serious injuries. However, DOD officials cited privacy concerns about the type of information VA had requested.

Our January 2005 report found that VR&E could enhance employment outcomes for disabled servicemembers, especially if services could be provided early in the recovery process. Unlike in previous conflicts, a greater proportion of servicemembers injured in Afghanistan and Iraq are surviving their injuries—due, in part, to advanced protective equipment and in-theater medical treatment. Consequently, VR&E has a greater opportunity to assist servicemembers in overcoming their impairments. While medical and technological advances are making it possible for some of these disabled servicemembers to return to military occupations, others will transition to veteran status and seek employment in the civilian economy. According to DOD officials, once stabilized and discharged from the hospital, servicemembers usually relocate to be closer to their homes or military bases and are treated as outpatients by the closest VA or military hospital. At this point, the military generally begins to assess whether the servicemember will be able to remain in the military—a process that could take months to complete. The process could take even longer if servicemembers appeal the military's initial disability decision.

We also reported that VA had taken steps to expedite VR&E services for seriously injured servicemembers returning from Afghanistan and Iraq. Specifically, VA instructed its regional offices to make seriously injured servicemembers a high priority for all VA assistance. Because the most seriously injured servicemembers are initially treated at major military treatment facilities, VA also deployed staff to these sites to provide information on VA benefits programs, including VR&E services, to servicemembers injured in Afghanistan and Iraq. Moreover, to better ensure the identification and monitoring of all seriously injured servicemembers, VA initiated a memorandum of agreement proposing that DOD systematically provide information on those servicemembers, including their names, locations, and medical conditions. Pending an agreement, VA instructed its regional offices to establish local liaison with military medical treatment facilities in their areas to learn who the seriously injured are, where they are located, and the severity of their injuries. Reliance on local relationships, however, has resulted in information of varying completeness and reliability. In addition, we found that VA had no policy for VR&E staff to maintain contact with seriously injured servicemembers who had not initially applied for VR&E services.
Nevertheless, some regional offices reported efforts to maintain contact with these servicemembers, noting that some who are not initially ready to consider employment when contacted about VR&E services may be receptive at a future time. To improve VA's efforts to expedite VR&E services, we recommended that VA and DOD collaborate to reach an agreement for VA to have access to information that both agencies agree is needed to promote servicemembers' recovery to work. We also recommended that the Secretary of Veterans Affairs direct the Under Secretary for Benefits to develop a policy and procedures for regional offices to maintain contact with seriously injured servicemembers who do not initially apply for VR&E services, in order to ensure that they have the opportunity to participate in the program when they are ready. Both VA and DOD generally concurred with our findings and recommendations.

Outmoded Information Technology Systems Pose a Challenge

GAO's governmentwide work has found that federal agencies need to strengthen strategic planning and investment management in information technology. The Task Force expressed particular concern that VR&E's information technology systems are not up to the task of producing the information and analyses needed to manage these and other activities. The Task Force pointed out that VR&E's mission-critical automated case-management system is based on a software application developed by four VA regional offices in the early 1990s and redesigned to operate in the Veterans Benefits Administration's (VBA) information technology and network environments. The Task Force identified specific concerns with the operation of VR&E's automated case management system. For example, 52 of VR&E's 138 out-based locations cannot efficiently use the automated system because of VBA's policy of limiting staff access to high-speed computer lines. As a result of this policy, many VR&E locations use dial-up modem connections, which can be unreliable and slow. The Task Force concluded that VR&E's automated system is so intertwined with the delivery of VR&E services that the lack of reliable access and timely system response has degraded staff productivity and the program's ability to provide timely services to veterans. In addition, the Task Force pointed out that the number of reports VR&E's automated case management system can generate is limited. For example, workload data available from the automated system provide only a snapshot of the veterans in the VR&E program at a given point in time. The automated system cannot link a veteran's case status with the fiscal year in which the veteran entered the program, so the performance of veterans entering the program in a given fiscal year cannot be measured over time. Also, the Task Force reported that VR&E does not have the capabilities it needs to track the number of veterans who drop out of the program or interrupt their rehabilitation plans.

VR&E Faces the Challenge of Developing Meaningful Outcome Measures

VA faces the challenge of using results-oriented criteria to measure the long-term success of the VR&E program. The Task Force recommended that VR&E develop a new outcomes-based performance measurement system to complement the proposed 5-track, employment-driven service delivery system. Currently, VR&E still identifies veterans as having been successfully rehabilitated if they maintain gainful employment for 60 days.
In its fiscal year 2004 performance and accountability report, VR&E included four employment-based performance measures: the percentage of participants employed during the first quarter (90 days) after leaving the program, the percentage still employed after the third quarter (270 days), the percentage change in earnings from pre-application to post-program, and the average cost of placing a participant in employment. However, as of February 2005, VR&E was still in the process of developing data for these measures and had not reported results. Until VR&E is farther along in this process, it will continue to measure performance using the 60-day criterion, which may not accurately predict sustained employment over the long term. In 1993, we reported that the 60-day measure of success used by state vocational rehabilitation agencies may not be rigorous enough because gains in employment and earnings of clients who appeared to have been successfully rehabilitated faded after 2 years. Moreover, the earnings of many returned to their pre-vocational rehabilitation levels after 8 years. As VR&E further develops its four employment-based performance measures, it will also face challenges associated with coordinating its efforts with those of other federal agencies, including the Departments of Labor and Education, as they seek to develop common measures of vocational rehabilitation success.

Mr. Chairman, this concludes my prepared remarks. I will be happy to answer any questions that you or other Members of the Subcommittee may have.

Contact and Acknowledgments

For further information, please contact Cynthia A. Bascetta at (202) 512-7215. Also contributing to this statement were Irene Chu and Joseph Natalicchio.

Related GAO Products

VA Disability Benefits and Health Care: Providing Certain Services to the Seriously Injured Poses Challenges (GAO-05-444T, Mar. 17, 2005)
Vocational Rehabilitation: More VA and DOD Collaboration Needed to Expedite Services for Seriously Injured Servicemembers (GAO-05-167, Jan. 14, 2005)
VA Vocational Rehabilitation and Employment Program: GAO Comments on Key Task Force Findings and Recommendations (GAO-04-853, June 15, 2004)
Vocational Rehabilitation: Opportunities to Improve Program Effectiveness (GAO/T-HEHS-98-87, Feb. 4, 1998)
Veterans Benefits Administration: Focusing on Results in Vocational Rehabilitation and Education Programs (GAO/T-HEHS-97-148, June 5, 1997)
Vocational Rehabilitation: VA Continues to Place Few Disabled Veterans in Jobs (GAO/HEHS-96-155, Sept. 3, 1996)
Vocational Rehabilitation: Evidence for Federal Program's Effectiveness Is Mixed (GAO/PEMD-93-19, Aug. 27, 1993)
Vocational Rehabilitation: VA Needs to Emphasize Serving Veterans With Serious Employment Handicaps (GAO/HRD-92-133, Sept. 28, 1992)
VA Can Provide More Employment Assistance to Veterans Who Complete Its Vocational Rehabilitation Program (GAO/HRD-84-39, May 23, 1984)

This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The Department of Veterans Affairs' Vocational Rehabilitation and Employment (VR&E) program has taken on heightened importance due, in large measure, to the number of servicemembers returning from Afghanistan and Iraq with serious injuries and their need for vocational rehabilitation and employment assistance. This statement draws on over 20 years of GAO's reporting on VA's provision of vocational rehabilitation and employment assistance to American veterans and focuses primarily on the results of two recent GAO reports. The first, issued in June 2004, commented on the report of the VA-sponsored VR&E Task Force, which performed a comprehensive review of VR&E activities and made extensive recommendations that, if implemented, would affect virtually every aspect of VR&E's operations. The second, issued in January 2005, focused on the steps VA has taken and the challenges it faces in providing services to seriously injured veterans returning from Afghanistan and Iraq. The past year has presented the Department of Veterans Affairs (VA) with an unprecedented opportunity to begin strengthening its provision of vocational rehabilitation and employment services to veterans. The VR&E Task Force has developed a blueprint for the changes needed to improve numerous programmatic and managerial aspects of VR&E's operations. We generally agree with the Task Force's three key findings. We also generally agree with the Task Force's key recommendations to streamline eligibility and entitlement, institute a new employment-driven service delivery process, expand counseling benefits, reorganize and increase VR&E staffing, and improve information technology capabilities and intra- and inter-agency coordination. VR&E faces three overriding challenges as it responds to the Task Force recommendations. First, providing early intervention assistance to injured servicemembers returning from Afghanistan and Iraq is complicated by (1) differences and uncertainties in the recovery process, which make it difficult for VR&E to determine when a servicemember will be able to consider its services; (2) the Department of Defense's (DOD) concerns that VA's outreach could work at cross purposes to the military's retention goals; and (3) lack of access to DOD data that would allow VA to readily identify and locate all seriously injured servicemembers. Second, VR&E needs to upgrade its information technology system. The Task Force report pointed out that VR&E's IT system is limited in its ability to produce useful reports. Third, VR&E needs to use new results-based criteria to evaluate and improve performance. The Task Force recommended that VR&E develop a new employment-oriented performance measurement system, including measures of sustained employment longer than 60 days. In fiscal year 2004, VR&E included four employment-based performance criteria in its performance and accountability report. However, as of February 2005, VR&E had not yet reported results using these longer-term measures.
GAO-04-765
Background

Medicare is a federal program that helps pay for a variety of health care services and items on behalf of about 41 million elderly and disabled beneficiaries. Medicare part B covers durable medical equipment (DME) for the beneficiary's use in the home, as well as prosthetics, orthotics, and supplies, if they are medically necessary and prescribed by a physician. Part B also covers certain outpatient prescription drugs that are used with DME or that are not usually self-administered by the patient. Some of these drugs are classified as supplies.

Medicare Payment for DME, Prosthetics, Orthotics, and Supplies

In submitting claims for Medicare payment, suppliers use codes in the Healthcare Common Procedure Coding System (HCPCS) to identify the DME, prosthetics, orthotics, and supplies that they are providing to beneficiaries. These codes are used for health insurance billing purposes to identify health care services, equipment, and supplies used in beneficiaries' diagnoses and treatments. Individual HCPCS codes used by suppliers can each cover a broad range of items that serve the same general purpose but vary in price, characteristics, and quality. The HCPCS National Panel, a group composed of the Centers for Medicare & Medicaid Services (CMS) and other insurers, maintains the HCPCS codes.

Medicare uses a variety of methodologies, which are specified in law, for determining what it will pay for specific types of DME, prosthetics, orthotics, and supplies. Medicare has established a fee schedule for DME and supplies, which lists the fees paid for these items in each state. Prosthetics and orthotics are paid according to 10 regional fee schedules. Prior to the passage of the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA), outpatient prescription drugs covered by Medicare part B were paid on a fee schedule based on 95 percent of the manufacturers' average wholesale price (AWP), a price determined by manufacturers themselves. Except for these outpatient prescription drugs, the amounts paid under the fee schedules are generally based on the amounts charged by suppliers in 1986 and 1987 (or the amount set by Medicare if the item was subsequently added to the fee schedule). Suppliers are reimbursed the lower of their actual charge or the Medicare fee schedule amount.

Over the years, we have reported that Medicare fees for certain medical equipment, supplies, and outpatient drugs were excessive compared with retail and other prices. For example, in 2000, we reported that retail price data collected by the four DME regional carriers showed that Medicare payments were much higher than the median surveyed retail prices for five commonly used medical products. While Medicare paid 5 percent less than AWP for covered prescription drugs, in 2001 we reported that prices widely available to physicians averaged from 13 percent to 34 percent less than AWP for a sample of physician-administered drugs. For two inhalation drugs covered by Medicare—albuterol and ipratropium bromide—prices widely available to pharmacy suppliers in 2001 reflected average discounts of 85 percent and 78 percent from AWP, respectively.

Medicare Competitive Bidding

In 1997, the Balanced Budget Act (BBA) required CMS to establish up to five demonstration projects to be operated over 3-year periods that used competitive bidding to set fees for Medicare part B items and services.
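To illustrate the payment rules described above, the following minimal sketch (written in Python, using hypothetical dollar amounts and function names of our own choosing rather than actual Medicare fees, AWPs, or program code) shows the lower-of-charge-or-fee-schedule rule for DME and the pre-MMA 95-percent-of-AWP rule for covered part B drugs.

# Illustrative only: hypothetical amounts, not actual Medicare fees or AWPs.

def dme_allowed_amount(actual_charge, fee_schedule_amount):
    # Suppliers are paid the lower of their actual charge or the fee schedule amount.
    return min(actual_charge, fee_schedule_amount)

def pre_mma_drug_fee(average_wholesale_price):
    # Before MMA, covered part B drugs were paid at 95 percent of AWP.
    return 0.95 * average_wholesale_price

# A supplier charging $110 against a $100 fee schedule amount is paid $100.
print(dme_allowed_amount(110.00, 100.00))  # 100.0

# An AWP of $100 yields a $95 Medicare fee; if a pharmacy supplier could buy the drug
# at an 85 percent discount from AWP ($15), the fee would far exceed acquisition cost,
# consistent with the pattern GAO reported for some inhalation drugs.
print(pre_mma_drug_fee(100.00))  # 95.0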
BBA required that at least one demonstration project include oxygen and oxygen equipment; all demonstration areas be metropolitan statistical areas (MSA) or parts of MSAs; and criteria for selecting demonstration areas include availability and accessibility of services and probability of savings. CMS contracted with one of the four DME regional carriers—Palmetto Government Benefits Administrators (Palmetto)—to implement the competitive bidding demonstration for DME, prosthetics, orthotics, and supplies. The demonstration was implemented in two locations—Polk County, Florida, and the San Antonio, Texas, area. Two cycles of bidding took place in Polk County, with competitively set fees effective from October 1, 1999, to September 30, 2001, and from October 1, 2001, to September 30, 2002. There was one cycle of bidding in San Antonio, and competitively set fees were effective from February 1, 2001, to December 31, 2002. Bidding and implementation processes were similar at both locations. CMS set up competitive bidding for groups of related DME, prosthetics, orthotics, and supplies and held a separate competition for each group. Items included in the demonstration were identified by HCPCS codes. Suppliers were required to bid on each HCPCS code included in the product group in which they were competing. Table 1 shows the eight product groups in CMS’s competitive bidding demonstration at the two locations. The competitive bidding process was used to determine the suppliers included in the demonstration and the rates they would be paid. From among the bidders, the agency and Palmetto selected multiple demonstration suppliers to provide items in each group of related products. These suppliers were not guaranteed that they would increase their business or serve a specific number of Medicare beneficiaries. Instead, the demonstration suppliers had to compete for beneficiaries’ business. With few exceptions, only demonstration suppliers were reimbursed by Medicare for competitively bid items provided to beneficiaries permanently residing in the demonstration area. However, beneficiaries already receiving certain items were allowed to continue to use their existing nondemonstration suppliers. All demonstration suppliers were reimbursed for each competitively bid item provided to beneficiaries at the demonstration fee schedule amounts. The new fee schedules were based on the winning suppliers’ bids for items included in the demonstration. Any Medicare supplier that served demonstration locations could provide items not included in the demonstration to beneficiaries. About 1 year after CMS’s demonstration authority ended, MMA required the agency to conduct competitive bidding for DME, supplies, off-the-shelf orthotics, and enteral nutrients and related equipment and supplies. Competition is to be implemented in 10 of the largest MSAs in 2007, 80 of the largest MSAs in 2009, and additional areas thereafter. Items excluded from this authority are inhalation drugs; parenteral nutrients, equipment, and supplies; Class III devices; and customized orthotics that require expertise to fit individual beneficiaries. CMS may phase in implementation of competitive bidding first for the highest cost and highest volume items or those items with the greatest savings potential. The law requires that a Program Advisory and Oversight Committee be established to provide recommendations to CMS on its implementation of competitive bidding. 
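The demonstration's fee-setting step can be pictured with the rough sketch below. The report states only that the demonstration fee schedules were based on the winning suppliers' bids; the selection rule and fee rule used here (choosing the lowest bids and setting the fee at the highest winning bid) are assumptions for illustration, as are the supplier names and bid amounts.

```python
# Illustrative sketch only; the demonstration's exact selection and fee formulas
# are not described in the report, so the rules below are assumptions.

from typing import Dict, List, Tuple

def select_winners_and_fee(
    bids: Dict[str, float], winners_to_select: int
) -> Tuple[List[str], float]:
    """Hypothetically select the lowest bids and set the demonstration fee at the
    highest winning bid, so every selected supplier is willing to serve at that fee."""
    ranked = sorted(bids.items(), key=lambda kv: kv[1])  # lowest bid first
    winners = ranked[:winners_to_select]
    fee = max(amount for _, amount in winners)
    return [supplier for supplier, _ in winners], fee

# Hypothetical bids (in dollars) for one HCPCS code in one product group.
bids = {"Supplier A": 82.0, "Supplier B": 88.0, "Supplier C": 95.0, "Supplier D": 105.0}
winners, demo_fee = select_winners_and_fee(bids, winners_to_select=3)
print(winners, demo_fee)  # ['Supplier A', 'Supplier B', 'Supplier C'] 95.0
```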
MMA also gives CMS significant new authority to use competitive bidding results as a basis for determining reasonable payment rates throughout the country in 2009. CMS has the authority to apply the information obtained from competitive bidding to adjust payments in parts of the country outside of the competitive areas for DME, supplies, off-the-shelf orthotics, and enteral nutrients and related equipment and supplies. Thus, CMS will be able to more easily adjust its payment rates nationally to reflect market prices within the largest MSAs by using information gleaned through competitive bidding. CMS’s Experience Can Guide Agency Efforts to Implement Competitive Bidding While MMA sets specific requirements for competitive bidding, it also leaves certain implementation issues to CMS. As CMS implements competitive bidding, its payment-setting experience in the demonstration will prove useful as the agency considers items for competitive bidding and approaches to streamline implementation, collect information on specific items provided to beneficiaries, and ensure that beneficiaries’ access to quality items and services is not compromised. Many High-Cost Items Could Be Included in Large-Scale Competitive Bidding Selecting items with high levels of Medicare spending may prove fruitful in generating significant savings in the first years of large-scale competitive bidding efforts. The demonstration provided CMS with experience in item selection, and MMA provides direction and guidance for future efforts. By including items that accounted for a large share of Medicare spending, the demonstration generated estimated gross savings that were substantially more than its implementation costs. In addition to the items included in the demonstration, others are worth considering for selection in future competitive bidding. For the competitive bidding demonstration, Palmetto and CMS chose items from six of the eight product groups that accounted for almost 78 percent of Medicare allowed charges in calendar year 2002, as table 2 shows. The demonstration also included items from two other product groups with lower levels of Medicare spending—urological supplies and surgical dressings. According to a CMS official, CMS did not include glucose monitors and supplies in competitive bidding because beneficiaries must frequently use brand-name supplies with their monitors. Ensuring that specific brands of glucose test strips were included would have complicated the first test of competitive bidding in the demonstration. However, the CMS official noted that CMS could consider including glucose supplies in future competitive bidding. Similarly, lower and upper limb prosthetics were not included because these items are generally custom made or fitted to beneficiaries and, for simplicity, the demonstration focused on noncustomized items. Our analysis of national Medicare spending for DME, prosthetics, orthotics, and supplies found that items included in the demonstration accounted for about half of all Medicare allowed charges in 2002. This was less than the total billing for all items in the product group because not all the individual items identified by HCPCS codes within product groups were included in the demonstration. For example, CMS excluded power wheelchairs from the competition. Estimated savings for competitively bid items in the demonstration would total about 20 percent of the fee schedule amounts, according to the demonstration evaluators. 
This equaled an estimated gross savings of $8.5 million in allowed charges, which include Medicare payments and beneficiary cost-sharing amounts. The estimated cost of the demonstration was about $4.8 million—about 40 percent lower than the estimated $8.5 million reduction in allowed charges associated with the demonstration. The demonstration’s $4.8 million cost included $1.2 million for planning and development from September 1, 1995, through July 1, 1998, and $3.6 million for demonstration operating expenses through December 2002. For future efforts, MMA states that initial competitive bidding may include items with the highest Medicare cost and volume or items determined by the agency to have the largest savings potential. Working within these parameters for competitive bidding, CMS could select some items included in the demonstration as well as items with high Medicare spending that were not included in the demonstration. For example, nondemonstration items that CMS could choose include power wheelchairs and lancets and test strips used by diabetics. These three items accounted for about $1.7 billion, or about 17 percent, of Medicare allowed charges for DME, prosthetics, orthotics, and supplies in 2002. A CMS official and DME regional carrier medical directors told us that these items could be considered for inclusion in future competitive bidding. Two medical directors also suggested that continuous positive airway pressure devices and accessories, with $137 million in allowed charges—or 1.4 percent of Medicare allowed charges for DME, prosthetics, orthotics, and supplies in 2002—could be considered for inclusion in future competitive bidding. CMS officials suggested that these devices and accessories could be included in early implementation of competitive bidding. Furthermore, if CMS is able to lower operating costs through efficiencies and streamlining, CMS could consider selecting for competitive bidding additional products with comparatively low levels of program spending, such as commodes, canes, and crutches. Larger-Scale Competitive Bidding May Benefit from Streamlined Implementation While the demonstration laid the groundwork for future competition, given the expanded scale of future competitive bidding, CMS will have to focus on a second issue—ways to streamline implementation. The demonstration took place in just two MSAs and affected less than 1 percent of fee-for-service beneficiaries. In contrast, by 2009, MMA requires CMS to implement competitive bidding in 80 of the largest MSAs in the country. Our analysis showed that about half of Medicare’s fee-for-service beneficiaries live in the 80 largest MSAs. In order to expand competitive bidding, CMS could potentially use two streamlining approaches—developing standardized steps that are easily replicated in different locations and using mail-order delivery for selected items for which fees are determined through nationwide competitive bidding. In conducting the demonstration, CMS and Palmetto gained practical experience in planning how competitive bidding could be conducted, communicating with beneficiaries and suppliers, choosing demonstration items, developing software to process demonstration claims, establishing policies, and soliciting and evaluating supplier bids. In expanding the scope of competitive bidding, CMS will be able to leverage its experience to develop a standardized or “cookie-cutter” approach that can be applied in multiple locations. 
This would include a standard set of competitively bid items, procedures and policies, and informational materials for suppliers and beneficiaries. Through standardization, the costs of implementation in individual MSAs would likely be reduced relative to program savings. In the demonstration, adding a second location allowed CMS and Palmetto to spread much of the implementation costs across two locations, rather than one. The incremental costs of adding the San Antonio location, once the demonstration had been planned and begun in Polk County, were relatively low. For the San Antonio location, the estimated annual implementation costs ranged from $100,000 in a nonbidding year to $310,000 when bidding occurred, according to the second evaluation report. Another potential streamlining approach would be to provide items by mail-order delivery—a convenience for beneficiaries—with uniform fees determined through nationwide competitive bidding. Because MMA authorizes CMS to designate the geographic areas for competition for different items, designating the entire country as the competitive area for selected items is a possibility. In addition, MMA states that areas within MSAs that have low population density should not be excluded from competition if a significant national market exists through mail-order for a particular item or service. In contrast to conducting competitive bidding on a piecemeal basis in multiple geographic areas, a consolidated nationwide approach would allow CMS to more quickly implement competitive bidding on a large scale. This approach would enable companies that provide, or demonstrate the ability to provide, nationwide mail-order service to compete for Medicare beneficiaries’ business. Items that lend themselves to mail delivery are light, easy to ship, and used by beneficiaries on an ongoing basis. Precedents exist for mail-order delivery of items that have been subject to competitive bidding. Demonstration suppliers provided surgical dressings, urological supplies, and inhalation drugs to beneficiaries by mail. In San Antonio, 30 percent of beneficiaries reported receiving their inhalation drugs through the mail, according to a demonstration evaluator, and Medicare paid an estimated 25 percent less than the fee schedule for Texas for these drugs. Glucose test strips and lancets are two items currently mailed to Medicare beneficiaries’ homes that could be included in a future nationwide competition. In 2002, these items accounted for $831 million, or about 8.6 percent, of Medicare allowed charges for DME, prosthetics, orthotics, and supplies. Because glucose test strips generally must be used with the glucose monitors made by the same manufacturer, CMS would need to ensure that the most commonly used types of test strips were included. Better Information on Specific Items Provided to Beneficiaries Could Ensure More Appropriate Payment Finding ways to collect better information on the specific items provided to beneficiaries is the third issue for CMS to consider as it implements competitive bidding on a larger scale. Industry and advocacy groups have raised concerns that competitive bidding may encourage some suppliers to reduce their costs by substituting lower-quality or lower-priced items. However, CMS lacks the capability to identify specific items provided to beneficiaries because suppliers’ claims use HCPCS codes, which can cover items that differ considerably in characteristics and price. 
Therefore, during the demonstration, CMS would not have been able to determine if suppliers tended to provide less costly items to beneficiaries. Furthermore, as CMS proceeds with competitive bidding, it will be difficult for the agency to appropriately monitor the type or price of specific items for which it is paying. A single HCPCS code can cover a broad range of items serving the same general purpose but with differing characteristics and prices. For example, in April 2004, the HHS OIG reported that prices available to consumers on supplier Web sites it surveyed for different models of power wheelchairs represented by a single HCPCS code ranged from $1,600 to almost $17,000. The median 2003 Medicare fee schedule amount for all of the power wheelchairs under this code was $5,297. Because Medicare pays the same amount for all of the items billed under the same HCPCS code, suppliers have an incentive to provide beneficiaries with the least costly item designated by that code. Since the Medicare program does not routinely collect specific information on items within a code for which it is paying, it is unable to determine if suppliers are providing lower-priced items or higher-priced items to beneficiaries. Using information from related work to determine the specific power wheelchairs provided to beneficiaries, the HHS OIG found that beneficiaries tend to receive lower-priced wheelchairs. The OIG recommended that CMS create a new coding system for the most commonly provided power wheelchairs to account for the variety in models and prices. CMS is currently working to develop a new set of codes to better describe the power wheelchairs currently on the market and plans to develop payment ceilings for each of the new codes. Under competitive bidding, suppliers might have even greater incentive to substitute less costly products listed under a code. For example, one of the demonstration suppliers explained that while a specific curved-tip catheter was superior for patients with scar tissue or obstructions, competitive bidding would encourage suppliers to substitute other, less-expensive catheters that can be paid under the same code. Thus, even if competitive bidding reduces fees paid, when suppliers substitute less costly items for more costly items, Medicare can pay too much for the actual items provided to beneficiaries. CMS officials pointed out that this is also true under the current fee schedule. CMS might better monitor the items being provided to beneficiaries if it subdivided certain HCPCS codes or collected identifying information. Subdividing HCPCS codes for items with significant variations in characteristics and price into smaller groupings is a way to narrow the differences among the items provided under a single code. The four DME regional carriers or the advisory committee established under MMA might be able to assist CMS in identifying those individual codes for items with the most significant variations in characteristics and price. Once these codes had been identified, CMS would be in a position to decide whether to request that the panel that makes decisions on HCPCS codes for DME, orthotics, and supplies consider dividing the codes into better-defined item groupings. Another way to get better information on the range of items provided under a code is to collect specific, identifying information (such as manufacturer, make, and model information) on selected, high-cost competitively bid items provided to beneficiaries. 
The DME regional carriers require suppliers to provide such information when it is requested for detailed reviews of claims for power wheelchairs. If CMS requested these data from suppliers for selected items provided under a HCPCS code for a statistically representative sample of claims, it would be able to analyze trends in the actual items provided to beneficiaries in competitive bidding areas or monitor the provision of items under the same code in competitive and noncompetitive areas. Ensuring Quality and Service for Beneficiaries Is Critical Because of concerns that competitive bidding may prompt suppliers to cut their costs by providing lower-quality items and curtailing services, a fourth issue for CMS to consider is ensuring that quality items and services are provided to beneficiaries. Quality assurance steps could include monitoring beneficiary satisfaction, as well as setting standards for suppliers, providing beneficiaries with a choice of suppliers, and selecting winning bidders based on quality in addition to amounts bid. During the demonstration, the agency and Palmetto gained practical experience in implementing quality assurance steps. This experience could prove instructive as CMS moves forward with competitive bidding efforts. As competitive bidding proceeds, routine monitoring of beneficiaries’ complaints, concerns, and satisfaction can be used as a tool to help ensure that beneficiaries continue to have access to quality items. During the demonstration, the agency and Palmetto used full-time, on-site ombudsmen to respond to complaints, concerns, and questions from beneficiaries, suppliers, and others. In addition, to gauge beneficiary satisfaction, the evaluators of the demonstration fielded two beneficiary surveys by mail—one for oxygen users and another for users of other products included in the demonstration. These surveys contained measures of beneficiaries’ assessments of their overall satisfaction, access to equipment, and quality of training and service provided by suppliers. Evaluators reported that their survey data indicated that beneficiaries generally remained satisfied with both the products provided and with their suppliers. As competitive bidding expands and affects larger numbers of beneficiaries, small problems could be potentially magnified. Therefore, continued monitoring of beneficiary satisfaction will be critical to identifying problems with suppliers or with items provided to beneficiaries. When such problems are identified in a timely manner, CMS may develop steps to address them. In the past, when implementing significant Medicare changes, such as new payment methods for skilled nursing facilities and home health services, the agency has lacked timely and accurate information about how the changes affected beneficiary access. Nevertheless, it may not be practical in a larger competitive bidding effort to replicate the monitoring steps used in the demonstration. Developing less staff-intensive approaches to monitoring would reduce implementation costs. For example, a Palmetto official told us that while having an on-site ombudsman function may prove useful in the initial stages of competitive bidding, using a centralized ombudsman available through a toll-free number staffed by a contractor could provide some of the same benefits at a lower cost. In addition, certain monitoring enhancements could prove useful. 
For example, CMS did not use a formal mechanism for ombudsmen to summarize or report information on complaints from beneficiaries or suppliers, according to the demonstration ombudsmen. Collecting and analyzing complaint information may provide a credible gauge of problems related to beneficiary access to quality products. Continued use of satisfaction surveys could help track beneficiaries’ satisfaction with items and services over time. However, advocacy group representatives have cautioned that beneficiaries may not have the technical knowledge to accurately assess the quality of the items or services being provided. Supplemental information might be obtained through standardized surveys of individuals who refer beneficiaries to suppliers, physicians, and supplier representatives, who may be better equipped to assess the technical quality of products and services. Two MMA requirements—the selection of multiple suppliers to serve beneficiaries and the establishment of supplier standards—help ensure that beneficiaries are satisfied with suppliers and the items they provide. The selection of multiple suppliers to serve beneficiaries was part of the competitive bidding process used during the demonstration. The establishment of supplier standards is broader than the competitive bidding program in that it applies to all suppliers, regardless of whether they choose to participate in competitive bidding. MMA requires that CMS select multiple suppliers that meet quality and financial standards to maintain choice in a competitive acquisition area. According to a CMS official, choosing to include multiple suppliers in the demonstration for each product group allowed beneficiaries to switch suppliers if dissatisfied with the quality of the services or items provided. CMS officials stated that selecting multiple suppliers encouraged suppliers to compete on the basis of quality and service to gain beneficiaries’ business. After completing the bid evaluation process, CMS generally selected about 50 percent of the suppliers that bid in each group, with an average of 12 suppliers selected across the product groups. MMA also requires that CMS establish and implement quality standards for all suppliers of DME, prosthetics, orthotics, and supplies. These standards must be at least as stringent as the 21 general standards that all suppliers of DME, prosthetics, orthotics, and supplies are required to comply with in order to obtain and retain their Medicare billing privileges. (See app. II.) For the demonstration, suppliers were also required to meet standards developed by Palmetto that were more stringent and explicit than the current 21 general standards. For example, the demonstration standards required that only qualified staff deliver, set up, and pick up equipment and supplies and established time frames for suppliers to pick up equipment after a beneficiary had requested its removal. Palmetto monitored suppliers’ adherence to the standards through initial and annual site visits. Applying quality measures as criteria to select winning suppliers is another demonstration assurance step that can be used in future efforts. During the demonstration bid evaluation process, Palmetto solicited references from financial institutions and from at least five individuals who had referred beneficiaries to each bidding supplier. In reviewing referrals, Palmetto looked for evidence of quality and service. 
This included evidence of financial stability and good credit standing, a record of providing products that met beneficiaries’ needs, compliance with Medicare’s rules and regulations, acceptable business practices, ethical behavior, and maintenance of accurate records. The bid evaluation process also included inspections of bidding suppliers’ facilities that focused on indicators of quality and service. These on-site inspections were more comprehensive than those normally performed for Medicare suppliers of DME, prosthetics, orthotics, and supplies. For example, inspectors were tasked with determining if the supplier had access to the full range of products for which it had bid, documentation of infection control procedures, instructions on using equipment, and patient files with required information. In some cases, a demonstration supplier’s selection was conditional on the supplier making specified improvements. For example, according to a CMS official, some suppliers were told to clarify instructions for beneficiaries, properly store oxygen equipment, or improve procedures for following up with patients after initial service was provided. CMS and Palmetto officials told us that comprehensive inspections were useful in ensuring the selection of quality suppliers. Conclusions CMS can use its experience from the demonstration to make informed decisions as it implements large-scale competitive bidding within the framework established by MMA. The demonstration showed that competitive bidding has the potential to garner significant savings for both the Medicare program and its beneficiaries, especially on items with high levels of Medicare spending. While the potential exists for significant savings, moving from small-scale to large-scale competitive bidding calls for streamlining implementation. Developing a cookie-cutter approach to competitive bidding—for example, using the same policies and processes in multiple locations—could help CMS roll out its implementation in over 80 locations more easily, while employing mail-order to deliver items with prices set through nationwide competitive bidding could allow CMS to more quickly implement competitive bidding on a large scale. To ensure that competitive bidding savings are not achieved by the suppliers’ substitution of lower-cost items, CMS can consider ways to collect better information on the specific items that suppliers are providing to beneficiaries. Finally, careful monitoring of beneficiaries’ experiences will be essential to ensure that problems are quickly identified. This will allow CMS to adjust its implementation and quality assurance steps as it manages competition on a greater scale. 
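As an illustration of the claims-level monitoring discussed above, the sketch below assumes, hypothetically, that manufacturer, make, and model information is collected for a sample of claims under a single HCPCS code and that the mix of lower- and higher-priced models is compared across competitive and noncompetitive areas; the code, field layout, records, and price threshold are illustrative, not an actual CMS or carrier data format.

```python
# Illustrative sketch only; the HCPCS code, field names, sample records, and the
# price threshold are hypothetical, not an actual CMS or carrier data layout.

from collections import defaultdict

sample_claims = [
    # (area_type, hcpcs_code, model, list_price)
    ("competitive", "K00XX", "Model X", 1600.0),
    ("competitive", "K00XX", "Model Y", 4800.0),
    ("competitive", "K00XX", "Model X", 1700.0),
    ("noncompetitive", "K00XX", "Model Z", 9200.0),
    ("noncompetitive", "K00XX", "Model Y", 5100.0),
]

LOW_PRICE_THRESHOLD = 3000.0  # hypothetical cutoff for a "lower-priced" model

def low_price_share_by_area(claims):
    """Share of sampled claims under a code that involve lower-priced models, by area type."""
    totals, low = defaultdict(int), defaultdict(int)
    for area, _code, _model, price in claims:
        totals[area] += 1
        if price < LOW_PRICE_THRESHOLD:
            low[area] += 1
    return {area: low[area] / totals[area] for area in totals}

print(low_price_share_by_area(sample_claims))
# {'competitive': 0.666..., 'noncompetitive': 0.0} -- a pattern that could flag substitution
```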
Recommendations for Executive Action To increase potential savings from competitive bidding, streamline implementation, help ensure that Medicare is paying appropriately for items, and promote beneficiary satisfaction, we recommend that the Administrator of CMS take the following seven actions: consider conducting competitive bidding for demonstration items and items that represent high Medicare spending that were not included in the competitive bidding demonstration; develop a standardized approach for competitive bidding for use at multiple locations; consider using mail delivery for items that can be provided directly to beneficiaries in the home, as a way to implement a national competitive bidding strategy; evaluate individual HCPCS codes to determine if codes need to be subdivided because the range in characteristics and price of items included under the individual codes is too broad; periodically obtain specific identifying information on selected high-cost items to monitor the characteristics of items subject to competitive bidding that are provided to beneficiaries, such as manufacturer, make, and model number; monitor beneficiary satisfaction with items and services provided; and seek input from individuals with technical knowledge about the items and services suppliers provide to beneficiaries. Agency Comments and Our Evaluation In its written comments on a draft of this report, CMS agreed with most of the recommendations and agreed to give serious consideration to the report throughout the development and implementation of national competitive bidding. CMS agreed to consider conducting competitive bidding for demonstration items and items that represent high Medicare spending that were not included in the demonstration. CMS indicated that the agency was working to develop a list of items for the first bidding cycle in 2007. CMS also agreed to develop a standardized approach for competitive bidding that could be used in multiple locations and indicated the agency’s intention to outline such an approach through regulation. CMS stated it would explore the feasibility of our recommendation to consider using mail-order delivery for items that could be provided directly to beneficiaries in the home, as a way to implement a national competitive bidding strategy. Based on CMS’s comments, we clarified the discussion in the report to indicate that businesses that currently provide, or have the potential to provide, national mail-order delivery would be appropriate to include as bidders in nationwide competition. CMS also agreed with our recommendations to periodically obtain specific identifying information on selected high-cost items and to monitor beneficiary satisfaction with the items and services provided and indicated that it would be establishing a process to do so. CMS agreed with our recommendation to seek input from individuals with technical knowledge about the items and services suppliers provide to beneficiaries. The agency noted that pursuant to MMA, CMS would be convening a panel of experts, the Program Advisory and Oversight Committee, to assist with implementation of competitive bidding. CMS disagreed with one of our draft recommendations—to evaluate individual HCPCS codes to determine if they needed to be subdivided because the range in price of items included under the codes was too broad. 
The agency stated that subdividing codes according to price would lead to Medicare setting codes for particular brand names in circumstances where a manufacturer has established higher prices for products that do not have meaningful clinical differences or higher quality. In response to the agency’s comment, we modified our discussion of HCPCS codes and revised our recommendation to state that CMS, in reevaluating individual HCPCS codes, should consider both the characteristics and prices of items. We have reprinted CMS’s letter in appendix III. CMS also provided us with technical comments, which we have incorporated as appropriate. We are sending copies of this report to the Administrator of CMS, appropriate congressional committees, and other interested parties. We will also make copies available to others upon request. This report is also available at no charge on GAO’s Web site at http://www.gao.gov. If you or your staff have any questions about this report, please call me at (312) 220-7600 or Sheila K. Avruch at (202) 512-7277. Other key contributors to this report are Sandra D. Gove, Lisa S. Rogers, and Kevin Milne. Scope and Methodology To assess issues that the Centers for Medicare & Medicaid Services (CMS) might consider as it implements the Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) provisions concerning competitive bidding, we reviewed the relevant provisions of MMA. We also reviewed the first and second evaluation reports on the Medicare competitive bidding demonstration and discussed methodology and findings with the evaluators. We interviewed officials from CMS and Palmetto Government Benefits Administrators (Palmetto) about experience gained during the demonstration. For the product selection issue, we analyzed calendar year 2002 Medicare durable medical equipment (DME), prosthetics, orthotics, and supply claims data obtained from the statistical analysis durable medical equipment regional carrier (SADMERC). Through this analysis, we identified the product groups and items that represented the largest Medicare allowed charges and the allowed charges for items included in the demonstration. We also used these data to identify items that accounted for higher Medicare spending but were excluded from the demonstration. We determined that the data obtained from the SADMERC were sufficiently reliable for addressing the issues in this report. These data were extracted from a CMS file that includes all Medicare claims payment data. CMS has a number of computerized edits to help ensure that Medicare payment data are accurately recorded, and the SADMERC has internal controls to ensure that data extracted from the CMS file are timely and complete. Where appropriate, we tested data manually against published sources for consistency. To identify items that could be included in future competitive bidding, we interviewed CMS and Palmetto officials and the medical directors at the four DME regional carriers. For the issue of streamlining implementation, we obtained information on the cost of the demonstration from the second evaluation report. To estimate the number of fee-for-service beneficiaries who will be affected by future competitive bidding, we adjusted the Census 2000 population estimates for individuals age 65 and over to account for the number of beneficiaries enrolled in Medicare’s managed care program by using data obtained from the Medicare Managed Care Market Penetration State/County Data Files. 
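The two data steps just described, ranking items and product groups by allowed charges and approximating the fee-for-service population, might be sketched roughly as follows; the column names, codes, and figures are hypothetical stand-ins rather than the actual SADMERC extract, Census file, or GAO results.

```python
# Rough sketch of the two analyses described above; all codes, column names, and
# numbers are hypothetical stand-ins, not the actual GAO data or results.

import pandas as pd

# 1. Rank product groups by calendar year 2002 allowed charges.
claims = pd.DataFrame({
    "hcpcs_code": ["E9991", "K0099", "A9999"],          # hypothetical codes
    "product_group": ["oxygen", "wheelchairs", "diabetic supplies"],
    "allowed_charges": [2.4e9, 1.2e9, 8.3e8],            # hypothetical dollar totals
})
by_group = claims.groupby("product_group")["allowed_charges"].sum().sort_values(ascending=False)
shares = by_group / claims["allowed_charges"].sum()
print(shares)  # share of total allowed charges accounted for by each product group

# 2. Approximate fee-for-service beneficiaries in an area: population age 65 and over,
#    reduced by the share enrolled in Medicare managed care.
def fee_for_service_estimate(pop_65_plus: int, managed_care_penetration: float) -> float:
    return pop_65_plus * (1.0 - managed_care_penetration)

print(fee_for_service_estimate(pop_65_plus=500_000, managed_care_penetration=0.25))  # 375000.0
```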
We assessed the reliability of the Census 2000 data by reviewing relevant documentation and working with an official from the U.S. Census Bureau. We assessed the reliability of the Medicare Managed Care Market Penetration State/County Data Files by reviewing relevant documentation. We determined these data sources to be sufficiently reliable for the purposes of our report. We also obtained information from CMS on the demonstration items that beneficiaries obtained by mail and conducted research to identify items delivered directly to customers’ homes by private sector organizations. We also solicited input from the medical directors at the four DME regional carriers concerning items that could be delivered by mail-order and included in a nationwide competition. For the issue concerning information on specific items provided to beneficiaries, we reviewed prior GAO reports and testimonies. In addition, we interviewed the following representatives of industry and advocacy groups: Abbott Laboratories; the Advanced Medical Technology Association; the American Association for Homecare; the American Occupational Therapy Association; the American Orthotic and Prosthetic Association; the Consortium for Citizens with Disabilities; the Diabetic Product Suppliers Coalition; LifeScan, Inc.; Johnson & Johnson Company; Kinetic Concepts, Inc.; Tyco Healthcare Group; the National Alliance for Infusion Therapy; Roche Diagnostics; and the United Ostomy Association. For the issue relating to ensuring quality items and services for beneficiaries, we discussed quality assurance steps and approaches for monitoring beneficiary satisfaction used during the demonstration with CMS and Palmetto officials and the demonstration’s evaluators. We also interviewed the two demonstration ombudsmen to discuss beneficiaries’ concerns and experiences in obtaining items during the demonstration. We discussed issues related to competitive bidding and beneficiaries’ access to quality products and services with suppliers of DME, including three suppliers that participated in the demonstration; the industry and advocacy groups listed above; and the DME regional carrier medical directors. In addition, we compared quality standards for demonstration suppliers with the 21 supplier standards that apply to all Medicare suppliers of DME, prosthetics, orthotics, and supplies. Medicare’s 21 Standards for Medicare Suppliers of DME, Prosthetics, Orthotics, and Supplies Suppliers of DME, prosthetics, orthotics, and supplies must meet 21 standards in order to obtain and retain their Medicare billing privileges. An abbreviated version of these standards, which became effective December 11, 2000, is presented in table 3. MMA requires CMS to develop new standards that must be at least as stringent as current standards for all Medicare suppliers of DME, prosthetics, orthotics, and supplies. Supplier compliance will be determined by one or more designated independent accreditation organizations. Comments from the Centers for Medicare & Medicaid Services
The Medicare Prescription Drug, Improvement, and Modernization Act of 2003 (MMA) requires the Centers for Medicare & Medicaid Services (CMS) to conduct large-scale competitive bidding for durable medical equipment, supplies, off-the-shelf orthotics, and enteral nutrients and related equipment and supplies provided to beneficiaries. The Balanced Budget Act of 1997 mandated that GAO study an earlier Medicare competitive bidding demonstration. To address this mandate, GAO assessed this past experience in relation to four issues that CMS might consider as it implements large-scale competitive bidding: (1) items for competitive bidding, (2) how to streamline implementation, (3) ways to collect information on specific items provided to beneficiaries, and (4) steps to ensure quality items and services. CMS's experience in the Medicare competitive bidding demonstration may prove instructive as the agency implements provisions in MMA to conduct large-scale competitive bidding for durable medical equipment, supplies, off-the-shelf orthotics, and enteral nutrients and related equipment and supplies. The experience gained during the demonstration provides insight as the agency considers four implementation issues. Items for competitive bidding: Items for competitive bidding could include those selected for the demonstration and others that account for high levels of Medicare spending. For example, nondemonstration items that CMS could choose for competitive bidding include power wheelchairs and lancets and test strips used by diabetics. In 2002, these three items accounted for about $1.7 billion in charges for the Medicare program and its beneficiaries. How to streamline implementation: Because of the large scale of future competitive bidding, it will be prudent for CMS to consider ways to streamline implementation. Two ways to streamline are developing a standardized competitive bidding approach that can be replicated in multiple geographic locations and using mail-order delivery for selected items, with uniform fees established through a nationwide competition. Ways to collect information on specific items provided to beneficiaries: Gathering specific information on competitively bid items provided to beneficiaries could help ensure that suppliers do not substitute lower-priced items to reduce their costs. Currently, CMS is not able to collect, or does not routinely collect, specific information on the items that suppliers provide to beneficiaries. Steps to ensure quality items and services for beneficiaries: Routine monitoring could help ensure that beneficiaries continue to have access to suppliers that deliver quality items and services. The agency, when implementing significant Medicare changes in the past that affected payment methods, has lacked information on how the changes affected beneficiary access. As competitive bidding expands, small problems could be potentially magnified. Using quality measures to choose multiple suppliers and having suppliers meet more detailed standards than are currently required can also help ensure quality for beneficiaries.
GAO_GAO-02-105
Background In recent years, Congress and DOD have had an ongoing debate concerning core depot maintenance capabilities and the work needed to support these capabilities; the role of military depots; and the size, composition, and allocation of depot maintenance work between the public and private sectors. Since the mid-1990s, DOD policy and advisory groups have called for contracting with the private sector for a greater share of the Department’s logistics support work, including depot maintenance, and related activities such as supply support, engineering, and transportation. An integral part of the policy shift is the debate over how DOD identifies its core logistics capabilities that are to be performed by federal employees in federal facilities. The Deputy Under Secretary of Defense for Acquisition, Technology, and Logistics is responsible for maintenance issues, including core. We recently testified on core capabilities, DOD management of the depot system, and related issues. DOD estimates that it will spend about one-third of its $297 billion budget for fiscal year 2001 on logistics support at military maintenance, supply management, engineering, distribution, and transportation activities and at thousands of contractor locations. As a result of force structure reductions, depot closures under the base realignment and closure process in fiscal years 1988 to 2001, and DOD’s desire to place greater reliance on the private sector for the performance of depot maintenance, the number of “major” depots (those employing more than 400 persons) was halved from 38 to 19. During this same period, the total amount of work (measured in direct labor hours) accomplished at the military depots was cut in half and the depot maintenance workforce was reduced by about three-fifths (from 156,000 in fiscal year 1987 to about 64,500 in fiscal year 2001), as shown in figure 1. At the same time, annual funding for contracted depot maintenance work has increased by 90 percent. Overview of Core and the Depot Maintenance Core Methodology The provisions of 10 U.S.C. 2464 concerning the identification and maintenance of a core logistics capability and DOD implementing guidance are aimed at ensuring that repair capabilities will be available to meet military needs should a national defense emergency or contingency occur. The concept of core work is not unique to DOD. However, the term gained increased importance in its relationship to military depots in the 1980s and 1990s. The concept of core and the identification of core capabilities for depot maintenance began in the 1980s, and until the early 1990s, each of the services used its own processes for determining core workloads needed to support the identified depot maintenance capabilities. The Core Concept as Used in the Private Sector and DOD The concept of core is used in both the private sector and the government in deciding whether support functions are best provided in-house or outsourced to contractors. In recent years, as private sector firms have approached decisions on whether to outsource various activities or functions, they have first evaluated the business to identify those activities that are critical to its mission and that the owners or managers believe should be performed in-house by their own employees. These “core” activities are not evaluated for contracting out. 
Remaining activities are studied to determine if in-house performance can be improved and/or costs can be reduced. The results of this assessment are compared with offers from external businesses. The general criterion for outsourcing is whether the external business could provide these non-core activities at lower cost and/or with improved capability or better service than could be provided using internal resources. Essential to an understanding of how private businesses use this concept is the fact that the determination of what is core is somewhat subjective, not absolute. What one business considers core and not subject to contracting out, another business might identify as a candidate for outsourcing. For example, Disney World retains as company employees the maintenance workers who keep its rides functioning at a high state of readiness, while another recreation facility might decide to contract out the responsibility for equipment maintenance. Within the government, the concept of “core” and a related concept of “inherently governmental” are a key part of the government’s policy regarding what activities it should perform with federal employees and what activities the private sector should perform. Office of Management and Budget (OMB) Circular A-76, which was first adopted in 1966, sets forth the general government policy that federal agencies are to obtain commercially available goods and services from the private sector when it is cost-effective to do so. A commercial activity is one that is performed by a federal agency and that provides a product or service, such as base operating support or payroll, that could be obtained from a commercial source. The handbook implementing A-76 provides the procedures for competitively determining whether commercial activities that government agencies are currently performing should continue to be done in-house (or by another federal agency) or whether they should be contracted to the private sector. At the outset, inherently governmental activities—those that are so intimately related to the exercise of the public interest as to mandate performance by federal employees—are reserved for government performance. These activities are thus in a sense “core” and outside the coverage of A-76. The core concept appears again within the universe of commercial services covered by A-76. The circular exempts from its cost comparison provisions activities that make up an agency’s “core capability.” Thus, under the circular, the government will retain a minimum core capability of specialized scientific or technical in-house employees necessary to fulfill an agency’s mission responsibilities or to meet emergency requirements. Again, these activities are reserved for government performance. While the term “inherently governmental” is defined in statute and in the circular and the term “core” is defined in the circular, agency officials exercise broad discretion in applying them to agency functions. Depot maintenance workloads valued at $3 million or more are exempt from the A-76 process by 10 U.S.C. 2469. The use of the A-76 process in DOD has proven to be controversial, with concerns often expressed about the fairness of the process and of the cost comparisons between the public and private sectors. 
Section 852 of the 2001 Defense Authorization Act provided for a panel of experts to be convened by the Comptroller General to review the processes and procedures governing the transfer of commercial activities from government personnel to the private sector. The panel is required to report its findings and recommendations to the Congress by May 1, 2002. Legislation was enacted in 1984 that sought to add clarity to the meaning of “core” as it applies to logistics activities involving military facilities. The provision, codified at 10 U.S.C. 2464, provides for a concept of core to be applied to DOD logistics activities. Under the current provision, the Secretary of Defense is required to identify and maintain a “core logistics capability” that is government-owned and operated to ensure the existence of a ready and controlled source of technical competence and resources so that the military can respond effectively and in a timely manner to mobilizations, national defense emergencies, and contingencies. The capabilities are to include those necessary to maintain and repair the weapon systems and equipment that are identified by the Secretary in consultation with the Joint Chiefs of Staff as necessary to meet the nation’s military needs. Further, the Secretary is to identify the workloads required to maintain the core capabilities and to require their performance in government facilities. Finally, the Secretary is to assign these facilities sufficient workloads to ensure cost efficiency and technical competence in peacetime, as well as the surge capacity and reconstitution capabilities needed to support military strategic and contingency plans. In addition to the 10 U.S.C. 2464 requirements described above, 10 U.S.C. 2466 specifies that no more than 50 percent of the funds made available for depot maintenance may be spent for private sector performance. This sets aside 50 percent of the funding for public-sector performance of these workloads, in essence establishing a minimum public-sector core for depot maintenance. Before the 1997 amendment, private-sector performance was limited to no more than 40 percent. The trend in DOD in recent years has been toward increasing reliance on the private sector for depot maintenance work and increasing reliance on original equipment manufacturers for long-term logistics support. Depot Maintenance Core Methodology In November 1993, the Office of the Deputy Under Secretary of Defense for Logistics outlined a standard multi-step method for determining core requirements and directed the services to use this method in computing biennial core requirements. In 1996, the core methodology was revised to include (1) an assessment of the risk involved in reducing the core capability requirement as a result of having maintenance capability in the private sector and (2) the use of a best-value comparison approach for assigning non-core work to the public and private sectors. The current core methodology provides a computational framework for quantifying core depot maintenance capabilities and the workload needed to sustain these capabilities. 
It includes three general processes: the identification of the numbers and types of weapon systems required to support the Joint Chiefs of Staff’s wartime planning scenarios; the computation of depot maintenance core work requirements, measured in direct labor hours, to support the weapon systems’ expected wartime operations as identified in the war planning scenarios; and the determination of the industrial capabilities (including the associated personnel, technical skills, facilities, and equipment) that would be needed to accomplish the direct labor hours generated from the planning scenarios. That determination is adjusted to translate those capabilities into peacetime workloads needed to support them. These peacetime workloads represent the projected core work requirements for the next program year in terms of direct labor hours. For example, the estimate made in fiscal year 2000 projected the core requirements for fiscal year 2001. To conclude the process, the services then identify specific repair workloads and allocate the core work hours needed to accomplish the maintenance work at the public depots that will be used to support the core capabilities. During the latter part of the 1990s, DOD made significant changes in specific maintenance workloads it identified as supporting core capabilities. For example, in 1996 the Air Force privatized in place work on aircraft and missile inertial guidance and navigation systems performed at the Aerospace Guidance and Metrology Center in Newark, Ohio. Prior to closure of this depot, the workload—about 900,000 hours annually—had been identified as necessary to support core capabilities. Workload at the Sacramento Air Logistics Center, which next to the Newark Depot had the Air Force’s highest percentage of core workload relative to total workload, was reclassified as non-core work when the center was to be closed. Similarly, maintenance of the Army’s tactical wheeled vehicles had always been considered core work, with over 1 million hours of work performed in an Army depot. But after the closure of the Army’s truck depot at Tooele, Utah, this work was contracted out, and in 1996 it was categorized as non-core work. More recently, the Army has again categorized about 26,000 direct labor hours of truck maintenance work as core support work—less than 1 percent of the workload that the Army identified as necessary to support its core capabilities. Figure 2 shows the services’ biennial computations of depot maintenance core work requirements, in direct labor hours, for fiscal years 1995-2001. The reported combined core work requirements for all the military services declined by about 30 percent over that period. The Navy aviation and Marine Corps support work stayed relatively constant, while the Army’s requirement declined by 33 percent, the Air Force’s declined by 33 percent, and the Navy ship requirement declined by 37 percent. As discussed later in this report, the existing policy does not provide information about future core capability requirements. Further, the work actually performed in military depots may be different from the work identified by the core process since a separate process is used for assigning maintenance workloads to the depots or to private sector facilities. 
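The three-step computation outlined above can be pictured in simplified form. The sketch below is a rough paraphrase of the methodology's flow, not DOD's actual model; the systems, hours, and the peacetime conversion factor are illustrative assumptions.

```python
# Simplified sketch of the core computation's flow; the systems, hours, and the
# peacetime conversion factor are illustrative assumptions, not DOD planning factors.

from dataclasses import dataclass

@dataclass
class ScenarioSystem:
    name: str
    quantity_required: int           # systems needed under JCS planning scenarios
    wartime_hours_per_system: float  # depot repair hours per system to support the scenario

def core_wartime_hours(systems: list[ScenarioSystem]) -> float:
    """Step 2: direct labor hours needed to support expected wartime operations."""
    return sum(s.quantity_required * s.wartime_hours_per_system for s in systems)

def peacetime_core_workload(wartime_hours: float, peacetime_factor: float = 0.6) -> float:
    """Step 3 (simplified): translate the capability needed for wartime hours into the
    peacetime workload assigned to depots to sustain that capability."""
    return wartime_hours * peacetime_factor

systems = [
    ScenarioSystem("cargo aircraft", quantity_required=120, wartime_hours_per_system=2_500.0),
    ScenarioSystem("tactical trucks", quantity_required=4_000, wartime_hours_per_system=40.0),
]
wartime = core_wartime_hours(systems)
print(wartime, peacetime_core_workload(wartime))  # 460000.0 276000.0
```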
Source-of-Repair Process A key factor influencing what workloads are actually assigned to military depots and to the private sector is the military services’ source-of-repair process. Departmental policy prescribes a process for determining how new and modified weapon systems are to be supported. The acquisition program guidance provides that within statutory limitations, support concepts for new and modified systems shall maximize the use of contractor-provided, long-term, total life-cycle logistics support that combines depot-level maintenance for non-core-related workload along with materiel management functions. The maintenance guidance prescribes a source-of-repair decision process designed to determine whether new and upgraded weapon systems and subsystems should be repaired in military depots or contractor facilities. This guidance provides that repair decisions should be justified through rigorous, comprehensive business case analyses that consider the relative costs of public and private support options, mission essentiality, existing public and private industrial capabilities, and required core capabilities. The source-of-repair process is also supposed to consider workload allocation requirements specified by 10 U.S.C. 2466 that not more than 50 percent of annual depot maintenance funding made available to each military department be used for private sector performance. Weaknesses in Core Policy and Implementation Leave Little Assurance That Capabilities Will Be Developed to Support Wartime Requirements The Department’s core depot maintenance capabilities policy and related implementation procedures and practices provide little assurance that core maintenance capabilities are being developed to support future national defense emergencies and contingencies. Much of the current core workload supports systems that are soon to retire; however, the core policy is not comprehensive in that it does not provide for a forward look at new weapon systems that will replace the ones that are being retired and at associated future maintenance capabilities that will likely be identified as needed to repair those systems. Further, the core policy is not linked to the department’s source-of-repair policy and processes. These policy shortfalls limit the timely identification of equipment, facilities, and workforce technical skills needed to establish and retain future core capabilities. Advance planning for replacement of retiring systems and introduction of new systems and technologies into the depots is critical because it can take up to 5 years or more to establish a new in-house capability. Further compounding concerns about future core capabilities are various core policy implementation procedures and practices that also affect the establishment of core capability. For example, the services are using, to varying degrees, concepts such as “like” workloads and risk assessments, which further reduce the amount of core workload actually performed on systems, such as the C-17, that support contingency plans. These varying practices affect both the quantification of core requirements and the identification of workloads used to support core capabilities. They may also preclude defense managers or the Congress from assessing the extent to which overall core policy objectives are being met. The net effect of these practices is to reduce the amount of new repair technology being introduced into the military depots. 
Also, actual direct labor hours on workloads assigned to public depots are less than called for in identified core support work requirements, and the need to support core capabilities is not adequately considered in service source-of-repair decisions on new and upgraded systems. Both of these situations further negatively impact the development of future core capabilities by reducing the amount of workforce training and again decreasing the extent to which new repair technologies are introduced to the depots. It is unclear to what extent recent initiatives to improve core and core-related policy, procedures, and practices will be successful. Policy Is Not Comprehensive and Does Not Adequately Consider Future Capability and Technology Needs The Department's core depot maintenance policy is not comprehensive in that it does not provide for a forward look at new weapon systems and associated future maintenance requirements and is not linked to the source-of-repair process. Thus, the policy for identifying core capabilities and support workloads does not plan for the development of future core capabilities because it excludes consideration of systems that are being developed or are in the early stages of being introduced into the forces. The process computes core work requirements biennially based on fielded weapon systems identified in defense war planning scenarios. Core Policy Does Not Require Forward Look The core policy does not require the consideration of depot maintenance capabilities for developmental systems and systems in early production since these systems are not yet identified in defense war plans or are identified in small numbers. As a result, the determination process does not consider workloads that will be needed to support future core capabilities that would result from new systems being fielded and the associated repair technologies, methods, and equipment. Also, expected decreases in the core workload supporting systems that are soon to retire and changes from in-house to contractor support on replacement or upgraded systems are not being adequately considered. If the services do not plan for the retiring systems' replacements in the military depot system, support for future core capabilities and the economic viability of the depots will be affected. The Navy's consideration of core support work related to its helicopter fleet illustrates how future capability needs are not being taken into account. Maintenance and repair on the H-46 utility helicopter currently provides much of the core support workload at the Navy's Cherry Point depot. The H-46 is to be phased out of the inventory and replaced by the V-22 tilt-rotor aircraft. The Cherry Point aviation depot accomplishes about 600,000 hours of work annually on the H-46, which represented about 15 percent of that depot's entire workload in fiscal year 2000. However, as the H-46s are retired, depot officials expect that workload to dwindle to zero by fiscal year 2012. Navy officials have decided that the V-22 engine will be supported commercially and are evaluating plans for all other V-22 support. Officials told us that they were considering outsourcing some component workloads, originally identified as requiring a core capability, in concert with current DOD policy preferences for outsourcing depot maintenance activities. While Cherry Point's core capability position looks favorable today, the process does not take into consideration the expected loss of H-46 work.
Similarly, as the Air Force’s C-141 cargo aircraft is being phased out of the inventory, the core methodology has provided for accomplishing little support work for the new generation C-17 cargo aircraft in military depots. Consideration of new and replacement workloads is important because of the advance planning time needed to establish an in-house capability. In some cases, it may take 5 years or more to establish this capability. For example, a depot business planner estimated that about 5 years would be needed from the time the core capability work requirement was first identified to fund, design, and build a C-5 painting facility, assuming that all went according to plan. Funding availability, priorities of this project relative to others, external events, and other factors could slow the acquisition of support resources. Timeframes for acquiring capabilities that are identified as core would typically be longer than this if the depot was not already formally assigned the workload. Core Policy Not Linked to Source-of-Repair Process Existing core policy is not directly linked to the source-of-repair decision process for new systems and major system upgrades, which negatively impacts the development of core capabilities. According to departmental and service policies, consideration of the need to support core capabilities is supposed to be a major factor in planning for life-cycle sustainment and making decisions on the source for the repair of new and upgraded weapon systems. Our review of recent and ongoing source-of-repair decisions, however, found that core capabilities are considered inconsistently, if at all, in many of the decisions on new systems and upgrades. The lack of linkage between these two processes contributes to the decline of future repair capability for critical mission-essential systems. In both 1998 and 1996, we reported that DOD’s new policy for determining source of repair for weapon systems had weaknesses that could impact the retention of core logistics capabilities that the military is supposed to identify and maintain to ensure the support of mission-essential weapon systems. We determined that (1) acquisition program officials had not followed the services’ approved processes for making source-of-repair decisions, (2) information concerning core capabilities and other input from logistics officials were not major factors in these decisions, and (3) weaknesses in guidance contributed to these conditions. Also, the Army Audit Agency and the Naval Audit Service issued reports in 2000 that identified similar deficiencies still occurring in those services.Army auditors concluded that system managers for 13 of 14 weapon systems identified as required for the Joint Chiefs of Staff warfighting scenarios had not performed complete and adequate source-of-repair analyses and specifically had not accomplished core assessments to identify workloads that were needed to support core capabilities. Navy auditors found that acquisition offices had not accomplished 80 of 179 (45 percent) required independent logistics assessments (the process used to identify and provide for logistics support requirements during weapon systems acquisition) and did not always disclose results of logistics assessments to program decisionmakers. Both cited inadequate, inconsistent, and conflicting acquisition and logistics guidance and uncertainty or lack of information on core support needs and repair analyses as contributing factors. 
During our current review of DOD's core process, we found that this overall condition has not changed. Acquisition policy and acquisition officials' preferences for using contractor support were reflected in source-of-repair decisions for new and upgraded systems going to contractors, with the result that the depots have not been receiving much new workload in recent years and may not in the future. In the Air Force, for example, 48 of the 66 systems and components being reviewed for source-of-repair decisions in March 2001 were at that time recommended for private sector support. We also reviewed some new systems and upgrades representing all the services and found that the services had decided or were leaning toward the private sector in 10 of the 13 cases for the bulk of the depot maintenance work. In those cases where the public sector is expected to get some portion of the work, it was typically on the older technology and legacy systems, while contractors were expected to perform most of the repairs on the newer technology items. In most of the cases, core capability issues had either not been considered or were not major factors in the decisions. In some instances, the final decision on systems had been delayed or stretched out for years, which may make it more difficult and more costly, and less likely, that the eventual decision would be for the military depots to perform this maintenance work. Implementing Procedures and Practices Further Compound Future Core Capability Concerns The services' core procedures and practices further raise concerns about the extent to which core capabilities are being established and preclude defense managers or the Congress from assessing the extent to which overall core policy objectives are being met. To put the methodology for determining standardized core requirements into effect, each service developed its own approach, criteria, and assumptions to adapt the methodology to individual circumstances. Each service has different procedures and practices for implementing the core methodology and for identifying and establishing core capabilities, and some of these practices reduce the development of core capabilities. These procedures and practices include the concept of capability for like workloads; the use of risk assessments for reducing the amount of core workload; the use of peacetime workload factors; and having insufficient peacetime workloads to retain core capability because the core process is not linked to defense planning and budgeting. Establishing Capability by Using Like Workloads Is Questionable The Air Force and the Naval Sea Systems Command, and to a lesser extent the Army, rely on the questionable concept of "like" workloads to identify core support workloads used to satisfy core requirements. The critical assumption is that peacetime work on like (similar) types of systems and repair processes provides sufficient skills and repair capabilities that government facilities, equipment, and maintenance personnel could, within the short timeframes required by national defense emergencies and contingencies, quickly and effectively transfer to new workloads on systems and equipment currently repaired in the private sector. The theory is that capabilities on a wide range of commodities would be transferable during a defense emergency to repair systems not currently maintained in the defense depots.
The like-workload concept as it is applied to specific weapon systems is portrayed in figure 3, and specific examples of concerns about the use of the concept in the various services are discussed below. The Air Force, the most extensive user of the concept of like work, focuses its efforts on providing its depots with the capabilities to accomplish broad categories of repairs. Officials compute core work requirements based on categories of equipment repair such as avionics, instruments, engines, and airframes rather than on specific weapon systems, which is the approach generally used by the Army, Navy, and Marines. Using professional judgment and knowledge of existing in-house work, officials then designate which maintenance workloads will be accomplished to satisfy the required level of repair capability in each category. To illustrate, maintenance workloads on the KC-135, C-141, and C-130 are designated as core workloads for Air Force depots to satisfy computed core capabilities for repairs in the large-airframe cargo aircraft category. As a result, repair workloads on some Air Force weapon systems that are heavily relied on in wartime planning scenarios are not identified as core support work. For example, only a very small amount of avionics workload for the C-17 aircraft—which is expected to be heavily used in all scenarios—is identified as core support work in the latest computation. Also, there are no in-house workloads on some mission-essential systems identified in war plans, notably the F-117, the E-8 (Joint Stars), and the U-2. While Air Force policy is to provide core capabilities for these systems through like workloads, the Air Force core capability calculations do not include these contractor-supported systems. The assumption that depots could quickly and easily transition to repair new and different weapon systems is questionable. It is unlikely that all needed core capabilities could be established in a timely manner because, in relying on the private sector, the services have not procured the support resources that would be required to establish in-house capability, and it would take time and funding to establish the required capability. For example, Air Force Materiel Command officials stated that it could take 2 years or more to build up a sufficient capability to handle major C-17 repairs if required. Even though one depot maintains other large cargo aircraft, it would not have the specialized and unique support equipment, technical data, and mechanics trained and certified on the unique and advanced C-17 features. For comparison purposes, the Warner Robins depot took about 2 years to effectively assume the C-5 workload after the San Antonio depot was closed. Warner Robins had been doing similar work for many years on other airlifters, the C-141 and C-130, and had access to C-5 technical data, depot plant equipment, and mechanics. Similarly, the Air Force relies on B-1 and B-52 workloads to support core capabilities for the B-2 airframe, which is repaired by a contractor. The assumption is that a military depot repairing the B-1 or B-52 could take care of emergency depot requirements for the B-2. However, the technology, repair processes, and equipment needed for the B-2 are much different from those used on the B-1 and B-52 fleets. Further, workers are not trained on the B-2's unique characteristics or modern repair techniques and do not have the proper clearances to accomplish repairs on the low observable characteristics of stealth systems.
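The category-based designation approach described above can be summarized in a few lines. The mapping below is illustrative only; it is built from the systems named in this report, not from actual Air Force core data, and it simply flags war-plan systems for which little or no in-house workload has been designated.

# Illustrative designation of core workloads by repair category (not actual Air Force data).
core_designations = {
    "large-airframe cargo aircraft": ["KC-135", "C-141", "C-130"],
}
# Systems identified in war plans, including some supported only by contractors.
war_plan_systems = ["KC-135", "C-130", "C-17", "F-117", "E-8 Joint STARS", "U-2"]
designated = {system for workloads in core_designations.values() for system in workloads}
gaps = [system for system in war_plan_systems if system not in designated]
print("War-plan systems with little or no designated in-house workload:", gaps)
# Prints: ['C-17', 'F-117', 'E-8 Joint STARS', 'U-2']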
The Naval Sea Systems Command employs a variant of the like-work concept, which identifies core capabilities based on the number and types of ships. Although Navy officials said all 316 ships in the Navy are mission-essential, the public shipyards primarily overhaul nuclear-powered ships and large-deck surface ships, and private shipyards repair most surface combatants, amphibious ships, and auxiliary support ships. Ship repair managers assume that, in an emergency, the public shipyards have the necessary facilities, equipment, and skilled personnel to repair any Navy ship and its components. This assumption includes those classes of ships and components currently maintained solely by contractors. It is unclear whether, in an emergency, the nuclear facilities, specialized support and test equipment, and dry dock space could be cleared and reconfigured and whether government workers could take over repairs on classes of ships currently maintained in the private sector. In contrast with the process used by the Air Force and for Navy ships, the Army, the Marines, and the Naval Air Systems Command focus more attention on performing repair workloads on specific weapon systems. Officials initially compute core capabilities by weapon system, making more explicit the linkage between the weapon systems that are tied to war planning scenarios and core capabilities and supporting workloads. Officials identify core capabilities based on the number of each specific weapon system identified in the war plan and generally assign at least a portion of the workload on each system and its subsystems to a military depot. As a result, these commands have some degree of active in-house workload on almost every weapon system identified in the war plans. Use of Risk Assessment Approach Can Hinder the Development of Capability Another area of concern in how the services compute core is the use of risk assessments to determine if work initially determined to be core support work could instead be provided by the private sector at an acceptable level of risk. The standard DOD core methodology was revised in 1996 to incorporate risk assessments as a way of evaluating repair capability in the private sector to determine whether capability could be provided by contractors rather than by a military depot. The Air Force makes extensive use of risk assessments to significantly reduce its computed in-house core capability; the Marine Corps and the Naval Sea Systems Command apply the concept in more limited fashion; and the Army and the Naval Air Systems Command did not use risk assessments at all. Air Force officials developed an extensive risk assessment process and criteria that identify private sector capability and reduce the Air Force's identified core capability on the basis of that available private sector capability. For example, for airframe repairs, the Air Force reduced its core capability by 66 percent through the risk assessment process. As a practical consequence, the Air Force's application of risk has resulted in at least some portion of the core support workloads needed to maintain every weapon system and commodity being identified as available for contracting out. Officials of the Naval Sea Systems Command and the Marine Corps said that they do risk assessments. However, these appear to be perfunctory and do not change how maintenance work is allocated.
As discussed earlier, the Naval Sea Systems Command initially identifies all ships as strategically necessary but allocates maintenance work to the public and private shipyards based on the type of ship and historical basing considerations. Marine Corps officials said that their last risk assessment was done as an undocumented roundtable discussion in 1998. For the 2001 core capability assessment, the Corps' computed core of 3.1 million hours was offset by 1.1 million hours because of the perceived availability of risk-acceptable contracted workload. The Marines reported a final core figure of 2 million hours to be accomplished in the public sector. Officials said the core process would be more meaningful if it influenced the assignment of repair work for new systems and was tied to the budget process. Conversely, the Army and the Naval Air Systems Command revised their processes to eliminate the private sector risk assessments and did not use them in their most recent core determinations. Army and Navy aviation officials said that they think risk assessments are not appropriate. They believe that having a real capability means that the depots need to have at least some workload on every mission-essential system. In the opinion of these officials, military items are generally best supported in the public sector and commercial items best supported in the private sector. The differing interpretations and applications of risk assessments can result in significant differences in the ultimate core capability requirement computed by each service and in the core support work assigned to the depots. If the result of the risk assessment process is to include private sector capability as a portion of the identified core logistics capability under 10 U.S.C. 2464, that in our view would be inconsistent with the statute. As we understand it, the risk assessment process was intended to assess whether existing private sector sources could provide logistics capability on mission-essential systems at an acceptable level of risk, reliability, and efficiency. While one could argue that, under 10 U.S.C. 2464 as it was worded prior to 1998, commercial capability could be considered as a portion of the identified core depot maintenance capabilities, we do not think such is the case under the current version of the statute. The provision was amended by the National Defense Authorization Act for Fiscal Year 1998 to state that "it is essential for the national defense that the Department of Defense maintain a core logistics capability that is government-owned and government operated (including government personnel and government-owned and operated equipment and facilities)." Similarly, section 2464 further provides that "the Secretary of Defense shall require the performance of core logistics workloads necessary to maintain the core logistics capabilities identified…at government-owned, government-operated facilities of the Department of Defense." Consequently, we do not view a risk assessment process implementing 10 U.S.C. 2464 that results in the inclusion of private-sector capabilities as a portion of the identified core logistics capabilities as consistent with the statute. Peacetime Workload Factors Affect Computed Capability Requirements Differences in the services' use of the methodology factor that reduces computed wartime requirements to peacetime workloads also raise concerns about the extent to which core capabilities are being developed. The factor reflects the ability of depots to surge (increase) work during an emergency. The Air Force, the Naval Air Systems Command, and the Marine Corps use the same factor; the Naval Sea Systems Command uses a smaller factor; and the Army does not use an adjusting factor. The factors used result in higher peacetime core workload requirements for the Army and the Naval Sea Systems Command, relative to their wartime needs, than for the other services. For example, in using a factor of 1.6, the Air Force assumes that in emergency situations, existing in-house facilities could increase their production by 60 percent by working additional hours. If the Army had used the same factor as the Air Force, its computed 2001 core capability support requirement would have been reduced from 9.8 million direct labor hours to 6.1 million hours. Conversely, if the Air Force had not used an adjustment factor, its computed 2001 core support requirement would have been increased from 18.2 million direct labor hours to 29.1 million hours.
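The effect of the adjustment factor can be verified with a few lines of arithmetic. The inputs below are the reported 2001 figures and the 1.6 factor cited above; the rounding is ours, and the sketch is intended only as a check of the what-if comparisons in the preceding paragraph.

def apply_surge_factor(wartime_dlh_millions, factor):
    # Peacetime core requirement = computed wartime requirement divided by the surge factor.
    return wartime_dlh_millions / factor

# Army: 9.8 million DLH computed with no factor; applying the Air Force's 1.6
# factor would reduce the requirement to about 6.1 million DLH.
print(round(apply_surge_factor(9.8, 1.6), 1))  # 6.1
# Air Force: the reported 18.2 million DLH already reflects the 1.6 factor;
# removing the adjustment implies a requirement of about 29.1 million DLH.
print(round(18.2 * 1.6, 1))  # 29.1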
Current Workloads Do Not Optimize the Development of Core Capabilities Our review identified concerns that, after computing the core capabilities, actual workloads assigned to the depots during peacetime are not always sufficient to fully support core capability requirements. Not meeting workload goals can mean that the workforce is getting less than optimal work experience on core workload. According to 10 U.S.C. 2464, DOD policy, and the core requirements determination process, the services are to assign sufficient peacetime workloads to the depots to maintain the expertise and competence in core capabilities. However, as discussed below, this is not happening in all cases. The volume of assigned peacetime workloads in the Army fell short of the 9.2-million-hour total core workload needed to support its core capabilities by about 1.4 million direct labor hours in fiscal year 2000 and about 1 million hours in fiscal year 2001. For example, the Army's most recent update of the core support work requirement for the Apache helicopter totals 420,000 direct labor hours for fiscal year 2001. However, its funded workloads assigned to military depots totaled only 126,000 direct labor hours in fiscal year 1999 and about 264,000 hours in fiscal year 2000. Depot officials told us the principal Apache aircraft work in the depot involves disassembly and overhaul of selected components that the contractor will later use in the remanufacturing process. Logistics officials pointed out that one reason peacetime work has lagged behind calculated core support workload requirements is the continuing trend of outsourcing maintenance services involving weapon system upgrades and conversions. The depot officials pointed out that, to alleviate the financial impact of the shortfall in actual workload, the Army established direct appropriation funding to reimburse its depots for fixed overhead costs associated with underutilized plant capacity. In fiscal years 2000 and 2001, the Army provided its depots a total of about $20 million in direct funding for underutilized capacity. Shortfalls also exist in the Air Force. For example, in fiscal year 2001, the Air Force anticipates about an 800,000-hour shortfall in depot-level software maintenance workload compared to its core capability support work requirement. Air Force officials originally computed a core work requirement of 3.7 million hours for software maintenance.
Air Force management reduced the computed requirement by 600,000 hours because the depots were not considered capable of accomplishing that much workload. As a result, the Air Force included only 3.1 million hours for software maintenance in the total 18.2-million-hour core work requirement reported to the Office of the Secretary of Defense. Even at this lower number, the Air Force expects to accomplish only about 2.9 million hours in 2001, increasing the real core shortfall by another 200,000 hours to a total shortfall of more than 800,000 hours. We also determined that the Air Force understated core support work for airframe repairs by 528,000 hours because tasked contractor logistics support systems were inadvertently omitted in the roll-up of core requirements. Additionally, the Air Force potentially understated hours for component workloads because officials could not support how wartime flying hours were converted into commodity repair hours. Air Force officials repeatedly identified capability shortfalls in qualified software technicians and engineers as one of their most severe concerns at the depots. The Air Force Materiel Command initiated a study of software maintenance to assess the ability of the depots to support future depot-level software workloads and to identify the steps needed to perform greater amounts of workload. The study noted that the three Air Force depots were experiencing difficulty in accomplishing about 2.6 million hours per year. The study recommended changes aimed at improving recruiting, hiring, paying, and retaining software maintenance personnel. In fiscal year 2000, the Marines anticipated a required depot core support workload of 2 million hours but executed only about 1 million hours. Officials told us that not all items could be worked on due to financial constraints, readiness requirements, and operational force priorities. They noted that tying the core process to the budget process would help resolve this problem. Because the biennial core computation process operates largely as a stand-alone exercise and is not explicitly linked to the planning, programming, and budgeting system or to DOD's strategic planning processes, it has little direct impact on resource allocation decisions and management priority setting. The identification of shortfalls in core capability, for example, does not generate budget requirements for making capital investments in the facilities, equipment, and other resources needed to establish the capability. The 1993 core policy statement directed that implementation plans and decisions be reflected in future annual planning and budget submissions, as well as be input to the depot maintenance strategic plan, but this has not been done. If the core process were tied more explicitly to the budget and strategic planning processes, the assignment of actual work to the depots should better support the establishment and continuation of required core capability. Results of DOD Initiatives to Improve Core and Core-Related Processes Are Uncertain While the Office of the Secretary of Defense and each service, to varying extents, have taken steps to improve core and core-related processes, it is not yet clear whether these initiatives will result in improvements to these processes. For example, a recently completed review of DOD's core process identified various alternatives for improving the core process.
The Deputy Under Secretary of Defense for Logistics and Materiel Readiness contracted for the review of core guidance and procedures used by the services to compute core capability requirements. The May 15, 2001 DOD core report provided information about each of the services’ core processes. According to officials, DOD continues to review the report and will not likely complete this process until the new administration announces how it intends to approach the management of logistics. The report concluded that (1) DOD’s depot maintenance core policy was incomplete and unclear, (2) service implementation was inconsistent, (3) the core methodology is not routinely used in DOD decision-making and is not linked to the defense budget system, and (4) capability requirements are not effectively addressed in the context of strategic planning. The study produced four sets of alternatives designed to improve and transform core policy and methodology into a management tool and explicitly integrate it into DOD’s strategic planning processes. Those alternatives, discussed in appendix I, ranged from making a few minor administrative adjustments to the core process, to making substantive changes to the process such as eliminating the risk assessment as a tool for reducing the core requirement, and to undertaking an extensive revamping of the process which would include the elimination of the requirement for maintaining a core capability in military depots. In October 2001, Office of the Secretary of Defense management selected the alternative that would streamline the existing core process and establish explicit linkage with the DOD planning, programming, and budgeting system. The Deputy Under Secretary for Logistics and Materiel Readiness issued new guidance regarding the implementation of core depot maintenance policy and methodology. Also, a joint working group is to be established to review the details of implementation procedures with final policy guidance to be issued by March 1, 2002. Similarly, the military services also have ongoing initiatives that will affect logistics processes, including core and the source-of-repair determination. Some of these initiatives are discussed in the next section of the report and in appendix I. In our June 2000 report we questioned the Department’s management of logistics improvement efforts. Our ongoing review of the Department’s logistics strategic planning process has identified additional areas where the Department can improve its logistics support planning. In addition, the recently completed Quadrennial Defense Review (QDR) may lead to changes in how DOD manages depot maintenance and other logistics activities as well as how the Department approaches core and core-related processes. The QDR involved a comprehensive strategic assessment of defense strategy, goals, requirements, and capabilities. DOD issued its report on the QDR on September 30, 2001 with the intent that it serve as the overall strategic plan required by the Government Performance and Results Act of 1993. The report’s section on modernizing DOD business processes and infrastructure discusses core functions and, as a general rule, states that any function that can be provided by the private sector is not a core government function. The report states that DOD will assess all its functions to separate core and non-core functions with the test being whether a function is directly necessary for warfighting. 
It expects to divide functions into three broad categories: (1) Functions directly linked to warfighting and best performed by the federal government. In these areas, DOD plans to invest in process and technology to improve performance. (2) Functions indirectly linked to warfighting capability that must be shared by the public and private sectors. In these areas, DOD will seek to define new models of public-private partnerships to improve performance. (3) Functions not linked to warfighting and best performed by the private sector. In these areas, DOD will seek to privatize or outsource entire functions or define new mechanisms for partnerships with private firms and other public agencies. It is not clear where depot maintenance and other logistics functions contributing to weapon systems sustainment and performance will be placed in this framework. If it were placed in the second category, the implication is that it would not be core. The impact of 10 U.S.C. 2464 from such determinations is uncertain. Investments In Facilities, Equipment, and Personnel Have Been Insufficient Investments in facilities, equipment, and human capital have not been sufficient in recent years to ensure the long-term viability of the military services’ depots. This situation is in part due to the weaknesses we identified in the core policy and related implementation practices. Also contributing is DOD’s downsizing of depot infrastructure and workforce. As a result, the investment in capital equipment and human capital resources for DOD’s depot facilities declined significantly. Today’s military depot capability is primarily in the repair of older systems and equipment. At the same time, the average age of the depot worker is 46 with about one-third eligible to retire within the next five years. The Department has only recently begun to consider changes to core capability policies that will generate the workloads, the facilities, and the personnel required to support future core capabilities in government facilities. Consequently, the Department lacks strategic and related service implementation plans that address the development of future capabilities for both the maintenance facilities and the workforce. Future Viability of Maintenance Depots Affected by Lack of Investment in New Capability Capital investments in depot facilities and plant equipment declined sharply in the mid-1990s as a consequence of defense downsizing, depot closures and consolidations, and DOD plans to increase reliance on the private sector for logistics support of new weapon systems. As a result of DOD’s lack of investment in its internal depot system—particularly, by not assigning new and upgraded systems to the depots for repair—the military depot system is aging and is not keeping up with the latest technologies. In recent years, funding has started to increase slightly as the services have recognized the need to modernize the depots. As with any business, modernizing and refurbishing plant and equipment for optimal operating efficiency, as well as acquiring new capabilities and cutting-edge technologies linked to new workloads, are important to future viability of the military depots. Figure 4 depicts depot investments from fiscal years 1990 through 2000 from the three primary funding sources—the capital purchases program, military construction, and new weapon systems procurement and upgrade programs. The depiction has been adjusted for inflation. 
Of the estimated $3 billion in capital investment funding the military depots received between fiscal years 1990 and 2000, about 60 percent was for the capital purchases program that buys equipment to replace old depreciated equipment. Funding for this program was much lower during the 1990s than under its predecessor programs in the 1980s. More recently, funding levels have increased; but almost one-half of the funds went to meet environmental requirements, to purchase general use computers, and to do minor construction—requirements that may be needed for business purposes but typically do not increase maintenance production capabilities or add new technological capabilities to accomplish new workloads. The military construction appropriation funds new and replacement depot facilities. Military construction represents about 26 percent of the total depot capital investments between 1990 and 2000. For example, a 1998 project at Corpus Christi Army Depot provided a power train cleaning facility to add the capability to clean new, specialized metals on Apache and Blackhawk helicopters. The bulk of military construction funding has gone to replace or modernize existing facilities or to increase capacity. Since the military depots have not been assigned much new work, they have received relatively little funding from the third source of funds, procurement funds provided by weapon system program offices. Available data shows that the depots received about $403 million through capital investments from program offices between 1990 and 2000—representing about 14 percent of the total capital investment in the depots during that period. This source is the most important in terms of adding new capabilities such as modern repair technologies. System program managers are responsible for providing these funds to support new weapon systems being acquired. A complete and accurate accounting of the historical and planned amounts contributed to capitalizing the depots by weapon system program offices does not exist since the services do not centrally track and account for these funds. With the repair of newer technology items remaining with the private sector for most new systems, the military depots have not been getting the peculiar support equipment, technical data, and other resources needed to build a depot capability for supporting the new systems. For example, the Air Force recently attempted to identify contract workloads that could be brought in-house to help it meet the 50-percent limit on private sector performance of depot maintenance set forth in 10 U.S.C. 2466 but found that the depots were unable to take on these workloads without investment in new capability. The Aging Workforce Presents Significant Human Capital Challenges for Succession Planning DOD faces significant management challenges in succession planning to maintain a skilled workforce at its depot maintenance facilities. As in many other government organizations, relatively high numbers of civilian workers at maintenance depots are nearing retirement age. These demographics, coupled with the highly skilled nature of depot maintenance work and the length of time required to train new hires and support their progression to a journeyman level and beyond, create hiring, training, and retention challenges. Competition with the private sector for skilled workers and pay issues add to the current challenging situation. Reductions in the civilian workforce by more than half since the end of the Cold War have left an aging depot workforce.
As a result of depot closures and other downsizing initiatives, the civilian depot workforce has been reduced by about 60 percent since 1987. Many of the youngest industrial workers were eliminated from the workforce, while at the same time there were few hiring actions. An aging depot workforce has advantages in terms of the skill levels of the employees, but it also has disadvantages, such as lack of familiarity with the newest technologies because the latest weapons have not generally been repaired in the military depots. With large numbers of retirement-eligible personnel, depot managers are concerned about the need to manage the losses of critical skills and rebuild the talent that is needed to maintain a high-quality workforce. These skills and institutional experience are necessary to maintain an effective and flexible workforce that is capable of performing the required work efficiently and effectively. If production capability similar to current levels is to be maintained, many new workers will be needed. The depot workforce's average age of 46, and the roughly one-third of employees eligible to retire within the next 5 years, are comparable to the findings of other studies of DOD's total civilian workforce. Table 1 provides average age and retirement eligibility data for each of DOD's major depot activities. As indicated in table 1, by fiscal year 2005, about 30 percent of the current employees will be retirement-eligible. The percentage is highest in the Army at 37 percent and lowest in the Air Force at 27 percent. With an average age of 50, the Army depots have the oldest workers and the Air Force the youngest, with an average age of 45. Two facilities—one Air Force and one Army—share the position of having the oldest workers. The extent of the aging depot workforce problem is influenced by the extent to which the depots retain work requirements in the future. If current levels are retained, large numbers of new workers will be needed; but if workload levels continue to decline, the problem will be less severe. Marine Corps officials told us that while the Marine Corps has an aging workforce problem, the primary challenge is lack of work. They noted that over the next 2 years, the Marine Corps is projecting a 26-percent reduction in its depot maintenance workforce as older systems are phased out and maintenance and repair work for new systems goes to the private sector. Thus, the aging workforce issue is less problematic if this workload reduction occurs. In most cases, depot managers report they have been relatively successful in meeting their recruitment goals in the past; but they said they have had difficulty hiring younger workers and sufficient numbers of workers with specialized skills such as software maintenance. A Department of Labor standard sets a 4-year apprenticeship for acquiring trade skills, and some depot managers said workers in some of the industrial skill areas require 3 or more years of training before they reach the journeyman level. Depot managers indicate that they are behind where they should be in hiring new workers to revitalize human capital resources. Surveys of young adults entering the general workforce indicate that fewer are considering careers in government, and this is particularly true for the depots since workers are uncertain what future there is for these activities.
A national shortage of software engineers, skilled mechanics, metal workers, machinists, and some other skill areas exacerbates the military depots’ human capital challenges since the military facilities are competing with the private sector for workers. Current personnel policies, procedures, and other factors may not support timely replacement of depot personnel. As previously noted, many highly skilled workers require 3 or more years to develop technical expertise under the on-the-job tutelage of experienced workers. Inflexible hiring practices inhibit timely hiring, and the historical recruiting pool of skilled workers has been reduced as the number of military maintenance personnel has declined. Strategic Plan to Shape Future Maintenance Infrastructure and Human Capital Investment Requirements Is Needed The services have lately recognized the need to address depot maintenance infrastructure and workforce issues, but improvement plans are still being developed and actions are in the early stages. No overall plan exists that ties investments in depot maintenance facilities and plant equipment with future workloads and, in turn, with human capital needs. Officials have identified significant funding requirements associated with hiring, training, and retaining depot workers. To replace retiring workers, the services will have to greatly increase the rate of new hires. Some Recognition That Action Is Needed None of the services has a comprehensive depot infrastructure plan that integrates expected future core capabilities with necessary capital investments required to establish that capability and which identifies budget requirements to implement that plan. In response to Congressional concerns in this area (that evolved from the Air Force statements that it cannot address its 50-50 workload imbalance by shifting some private sector work to military depots because of not having the required depot support resources), the Air Force is working on such a plan. Air Force officials expect the depot infrastructure plan to be completed in December 2001. Since this plan is not yet available, we do not know whether it will provide the roadmap needed to effectively manage this critical resource. While Army, Navy, and Marine officials have undertaken some initiatives intended to improve their depot management, these efforts do not provide a comprehensive plan to shape future maintenance infrastructure. Given the preliminary status of these efforts, it is unclear to what extent they will mitigate or resolve identified deficiencies in this area. Further, we noted that generally each service is studying and pursuing workforce-shaping efforts independently. Current initiatives to revitalize the depot personnel workforce may not completely resolve the potential personnel shortfall. For example, efforts to expand the apprenticeship, cooperative training, and vocational-technical programs are just starting and involve relatively small numbers to date. Increased funding to support expanded training needs has not been completely identified and programmed, and the priority of this initiative relative to other military requirements is questionable. Personnel officials of the Air Force Materiel Command, for example, identified a need for $326 million over the next 5 years to implement its human capital initiatives, including payment incentives and training costs. Only $15 million has been approved. 
Related efforts to develop a multi-skilled workforce essential to more efficient operations of the depots have been limited. Very importantly, future requirements for hiring and training a workforce capable of working on new systems and high-technology repair processes are not fully known. As discussed earlier, gaps and deficiencies in core policies and implementation limit forward-looking actions to identify and acquire future required capabilities. DOD officials are also looking to better utilize and expand existing authorities under the Office of Personnel Management. For example, the 1990 Federal Employees Pay Comparability Act provides for the use and funding of recruitment activities, relocation bonuses, and retention allowances; but the provisions have been used only for white-collar workers. DOD is seeking to expand the act's coverage to wage-grade employees at the depots and arsenals, and it is considering a legislative package of additional authorities that may also be needed. These proposals are designed to make it easier to hire workers, including ex-military personnel, and to raise monetary incentives to attract and retain needed talent in areas of shortages and direct competition with the private sector. These areas include software maintenance, engineering, aircraft mechanics, and other skill categories. Another issue receiving attention recently is development of an alternative hiring system to replace the existing system, which defense personnel specialists say is cumbersome and untimely. No Overall Strategic Plan Logistics activities represent a key management challenge. In our January 2001 high-risk series report, we designated strategic human capital management as a new government-wide high-risk area because of the pervasive challenge it represents across the federal government. In our recent performance and accountability report on defense, we reported that DOD faces significant challenges in managing its civilian workforce. The sizeable reduction in personnel since the end of the Cold War has led to an imbalance in age, skills, and experience that is jeopardizing certain acquisition and logistics capabilities. DOD's approach to the reductions was not oriented toward reshaping the makeup of the workforce. DOD officials voiced concerns about what was perceived to be a lack of attention to identifying and maintaining a basic level of skills needed to maintain in-house industrial capabilities as part of the defense industrial base. We concluded that these concerns remain today and are heightened by DOD's increased emphasis on contracting for many of its functions. Maintenance is an important element of those activities, and DOD is at a critical point with respect to the future of its maintenance programs, which are linked to its overall logistics strategic plan. However, it is unclear what future role is planned for the military depots in supporting the Department's future maintenance program. There is no DOD-wide integrated study effort for depot workers and related logistics activities similar to the extensive review of the civilian acquisition workforce undertaken by the Acquisition 2005 Task Force. The Under Secretary of Defense for Acquisition, Technology and Logistics established the task force to take a comprehensive look across the services to identify human capital challenges and solutions as well as the resources needed to implement them.
The October 2000 final report of the acquisition task force noted that to meet the demands caused by an acquisition workforce retirement exodus in 3 to 5 years, implementation of recommended initiatives had to begin by the next quarter. Before DOD can know the magnitude of the challenge of revitalizing its depot facilities and equipment and its depot workforce, it must first know what its future workloads will be; what facility, equipment, and technical capability improvements will be required to perform that work; and what personnel changes will be needed to respond to retirements and workload changes. Since the services have not yet conducted an assessment to enable the identification of future requirements in sufficient detail to provide a baseline for acquiring needed resources, they are behind in identifying solutions and required resources to implement them. Policy Gaps Could Lead to Shortfalls in Non-Depot Maintenance Logistics Capabilities Regarding non-depot maintenance logistics activities, the Department has not established policies or processes for identifying core capabilities for activities such as supply support, engineering, and transportation. Without identifying those core logistics activities that need to be retained in-house, the services may not retain critical capabilities as they proceed with contracting initiatives. The resulting shortfalls in non-depot maintenance logistics capability could impact the Department’s ability to effectively support required military operations. Officials of the Office of the Secretary of Defense have stated that DOD has not identified any core capabilities nor implemented a core determination process for any logistics activities other than depot maintenance. As we understand it, DOD does not believe that 10 U.S.C. 2464 necessarily includes logistics functions other than depot maintenance. We believe that notwithstanding any lack of clarity in the coverage of 10 U.S.C. 2464, a well-thought-out and well-defined policy and process for identifying core requirements in other areas of logistics is necessary to maintain the government’s capability to support its essential military systems in time of war or national emergency. Resolving this policy issue is becoming more important as DOD increases outsourcing and develops new strategies to rely on the private sector to perform many logistical support activities. We note that the September 2001 QDR report discusses DOD’s plans to assess support functions to identify core from non-core functions. The current version of 10 U.S.C. 2464 is not specifically limited to depot maintenance—it refers generally to “core logistics capabilities.” On the other hand, the operative provisions of 10 U.S.C 2464 are set forth in terms of capabilities needed to maintain and repair weapon systems and other military equipment and the workloads needed to accomplish those activities; these are functions encompassed within depot maintenance as defined by 10 U.S.C. 2460. While the coverage of 10 U.S.C. 2464 is not clear, we nevertheless think that from an operational standpoint, the core identification process ought to include those logistics functions that are necessary to support the depot maintenance on mission essential weapons and equipment. Section 2464 of title 10 is aimed at maintaining the government’s capability to support its essential military systems in time of war or national emergency. 
We think that it is reasonable to expect that DOD will include in the core process those logistics functions that are determined to be necessary to achieve such a result. Providing military readiness through the logistics support of military forces in an operational environment requires a complex set of functions and activities that includes maintenance, supply support, transportation, engineering, and others. In recent years, DOD has contracted for more of these activities. However, the Department has not laid out a strategic framework describing what combination of public and private sector support is expected as an end state and why certain activities or positions should be retained as government-performed activities. In a recent report, we noted that operating command officials have raised concerns about the impact on their operations that may result from expanding the use of contractors. Among their concerns was that increased contracting could reduce the ability of program offices to perform essential management functions. During this review, officials told us that they have experienced increasing problems in fulfilling oversight responsibilities because they cannot obtain adequate insight into contractor-supported programs. Additionally, logistics officials at depots and service headquarters have also raised concerns about the need to retain in-house technical and management capabilities in functional areas such as engineering and supply management. Because of the criticality of these and other logistics activities, a core assessment would improve the Department's ability to manage these activities and to better determine the capabilities that should be retained in-house and those that should be available for competitive sourcing. Conclusions Serious weaknesses exist in the Department's policy and practices for developing core depot maintenance capabilities, and these weaknesses are creating gaps between actual capabilities and those that will be needed to support future national defense emergencies and contingencies. If the existing policy is not clarified and current practices continue, the military depots will not have the equipment, facilities, and trained personnel to work on, and provide related logistics support for, many of the weapon systems and related equipment that will be used by the military in the next 5 to 15 years. While the Department states that it intends for its depots to have these capabilities, actual practices are much different. Core policy does not adequately take into consideration future systems repair needs and the impact of retiring systems on developing future capabilities. The core policy is not linked to the source-of-repair process. Also, other individual service practices negatively impact the establishment of future core capabilities and hinder management oversight. Additionally, investments in new facilities, equipment, and workforce training and revitalization have been limited for an extended period of time. Lastly, there is no strategic plan, and there are no associated service implementation plans, to create and sustain a viable depot maintenance capability. Regarding non-depot maintenance logistics activities, core policies and implementing processes do not exist. Without such policies, and in the absence of a strategic approach to determining what kinds of logistics support, and how much, should be retained in-house, the Department may inadvertently contract for logistics capabilities that need to be performed in-house to meet readiness and contingency needs.
Recommendations for Executive Action To enhance the management of core logistics capabilities, particularly for depot maintenance, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology and Logistics, in conjunction with the appropriate military service activities, to take the following actions: Revise depot maintenance core policy to include a forward look that incorporates future systems and equipment repair needs when developing core capability requirements and a direct link to the source-of-repair process. Revise depot maintenance core implementation procedures and practices to (1) establish criteria for determining what it means to have a capability in military depots to perform maintenance on mission-essential systems in support of national defense emergencies and contingencies; (2) prohibit the use of the risk assessment to the extent it results in the inclusion of private-sector capability within identified core capabilities; (3) clarify the use of the adjustment factor and other elements of the computation methodology; and (4) link core requirements to the budget process to ensure adequate funding of core support workload requirements. Establish expedited milestones for developing strategic and related implementation plans for the use of military depots that would identify desired short- and long-term core capabilities and associated capital investments and human capital needs. These plans at a minimum should (1) delineate workloads to be accomplished in each service's depots, in other services' depots, by contractors at their own sites, and by contractors at government sites; (2) discuss the role of in-house maintenance capability as an element of each service's ability to respond to national defense emergencies and contingencies; (3) identify infrastructure improvements designed to operate more efficiently; and (4) address human capital needs and the specific actions that will be taken to meet them. Establish milestones and accountability for developing policies to identify core logistics capabilities for non-maintenance activities to ensure in-house retention of the capabilities needed for an emergency. Matter for Congressional Consideration Congress may wish to review the coverage of 10 U.S.C. 2464 as it relates to non-maintenance logistics activities such as supply support, transportation, and engineering, and, if it deems it appropriate, clarify the law. Agency Comments and Our Evaluation In commenting on a draft of this report, the Department concurred with our recommendations to improve core depot maintenance policies and procedures and to develop strategic and implementation plans for maintenance depots. Appendix IV of this report contains the Department's full response. The Department did not concur with our recommendation to establish milestones and accountability for developing policies to identify core logistics capabilities for non-maintenance activities. The Department stated that it has not identified any core logistics capabilities beyond those associated with depot maintenance and repair as that term is defined in 10 U.S.C. 2460. Therefore, the Department saw no need to establish milestones and accountability for developing core policies for non-maintenance activities. In further discussions of this matter, officials reiterated their earlier comments that the coverage of 10 U.S.C. 2464 for non-maintenance activities was not clear. We recognize that there is some question about the applicability of 10 U.S.C.
Thus, we included a matter for congressional consideration in this report, noting that the Congress may wish to consider reviewing and clarifying the intent of 10 U.S.C. 2464 as it relates to non-maintenance logistics activities. We continue to believe that identifying core capabilities for other logistics activities would improve the Department's ability to manage these activities and better support business decisions regarding whether functions and capabilities should be retained in-house. Providing military readiness through the logistics support of military forces in an operational environment requires a complex set of functions and activities such as maintenance, supply support, transportation, and engineering. The interrelatedness of the entire spectrum of logistics activities argues that attention to core capabilities is important to non-maintenance as well as depot maintenance activities. For example, program managers and depot officials have raised management concerns, including oversight of weapon systems support and retention of in-house technical skills and expertise, given increased outsourcing of logistics activities. Further, the best practices of private sector companies, business reengineering principles, and OMB A-76 guidance all support the importance of an enterprise determining which vital and cost-effective functions and business processes should be retained in-house and which are appropriate for outsourcing. Our recommendation that the Department extend its core analysis beyond wrench-turning maintenance activities to include those other logistics activities that are linked to the depot maintenance function is intended to ensure that the Department appropriately considers what specific activities should be retained in-house to sustain essential warfighting capability. We continue to believe it should be adopted. We are sending copies of this report to the Secretary of Defense, the Secretary of the Army, the Secretary of the Navy, the Secretary of the Air Force, the Commandant of the Marine Corps, and the Director of the Office of Management and Budget. The scope and methodology for this review are described in appendix II. If you have questions about this report, please call me at (202) 512-8412 or Julia Denman at (202) 512-4290. Additional contacts and staff acknowledgments are provided in appendix III.
Appendix I: DOD Initiatives That Could Affect Core and Core-Related Processes
Proposed Office of the Secretary of Defense Alternatives for Revising the DOD Core Process
Alternative 1 proposes updating and consolidating existing DOD core-related policy and guidance, explicitly addressing core-related laws. It would not involve any significant changes to the core methodology. This alternative would somewhat realign and standardize the categories in which the services report core maintenance workloads. Core depot maintenance capability requirements would continue to be computed biennially, addressing only existing systems, and the overall core determination process would continue to be relatively independent of the DOD planning, programming, and budgeting system. Alternative 2 proposes building on the first alternative by streamlining the existing core methodology and establishing an explicit linkage with the DOD planning, programming, and budgeting system.
It also would divide the core methodology into two distinct parts to more clearly distinguish between core capability requirements and the depot maintenance workloads needed to satisfy those requirements. Detailed core computations would be performed on a biennial basis in conjunction with the planning, programming, and budgeting system in order to address both requirements for new systems and changes to existing systems. Also, core computations would be reviewed annually to assess the impact of unanticipated budgetary adjustments. Alternative 3 proposes building on the second alternative by incorporating a value-driven source-of-repair evaluation process for workloads that are not required to support core depot maintenance capabilities. This appears to be a more prescriptive expansion of the current version of the core methodology concerning the types of analysis that should be done as a part of the value-driven decision. Depending on the amount of workload involved and the ultimate source-of-repair decisions reached through the value-driven process, implementation of alternative 3 could necessitate issuance of waivers from the 10 U.S.C. 2466 (50-50) requirements. Alternative 4 proposes doing away with the core process as it is known today and using a value-driven source-of-repair evaluation process for all depot maintenance workloads. In this context, it would be used to allocate depot maintenance workloads among public, private, and integrated maintenance activities. It could not be implemented without the revision or repeal of 10 U.S.C. 2464, 10 U.S.C. 2466, and 10 U.S.C. 2469. In October 2001, DOD managers selected alternative 2 and issued new implementation guidance. Improvement efforts were ongoing at the time this report was issued.
Air Force Initiatives
In fiscal year 2000, the Air Force exceeded the 50-percent limit set forth in 10 U.S.C. 2466 on the amount of depot maintenance work that can be performed in the private sector. Largely because of this, we found a heightened awareness of the need to put more emphasis on incorporating core capability analysis with the source-of-repair process to drive some future workloads into the military depots. Air Force officials have taken some steps designed to better integrate the source-of-repair process and logistics considerations with acquisition program decisions. For example, senior Air Force officials issued a series of policy memos in 1999 and 2000 aimed at integrating the source-of-repair process with acquisition program decisions. The intent was to ensure that sustainment plans for new and modified weapon systems consider the future impacts on depot workloads allocated to the public and private sectors. These changes are designed to ensure that core capability, life-cycle costs, and other logistics considerations such as the 50-50 rule are considered at all stages of the acquisition process and figure prominently in decisions on lifetime support. Officials also revised guidance to incorporate recommended improvements and to specify both the acquisition and sustainment communities' roles and responsibilities. While these are steps in the right direction, we have not yet seen substantive change reflected in source-of-repair decisions. Materiel Command officials acknowledged that although the Air Force has made an effort to identify systems to redirect for repair by a military depot, program office officials have been reluctant to make changes.
Officials said that since program funds to cover the acquisition of technical data, depot plant equipment, and other resources needed to establish capability in military depots have not been programmed, there is little flexibility in the short term. In a March 2001 hearing held by the House Committee on Armed Services, Air Force officials said they were working on a longer-term plan to consider options for reassigning some new systems maintenance work to Air Force depots. This plan is expected to be completed in December 2001, but it is uncertain whether any workloads will be identified for reassignment to an Air Force depot for repair.
Navy Initiatives
The Navy is in the early stages of implementing a process to improve its management of aviation maintenance issues; even at this early phase, however, Navy officials have identified core support repair work at the Navy's North Island depot for the F/A-18 E/F, its newest fighter upgrade. In August 2000, the Naval Air Systems Command instituted a Depot Program Management Board to improve its source-of-repair process. The board is intended to provide corporate management of the naval aviation industrial enterprise, which encompasses the combined capabilities and resources of organic Navy, interservice, and commercial aviation depots. The board includes key logistics and acquisition officials from within the Command whose responsibilities and authority have a major impact on the size, shape, and cost of the naval aviation industrial base. Its responsibilities include determining and sustaining core naval aviation industrial capability and capacity and guiding best-value, industrial source-of-repair decisions. At its inaugural meeting in August 2000, the board concluded that the industrial enterprise needed a more unified corporate source-of-repair decision process to ensure that the technology for core capability is maintained. The process is still on the drawing board, and implementing instructions have not yet been developed. However, Navy officials say that the new process influenced the 2001 Navy decisions requiring repair work to support core capability for the F/A-18 E/F at the North Island depot.
Army Initiatives
The Army is attempting to improve the cost-effectiveness of its depot maintenance program by making better use of the industrial capability it currently maintains and increasing the amount of work assigned to its depots and arsenals, but the long-term impact is uncertain. In July 1999, the Assistant Secretary of the Army for Acquisition, Logistics and Technology issued guidance that gave the Army Materiel Command the responsibility for achieving optimal efficiency within the organic depot system. Prior to 1999, the acquisition community operated under policy guidance advocating contractor performance and the development of long-term support relationships with private sector contractors. Some officials believe that Army policy and practice now aim to make better use of the Army depots and achieve improved efficiencies. The Army also revised its acquisition guidance to require a source-of-repair decision by acquisition milestone two, the beginning of engineering and manufacturing development.
Appendix II: Scope and Methodology
During this review, we visited and obtained information from the Office of the Secretary of Defense and the Army, Navy, and Air Force headquarters, all in the Washington, D.C., area; Army Materiel Command headquarters in Alexandria, Virginia, and two subordinate Army commands (the Tank-Automotive and Armaments Command, Warren, Michigan, and the Aviation and Missile Command, Huntsville, Alabama); the Naval Sea Systems Command, Arlington, Virginia, and the Norfolk Naval Shipyard, Norfolk, Virginia; the Naval Air Systems Command in Patuxent River, Maryland, and Naval Air Depots at North Island, California, and Cherry Point, North Carolina; the Marine Corps Materiel Command and Logistics Base in Albany, Georgia; the Air Force Materiel Command at Wright-Patterson Air Force Base, Ohio, and the Ogden Air Logistics Center in Ogden, Utah; and the Joint Depot Maintenance Analysis Group, Wright-Patterson Air Force Base, Ohio. To determine whether DOD has implemented an effective core depot maintenance policy, we reviewed defense core policy and applications from a historical perspective to trace their development and use in decision-making. We reviewed the standard core methodology developed by DOD, changes in the methodology, and the specific procedures and techniques used by the military services to compute core requirements. We also obtained and reviewed logistics and acquisition policies and procedures for sustaining weapon systems, including source-of-repair and other decision tools. We obtained historical core computation data to identify trends in core workloads. We compared and contrasted the services' methodologies for computing core and for making source-of-repair decisions. We evaluated recent and pending maintenance decisions to determine their basis and support and the current status of the systems being reviewed. We reviewed a recent departmental report that evaluated the services' procedures for computing core requirements and set out alternatives for consideration of improvements. To determine the extent to which DOD's investments in facilities, equipment, and human capital are adequate to support the long-term viability of military depots, we reviewed current service efforts to address depot issues and concerns and emerging business strategies, including plans to modernize and recapitalize the depots. We also issued a data call and received information from all 19 major defense depots. The purpose of the data call was to gain the local perspective of depot officials on recent events affecting business operations and to obtain data on their plans, business strategies, and capital investments. We gathered and summarized information on the size and scope of depot activities, new repair workloads received or planned for the depots, and workloads lost (or expected to be lost) for fiscal years 1995-2005. We summarized recent and planned investments in depot plants and equipment to determine the amount, nature, and trend in capital investments. We reviewed plans to address human capital issues, in particular the hiring and training plans to replace an aging maintenance work force, cost estimates, and legislative proposals being considered to address these issues. We also relied on our extensive and continuing work on human capital issues, both in the defense environment and the federal government as a whole.
To determine the extent to which DOD has identified core capability for logistics activities other than depot maintenance, we discussed with officials their perspectives on core legislation and their historical responses to congressional requirements. We relied also on our previous work on the A-76 process and prior reviews of logistics activities and plans. We conducted our review from September 2000 through June 2001 in accordance with generally accepted government auditing standards.
Appendix III: GAO Contacts and Staff Acknowledgments
GAO Contacts
Acknowledgments
In addition, John Brosnan, Raymond Cooksey, Bruce Fairbairn, Johnetta Gatlin-Brown, Jane Hunt, Steve Hunter, Glenn Knoepfle, Ron Leporati, Andrew Marek, Fred Naas, and Bobby Worrell contributed to this report.
Appendix IV: Comments by the Department of Defense
Related GAO Reports
Defense Logistics: Strategic Planning Weaknesses Leave Economy, Efficiency, and Effectiveness of Future Support Systems at Risk (GAO-02-106, Oct. 11, 2001).
Defense Logistics: Air Force Lacks Data to Assess Contractor Logistics Support Approaches (GAO-01-618, Sept. 7, 2001).
Human Capital: Major Human Capital Challenges at the Departments of Defense and State (GAO-01-565T, Mar. 29, 2001).
Defense Maintenance: Sustaining Readiness Support Capabilities Requires a Comprehensive Plan (GAO-01-533T, Mar. 23, 2001).
Major Management Challenges and Program Risks: Department of Defense (GAO-01-244, Jan. 2001).
High-Risk Series: An Update (GAO-01-263, Jan. 2001).
Depot Maintenance: Key Financial Issues for Consolidations at Pearl Harbor and Elsewhere Are Still Unresolved (GAO-01-19, Jan. 22, 2001).
Depot Maintenance: Action Needed to Avoid Exceeding Ceiling on Contract Workloads (GAO/NSIAD-00-193, Aug. 24, 2000).
Defense Logistics: Integrated Plans and Improved Implementation Needed to Enhance Engineering Efforts (GAO/T-NSIAD-00-206, June 27, 2000).
Defense Logistics: Actions Needed to Enhance Success of Reengineering Initiatives (GAO/NSIAD-00-89, June 23, 2000).
Defense Logistics: Air Force Report on Contractor Support Is Narrowly Focused (GAO/NSIAD-00-115, Apr. 20, 2000).
Human Capital: Strategic Approach Should Guide DOD Civilian Workforce Management (GAO/T-NSIAD-00-120, Mar. 9, 2000).
Depot Maintenance: Air Force Faces Challenges in Managing to 50-50 Ceiling (GAO/T-NSIAD-00-112, Mar. 3, 2000).
Military Base Closures: Lack of Data Inhibits Cost-Effectiveness Analyses of Privatization-in-Place Initiatives (GAO/NSIAD-00-23, Dec. 20, 1999).
Depot Maintenance: Army Report Provides Incomplete Assessment of Depot-type Capabilities (GAO/NSIAD-00-20, Oct. 15, 1999).
Depot Maintenance: Workload Allocation Reporting Improved, but Lingering Problems Remain (GAO/NSIAD-99-154, July 13, 1999).
Air Force Logistics: C-17 Support Plan Does Not Adequately Address Key Issues (GAO/NSIAD-99-147, July 8, 1999).
Army Logistics: Status of Proposed Support Plan for Apache Helicopter (GAO/NSIAD-99-140, July 1, 1999).
Air Force Depot Maintenance: Management Changes Would Improve Implementation of Reform Initiatives (GAO/NSIAD-99-63, June 25, 1999).
Navy Ship Maintenance: Allocation of Ship Maintenance Work in the Norfolk, Virginia, Area (GAO/NSIAD-99-54, Feb. 24, 1999).
Army Industrial Facilities: Workforce Requirements and Related Issues Affecting Depots and Arsenals (GAO/NSIAD-99-31, Nov. 30, 1998).
Navy Depot Maintenance: Weaknesses in the T406 Engine Logistics Support Decision (GAO/NSIAD-98-221, Sept. 14, 1998).
Defense Depot Maintenance: Contracting Approaches Should Address Workload Characteristics (GAO/NSIAD-98-130, June 15, 1998).
Defense Depot Maintenance: Use of Public-Private Partnering Arrangements (GAO/NSIAD-98-91, May 7, 1998).
Defense Depot Maintenance: DOD Shifting More Workload for New Weapon Systems to the Private Sector (GAO/NSIAD-98-8, Mar. 31, 1998).
Defense Depot Maintenance: Information on Public and Private Sector Workload Allocations (GAO/NSIAD-98-41, Jan. 20, 1998).
Outsourcing DOD Logistics: Savings Achievable But Defense Science Board's Projections Are Overstated (GAO/NSIAD-98-48, Dec. 8, 1997).
Navy Regional Maintenance: Substantial Opportunities Exist to Build on Infrastructure Streamlining Progress (GAO/NSIAD-98-4, Nov. 13, 1997).
Air Force Depot Maintenance: Information on the Cost-Effectiveness of B-1 and B-52 Support Options (GAO/NSIAD-97-210BR, Sept. 12, 1997).
Defense Depot Maintenance: Uncertainties and Challenges DOD Faces in Restructuring Its Depot Maintenance Program (GAO/T-NSIAD-97-112, May 1, 1997, and GAO/T-NSIAD-97-111, Mar. 18, 1997).
Defense Outsourcing: Challenges Facing DOD as It Attempts to Save Billions in Infrastructure Costs (GAO/T-NSIAD-97-110, Mar. 12, 1997).
High-Risk Series: Defense Infrastructure (GAO/HR-97-7, Feb. 1997).
Air Force Depot Maintenance: Privatization-in-Place Plans Are Costly While Excess Capacity Exists (GAO/NSIAD-97-13, Dec. 31, 1996).
Army Depot Maintenance: Privatization Without Further Downsizing Increases Costly Excess Capacity (GAO/NSIAD-96-201, Sept. 18, 1996).
Navy Depot Maintenance: Cost and Savings Issues Related to Privatizing-in-Place at the Louisville, Kentucky Depot (GAO/NSIAD-96-202, Sept. 18, 1996).
Defense Depot Maintenance: Commission on Roles and Mission's Privatization Assumptions Are Questionable (GAO/NSIAD-96-161, July 15, 1996).
Defense Depot Maintenance: DOD's Policy Report Leaves Future Role of Depot System Uncertain (GAO/NSIAD-96-165, May 21, 1996).
Defense Depot Maintenance: More Comprehensive and Consistent Workload Data Needed for Decisionmakers (GAO/NSIAD-96-166, May 21, 1996).
Defense Depot Maintenance: Privatization and the Debate Over the Public-Private Mix (GAO/T-NSIAD-96-148, Apr. 17, 1996, and GAO/T-NSIAD-96-146, Apr. 16, 1996).
Depot Maintenance: Opportunities to Privatize Repair of Military Engines (GAO/NSIAD-96-33, Mar. 5, 1996).
Closing Maintenance Depots: Savings, Workload, and Redistribution Issues (GAO/NSIAD-96-29, Mar. 4, 1996).
Military Base Closures: Analysis of DOD's Process and Recommendations for 1995 (GAO/NSIAD-95-132, Apr. 17, 1995).
Military Bases: Analysis of DOD's 1995 Process and Recommendations for Closure and Realignment (GAO/NSIAD-95-133, Apr. 14, 1995).
Aerospace Guidance and Metrology Center: Cost Growth and Other Factors Affect Closure and Privatization (GAO/NSIAD-95-60, Dec. 9, 1994).
Depot Maintenance: Issues in Allocating Workload Between the Public and Private Sectors (GAO/T-NSIAD-94-161, Apr. 12, 1994).
Depot Maintenance (GAO/NSIAD-93-292R, Sept. 30, 1993).
Depot Maintenance: Issues in Management and Restructuring to Support a Downsized Military (GAO/NSIAD-93-13, May 6, 1993).
Defense Force Management: Challenges Facing DOD as It Continues to Downsize Its Civilian Work Force (GAO/NSIAD-93-123, Feb. 12, 1993).
The Department of Defense's (DOD) policy and practices for developing core depot maintenance capabilities are creating gaps between actual capabilities and those needed for future national defense emergencies and contingencies. If the existing policy is not clarified and current practices continue, the military depots will not have the equipment, facilities, and trained personnel to provide logistics support on many of the weapon systems and related equipment for military use in the next 5 to 15 years. Although DOD intends for its depots to have these capabilities, actual practices are much different. Core policy does not adequately take into consideration future systems repair needs and the impact of retiring systems on developing capabilities. Furthermore, the practices of individual services hinder the establishment of future core capabilities and management oversight. Investments in new facilities, equipment, and workforce training and revitalization have been limited for some time. Finally, there is no strategic plan and associated service implementation plans to create and sustain a viable depot maintenance capability.
Background
HIV/AIDS, TB, and malaria, three of the world's deadliest infectious diseases, cause tremendous human suffering, economic loss, and political instability. According to UNAIDS, in 2002 AIDS caused 3 million deaths, and 5 million people became infected. More than 70 percent, or 28.5 million, of the 40 million people with HIV/AIDS worldwide live in sub-Saharan Africa. However, according to a report by the National Intelligence Council, HIV infections in just five populous countries (China, India, Nigeria, Russia, and Ethiopia) will surpass total infections in central and southern Africa by the end of the decade. In addition, Thailand, a developing country that had successfully countered the growth of AIDS in the 1990s, is now facing a resurgent epidemic. According to WHO, after HIV/AIDS, TB is the world's leading infectious cause of adult mortality, resulting in as many as 2 million deaths per year. Like HIV/AIDS, tuberculosis primarily affects the most economically active segment of the population, with 75 percent of the annual deaths occurring in those between the ages of 15 and 54. In contrast, malaria, which causes more than 1 million deaths and at least 300 million cases of acute illness each year, is a leading cause of death in young children. The disease exerts its heaviest toll in Africa, where about 90 percent of malaria deaths occur. The Fund, formally launched in January 2002, is a grant-making organization with the purpose of attracting, managing, and disbursing funds that will increase existing resources and make a sustainable and significant contribution to the reduction of infections, illness, and death. The Fund aims for an integrated and balanced approach, covering prevention, treatment, care, and support, and seeks to establish efficient and effective disbursement mechanisms. During its first full year of operation, the Fund successfully completed two proposal rounds and began distributing grant money. Over the course of these two proposal rounds, the Fund approved grants for 153 proposals in 81 countries across the major regions of the world (see fig. 2). These grants total nearly $3.7 billion ($1.5 billion over the first 2 years) and cover all three diseases.
The Fund Has Established Key Governance Structures, but Implementation Challenges Impede Ability to Rapidly Disburse Funds
In its first year, the Fund developed and established key governance and other supporting structures, including a board of directors, a permanent secretariat, a grant review process, and country-level structures required to develop, implement, and oversee grants. However, limited communication, administrative complications, and the evolving nature of these new structures, especially at the country level, led to a lack of clarity over roles and responsibilities and slowed the Fund's ability to sign the initial grant agreements. The Fund has recognized these problems and is taking steps at both the country and headquarters levels to address them.
Key Governance and Other Supporting Structures Established
The Fund has made noteworthy progress in establishing key headquarters and country-level governance structures. Figure 3 illustrates the governance structure of the Fund. At the headquarters level, governance structures include a board of directors, a permanent secretariat, a Technical Review Panel (TRP), and the World Bank as its trustee. The board is the governing body of the Fund, consisting of 18 voting members and 5 nonvoting members.
The voting members consist of seven government representatives from developing countries, seven government representatives from donor countries, and one representative each from a developing country nongovernmental organization (NGO), a developed country NGO, the private sector, and private foundations. The five nonvoting members consist of a representative from WHO, the World Bank (as trustee, see below), UNAIDS, a person representing communities living with HIV/AIDS, TB, or malaria, and one Swiss citizen appointed by the board. The board makes all funding decisions; sets Fund policies, strategies, and operational guidelines; and selects the executive director of the secretariat. The board chair and vice chair rotate between beneficiary and donor country representatives. In January 2003, the U.S. Secretary of Health and Human Services was elected to serve as chairman, replacing the outgoing chairman from Uganda. Figure 4 illustrates the current structure of the Fund's board. The board plans to meet three times per year and strives to make decisions by consensus. When consensus cannot be reached, any voting member can call for a vote. Successful motions require approval from a two-thirds majority of those present in both the donor and recipient voting groups, a structure that may make it difficult to reach a decision. For example, the only time the board brought an issue to a vote, it did not reach a decision because the members could not get a sufficient number of affirmative votes. The board has established four committees: (1) Governance and Partnership, (2) Resource Mobilization and Communications, (3) Portfolio Management and Procurement, and (4) Monitoring and Evaluation, Finance, and Audit. The committees respond to issues raised by the board and identify options for addressing them. For example, the Portfolio Management and Procurement Committee has developed a proposal appeals process. The United States has representatives on three of the four committees (Governance and Partnership; Portfolio Management and Procurement; and Monitoring and Evaluation, Finance, and Audit). As of April 1, 2003, the secretariat had hired 63 staff to run the day-to-day operations of the Fund. As the Fund's only full-time body, the secretariat receives and screens grant applications, studies and recommends strategies to the board, communicates board decisions to stakeholders, manages and oversees regional grant portfolios, receives and reviews program and financial reports submitted by grant recipients through the LFA, and performs all administrative functions for the Fund. The board reviews and approves the secretariat's business plan and budget. In January 2003, the board approved a $38.7 million budget for 2003 for the secretariat (see table 1). The Technical Review Panel reviews and evaluates eligible proposals submitted to the Fund. It currently consists of 22 independent experts: 7 members with cross-cutting expertise in development, including health systems development, economics, public policy, and finance; 7 members with expertise in HIV/AIDS; 4 members with expertise in malaria; and 4 members with expertise in TB. There are two U.S. members on the TRP, an expert on TB and an expert with cross-cutting expertise in health and development issues. The TRP is supported by a WHO/UNAIDS working group that reviews the accuracy of baseline data on disease prevalence, poverty, and other indicators provided in the proposals.
The working group also reviews the accuracy and relevance of the information provided by applicants on their ability to effectively use additional funds. The TRP makes recommendations to the board for final decisions on proposal selection. According to officials at the Department of Health and Human Services, health and development experts at the Centers for Disease Control and Prevention and USAID conducted an informal review of approved proposals and largely concurred with the TRP’s recommendations. As the Fund’s trustee, the World Bank receives money from donors, holds the money in an interest-bearing account, and disburses it according to the Fund’s written instructions. At the country level, governance and oversight structures include a Country Coordinating Mechanism, a principal recipient, subrecipients, and a Local Fund Agent. The country coordinating mechanism (CCM) is meant to provide a forum for stakeholders to work together to identify needs and develop and submit proposals to the Fund and follow the progress of grant projects during implementation. According to the Fund, CCM membership should include high-level government representatives as well as representatives of NGOs, civil society, multilateral and bilateral agencies, and the private sector. Further, all eligible partners in the CCM should be entitled to receive Fund money based on their stated role in implementing the proposal. The principal recipient, which is a member of the CCM, is responsible for receiving and implementing the grant. A principal recipient can be a government agency, an NGO, a private organization, or, if alternatives are not available, a multilateral development organization. Of the 69 grant agreements resulting from the first round of proposals approved by the Fund, 41 (59 percent) are with principal recipients that are government agencies, 17 (25 percent) are with NGOs, and 9 (13 percent) are with the U.N. Development Program. (See app. II for more detailed information.) The principal recipient is responsible for making sure that funds are properly accounted for as well as for monitoring and evaluating the grant’s effectiveness in accordance with indicators mutually agreed to by the Fund and the grantee. In some cases, there may be multiple principal recipients for a single grant. The principal recipient typically works with other entities, or subrecipients, to carry out grant activities. Subrecipients are entities, such as NGOs, with the expertise necessary to perform the work and can be other CCM members. The principal recipient is responsible for supervising any subrecipients and distributing Fund money to them. The local fund agent (LFA) is the Fund’s representative in each recipient country and is responsible for financial and program oversight of grant recipients. This oversight role includes an assessment of recipients prior to their receiving money from the Fund. The assessment covers recipients’ ability to maintain adequate financial controls, procure goods and services, and carry out program activities. The Fund selects one LFA in each country. As of April 1, 2003, the Fund has contracted with four organizations to fill this role: two private sector firms, KPMG and PricewaterhouseCoopers; one private foundation that was formerly a public corporation, Crown Agents; and one multilateral entity, the U.N. Office for Project Services (UNOPS). The Fund may contract with additional organizations as the need arises and expects to receive bids from potential LFAs by August 2003. 
Challenges at Country Level Slow Disbursement of Grants; Fund Taking Steps to Respond
Limited Communication, Lack of Clarity over Roles and Responsibilities at Country Level
As of late 2002, in three of the four countries we visited, country coordinating mechanisms were not operating at levels envisioned by the Fund, owing in part to insufficient communication between the Fund and the CCM as well as between the CCM's chair and members. This has resulted in confusion over the intended structure and purpose of the CCM. While our sample of only four countries is not necessarily representative of all grant recipients, several NGOs reported similar observations to the board. The Fund has posted general guidelines for CCMs on its Web site as well as in its calls for proposals. These guidelines encourage CCMs to hold regular meetings; engage all relevant participants, including representatives of civil society, in substantive discussions; ensure that information is disseminated to all interested parties; and be involved in the implementation of projects after proposals are developed and submitted to the Fund. However, many CCMs had difficulties following these guidelines. The role of the CCM in developing proposals and participating in their implementation after approval is not clear, according to a report by an international HIV/AIDS organization that assessed the participation of NGOs in the CCM process and according to CCM members in several countries. For example, many NGOs are not aware that they can participate in both the development and implementation of proposals. Furthermore, they are demanding clearer information on the selection of CCM members and the entities to which CCMs are accountable. An NGO participant told us that after a meeting in March 2002, the CCM did not convene again for about 6 months because it had received no guidance from the Fund on how to proceed. A number of members of another CCM said that they did not get a chance to vet or, in some cases, read proposals before endorsing them. In addition, after the proposals were submitted, members of this CCM were not informed of important events in a timely manner. A donor participating in this CCM stated that, with regard to a grant proposal for more than $200 million that was submitted in the second round and has since been approved, no one knows who will be responsible for implementing it when the money arrives. A number of the CCM members with whom we met were concerned over the level of involvement of all relevant parties. According to information compiled by the Fund's Governance and Partnership Committee for the board's January 2003 meeting, all CCMs that submitted second-round proposals are chaired by a government official (79 percent from the health ministry). In addition, at least a quarter of the CCMs lack representation from one or more of the following groups: people living with one of the three diseases, the private sector, academic institutions, or religious organizations. In one country, for example, donors said that NGOs need to develop a stronger and more active voice on the CCM. An update on the Fund for nongovernmental organizations and civil society, prepared by the International Council of AIDS Service Organizations, expressed similar views regarding CCMs in countries that we did not visit. However, the update also included evidence that CCMs are enhancing the involvement of NGOs in national health policies in some countries.
In addition to members of civil society, key government ministries and donors are often not included as members in current CCMs. The Governance and Partnership Committee recognized this point in the document prepared for the January 2003 board meeting, stating, "Of concern is the relatively low participation from Ministries of Finance (37 percent), given the need to ensure consistency with Global Fund grant processes and overall fiscal and monetary policies of recipient countries." The committee also noted that although the World Bank is a significant source of resources for many recipients, it is a member of only 14 percent of CCMs. In one country we visited, for example, where neither the Ministry of Finance nor the World Bank was a member of the CCM, a dispute over where the Fund money should be deposited delayed the signing of the country's first grant agreement. Dissemination of information is also a problem, according to the international HIV/AIDS organization report and CCM members with whom we met. The report stated that many NGOs are not receiving essential information from the Fund because the CCM chairs receiving this information are not passing it on to all stakeholders. In one country, several CCM members told us that the CCM is not functioning well because the flow of information is tightly controlled by the chair. Many members of this CCM, for example, were unaware that a nongovernmental organization had also submitted a proposal to the Fund. As of April 1, 2003, more than 1 year after the proposal was submitted, the CCM had yet to review and endorse or reject it, as required by the Fund. As a result, the Fund has dropped this proposal from its list of those approved in the first round. Of the four countries we visited, even the country with the most functional CCM experienced some difficulties. This country had received substantial support from a Fund staff member, who spent 6 weeks in the country helping the CCM clarify the Fund's principles regarding CCMs and how its proposal would be implemented. This support, together with the active leadership of the CCM chair, was widely credited with the relative success of the CCM. Members of this CCM said it had become a transparent, multisectoral, participatory, and consensus-driven forum that held frequent meetings. However, CCM members were still unclear as to their role after the grant is disbursed.
The Fund Is Taking Steps to Address Problems Associated with CCMs
According to the Fund, it does not have sufficient resources to provide the same level of support for every country as it did in the country cited above. Nevertheless, it is currently attempting to enhance communication with and within country coordinating mechanisms in order to improve their functioning. While trying to remain flexible and attentive to differing situations in each country and avoid an overly prescriptive, "cookie-cutter" approach, the Fund's Governance and Partnership Committee proposed to the board in January 2003 specific guidelines for CCMs that address many of the issues raised above. The committee also proposed that the secretariat work with it to develop a handbook for CCMs that contains these principles. Although the board did not reach a decision on this proposal in January 2003, as of April 1, 2003, the agreements between the Fund and grant recipients contained language describing the nature and duties of CCMs.
This language states that CCMs are to have a role in monitoring the implementation of Fund grants; that they should promote "participation of multiple constituencies, including Host Country governmental entities, donors, nongovernmental organizations, faith-based organizations and the private sector"; and that they should meet regularly to develop plans and share information. According to U.S. government officials who were involved in setting up the Fund and who attended the January 2003 board meeting, the Fund may also consider other options to enhance the functioning of CCMs, such as having those CCMs that have been working relatively well share best practices with others or having a member of the secretariat hold regional workshops for CCMs from several countries. From December 2002 through the spring of 2003, the Fund held a series of regional workshops for CCM members and other stakeholders in the Philippines, Myanmar, Senegal, and Cuba. Additional workshops are scheduled to take place in South Africa, Ukraine, and Latin America. According to the Fund, these workshops are providing a forum for "open dialogue," whereby the Fund can disseminate and clarify information and receive feedback. In addition, the Fund is considering expanding the secretariat to allow its staff to devote more time to advising individual CCMs and to working with local partners, such as bilateral and multilateral donors, that are assisting with grant implementation.
Administrative Arrangement with WHO Causing Delays; Fund Considering Alternate Arrangements
The Fund established an administrative services agreement with the WHO, an agency of the United Nations, to benefit from some of the tax and employment advantages of an international organization, but this relationship is causing delays and other problems, and the Fund is considering alternate arrangements. The agreement with WHO requires that the Fund apply certain WHO regulations and systems governing personnel and contractual issues. According to WHO and Fund staff, while this agreement gives the staff of the secretariat important privileges in Switzerland and allowed the Fund to begin operating quickly, it has contributed to administrative delays, frustration, and uncertainties concerning responsibility and accountability. Regarding delays, once the Fund makes certain administrative decisions, it must wait until it obtains clearance from officials at WHO before it can act. According to secretariat officials and one of the local fund agents we met with, this dual approval process has delayed the approval of LFA contracts by up to 8 weeks. The officials stated that this is significant because it has lengthened the time required to get grant agreements completed and signed by recipient countries. The WHO official responsible for approving the Fund's administrative decisions said that it takes several weeks to vet key actions, such as the LFA contracts, when they are added to his unit's existing workload. In addition to creating delays, the relationship between the Fund and WHO has led to frustration and uncertainties for Fund staff concerning the scope of their responsibility and the authorities to whom they are accountable. For example, although the board granted the executive director of the Fund the authority to sign contracts with vendors and grantees, WHO must be a party to all contracts since the executive director is technically a WHO employee.
According to officials from both the Fund and WHO, removing the dual approval process would lessen delays and uncertainties over roles and responsibilities. The board asked the secretariat to look into pursuing enhanced legal benefits for the Fund from Swiss authorities. An important objective for this change is to allow the Fund to withdraw from the administrative services agreement with the WHO while retaining tax and other advantages. However, according to the Fund, there are important considerations to be resolved before the board would approve and the Swiss government would authorize a change in recognition. The board expects to address this issue at its next meeting in June 2003.
The Fund Developed Comprehensive Oversight Systems and Issued Procurement Guidance, but Systems Face Challenges, and Guidance Is Still Evolving
The Fund has developed systems for financial accountability and for monitoring and evaluating grant activities and has issued guidance on procurement. However, in the Fund's first year of operation, these systems faced challenges at the country level that the Fund is working to address, and procurement guidance is still evolving.
Oversight Systems Established but Face Challenges
The Fund, through the local fund agent, has established a comprehensive system for overseeing grant recipients, but the introduction of the LFA has been marked by controversy and misconceptions regarding its role. These problems may impede the implementation of grants. The Fund recognizes these issues and is developing additional guidance for LFAs and principal recipients.
The Fund Has Established a Comprehensive System for Ensuring Recipients' Financial Accountability
The Fund has established a system for ensuring that principal recipients rigorously account for the money they spend. This system requires them to demonstrate adequate finance and management systems for disbursing money, maintaining internal controls, recording information, managing and organizing personnel, and undergoing periodic audits. The secretariat, the LFA, and the principal recipient each has a role in this system. The secretariat selects the LFAs, exercises quality control over their work, and draws up grant agreements. Prior to selecting LFAs, the secretariat considers their independence from principal recipients and other CCM members in an effort to avoid potential conflicts of interest. It also considers their expertise in overseeing financial management, disease mitigation programs, and procurement, as well as their experience with similar assignments. The LFAs, in turn, assess principal recipients for the same capabilities. To ensure that the disbursement of funds will be carefully controlled, the secretariat provides principal recipients with limited amounts of money at a time, based on their documentation of project results. In an effort to ensure clear definition of roles, responsibilities, and accountability, it developed guidelines for LFAs that define their duties to assess and oversee principal recipients. For example, the LFA's financial assessment of the principal recipient is to be completed before the grant agreement is signed, and the secretariat is to receive and validate a preliminary assessment before the LFA proceeds with the full assessment. To minimize inefficiency, the preliminary assessment is to draw on existing records of the principal recipient's performance with other donors. The Fund has established requirements for principal recipients in the grant agreement.
Specifically, the agreement requires principal recipients to maintain records of all costs they incur, and these records must be in accordance with generally accepted accounting standards in their country or as agreed to by the Fund. Principal recipients are to have an independent auditor separate from the LFA and acceptable to the Fund that conducts annual financial audits of project expenditures. The principal recipient is also to ensure that the expenditures of subrecipients are audited. The LFA or another entity approved by the Fund is authorized to make site visits "at all reasonable times" to inspect the principal recipient's records, grant activities, and utilization of goods and services financed by the grant. The principal recipient is required to submit quarterly and annual reports to the Fund through the LFA on its financial activity and progress in achieving project results. For example, the annual financial reports are to include the cost per unit of public health products procured and the portion of funds supporting various activities such as prevention, treatment, care, administering the project, and enhancing local skills and infrastructure through training and other activities. The reports are also to specify the portion of funds used by local NGOs, international NGOs, government agencies and other public sector organizations (e.g., U.N. agencies), the private sector, and educational institutions. Failure to abide by these and other requirements in the grant agreement can result in the Fund terminating the grant or requiring the principal recipient to refund selected disbursements.
The Fund Has Established a Detailed System for Monitoring and Evaluating Grant Performance
The Fund has established a detailed system for monitoring, evaluating, and reporting at regular intervals on the performance of grants that identifies specific roles for the LFA, principal recipient, subrecipients, and CCM. Prior to the signing of each grant agreement between the Fund and the principal recipient, the LFA conducts an assessment of the principal recipient that includes an evaluation of its capacity to monitor and evaluate grant projects. Within 90 days after the agreement enters into force, the principal recipient is required to submit a detailed plan for monitoring and evaluation. The principal recipient and the subrecipients are responsible for selecting the appropriate indicators, establishing baselines, gathering data, measuring progress, and preparing quarterly and annual reports. The LFA is charged with making sure that the principal recipient monitors and evaluates its projects and with reviewing the reports. If the LFA identifies concerns, it is to discuss them with the principal recipient and the CCM and may forward information to the secretariat in Geneva. According to the Fund, the CCM should work closely with the principal recipient in establishing the monitoring and evaluation processes and should review the reports along with the LFA. Building on the existing body of knowledge and contributions of evaluation specialists from organizations such as the U.S. Agency for International Development (USAID), UNAIDS, WHO, and the Centers for Disease Control and Prevention, the Fund has identified indicators for recipients to use in tracking the progress of grant-supported projects. The indicators that the principal recipient will use to track the progress of individual grants are expected to measure processes, outcomes, and impact.
During the first 2 years of 5-year projects, the quarterly and annual reports submitted by the principal recipient to the LFA track steps taken in the project implementation process. For example, a process indicator for HIV/AIDS prevention activities could measure the dissemination of information, such as the number of prevention brochures developed and distributed to teenagers or other at-risk groups. Starting in the third year, the principal recipient is expected to report on program outcomes. Following the HIV/AIDS prevention example, this would entail measuring whether the information had any effect on the behavior of the targeted population. In this example, the principal recipient would report on the percentage of the young people or others receiving the brochures who correctly identified ways of preventing HIV transmission and stated that they had changed their behavior accordingly. Near the end of the project, the principal recipient would report on its epidemiological impact by measuring whether there has been a reduction in the incidence of disease in the target group. Funds will be released to the principal recipient at intervals based on its performance according to these indicators. The exact amounts to be released will be calculated using its anticipated expenditures. In cases where repeated reports demonstrate that progress is not being made, the Fund, after consultation with the LFA and CCM, may choose to make adjustments, including replacing the principal recipient or nonperforming subrecipients. The key evaluation for the majority of the grants comes after 2 years, when the Fund expects to begin seeing evidence that grant-supported activities are leading to desired outcomes. At that point, the Fund will decide whether to continue to disburse money to grant recipients. The board has agreed in principle that there should also be an independent evaluation of the Fund's overall progress in meeting its key objective of reducing the impact of HIV/AIDS, TB, and malaria by mobilizing and leveraging additional resources. According to the Fund, this evaluation will include an assessment of the performance of the board and the secretariat. The focus of the evaluation will be on the board's and secretariat's performance in governing and implementing processes that enable Fund grants to relieve the burden of disease, improve public health, and contribute to the achievement of the U.N.'s millennium goals. As of April 1, 2003, the board had not made a final decision on what entity will conduct the independent evaluation or how or when the evaluation will be conducted. In addition, the board had not yet determined what portion of its resources should be budgeted for this evaluation.
LFAs Face Several Challenges
In certain countries, the introduction of the local fund agent has been marked by controversy and misconceptions, partly due to its newness; these problems may delay the designation of LFAs and make it difficult for them to oversee the implementation of grants. For example, the chair of the CCM in one of the countries we visited, where the principal recipient is the Ministry of Health, believed that another government ministry could serve as the LFA, despite the Fund's explicit instructions that the LFA must be independent from the grant recipient. In another country, key government and some donor officials were upset over the Fund's decision to bypass existing systems for handling donor funds.
This situation contributed to resentment of the LFA as the Fund's local representative and oversight mechanism. A number of stakeholders with whom we met assumed incorrectly that the LFA was charging an exorbitant fee and deducting it from the grant. In fact, LFA fees are funded through the secretariat, not deducted from each grant. Payment for LFA services constitutes the single largest item in the secretariat's budget, accounting for $16.4 million, or 42 percent, of its proposed 2003 budget. Overall, however, these fees represent only about 2 percent of estimated grant disbursements for the year, according to secretariat officials. Moreover, representatives from KPMG, one of the entities designated by the Fund as an LFA, told us that they are charging the Fund 50 percent less than they are charging other clients for similar services. The Fund is aware of these problems and is attempting to address them. According to a January 2003 report of the board's Monitoring, Evaluation, Finance and Audit Committee, the oversight role of the LFA can create resentment in a country if it is carried out without local participation in problem analysis and resolution. The report cites the same example we observed, stating that recent experience in that country showed that existing local systems should be used as much as possible to avoid new and unnecessary requirements that distract from, rather than support, the Fund's goal of helping countries improve their capacity to fight disease. On January 12, 2003, the Fund drew up guidelines on financial management arrangements for principal recipients that offer several options, including the use of credible, existing local systems. Finally, despite the Fund's having designated independence as a key factor in the selection of LFAs, the limited number of trained personnel and organizations in many recipient countries may impair independence, resulting in potential conflicts of interest. Given the small pool of qualified disease experts available for hire in some poor countries, subrecipients recruited to implement grant activities will be competing for staff with the subcontractors the LFA uses to monitor these disease-mitigation projects. It is unclear whether there is sufficient expertise available to provide staff for both of these functions. For example, in one of the countries we visited, the NGO the LFA had hired to assess the principal recipient's capacity to carry out its grant activities will also be implementing a Fund project for this principal recipient. Since effective evaluation assumes that the monitor is independent of the implementer, achieving such independence may be a challenge in these circumstances. Conceivably, there may also be situations in which one U.N. organization, the U.N. Office for Project Services (one of the entities contracted by the Fund to serve as an LFA), oversees another, the U.N. Development Program, serving as the principal recipient. Fund officials have stated that they would try to avoid this situation. The board's Monitoring, Evaluation, Finance and Audit Committee is developing a conflict-of-interest policy for LFAs. In the meantime, the Fund has required one LFA with a potential conflict of interest to include in its contract conflict-of-interest mitigation policies and procedures to minimize this possibility. The Fund has included conflict-of-interest and anticorruption provisions for principal recipients in the grant agreement document.
Board Developed Procurement Requirements, but Certain Issues Have Not Been Finalized
The Fund, through the grant agreements, has developed detailed procurement requirements for medical supplies and a brief list of requirements for procuring nonmedical items, but certain issues have not been finalized. Establishing procurement requirements is important to ensure that grant recipients use Fund money efficiently as they purchase medicines, vehicles, office equipment, and other items; contract services; and hire personnel.
Board Analyzed Issues and Developed Options for Procuring Drugs and Health-Related Items
The Fund's procurement provisions have focused primarily on drugs and health products because a significant amount of Fund money will be spent on these items and because drug procurement is complex. For example, the Fund anticipates that $194 million of grant money will be spent on drugs in the first 2 years of second-round grants, based on the proposals approved in that round. When other health products are included, the total comes to $267 million, or almost half of anticipated expenditures, for the first 2 years of round-1 grants, and $415 million, representing a similar percentage of anticipated expenditures, for the first 2 years of round-2 grants (see fig. 5). Drugs and health products for round-2 grants are expected to grow to $1.17 billion over the full life of these grants. Drug procurement is complex, as it requires strict standards for ensuring and monitoring quality, controlling transport and storage, and tracking how the products are used. For example, many grant recipients have plans to purchase antiretrovirals, which block the replication of HIV and are indispensable for treating patients living with the disease. These drugs have strict dosing regimens, and patients must be closely monitored to ensure that they are adhering to these regimens and do not develop adverse reactions or resistant strains of the virus. The Fund estimates that close to 200,000 people will be treated with antiretrovirals during the first 2 years of grants resulting from the first two proposal rounds and that close to 500,000 will be treated over the life of these grants. (See app. III for more detailed information.) In April 2002, the board established a procurement and supply management task force, made up of technical experts from U.N. agencies, the private sector, and civil society, to analyze issues related to procuring drugs and health products and develop options and recommendations for grant recipients on how to procure them. In October 2002, the task force provided the board with a list of issues that included drug selection and the use of preventive, diagnostic, and related health products; monitoring drug quality and compliance with country drug registration processes for marketing and distribution; procurement principles and responsibilities, including supplier performance, obtaining the lowest price for quality goods, compliance with national laws and international obligations, and domestic production; managing and assessing the chain of supply, including forecasting demand, ensuring proper shipping and storage, and preventing drug diversion; payment issues, including direct payment and exemption from duties, tariffs, and taxes; and ensuring that patients adhere to treatment while monitoring drug resistance and adverse drug reactions. In the grant agreements, the Fund provides specific requirements for principal recipients regarding many of these issues.
The requirements are meant to ensure the continuous availability of safe and effective drugs and other health products at the lowest possible prices and to provide a standard for the LFA to use in evaluating the procurement activities of the principal recipient. For example, the requirements state that recipients must comply with established quality standards when purchasing medicines. The requirements also stipulate that no Fund money may be used for procuring drugs or other health products until the Fund, through the LFA, has verified that the principal recipient has the capacity to manage (or oversee subrecipients’ management of) procurement tasks, such as purchasing, storing, and distributing these products in accordance with Fund guidance, unless the Fund agrees otherwise. In one country, the Fund issued additional procurement requirements to complement the grant agreement, based on an assessment of the principal recipient’s ability to procure drugs and other goods. The Fund anticipates that all grant recipients that have plans to purchase medicines with Fund money will be assessed within 6 months after signing the grant agreement. The Fund Provided General Requirements for Procuring Goods and Services In addition to providing specific requirements for procuring drugs and other health-related products, the grant agreement includes a brief list of general requirements that also apply to services and nonmedical items such as vehicles or office equipment. These requirements establish a series of minimum standards that recipients must observe when purchasing goods or executing contracts. For example, recipients are to award contracts on a competitive basis to the extent possible and must clearly describe the goods they are requesting when they ask for bids. They must pay no more than a reasonable price for goods and services, keep records of all transactions, and contract only with responsible suppliers who can successfully deliver the goods and services and otherwise fulfill the contract. The Fund encourages recipients to use international and regional procurement mechanisms if doing so results in lower prices for quality products. For example, in one country, the U.N. Development Program will purchase vehicles for subrecipients because it has extensive experience with the import process. Similarly, the health ministry of another country—the entity that will implement the grant—may purchase antiretrovirals through the Pan American Health Organization. The Fund also encourages recipients with procurement experience to use their existing procedures, provided these procedures meet the requirements set forth in the grant agreement. For example, a principal recipient in one country will use its own procedures to purchase nonmedical items because these procedures are familiar and are based on generally accepted management practices. The Fund Has Not Finalized Some Procurement Issues The Fund has not finalized certain procurement issues, including (1) the consequences of noncompliance with national laws regarding patent rights and other intellectual property obligations, (2) the acceptance of waivers that would permit recipients to pay higher prices for domestically produced goods, and (3) solicitation and acceptance of in-kind donations. The board amended its policy on a fourth issue, payment of taxes and duties on products purchased with Fund money, and has asked the secretariat to monitor the impact of this change.
Board documents and the Fund’s guidelines for submitting proposals encourage grant recipients to comply with national laws and applicable international obligations, including those pertaining to patents and other intellectual property rights. This issue is significant because these laws and obligations have rules and procedures that affect the procurement of drugs. The board has yet to reach a decision regarding the consequences of noncompliance, that is, whether failure to comply would automatically be considered a breach of the grant agreement and cause for termination of the grant. As of April 1, 2003, the Fund had not included any language concerning compliance with national laws and international obligations in the grant agreement. In the interim, however, Fund officials stated that the Fund retains the option of using the more general termination clause in the grant agreement in the event that a recipient is found by the appropriate authorities to be in violation of national law or international obligations. Another issue on which no formal decision has been made is whether the Fund, like the World Bank, should allow aid recipients to pay higher prices for domestically produced medicines and other goods to develop local manufacturing capacity. Documents prepared for the fourth board meeting note that the benefits of paying higher prices for domestically produced items are not clear and that it could be difficult for recipients to administer such a pricing scheme. The documents also note that it may be beyond the mandate of the Fund to support domestic efforts by approving higher prices for them. This was the only issue that board members brought to a vote at the January 2003 meeting, and they were unable to obtain the votes necessary to reach a decision. According to the Fund, the fact that no decision was reached means that the status quo—that recipients are encouraged to pay the lowest possible price for products of assured quality—remains in effect. This policy is also likely to remain for the foreseeable future, since, according to Fund officials, it is no longer on the agenda of the Portfolio Management and Procurement Committee or the Procurement and Supply Management Advisory Panel, the two bodies that report to the board on issues pertaining to procurement. The board deferred to its June 2003 meeting the question of whether the Fund should solicit or accept in-kind donations such as drugs on behalf of grant recipients. The Portfolio Management and Procurement Committee cautioned that the Fund needs to consider methods for ensuring the quality of these products. While the Fund states in the grant agreements that Fund resources shall not be used to pay taxes and duties on products purchased in the recipient country, the Portfolio Management and Procurement Committee revisited this issue in its report to the January 2003 board meeting. Specifically, the committee noted that this policy may be difficult for NGO recipients to follow, as they have neither the authority to guarantee exemption nor the cash reserves to cover costs when exemptions are not possible. The committee implied that, given these weaknesses, NGOs may be reluctant to serve as principal recipients and indicated in its report that making sure NGOs are included as principal recipients is more important than trying to ensure that grant recipients do not pay taxes and duties.
The committee also raised a practical issue, noting that the Fund’s current reporting requirements do not provide it with the information necessary to determine whether grantees are in fact using Fund money to pay these levies. At the January 2003 board meeting, the Fund amended its policy on exempting grant recipients from duties, tariffs, and taxes. The amended policy allows, but does not encourage, Fund resources to be used to pay these costs. The board asked the secretariat to monitor the impact of this revision and report back when sufficient information is available. Lack of Resources Threatens Fund’s Ability to Continue to Approve and Finance Grants The Fund’s ability to approve and finance additional grants is threatened by a lack of sufficient resources. The Fund does not currently have enough pledges to allow it to approve more than a small number of additional proposals in 2003. In addition, without significant new pledges, the Fund will be unable to support all of the already approved grants beyond their initial 2-year agreements. The Fund Requires Additional Pledges to Continue Approving Grants Because the Fund approves grant proposals on the basis of amounts that have been pledged, it will require additional pledges if it is to continue approving grants. According to the Fund, it will approve proposals on the basis of actual contributions to the trustee or pledges that will be converted to contributions soon after approval, so that proposals can be financed in a timely manner. As a result, the Fund has only a limited amount of money available for its third proposal round, currently planned for late 2003. In addition, the Fund will require significant additional pledges in order to continue holding proposal rounds beyond the planned third round. The Fund has less than $300 million available to support commitments in round 3, which would be significantly less than the $608 million in 2-year grants approved in the first round and the $884 million approved in the second round. These available resources are substantially less than the $1.6 billion in eligible proposals that the Fund expects to be able to approve in round 3. The Fund’s resource needs are based on expected increases in eligible proposals over the next two rounds (rounds 3 and 4) due to a concerted effort on the part of local partners to prepare significantly expanded responses to AIDS, TB, and malaria (see fig. 6). Based on the number of technically sound proposals it expects to receive and approve in future rounds, and the amount pledged as of April 1, 2003, the Fund projects that it will require $1.6 billion in new pledges in 2003 and $3.3 billion in 2004. The Fund Requires Significantly Greater Contributions to Finance Approved Grants for Duration of Programs The Fund will require significantly greater contributions to finance approved grants beyond initial 2-year commitments of money. By January 2003, the Fund had made 2-year grant commitments equaling nearly $1.5 billion in the first two proposal rounds. Among other things, these grants seek to provide 500,000 people with AIDS medications and 500,000 AIDS orphans and other vulnerable children with care and support. Although the Fund approves grants that can be covered by pledges received, these pledges need only be sufficient to finance the initial 2-year period of the grant. 
Since the typical Fund-supported project lasts five years, this could result in the Fund’s inability to fulfill its longer-term obligation to programs that are deemed successful at the 2-year evaluation. If all currently approved proposals demonstrate acceptable performance after 2 years, the Fund will require $2.2 billion more to assist these programs for an additional 1 to 3 years. Currently, the Fund has $3.4 billion in total pledges and nearly $3.7 billion in potential obligations from the first two proposal rounds (see fig. 7). The Fund will only sign grant agreements based on money received by the trustee, as opposed to pledges received. Thus, continued support beyond the 2-year point requires that a significant amount of pledges be turned into actual contributions. However, not all pledges are contributed in a timely manner. For example, as of January 15, 2003, more than $90 million pledged through 2002 had still not been contributed, including $25 million pledged by the United States. The Fund is providing numerous grants that will be used to procure antiretroviral drugs for people living with HIV/AIDS. Interruption or early termination of funding for such projects due to insufficient resources could have serious health implications, although Board documents suggest that special consideration for people undergoing treatment may be given during the evaluation process. The Fund currently has potential obligations lasting at least until 2007, and each additional proposal round will incur further long- term obligations for the Fund. The Fund has estimated that it will need at least $6.3 billion in pledges for 2003–2004 to continue approving new proposals and finance the grants already approved in rounds 1 and 2. The Fund is looking to raise these resources from both public and private sources, with $2.5 billion needed in 2003 alone. As of April 1, 2003, only $834 million had been pledged for 2003, 6 percent of which came from the private sector. Improvements in Grant-Making Processes Enhance Fund’s Ability to Achieve Key Objectives, but Challenges Remain The Fund has established detailed objectives, criteria and procedures for its grant decision process and is making enhancements to the process in response to concerns raised by participants and stakeholders. Several improvements were made to the proposal review process between the first and second proposal rounds, and the Fund has committed to further improvement. These efforts will seek to address ongoing challenges, including ensuring that the money from the Fund supplements existing spending for HIV/AIDS, TB, and malaria and that recipients are able to use the new aid effectively. The Fund has recognized these challenges, but its efforts to address them are still evolving. Improvements in Proposal Review and Grant-Making Process Support Key Objectives The Fund has made improvements in its proposal review and grant-making process to support key objectives, but assessment criteria and procedures are still evolving. According to the Fund, criteria for successful proposals include (1) technical soundness of approach, (2) functioning relationships with local stakeholders, (3) feasible plans for implementation and management, (4) potential for sustainability, and (5) appropriate plans for monitoring and evaluation. In addition, the Fund states that successful proposals will address the abilities of recipients to absorb the grant money. 
Using these criteria, the Fund established a grant approval process, based primarily on an independent evaluation of proposals by the TRP (see fig. 8). Between the first and second proposal rounds, the Fund made several improvements to the process, based on feedback from participants and the work of one of the board’s committees. These improvements included revising the proposal forms and instructions to make them more comprehensive and better support the criteria for successful proposals as determined by the Fund. The Fund also added members with cross-cutting expertise to the Technical Review Panel to allow it to better evaluate nonmedical development–related aspects of the proposal, and lengthened the proposal application period from 1 month in round 1 to 3 months in round 2 to give applicants more time to develop their proposals. According to Fund and other officials, these improvements helped increase the overall quality of grant proposals submitted in the second proposal round. The Fund also made all successful proposals from the second round publicly available on its Web site, increasing the amount of information available to all interested parties regarding Fund-supported programs. Some board members expressed concerns between the first and second proposal rounds regarding the way the Fund was addressing its objective of giving due priority to the countries with the greatest need. In particular, the board members were concerned that countries with the greatest need, as determined by poverty and disease burden, might be least able to submit high-quality proposals, resulting in their systematic exclusion. In the first two proposal rounds, the Fund excluded only the highest-income countries from grant eligibility. However, the Fund stated that priority would be given to proposals from the neediest countries. Most of the grants approved in rounds 1 and 2 did in fact go to recipients in countries defined by the World Bank as low income, demonstrating that the poorest countries were not being excluded. No money was awarded in countries defined as high income, and only 3 percent of the money was awarded in countries defined as upper-middle income (see fig. 9). Similarly, sub-Saharan Africa, the region that suffers from the highest burden of disease for HIV/AIDS, received 61 percent of the money for HIV/AIDS programs. (See app. IV for more detailed information.) However, to further ensure that this key objective is supported, particularly in the face of increasingly scarce resources, the Fund has altered its eligibility criteria for round 3 to focus more clearly on need. All high-income countries are now excluded from eligibility for Fund money, and upper-middle- and lower-middle-income countries must meet additional criteria such as having cofinancing arrangements and a focus on poor or vulnerable populations. Low-income countries remain fully eligible to request support from the Fund. Beginning in the fourth round, WHO and UNAIDS will be asked to provide matrices categorizing countries by disease-related need and poverty. Challenges to Grant-Making Process Remain The Fund and other stakeholders note that meeting key grant-making criteria will be a challenge, and the Fund’s efforts to address these criteria are still evolving. According to Fund guidelines, proposals should demonstrate how grants complement and augment existing programs and how these additional resources can be effectively absorbed and used.
Ensuring that Grants Complement and Add to Existing Spending The Fund’s policy is that both the pledges the Fund receives and the grants it awards must complement and add to existing spending on the three diseases. However, ensuring adherence to this policy is difficult. According to the secretariat, it monitors the sources of new pledges to assess whether the pledges represent additional spending. Monitoring pledges is problematic, however, because it can be difficult to determine how much money was spent by a donor or multilateral institution specifically on AIDS, TB, or malaria-related programs. According to a UNAIDS report, pledges to the Fund from most of the G-7 countries, as well as from eight of the Development Assistance Committee governments, have thus far been determined to add to baseline HIV/AIDS funding. Nonetheless, despite its monitoring efforts, the Fund can only encourage, rather than require, donors to contribute new spending rather than simply transfer funds from related programs. It is also difficult for the Fund to ensure that the grants it awards will augment existing spending at the country level. It has identified several situations to be avoided, including allowing grants to replace budgetary resources or other “official development assistance,” and it has taken certain steps to ensure that the grants will in fact represent new and added spending in the country. For example, the Fund has required all applicants to include information in their proposals on how the funds requested would complement and supplement existing spending and programs. In addition, the Fund has reserved the right to terminate grants if it discovers that they are substituting for, rather than supplementing, other resources. However, the Fund does not have the ability to formally monitor whether grants constitute additional spending once disbursed, and we anticipate that doing so would be difficult. Even if the Fund succeeded in documenting that all grant money was spent appropriately on the approved project and that no previously allocated money for AIDS, TB, or malaria was supplanted in the process, it still could not document the level of spending on these diseases that would have occurred without the grant. Thus, it could not show whether the grant in fact substituted for money that would have been otherwise allocated. A report presented at the Fund’s October 2002 board meeting proposed the development of a policy for monitoring additionality. At present, lacking any formal system, the Fund may be unaware of, or unprepared to address, situations in which its grants do not represent additional, complementary spending. For example, an official from a development agency that currently funds much of one country’s TB program stated that he believes the country lacks the capacity to increase its program for TB, despite having received a TB grant in the first round. The development agency therefore planned to transfer its current TB funding to other health assistance projects in response to the Fund’s TB grant, raising questions of whether the grant will fulfill its purpose of providing additional funding for TB. Similar concerns have been expressed by other officials representing both Fund recipients and donors. 
Ensuring that Recipients Have the Capacity to Absorb New Funding Although the Fund has stated that proposals will be assessed based on whether they have demonstrated how grants could be effectively absorbed and used, Fund officials, donors, and others have raised concerns regarding the actual capacity of recipients to absorb new aid. While some countries may have surplus labor and institutional capacity within their health sectors, other countries may have difficulty rapidly expanding their health sectors due to a shortage of skilled health workers or insufficient infrastructure to deliver health services. While such capacity constraints can be relieved over time with additional training and investment, in the short run they could limit the effectiveness of expanded health spending. For example, officials in one country told us that it has been slow in disbursing its World Bank HIV/AIDS money because of difficulties in establishing the necessary institutions to identify and distribute funds to effective projects. In another country, government and NGO officials cited a lack of administrative capacity in NGOs as a likely challenge to their ability to absorb the Fund grant. The Fund is aware of these concerns and is addressing them in a number of ways. Proposal applications must describe the current national capacity—the state of systems and services—available to respond to HIV/AIDS, TB, and malaria. After the first round, the Fund also added more members to the TRP to evaluate these issues in proposals. In addition, the Fund requires LFAs to preassess principal recipients to ensure that they are prepared to receive, disburse, and monitor the money. On at least one occasion, the Fund decided to reduce its initial grant disbursement to a recipient, based on concerns raised by the LFA in the preassessment. The LFA preassessment does not address all potential constraints on a country’s ability to absorb new funds, notably across sectors or at the macroeconomic level. While these capacity constraints could hinder the effectiveness of the grant, they could also generate unintended side effects beyond the scope of the funded project. Introducing more money into a sector with insufficient capacity to utilize it could draw scarce resources from other vital sectors, such as agriculture or education. For example, one way to reduce temporary shortages of skilled health workers would be to raise the salaries of those positions, relative to the rest of the economy. Over time, this wage disparity will provide an incentive to increase the number of graduates trained in the health field. However, in the short term, it may encourage already skilled workers in other sectors to pursue higher wages in the health sector, adversely affecting the sectors they leave. To the extent that these other sectors are also priorities in economic development, this could adversely affect a country’s pursuit of poverty reduction. The country coordinating mechanism model of proposal development is intended to help avoid such problems by ensuring that those with the most knowledge of a country’s needs and capacities are directly responsible for developing proposals. However, as discussed earlier, many CCMs are facing challenges in operating effectively. The provision of large amounts of new foreign aid to countries from all sources, including the Global Fund and bilateral and multilateral initiatives, may also have unintended, detrimental macroeconomic implications.
Large increases in development assistance are considered critical to the successful fight against the three diseases, as well as the achievement of long-term poverty reduction goals. Moreover, increasing the number of healthy people in a country, such as through successful treatment, may increase its productive capacity. However, increasing spending beyond a country’s productive capacity could result in problems, such as increased domestic inflation, that are not conducive to growth or poverty reduction. While a substantial share of Global Fund grant money is expected to fund imports such as medicines—which likely have no adverse macroeconomic implications—a significant amount will also be spent domestically on nontraded items, such as salaries and construction expenses. Concerns over potential macroeconomic difficulties prompted one government to initially propose offsetting its Global Fund grant with reductions in other health spending; however, upon further assessment, the government reconsidered and will not reduce other health spending. An International Monetary Fund official stated that he believed that the Global Fund grants are not generally large enough, as a share of a country’s gross domestic product, to cause significant macroeconomic effects. He added, however, that country authorities should nonetheless monitor these grants in case they do become significant and possibly destabilizing. The Global Fund expects that the amount of money that it disburses will rise substantially in the future, which—along with large increases in other proposed development assistance, such as through the U.S. Millennium Challenge Account—could substantially increase total aid flows to certain countries in a relatively short period of time. Available research on the macroeconomic effects of large increases in overall grant aid is thus far inconclusive, providing little guidance on the magnitude of assistance that may trigger these negative macroeconomic impacts. Agency Comments and Our Evaluation We requested comments on a draft of this report from the Executive Director of the Fund, the Secretary of Health and Human Services, the Secretary of State, and the Administrator of USAID, or their designees. We received formal comments from the Fund as well as a combined formal response from the Department of Health and Human Services, the Department of State, and USAID (see apps. V and VI). Both the Fund and the U.S. agencies agreed with the information and analysis presented in this report. The Fund’s Executive Director concluded that this report accurately describes the challenges faced by the Fund in responding to the three diseases. The Fund outlined measures it is taking to address these challenges and identified several additional challenges. The U.S. agencies stressed that they and other donor agencies should work with the Fund to address the challenges. Both the Fund and the U.S. agencies also submitted informal, technical comments, which we have incorporated into this report as appropriate. We are sending copies of this report to the Executive Director of the Fund, the Secretary of Health and Human Services, the Secretary of State, the Administrator of USAID, and interested congressional committees. Copies of this report will also be made available to other interested parties on request. In addition, this report will be made available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3149.
Other GAO contacts and staff acknowledgments are listed in appendix V. Objectives, Scope, and Methodology At the request of the Chairman of the House Committee on Appropriations, Subcommittee on Foreign Operations, Export Financing and Related Programs, we assessed (1) the Fund’s progress in developing governance structures; (2) the systems that the Fund has developed for ensuring financial accountability, monitoring and evaluating grant projects, and procuring goods and services; (3) the Fund’s efforts to mobilize resources; and (4) the Fund’s grant decision-making process. To assess how the Fund has progressed in establishing structures needed for governance, we reviewed Fund documents and reports from nongovernmental organizations involved in the country coordinating mechanism (CCM) process. We also interviewed Fund officials in Geneva and U.S. government officials from the Departments of State and Health and Human Services and the U.S. Agency for International Development. In addition, we traveled to Haiti and Tanzania, two “fast-track” countries where grant agreements were about to be signed, and two countries less far along in the process, Ethiopia and Honduras. In these four countries, we met with a wide variety of CCM members, including high-level and other government officials, multilateral and bilateral donors, faith-based and other nongovernmental organizations, professional associations, and private sector groups. In all four countries, we met with organizations designated as the principal recipient in grant proposals. We also met with a Fund official who was working with the CCM in Haiti. To understand the Fund’s administrative services agreement with the World Health Organization (WHO) and its impact on the Fund’s ability to quickly disburse grants, we reviewed Fund documents pertaining to the agreement, met with WHO and Fund officials in Geneva and spoke with a U.S. government legal expert in Washington, D.C. We also met with a WHO official while he was traveling in San Francisco. To assess the Fund’s development of oversight systems to ensure financial and program accountability, we reviewed Fund documents prepared for the second, third, and fourth board meetings; requirements contained in the grant agreements; and Fund working papers prepared after the fourth board meeting that propose further clarifications and guidelines for principal recipients and Local Fund Agents (LFAs). We also reviewed the U.S. Agency for International Development’s (USAID) Handbook of indicators for programs on human immunodeficiency virus/acquired immunodeficiency syndrome (HIV/AIDS) and sexually transmitted infections, Joint United Nations HIV/AIDS Program publications for monitoring and evaluating national AIDS programs, and WHO coordinates for charting progress against HIV/AIDS, tuberculosis and malaria. We held discussions with the secretariat in Geneva on fiduciary and financial accountability and monitoring and evaluation of grant programs and received presentations on these topics from the secretariat. In addition, we discussed these issues with U.S. government officials from the Departments of State and Health and Human Services and USAID, and with officials from the World Bank. During our fieldwork in Haiti and Tanzania, we met with representatives of the entities serving as local fund agents in those countries (KPMG in Haiti and PricewaterhouseCoopers in Tanzania); we also met with representatives from KPMG’s Global Grants Program in San Francisco. 
To further our understanding of the Fund’s oversight systems and the challenges to implementing them in recipient countries, we met with the following groups in all four of the countries we visited: government officials, multilateral and bilateral donors, nongovernmental organizations, and others who will be involved in implementing Fund grants or who had observations on the Fund’s oversight systems. To assess the Fund's procurement guidelines, we reviewed the grant agreements and data prepared by the Fund showing anticipated spending on drugs and other items and met with Fund officials in Geneva. We also interviewed a U.S. legal expert serving on the procurement and supply management task force and reviewed documents prepared by the task force and the Portfolio Management and Procurement Committee at the request of the board. To learn about the ability of grant recipients to procure goods and services, we met with local fund agent representatives, a principal recipient, and subrecipients. We asked the principal recipient and subrecipient representatives about their procurement practices, their understanding of Fund guidance, and their plans to procure medicines, goods, and services. In Washington, D.C., we met with staff from a public health consulting firm who assessed one of the principal recipients. To further our understanding of the procurement process, we also interviewed representatives from several other consulting firms that assist developing country governments and nongovernmental organizations with procurement. To assess Fund efforts to mobilize resources, we analyzed pledges made to the Fund from public and private sources as well as the Fund’s commitments to grants. We reviewed the Fund’s expected future financial needs to make new grants and finance already approved grants. In addition, we contacted officials from the Fund to discuss their resource mobilization efforts and strategies for dealing with a resource shortfall. To assess the Fund’s grant-making process, we reviewed the objectives and procedures of its proposal review and approval process. We reviewed Fund documents, including proposal applications and guidelines from the first and second proposal rounds. Additionally, we tracked the Fund’s efforts at improving the grant-making process by reviewing documents prepared for the Fund’s first four board meetings. We also interviewed representatives from the Fund and the Technical Review Panel in Geneva and Washington, D.C., and we asked government, donor, and nongovernmental organization officials in the four recipient countries we visited for their assessment of the proposal process and its challenges. To assess the nature of the challenges identified and any efforts made by the Fund to address them, we interviewed officials at the World Bank and International Monetary Fund, and we conducted a review of relevant economic literature. We also conducted research and reviewed data available on global spending on HIV/AIDS, TB, and malaria. For general background and additional perspectives on the Fund, we spoke with representatives from the Gates Foundation, the Global AIDS Alliance, and the Earth Institute at Columbia University. We conducted our work in Washington, D.C.; San Francisco; Geneva, Switzerland; Ethiopia; Haiti; Honduras; and Tanzania, from April 2002 through April 2003, in accordance with generally accepted government auditing standards.
Drug Procurement Cycle The drug procurement cycle includes most of the decisions and actions that health officials and caregivers must take to determine the specific drug quantities obtained, prices paid, and quality of drugs received. The process generally requires that those responsible for procurement (1) decide which drugs to procure; (2) determine what amount of each medicine can be procured, given the funds available; (3) select the method they will use for procuring, such as open or restricted tenders; (4) identify suppliers capable of delivering medicines; (5) specify the conditions to be included in the contract; (6) check the status of each order; (7) receive and inspect the medicine once it arrives; (8) pay the suppliers; (9) distribute the drugs, making sure they reach all patients; (10) collect information on how patients use the medicine; and (11) review drug selections. Because these steps are interrelated, those responsible for drug procurement need reliable information to make informed decisions. Indicators of Need for Recipient Countries [Table: For each recipient country, the table lists the adult (ages 15-49) HIV/AIDS rate in percent, malaria cases per 100,000, TB cases per 100,000, and gross national income per capita in U.S. dollars (purchasing power parity method); the country-level data are not reproduced here.] Although each country is listed only once, many countries received multiple grants. All grants received have been accounted for when noting disease programs addressed and dollar amounts requested by approved programs. This table includes only grants for individual countries. Multicountry grants are not included. Comments from the Global Fund to Fight AIDS, TB and Malaria Joint Comments from the Departments of Health and Human Services and State, and the U.S. Agency for International Development GAO Contact and Staff Acknowledgments In addition to the persons named above, Sharla Draemel, Stacy Edwards, Kay Halpern, Reid Lowe, William McKelligott, Mary Moutsos, and Tom Zingale made key contributions to this report.
By the end of 2002, more than 40 million people worldwide were living with human immunodeficiency virus/acquired immunodeficiency syndrome (HIV/AIDS), with 5 million newly infected that year. HIV/AIDS, along with tuberculosis (TB) and malaria, causes nearly 6 million deaths per year and untold human suffering. Established in January 2002, the Global Fund (the Fund) aims to rapidly disburse grants to augment existing spending on the prevention and treatment of these three diseases while maintaining sufficient oversight of financial transactions and program effectiveness. As of April 1, 2003, the United States had pledged $1.65 billion to the Fund and is expected to remain its single largest donor. In this study, GAO was asked to assess (1) the Fund's progress in developing governance structures; (2) the systems that the Fund has developed for ensuring financial accountability, monitoring and evaluating grant projects, and procuring goods and services; (3) the Fund's efforts to raise money; and (4) its grant-making process. In responding to our draft report, the Fund, the Department of Health and Human Services, the Department of State, and the U.S. Agency for International Development agreed with our findings. The Fund has made noteworthy progress in establishing essential governance and other supporting structures and is responding to challenges that have impeded its ability to quickly disburse grants. A key challenge involves locally based governance structures, many of which are not currently performing in a manner envisioned by the Fund. The Fund has developed comprehensive oversight systems for monitoring and evaluating grant performance and ensuring financial accountability and has issued guidance for procurement; however, the oversight systems face challenges at the country level and some procurement issues have not been finalized. The Fund's ability to approve and finance additional grants is threatened by a lack of sufficient resources. Pledges made through the end of 2003 are insufficient to cover more than a small number of additional grants and without significant new pledges, the Fund will be unable to support all of the already approved grants beyond their initial 2-year agreements. Improvements in the Fund's grant-making processes have enhanced its ability to achieve its key objectives, but challenges remain. These challenges include ensuring that grants add to and complement existing spending on HIV/AIDS, TB, and malaria and that recipients have the capacity to effectively use grants.
GAO_RCED-99-48
Background DOE began FUSRAP in 1974 to address radiological contamination at sites operated by the Manhattan Engineering District and the Atomic Energy Commission, both predecessor agencies to DOE. During the 1940s through 1960s, work was performed at numerous locations within the United States as part of the nation’s nuclear weapons program. Storing, transporting, sampling, mining and milling, machining, and processing radioactive materials that were used to make nuclear weapons created sites that became contaminated with uranium, thorium, radium, and their decay products, as well as nonradioactive materials. In general, these sites were cleaned up or released for use under the guidelines in effect when the work was completed. However, those guidelines were not as strict as those in effect today, and radiological contamination in excess of current guidelines remained at a number of sites. To date, 46 sites have been included in FUSRAP. After reviewing several hundred sites, DOE originally identified 41 sites for inclusion in FUSRAP. According to DOE, these sites were included because they had met several criteria, including the following: (1) they had been involved in processing or handling radioactive materials owned by the government, (2) DOE determined that it had authority over the sites, and (3) there was significant or potential radioactive contamination. In addition to the sites identified by DOE, the Congress assigned five sites to DOE for remediation, and the Department placed them in FUSRAP because of their similarity with or proximity to sites in the program. By 1997, DOE had completed the cleanup of 24 sites, leaving 22 sites in Connecticut, Illinois, Maryland, Massachusetts, Missouri, New Jersey, New York, and Ohio, as shown in table 1. In October 1997, the Energy and Water Development Appropriations Act for fiscal year 1998 (P.L. 105-62) transferred responsibility for the administration and execution of FUSRAP from DOE to the Corps. At that time, about $582 million had been spent for cleaning up sites since the program’s inception. Funding for FUSRAP for fiscal year 1998 was $140 million (compared with the funding levels of about $70 million per year during the last few years that DOE managed the program). The conference report on the legislation transferring FUSRAP directed the Corps to review the cost and schedule for each cleanup site. In March 1998, the Corps issued a report to Congress on the status and future of FUSRAP. The Corps included two cost and schedule estimates—baseline and conservative. The baseline estimates assumed cleanup levels consistent with future restricted or industrial land use, while the conservative estimates assumed cleanup levels consistent with future residential land use at all sites. Both the baseline and conservative estimates assumed unconstrained funding. Whether the baseline or conservative assumptions are closer to the cleanup that is actually implemented will depend on the results of the Corps’ risk analysis and coordination with the Environmental Protection Agency and state and local representatives. Corps’ Cost and Schedule Estimates Differ From DOE’s and May Change Soon after FUSRAP was transferred, the Corps developed cost and schedule estimates for each FUSRAP site. In comparison to prior cost and schedule estimates prepared by DOE, the Corps’ cost estimates, in total, are higher. The Corps estimated that it would cost up to $2.25 billion and would take until after 2004 to complete cleanup at all sites. 
DOE had estimated that it would cost up to $1.5 billion and would take until as late as 2006 to complete cleanup. An examination of the individual cost estimates, however, shows that much of the difference between DOE’s and the Corps’ estimates can be attributed to two FUSRAP sites where new information became available after the program was transferred or the scope of cleanup alternatives was changed. At several sites, the extent of contamination is unknown, and, at one site, a treatment technology or disposal site may not be available. For those sites, the Corps’ current cost and schedule estimates are probably not accurate and can be expected to increase as more information is developed. Corps Based Cost and Schedule Estimates on DOE’s Prior Work, but Made Revisions at Some Sites The Corps’ cost and schedule estimates were generally based on DOE’s site characterizations, scope of work, and estimates and, at most of the 22 sites, do not differ significantly from DOE’s. Corps officials told us that this was because the Corps either agreed with DOE’s plan or did not have sufficient knowledge and information about a site to deviate from DOE’s plan. For example, within the Buffalo (N.Y.) District, the Corps’ report to Congress identified planned efforts at the Ashland 1 site during fiscal year 1998 that were very similar to those planned by DOE in its June 1997 accelerated plan. Ashland 1 is a 10.8-acre site in Tonawanda, New York, that was used to store wastes from uranium processing. Contamination on the site is from uranium, radium, and thorium and the decay products associated with those elements. To estimate the site’s cleanup costs and schedule, the Corps used site characterization data compiled while the program was under DOE. Just as DOE had planned, the Corps plans to remove about 29,000 cubic yards of contaminated material. When completed, the site will be available for industrial use. The cost or schedule estimates for some sites were based on the Corps’ judgment that the scope of the cleanup would have to be altered. For example, the Seaway site (located in Tonawanda, N.Y.) is a 93-acre landfill that includes 16 acres that are contaminated with uranium, thorium, and radium. DOE officials informed us that they had reached a tentative agreement with local officials to leave buried material in place. Other material in the landfill that was accessible would be assessed to determine if removal was required. DOE’s $250,000 cost estimate and 1999 closure date for the site assumed that no further remedial action was necessary. The Corps reviewed this information and determined that additional remedial action may be necessary. The Corps listed several options for remediating the site and estimated that the cost to complete the cleanup would be $10.2 million and that the cleanup would take until 2001. Similarly, at the W.R. Grace site (a 260-acre site in Baltimore, Md., that was used to extract thorium and other elements from sand), DOE was still conducting site characterization work and had not developed a cleanup plan. DOE estimated that it would cost from $12.1 million to $12.8 million to clean up the site and that it would take until 2002 or 2003 to complete the cleanup. The Corps reviewed DOE’s data and estimated that a further review of site information and remedial actions would cost from $39.6 million to $53.3 million and would take until 2002.
The Corps also assumed that cost sharing with the site owner would not occur, while DOE assumed that the site’s owner would bear a portion of the costs. In total, the Corps’ March 1998 report to Congress stated that the cleanup of the remaining 22 FUSRAP sites would cost from $1.56 billion under the baseline estimate to $2.25 billion under the conservative estimate, in addition to the costs incurred prior to fiscal year 1998. The Corps also estimated that, given unconstrained funding, 16 of the remaining 22 sites could be cleaned up and removed from FUSRAP by 2002. Four additional sites could be cleaned up by 2004 if funding were unconstrained and if the cleanup parameters—such as cleanup criteria or disposal location—were significantly changed. The report stated that cleanup of the remaining two sites—the Niagara Falls (N.Y.) Storage Site and Luckey, Ohio—could not be completed until after 2004 because the contamination at those sites was not fully characterized and technological uncertainties existed. In May 1997, DOE estimated that cleaning up the 22 FUSRAP sites would cost about $1.5 billion and could be completed by 2006. In June 1997, DOE estimated that cleaning up the 22 FUSRAP sites would cost about $983 million and could be completed by 2002. The May 1997 cost and schedule estimates were part of a plan to complete cleanup at all FUSRAP sites within 10 years. The June 1997 estimate was part of an accelerated plan to complete the cleanup within 6 years. To complete the cleanup within 6 years, many sites would be cleaned up to a less stringent level, leaving higher levels of contamination at those sites than would have remained under the May 1997 plan. Because of this, the June cost estimate was much lower than the May cost estimate. The difference between the Corps’ estimates and DOE’s estimates results primarily from two sites—the Niagara Falls, New York, and Luckey, Ohio, sites. Table 2 shows DOE’s and the Corps’ cost estimates for these sites. (See app. I for a site-by-site comparison of DOE’s and the Corps’ estimates.) The Corps’ total cost estimates for these sites differ from DOE’s because of changes in the scope of cleanup or because additional contamination information has become available. For example, the Niagara Falls Storage Site may eventually be cleaned to a more stringent level than was planned by DOE. The Niagara Falls site is a federally owned site consisting of 191 acres about 19 miles north of Buffalo, New York. Beginning in 1944, the former Manhattan Engineering District used the site to store waste material from processing uranium. On-site contamination includes uranium decay products, radium, and thorium. The site also contains highly radioactive processing residues in a containment structure with an interim cap. In its June 1997 plan, DOE planned to clean up two buildings at the site and monitor and maintain the interim cap that currently contains the contamination. This alternative would have resulted in the site’s removal from the program in 2002 at a cost of $6 million. DOE also planned to conduct long-term surveillance and maintenance at the site. Although DOE issued a draft plan that favored this approach, it was not universally accepted. The National Research Council conducted a study that questioned DOE’s approach of leaving the contamination in place. DOE’s response included plans to review possible technologies for dealing with the highly radioactive processing residues prior to developing plans for their removal.
In view of that study, the Corps may do more than DOE was planning to do at the site. The Corps intends to decontaminate the two on-site buildings and conduct a study to determine what to do with the rest of the contamination. The study will consider (1) removing the highly radioactive processing residues only, (2) removing all wastes, and (3) leaving all wastes in place under a permanent cap. Of these alternatives, the Corps’ baseline cost and schedule estimate ($285 million, with completion in 2006) provides for removing the highly radioactive processing residues only, while the conservative estimate ($434.5 million, with completion in 2008) provides for removing all contaminated soil. (The Corps’ baseline and conservative estimates included the first two alternatives only. A cost estimate for the third alternative was not developed.) The Corps’ cost and schedule estimates in its March 1998 report to Congress for the Luckey, Ohio, site were based on a project scope different from that used by DOE because additional information became available after FUSRAP was transferred to the Corps. The Luckey site consists of 40 acres about 22 miles southeast of Toledo, Ohio. The former Atomic Energy Commission used the site to produce beryllium from 1949 through 1959. Radioactive contamination in the form of uranium, radium, and thorium and chemical contamination in the form of beryllium still exist on the site. In its June 1997 plan, DOE estimated that the site’s cleanup would cost $32 million and would be completed in 1999. However, site characterization had not been completed when FUSRAP was transferred, and the Corps has since found that beryllium contamination is much more extensive than previously known and that larger amounts of soil will have to be excavated. The Corps’ report to Congress described a baseline scope—assuming that a portion of the contaminated soils would be required to be disposed of off-site—for which remediation was estimated to cost about $157.3 million and be completed in 2004. Under the conservative estimate, the Corps planned to remove larger amounts of contaminated soil, all of which would be disposed of off-site. The conservative cost estimate was $179.9 million, and completion was scheduled for 2005. Corps’ Cost and Schedule Estimates Could Change for Some Sites When DOE was responsible for FUSRAP, contaminated materials that were removed from sites were primarily shipped to one waste site—Envirocare in Utah. Since the program was transferred, the Corps has sent contaminated material to two additional waste sites—International Uranium Corporation’s uranium-processing facility in Utah and Envirosafe in Idaho. According to Corps officials, the competition created by using multiple sites has reduced disposal costs. For example, Corps officials informed us that they negotiated a contract with Envirosafe for the disposal of lead-contaminated waste at a cost of about 58 percent of the average disposal cost in fiscal year 1997. For the Ashland 2 site, the Corps negotiated a disposal contract with International Uranium Corporation for $90 per cubic yard of contaminated material. According to Corps officials, the disposal cost under the Corps’ existing contract with Envirocare ranged from $150 per cubic yard to over $1,000 per cubic yard, depending on the type of waste. Corps officials estimate that the lower disposal cost resulted in savings of about $16 million.
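A rough sketch of the arithmetic behind the Ashland 2 disposal figures follows. It is illustrative only: the report gives the negotiated rate, the range of rates under the prior contract, and the roughly $16 million savings estimate, but not the waste volume, so the volumes shown below are simply the values implied under stated, hypothetical assumptions about the prior rate, not quantities reported by the Corps.

```python
# Illustrative disposal-cost arithmetic (not the Corps' actual computation).
# Only the rates and the savings estimate come from the report; the volumes
# are back-calculated under stated assumptions, not reported quantities.

negotiated_rate = 90        # dollars per cubic yard, International Uranium Corporation contract
prior_low_rate = 150        # low end of the prior Envirocare range, dollars per cubic yard
estimated_savings = 16e6    # Corps' estimated savings for Ashland 2, in dollars

# The negotiated rate is 60 percent of the prior low-end rate, a 40 percent reduction.
reduction = 1 - negotiated_rate / prior_low_rate
print(f"Rate reduction versus the prior low-end rate: {reduction:.0%}")

# If the displaced material would otherwise have shipped at the low-end rate,
# the savings estimate implies a volume of roughly 267,000 cubic yards:
implied_volume = estimated_savings / (prior_low_rate - negotiated_rate)
print(f"Implied volume at the low-end rate: {implied_volume:,.0f} cubic yards")

# At a higher rate within the reported range, a much smaller volume would
# produce the same savings; for example, at a hypothetical $400 per cubic yard:
print(f"Implied volume at $400 per cubic yard: {estimated_savings / (400 - negotiated_rate):,.0f} cubic yards")
```

The point is simply that, across the reported range of prior rates, a $90-per-cubic-yard contract is consistent with savings on the order the Corps describes.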
The use of the International Uranium Corporation disposal site provides an additional benefit in that the radioactive material is extracted and recycled for use in the power industry. In addition, the Corps has negotiated a new contract with Envirocare to dispose of contaminated material at about one-half of the cost of a year ago and, in December 1998, issued a request for proposals for additional FUSRAP disposal contracts. Since the publication of its report, the Corps has gathered additional data related to radioactive and chemical contaminants that could affect its cost and schedule estimates. For example, the data for the Luckey, Ohio, site mentioned earlier show that beryllium has migrated and was found in a drinking water well at an adjacent residence. The extent of the contamination is currently being studied, but Corps officials believe it has expanded beyond what was anticipated. The Corps’ Buffalo District officials told us that if additional remediation is required for the drinking water, it could potentially double cleanup costs (the March 1998 baseline estimate was $157.3 million) and delay completion of cleanup activities until 2004 or 2005. In a similar situation, the Colonie, New York, site consists of an 11.2-acre main site and 56 vicinity properties that have been contaminated. From 1958 through 1984, several different processes that involved radioactive materials were conducted on the site. The site’s primary known radioactive contaminants include uranium and thorium. In addition, at the time of the report to Congress, the site had known lead, copper, and tetrachloroethylene contamination. While the contaminants were known at the time of the report and DOE and state officials had an agreement that would allow some contaminated material to remain on-site under a cap, the extent of groundwater contamination and the cleanup needed had not been determined. According to Corps officials, the lead and possible groundwater contamination could significantly increase costs and delay completion dates. At the CE site in Windsor, Connecticut, possible changes in cleanup levels could alter the cost and schedule information contained in the Corps’ report to Congress. The CE site consists of 1,100 acres and is located about 8 miles north of Hartford, Connecticut. From 1958 through 1961, nuclear fuel assemblies using highly enriched uranium were fabricated on-site. The CE site owner also conducted commercial nuclear manufacturing on-site and disposed of waste from those activities in many of the same areas as the FUSRAP wastes. Known site contamination involves the highly enriched uranium. In the Corps’ report to Congress, the baseline cost estimate was $40.7 million and the completion date was 2005; the conservative cost estimate was $99.3 million, and the completion date was also 2005. The facility operator and the government have not agreed on the level of enriched uranium that will be cleaned up under FUSRAP. However, the current facility operator wants FUSRAP to be responsible for remediating additional uranium contamination, which DOE had not agreed to do and which would result in increased quantities and costs. In the fall of 1998, the current facility operator submitted a proposal to the Corps to expand the scope of FUSRAP cleanup at the CE site. The Corps is reviewing the proposal. Unresolved questions at the Niagara Falls Storage Site, mentioned earlier, also have the potential to change the cleanup costs and completion schedule contained in the report to Congress.
Although the Corps has made cost and schedule estimates to clean up the Niagara Falls site (the baseline estimate, with completion in 2006, is $285 million, and the conservative estimate, with completion in 2008, is $434.5 million), there is no proven technology for treating the contamination with the highest activity. The highly radioactive processing residues at the site are of the same material that DOE has at its Fernald, Ohio, facility. In 1994, DOE began building a pilot-scale vitrification plant at Fernald to demonstrate a treatment process for these residues. The purpose of the plant was to gather information for the design of a future full-scale facility. However, the project experienced significant delays, equipment problems, and cost overruns. As a result, DOE closed the plant and is currently reevaluating its remediation options. If the Corps’ study of alternatives for cleaning up the Niagara Falls site results in the selection of an option that requires treatment of the highly radioactive processing residues before shipping them to a disposal site, the technology developed to treat these residues will significantly affect the cost and schedule for cleaning up the site. Corps’ Efforts Since the Transition The Corps has been responsible for FUSRAP for only a little more than 1 year. Therefore, it is difficult to extrapolate the chances for FUSRAP’s future successes or failures from the Corps’ short history with the program. However, since FUSRAP was transferred to the Corps, it has achieved, and in some cases exceeded, its planned milestones for evaluating and cleaning up most individual sites. In fiscal year 1998, the Corps had 71 full-time equivalents involved in program management and support. The Corps’ staffing for FUSRAP has fluctuated and is expected to continue to fluctuate because of the type of work being conducted. It is difficult to compare the Corps’ staffing levels with DOE’s because the two agencies used a different basis for calculating the number of staff in the program. Considerable progress has also been achieved in completing environmental documents necessary to begin removal and remedial work. Corps’ Efforts to Meet Site Milestones DOE had planned to conduct decontamination work at 14 sites during fiscal year 1998. The Corps planned decontamination work at 11 sites during fiscal year 1998. (See app. II for the Corps’ and DOE’s fiscal year 1998 milestones for each FUSRAP site.) At 12 sites, planned environmental documentation and cleanup work were conducted as scheduled. For example, the Corps planned to complete Engineering Evaluation/Cost Assessments for the St. Louis Airport Site, and the Wayne, New Jersey, site. These documents were completed. In addition, the St. Louis District planned to, and issued, a Record of Decision for the St. Louis Downtown Site. At four sites, the Corps not only met its milestones, but also conducted additional work. At the Maywood, New Jersey, site, the New York District had planned to remediate 13 vicinity properties during fiscal year 1998. Instead, the District was able to remediate 15 vicinity properties. In addition, the Corps remediated four other properties where contamination was found during the planned excavation of the vicinity properties. At Middlesex, New Jersey, half of a contaminated waste pile was scheduled for removal; however, because the New York District was able to obtain a favorable disposal rate by using an alternate disposal site, it was able to remove the entire pile. 
At the Painesville, Ohio, site, the Buffalo District originally planned to remove 250 cubic yards of contaminated soil; however, as the soil was removed, additional contamination was found, and 300 cubic yards was subsequently removed. The original milestones for the Niagara Falls Storage Site included only providing for site security and maintenance. The Corps provided security and maintenance and also decontaminated a building on the site. At five sites, the milestones established for fiscal year 1998 were not met for various reasons. For example, the Corps originally planned to remove the Shpack Landfill site near Attleboro, Massachusetts, from FUSRAP by summer 1998. However, the Corps questioned whether the site’s contamination was attributable to the government. The Corps has delayed the closing and did not meet its milestone because it decided to do a more intensive review of the project records than it originally anticipated. One site (Madison, Ill.) did not have any fiscal year 1998 milestones. Corps’ Staffing Changes to Meet the Program’s Needs The Corps set a number of expectations for the program, including one that the Corps would implement the program without an increase in its overall staffing levels. During fiscal year 1998, the Corps had 71 full-time equivalents. Most of these—65 full-time equivalents—were located at the six Corps district offices that manage FUSRAP sites. In addition, six full-time equivalents were located at the Hazardous, Toxic, and Radioactive Waste Center of Expertise in Omaha, Nebraska. The Corps does not employ contractor staff to manage this program. During the first year that the Corps managed FUSRAP, staffing levels fluctuated. Transition teams were formed and disbanded, and district FUSRAP teams and site teams were created. In addition, district officials have indicated that they expect staffing levels to continue to change in the near term as specific sites move through the different phases of cleanup. For example, Corps officials told us that the preparation of environmental documentation requires significantly more staff involvement than does the actual physical removal of contaminated material. (See app. III to this report for a listing of the number of staff involved in FUSRAP.) At the time the program was transferred, DOE reports that it had 14 federal and 50 contractor full-time equivalents involved in a joint federal/contractor management team. It is difficult to compare the Corps’ and DOE’s staffing levels. Consistent with other DOE programs, DOE used a federally led management team in FUSRAP, while the Corps used all federal staff. In addition, as stated previously, the Corps’ staffing level includes program management and some program support staff, while DOE’s reported staffing level includes only program management. Corps’ Preparation of Environmental Documentation The Corps believes that its authority to execute FUSRAP is the Comprehensive Environmental Response, Compensation, and Liability Act of 1980, as amended. One of the challenges the Corps identified during the program’s transition from DOE was completing environmental documents necessary to begin removal or remediation of contamination pursuant to the act. Removal actions are short-term actions taken to clean up, remove, and monitor contamination. Remedial actions are the study, design, and construction of longer-term responses aimed at permanently cleaning up a site. 
When DOE managed FUSRAP, it used action memorandums as its primary decision document to carry out removal actions. An action memorandum identifies the selected removal action and authorizes the cleanup. It is supported by an Engineering Evaluation/Cost Assessment, which characterizes the waste, examines different options, tentatively selects a remedy, and obtains public comment. DOE’s use of Engineering Evaluation/Cost Assessments and action memorandums was consistent with a GAO report recommending that DOE consider the increased use of removal actions, where appropriate, as a potential means of schedule and cost savings. The Corps has prepared five Engineering Evaluation/Cost Assessments for removal actions involving six sites and two Records of Decision for cleanup involving four sites and plans to prepare Records of Decision to remediate and close out nearly all sites. Records of Decision document the selected remedy and authorize the cleanup. They are supported by a work plan, a remedial investigation, a feasibility study, and a proposed plan that tentatively selects a remedy and obtains public comment. Records of Decision are generally prepared to support and document longer, more complex remedial action cleanups. Corps officials told us that they make extensive use of Records of Decision because the Corps believes that Records of Decision are required under the Comprehensive Environmental Response, Compensation, and Liability Act of 1980, as amended, to achieve finality and completion of response actions at a site. Furthermore, the Corps believes that the Record of Decision process ensures full public comment on the selected remedial alternative. The use of either decision document complies with relevant requirements for documenting cleanup actions. Implementing regulations and applicable guidance documents for the Comprehensive Environmental Response, Compensation, and Liability Act of 1980, as amended, provide that both removal and remedial actions require a decision document to be included as part of the administrative record of each response action. The regulations and guidance indicate that this requirement may be satisfied differently for each type of action. While a Record of Decision is the document to be used for a remedial action, an action memorandum generally is used for a removal action. Transition Activities and Issues During the transition from DOE to the Corps, the Corps established transition teams and met with DOE officials to transfer contracts and obtain information related to the FUSRAP sites. The transition of FUSRAP sites and information to the Corps was achieved quickly and smoothly. However, several issues related to the program needed to be resolved. DOE and the Corps are negotiating a Memorandum of Understanding to clarify roles and responsibilities. DOE and Corps officials told us that the memorandum may be finalized in early 1999. Transfer of FUSRAP Sites When FUSRAP was initially transferred from DOE, the Corps set out to review and analyze the program, facilitate a smooth transition from DOE to the Corps, develop and submit a report to the Congress, and execute the program and projects within budget and on schedule. To accomplish the objectives, the Corps developed a management plan and created two teams—the Program Assessment Team and the Oak Ridge Transition Team. 
The Program Assessment Team consisted of six members with backgrounds in hazardous, toxic, and radioactive waste management; technical requirements; construction contracting; laws and regulations; health physics and safety; and real estate. The team was chartered to develop the Corps’ overall assessment of the status of FUSRAP projects, DOE’s strategy for completion, and the technical appropriateness and funding level of existing DOE-directed contractor activities. During November 1997, the team visited the six Corps districts that manage FUSRAP sites and also visited most of the FUSRAP sites. The team was also to work with the Corps’ districts to determine if the cleanup of all sites could be completed by 2002, to determine a transition strategy for each project, and to consolidate, assemble, and coordinate site-specific components of the Corps’ report to Congress. The Oak Ridge Transition Team had four members with expertise in hazardous, toxic, and radioactive waste; program and project management; contracting; and contract management. The team was chartered to assess DOE’s FUSRAP management practices, contract requirements, financial systems, scheduling, regulatory interfaces, community relations, and future program requirements. In addition, the team was responsible for assisting in preparation of the report to Congress. The Corps’ and DOE’s staff held numerous meetings during the first few months of fiscal year 1998. For example, the day after the President signed the bill transferring the program, Corps officials from headquarters and the districts met with DOE headquarters officials. The Corps’ teams spent from October 20 through 24, 1997, with DOE and Bechtel National, Inc., (DOE’s prime management support contractor) staff in Oak Ridge, Tennessee, where they were briefed on individual FUSRAP sites. The Corps’ headquarters officials again met with DOE officials on November 7. The Corps’ March 1998 report to Congress stated that during the transition period, DOE personnel at Oak Ridge and the FUSRAP sites provided outstanding cooperation. The report also stated that DOE’s program and project managers and its contractors involved in FUSRAP acted professionally and responsibly. DOE and Corps officials agreed that both agencies were cooperative and that the transition was a smooth, coordinated effort. Transition Issues Early in the transition, it was not clear whether the Corps had the same authority as DOE for regulating certain safety activities of contractors carrying out FUSRAP cleanups. With respect to nuclear safety and occupational safety and health activities, through the terms of its contracts, DOE regulated its FUSRAP cleanup contractors as authorized by the Atomic Energy Act. As a result, DOE’s contractors followed safety requirements imposed by DOE under its authority rather than those imposed by the Nuclear Regulatory Commission or by the Occupational Safety and Health Administration. The Corps questioned whether this authority had been transferred. As a result, the Corps’ contractors were required to comply with the substantive provisions of all applicable safety and regulatory requirements of the Nuclear Regulatory Commission and Occupational Safety and Health Administration. 
Corps officials informed us that they have taken the position that the Comprehensive Environmental Response, Compensation, and Liability Act of 1980, as amended, does not require the Corps to obtain Nuclear Regulatory Commission licenses for FUSRAP work performed entirely on-site but does require compliance with provisions of otherwise applicable license requirements for on-site work. Corps officials also believe that any portions of FUSRAP work that are entirely off-site are subject to applicable license or permit requirements. The Corps therefore requires its contractors to comply with all federal, state, and local regulations regarding the handling of FUSRAP materials and to meet all license or permit requirements for off-site work. On January 12, 1999, the Corps wrote a letter to the Nuclear Regulatory Commission that stated the Corps’ position and asked for the Commission’s guidance. Under the Comprehensive Environmental Response, Compensation, and Liability Act of 1980, an agency that cleans up a contaminated site may be able to recover some of the funds spent for response actions from potentially responsible parties. The Corps believed it needed specific legislative authority to deposit funds recovered this way in its FUSRAP accounts and then to use the funds for additional FUSRAP response actions. This issue was resolved when specific authority to deposit these funds was provided in the Energy and Water Development Appropriations Act for Fiscal Year 1999 (P.L. 105-245). Sites may be added to FUSRAP when new information about radioactive contamination related to sites used by DOE’s predecessor agencies becomes available. For example, as recently as 1994, the CE site in Windsor, Connecticut, was added to the program. The Corps does not regard the designation of new FUSRAP sites as being within the scope of responsibilities that were transferred. The Corps believes that DOE is the repository for information on the Manhattan Engineering District and early activities by the Atomic Energy Commission and that such information is essential for determining the eligibility for cleanups under FUSRAP. DOE’s initial position was that the Energy and Water Development Appropriations Act of 1998 transferred complete responsibility for carrying out FUSRAP to the Corps, including the designation of new sites, although DOE also stated that it would provide the Corps with reasonable assistance in evaluating the eligibility of potential new sites. DOE and Corps officials informed us that they have tentatively resolved this issue—DOE will research the history of proposed new FUSRAP sites to determine their eligibility, and the Corps will assess the sites’ level of contamination—in a Memorandum of Understanding that is currently being negotiated. The question of which agency should be accountable for sites is another transition issue that requires resolution. DOE and Corps officials informed us that they have tentatively agreed—in the Memorandum of Understanding that is currently being negotiated—that DOE will be responsible for any surveillance and maintenance of sites that have been released from the program. The question of which agency should be accountable for sites still in FUSRAP remains under discussion. Specifically, the matter of which agency is responsible for property management has not been decided. The Corps has proposed that DOE should retain responsibility for these matters. 
DOE’s position is that while the Corps’ cleanup activities are in progress, these responsibilities are best handled by the Corps. DOE and Corps officials informed us that they are attempting to resolve this issue in the Memorandum of Understanding, which may be finalized in early 1999. Conclusions The Corps has been responsible for FUSRAP for only a little more than a year; because of this short period, it is difficult to predict the future of the program. However, during the first year that the Corps managed FUSRAP, it accomplished much. The Corps reviewed all 22 sites, developed cost and schedule estimates for each, and established site-specific milestones. For most sites, these milestones were achieved or exceeded. The Corps also realized reductions in the costs of disposing of contaminated materials and in staffing levels. The transition of the sites from DOE to the Corps was achieved quickly and smoothly. Despite the successes of the Corps’ first year, unknowns still exist for several aspects of FUSRAP. We found several sites where the extent of contamination had not yet been completely characterized or the technology required to clean up the contamination is not yet available. As a result, there is potential for the Corps’ $2.25 billion cleanup cost estimate to increase in the future. In addition, several overall transition issues related to the Corps’ responsibilities and authorities remain to be formally resolved, particularly, its responsibility for determining the eligibility of new FUSRAP sites, accountability for the sites removed from the program, and accountability for the sites currently in the program. The first two issues have been tentatively resolved; discussions continue on the third. Agency Comments and Our Evaluation We provided the Corps and DOE with a draft of this report for their review and comment. The Corps concurred with the report’s assessment of the Formerly Utilized Sites Remedial Action Program. The Corps also commented about its 71 full-time equivalent management and support staff that we reported were employed in the program. The Corps’ letter stated that management of the program was accomplished with 26 full-time equivalents. During our review, we requested information on program management staffing levels, and the Corps informed us that it had 71 full-time equivalents involved in program management and support. We included that information in the report and the Corps’ comments provide no basis for making changes to the report. As stated in the report, we are aware that a comparison between DOE’s and the Corps’ staffing levels is difficult and that staffing levels for the program tend to fluctuate. Nevertheless, the staffing level data that the Corps previously provided us with and the President’s fiscal year 2000 budget—which show staffing levels of 97 full-time equivalents for the program for fiscal year 1998 and 140 full-time equivalents for fiscal years 1999 and 2000—further support our view that the assessment of the Corps’ staffing levels presented in this report should not be adjusted downward. DOE’s letter provides a perspective on the last several years of the Formerly Utilized Site Remedial Action Program—when it was managed by DOE—and the condition of the program when it was transferred to the Corps. This report focused on transition issues and activities that occurred after the program was transferred, and, as a result, we did not make any changes to the report. 
The full texts of the Corps’ and the DOE’s comments are included in appendixes IV and V, respectively. Scope and Methodology To obtain information on issues related to FUSRAP’s transition from DOE to the Corps, we held discussions with and obtained documents related to the transition period from the Corps’ headquarters, division, and district officials; former DOE program officials in headquarters and Oak Ridge, Tennessee; and DOE contractor officials who were responsible for FUSRAP. To determine the basis for the Corps’ cost and schedule estimates contained in its report to Congress and to obtain information on the Corps’ program milestones, staffing levels, and environmental document preparation, we visited and held discussions with officials from the six Corps districts that are responsible for FUSRAP sites. We obtained documents related to cleanup costs and schedules, site contamination, program milestones and accomplishments, staffing levels, and environmental requirements. We visited 21 of the 22 FUSRAP sites (the site we did not visit is an active site, and the operator requested that we not visit because doing so could disrupt current activities). We also visited the Corps’ Omaha, Nebraska, District Office and the Hazardous, Toxic, and Radioactive Waste Center of Expertise in Omaha to obtain documents and information on contractual and technical assistance that they provided for FUSRAP districts. We conducted our review from July 1998 through January 1999 in accordance with generally accepted government auditing standards. As arranged with your office, unless you publicly announce its contents earlier, we plan no further distribution of this report until 10 days after the date of this letter. At that time, we will send copies of the report to the Secretaries of Defense and Energy, the Director, Office of Management and Budget, and other interested congressional parties. We will also make copies available to others on request. DOE’s and Army Corps of Engineers’ Cost Estimates for FUSRAP Sites Corps’ estimates of cost to complete (continued) Status of Fiscal Year 1998 Milestones at FUSRAP Sites W.R. Grace, Baltimore, Md. No FY 1998 milestones. Award contract to prepare Engineering Evaluation/Cost Assessment. Occurred. Ashland 1, Tonawanda, N.Y. Begin removal of contamination. Complete Record of Decision. Occurred. Ashland 2, Tonawanda, N.Y. Begin removal of contamination. Complete Record of Decision and initiate remediation. Occurred. Bliss & Laughlin Steel, Buffalo, N.Y. No FY 1998 milestones. Release Engineering Evaluation/Cost Assessment to the public. Delayed because of lack of access to site. Linde, Tonawanda, N.Y. Demolish building No. 30. Occurred. Decontaminate building No. 14. Complete decontamination of building No. 14 and demolish and remove building No. 30. Niagara Falls Storage Site, Lewiston, N.Y. Surveillance and maintenance. Provide for site security and maintenance. Exceeded. Decontaminated Building No. 403. Seaway, Tonawanda, N.Y. Issue hazard assessment. Issue Record of Decision. Has not occurred because additional characterization found higher volume of contaminated material. Continue site characterization and begin remedial action. Complete site characterization. Occurred. Planned characterization was completed; however, beryllium was found to have migrated, and additional characterization work will be performed. Complete remedial action. Issue Engineering Evaluation/Cost Assessment and Action Memorandum and excavate/dispose of 250 cubic yards of material. 
Exceeded. Additional contamination found. Removed 300 cubic yards. CE Site, Windsor, Conn. Start site characterization. Initiate site characterization. Occurred. (continued) Ventron, Beverly, Mass. Issue final certification document. Has not occurred because of Corps’ desire not to put out Record of Decision for public review prior to completion of negotiations related to the owner’s plans to sell the site. Shpack Landfill, Norton/Attleboro, Mass. Remove from program. Remove from program. Has not occurred because of the need to review more records than originally anticipated. Maywood, Maywood, N.J. Complete residential vicinity properties. Begin remediation of municipal vicinity properties. Remediate 13 vicinity properties. Exceeded. Completed 15 vicinity properties and began remediation of 6 vicinity properties scheduled for FY 1999. Completed four additional properties not originally in the program. Middlesex Sampling Plant, Middlesex, N.J. Complete Engineering Evaluation/Cost Assessment. Remove 50 percent of waste pile. Issue Engineering Evaluation/Cost Assessment. Remove half of contaminated waste pile. Exceeded. Issued Engineering Evaluation/Cost Assessment and removed entire waste pile. Wayne Interim Storage Facility, Wayne, N.J. Complete removal of waste pile. Begin removal of subsurface contamination. Issue Engineering Evaluation/Cost Assessment and remove 10,000 cubic yards. Occurred. Colonie, Colonie, N.Y. Complete vicinity property cleanup. Begin subsurface soil remediation. Award contract for total site remediation. Conduct various decontamination and removal activities. Occurred. Dupont Chambers Works, Deepwater, N.J. Remove drums containing waste. Issue Engineering Evaluation/Cost Assessment and remove drums containing waste. Occurred. Drums removed under a Post Hazard Assessment document. (Engineering Evaluation/Cost Assessment was not used.) Madison, Madison, Ill. No FY 1998 milestones. No FY 1998 milestones. Not applicable. (continued) St. Louis Airport Site, St. Louis, Mo. Begin excavation of surface and subsurface soil. Remove contaminated sediment in ditches. Complete rail spur for loading out material and issue Engineering Evaluation/Cost Assessment. Remove contaminated material. Occurred. St. Louis Airport Site Vicinity Properties, St. Louis, Mo. Continue remediation of haul routes. Issue Engineering Evaluation/Cost Assessment for ball fields as part of Airport Site Engineering Evaluation/Cost Assessment. Remove contaminated material. Occurred. St. Louis Downtown Site, St. Louis, Mo. Continue building decontamination. Begin subsurface soil remediation. Issue Record of Decision. Remove contaminated material. Occurred. Hazelwood Interim Storage Site and Latty Ave. Properties, Hazelwood, Mo. Begin removal of waste storage pile. Issue Engineering Evaluation/Cost Assessment and start rail spur. Engineering Evaluation/Cost Assessment was issued, and rail spur was not started because the property owner would not sign the agreement to allow the Corps on the property. Army Corps of Engineers’ FUSRAP Staffing Levels at the End of Fiscal Year 1998 Comments From the Army Corps of Engineers Comments From the Department of Energy 
Pursuant to a congressional request, GAO reviewed the Army Corps of Engineers' Formerly Utilized Sites Remedial Action Program (FUSRAP), focusing on the: (1) Corps' cost and schedule estimates for cleaning up the FUSRAP sites; (2) Corps' progress in meeting milestones for site cleanups, FUSRAP staffing levels, and environmental document preparation; and (3) transition of the program from the Department of Energy (DOE) to the Corps. GAO noted that: (1) when the Corps took over the program, it reviewed DOE's cost and schedule estimates for the 22 sites, visited the sites, and developed new cost and schedule estimates for each; (2) the Corps' cost estimates, in total, are higher than estimates previously developed by DOE; (3) the Corps estimated that it would cost up to $2.25 billion and would take until after 2004 to complete cleanup at all sites; (4) however, there is potential for the $2.25 billion estimate to increase in the future because no proven technology is available to clean up one site and characterization is incomplete for others; (5) DOE had estimated that it would cost up to $1.5 billion and would take until as late as 2006 to complete the cleanup; (6) an examination of the individual cost estimates, however, shows that much of the difference between DOE's and the Corps' estimates can be attributed to two sites where new information became available after the program was transferred or the scope of the cleanup alternatives was changed; (7) since the program was transferred to the Corps in October 1997, the Corps has achieved or exceeded its milestones for planned cleanup activities at 16 of the 22 sites; (8) the Corps did not achieve one or more of its milestones at five sites, and one site did not have any milestones for fiscal year (FY) 1998; (9) to accomplish its goals for the program, in FY 1998, the Corps had 71 full-time equivalents involved in program management and support; (10) in regard to completing the environmental documentation necessary to begin removal and remedial work, the Corps has made considerable progress, including issuing two Records of Decision and five Engineering Evaluation/Cost Assessments that provide detailed plans for site cleanups; (11) during the program's transition from DOE to the Corps, the Corps established transition teams and worked with departmental officials to transfer the 22 sites; (12) when the program was transferred, several issues remained unresolved; (13) only one issue remains to be formally resolved, specifically, which agency should be accountable for property management for the sites while they are in the program; and (14) attempts to resolve this issue through negotiation of a memorandum of understanding between the Corps and DOE are ongoing.
GAO_GAO-10-676
Background MDA’s mission is to develop an integrated and layered BMDS to defend the United States, its deployed forces, allies, and friends. In order to meet this mission, MDA is developing a highly complex system of systems—land-, sea-, and space-based sensors, interceptors, and battle management. Since its initiation in 2002, MDA has been given a significant amount of flexibility in executing the development and fielding of the BMDS. To enable MDA to field and enhance a missile defense system quickly, the Secretary of Defense in 2002 delayed the entry of the BMDS program into DOD’s traditional acquisition process until a mature capability was ready to be handed over to a military service for production and operation. Because MDA does not follow the traditional acquisition process, it has not yet triggered certain statutory and regulatory requirements to which other major defense acquisition programs must adhere. For example, other major defense acquisition programs are required to establish the total scope of work and total cost baselines as part of their entry into the formal acquisition cycle. Title 10 United States Code (U.S.C.) section 2435 requires a baseline description for major defense acquisition programs; however, the requirement to establish a baseline is not triggered until a system enters system development and demonstration. DOD has implemented this requirement with the acquisition program baseline in its acquisition policy. Because the BMDS has not yet formally entered the acquisition cycle, it has not yet been required to meet the minimum requirements of section 2435. Therefore, because of the Secretary of Defense’s decision to delay entry of the BMDS into the acquisition cycle, MDA is not required to establish the full scope of work or total cost baselines. Since we began annual reporting on missile defense in 2004, we have been unable to assess overall progress on cost. As a result, one of the few tools available to us for assessing BMDS costs is the cost data reported on individual contracts. MDA employs prime contractors to accomplish different tasks that are needed to develop and field the BMDS. Prime contractors receive the bulk of funds MDA requests each year and work to provide the hardware and software for elements of the BMDS. Table 1 provides a brief description of eight BMDS elements and the prime contracts associated with these elements currently under development by MDA. Each BMDS program office’s prime contractor provides monthly earned value reports, which provide insight into the value gained or lost for each dollar invested. These Contract Performance Reports compare monthly progress to the existing cost or schedule performance baseline to reveal whether the work scheduled is being completed on time and whether it is being completed at the cost budgeted. For example, if the contractor was able to complete more work than scheduled and for less cost than budgeted, the contractor reports a positive schedule and cost variance, or “underrun”. Alternatively, if the contractor was not able to complete the work in the scheduled time period and spent more than budgeted, the contractor reports both a negative schedule and cost variance, or “overrun”. The results can also be mixed by, for example, completing the work ahead of schedule (a positive schedule variance) but spending more than budgeted to do so (a negative cost variance). 
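The short sketch below illustrates the basic earned value arithmetic described above. It is a simplified, notional example, not MDA’s or any contractor’s reporting system; the variable names are notional, and the index-based projection at the end is a commonly used formula that is consistent with, but not necessarily identical to, the method underlying our projections.

```python
# Notional illustration of the earned value comparisons described above.
# bcws: budgeted cost of work scheduled (planned value of the work scheduled to date)
# bcwp: budgeted cost of work performed (planned value of the work actually completed)
# acwp: actual cost of work performed (what the completed work actually cost)
def evm_summary(bcws, bcwp, acwp, budget_at_completion):
    cost_variance = bcwp - acwp      # negative value = cost overrun; positive = underrun
    schedule_variance = bcwp - bcws  # negative value = behind schedule; positive = ahead
    cpi = bcwp / acwp                # cost performance index: value earned per dollar spent
    # A common index-based projection that assumes past cost performance continues
    # (an illustrative assumption, not necessarily the exact method used in our analysis):
    estimate_at_completion = budget_at_completion / cpi
    return cost_variance, schedule_variance, estimate_at_completion

# Example: $95 million of work earned against $100 million scheduled, at an actual
# cost of $105 million, on a contract with a $500 million budgeted cost at completion.
cv, sv, eac = evm_summary(bcws=100.0, bcwp=95.0, acwp=105.0, budget_at_completion=500.0)
print(f"Cost variance: {cv:+.1f}M, schedule variance: {sv:+.1f}M, projected cost at completion: {eac:.1f}M")
```

In this notional example, the contractor has overrun its cost to date by $10 million and is $5 million behind schedule, and the projection suggests the contract could finish at roughly $552.6 million rather than the budgeted $500 million.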
We also used contract performance report data as the basis for projections of the likely overrun or underrun of each prime contractor’s budgeted cost at completion. Our projections of overruns or underruns to the budgeted cost at completion are based on the assumption that the contractor will continue to perform in the future as it has in the past. In addition, since the budgeted cost at completion provides the basis for our projected overruns, we also provide it for each contract we assessed in appendix II. Furthermore, as part of the yearly system compliance verification process, the Defense Contract Management Agency (DCMA) conducts periodic surveillance of contractor EVM systems to determine initial and continuing compliance of those management systems with government-accepted standards. Surveillance (routine evaluation and assessment) of the EVM systems is mandatory for all contracts that require EVM systems compliance. Surveillance ensures that the contractor is meeting contractual terms and conditions and is in compliance with applicable policies and regulations. DCMA has primary responsibility for surveillance of the prime contractor and sub-tier suppliers with EVM requirements. According to a DCMA Earned Value Management Center official responsible for leading system surveillance, at the completion of the assessment, the DCMA Earned Value Management Center submits to the contracting officer a status of the contractor’s EVM system compliance, including all supporting data to that effect. If deficiencies are found during the course of the surveillance process, it is the surveillance team’s responsibility, working through DCMA’s Earned Value Management Center, to issue a written corrective action request. The purpose of a corrective action request is to formally notify the contractor that a documented course of action in the form of a corrective action plan is needed to bring the EVM system into compliance with government-accepted EVM system guidelines. Corrective action requests range in severity from Level I to Level IV where, according to a DCMA Earned Value Management Center official responsible for leading system surveillance, Level I is for noncompliance with the Defense Federal Acquisition Regulation Supplement clauses in the contract that can be corrected immediately and for which no special management attention is required, and Level IV identifies cases in which cost, schedule, technical performance, resource, or management process issues have unfavorably affected the supplier's EVM system so that it is incapable of reporting meaningful EVM data across multiple programs or multiple sites and these issues have not been corrected. Level III and IV corrective action requests may trigger formal reviews, such as post-award reviews for cause, compliance reviews, or other system validation reviews, and may result in suspension or revocation of EVM systems certification. EVM Data for the GMD and Targets and Countermeasures Programs Are Not Sufficiently Reliable For the Ground-based Midcourse Defense (GMD) and Targets and Countermeasures programs, we determined that the EVM data were not sufficiently reliable to analyze these contracts’ cost and schedule performance because of instability in these programs. Without reliable EVM data, we are unable to identify significant performance drivers or forecast future cost and schedule performance. 
Further, when the baseline against which work is performed and measured is no longer representative of the program of record, program managers and other decision makers lose the ability to develop constructive corrective action plans to get the program on track. These reliability issues affect MDA’s oversight of contractor progress and both MDA’s and GAO’s ability to report this progress to external parties and Congress. MDA officials were aware that significant changes were not reflected in the baselines for these two elements and have been conducting more extensive oversight to compensate, but did not alert us to this issue during the course of our audit. The Director, MDA has acknowledged the importance of EVM, and to address some of these issues he has instituted quarterly reviews of each program’s baseline. Further, he intends to report EVM information to Congress annually. According to DCMA officials, there were several issues associated with the Boeing EVM system for GMD. One of the main issues was the contractor’s inability to maintain a consistent performance measurement baseline. With numerous changes to the program and modifications to the contract, the contractor experienced difficulty incorporating these changes into the baseline in order to measure performance against this new work. For example, although the GMD program experienced a $1.3 billion restructure in 2007, another major restructure beginning in fiscal year 2008 for over $500 million that was completed in fiscal year 2009, and a third in fiscal year 2010 for over $380 million, the program has not conducted an integrated baseline review (IBR) since December 2006. DOD’s acquisition policy states that an IBR is to be conducted within 6 months after contract award, exercise of contract options, or major modifications to a contract. DCMA officials told us that the GMD program had an IBR underway following the restructure that began in fiscal year 2008 and was completed in fiscal year 2009, but in May 2009 the program was again redirected and the baseline review was cancelled. The Director, MDA explained that some of the GMD program’s baseline instability from frequent restructures was related to the changing GMD role in European defense. The February 2007 budget request for fiscal year 2008 included an approach to European defense focused on ground-based interceptors (GBI) from the GMD element and a large fixed radar as well as transportable X-Band radars. In September 2009, the administration altered its approach to European defense and instead planned a defense system consisting primarily of Aegis BMD sea-based and land-based systems and interceptors, as well as various sensors to be deployed over time as the various capabilities are matured. The Director told us that these changes to European capability requirements drastically affected the GMD program because a significant amount of work had to be restructured. The Director, MDA told us that during these 3 to 4 years of GMD baseline instability, MDA took steps to gain additional insight into the contractor’s progress. The program held added reviews in the absence of IBRs to understand planned near-term effort and how well the contractor was executing against those plans. In addition, the Director told us that the program held monthly focus sessions during which the joint government and contractor teams briefed the status of progress and risks. The Director acknowledged that these insights are necessary to understand the meaning of the near-term EVM data. 
However, without the benefit of a documented IBR after multiple larger restructures to the program or being made aware of MDA’s added reviews, we do not have sufficient confidence in the GMD program performance measurement baseline to reliably analyze the existing EVM data. Boeing and MDA are taking steps to address problems with the reliability of the contractor’s EVM data. The contractor had planned to deliver a performance measurement baseline by May 2010 and the GMD program is planning to conduct a series of IBRs on the remaining prime and major subcontractor effort beginning in July 2010. In addition, the contractor is taking initiatives to put a performance measurement baseline in place as quickly as possible and is providing additional training for its management and control account managers in charge of EVM. The Director, MDA told us that MDA was changing how its future contracts for the GMD program are being structured to be more receptive to modifications. This new contract structure will include dividing the work into delivery orders so that modifications will be reflected at a delivery order level instead of affecting a larger contract. These steps may help resolve the EVM issues; however we cannot determine the full effect of these steps until further evaluation after their full implementation. Similarly, we have determined that the EVM data for the Targets and Countermeasures contractor, Lockheed Martin Space Systems, are not sufficiently reliable for inclusion in our analysis. Based on discussions with and reports issued by DCMA, the Targets and Countermeasures contractor was unable to update its baseline because of numerous program changes. In September 2007, when the delivery order for the launch vehicle-2 was approximately 60 percent complete, Lockheed Martin signaled that its baseline was no longer valid by requesting a formal reprogramming of the effort to include an overrun in its baseline for this delivery order. MDA allowed the contractor to perform a schedule rebaseline and remove schedule variances – but did not provide any more budget for the recognized overrun in the performance measurement baseline. As a result, DCMA reported that the performance indicators for this delivery order, needed to estimate a contract cost at completion, were unrealistic. According to the Director, MDA did not believe the contractor had justified that there was a scope change warranting additional budget in the performance measurement baseline. He said he believed doing so would mask problems the contractor was experiencing planning and executing the contract which he identified as the issue as opposed to changes in the contract’s scope. According to the Director, one example of the issues the contractor was experiencing on this delivery order included a failure rate of 64 percent on production qualification components. MDA has since completed the work on this delivery order and begun managing follow-on target production on a newly established delivery order. In addition, during fiscal year 2009 DCMA identified several issues with the stability of the Targets and Countermeasures program baseline. For example, program changes since fiscal year 2008 on one delivery order included over 20 contract changes to the scope of work or corrective actions to quality issues. In addition, the schedule and quantity of planned flight tests changed significantly. 
During the fiscal year, DCMA submitted a corrective action request because the contractor had not incorporated authorized changes in a timely manner, although the contractor was able to close this issue before the end of the reporting period. Because of the instability in the baseline and the contractor’s inability to update the baseline with these frequent changes, we determined that the cost performance reports for 2009 do not reflect an appropriate baseline against which to measure cost and schedule progress. According to the Director, MDA, the agency has undertaken a major effort to stabilize the Targets and Countermeasures program. MDA has established a new target acquisition strategy to address recurring target performance issues and increases in target costs. Under this new strategy, the agency will buy generic targets in larger lots that are not tied to a particular test, rather than in smaller lots. This effort should also help increase MDA’s flexibility to respond to changing program requirements. In addition, the Director, MDA told us that the Director of Engineering at MDA, instead of the program manager, will define target requirements, which should also help create more stability. Despite Non-Compliance Ratings for MDA Prime Contractor EVM Systems, Most Were Sufficiently Reliable for GAO Review During the course of our review, we found that DCMA assessed 7 of the 14 contractors’ EVM systems as noncompliant in fiscal year 2009. DCMA also rated 3 of the 14 contractors’ systems as unassessed. We reviewed the basis for the noncompliance and unassessed ratings and determined that only the GMD and Targets and Countermeasures contractor EVM issues affected the reliability of the data for our purposes. See table 2 for the DCMA compliance ratings for the 14 MDA prime contracts’ EVM systems and GAO’s reliability assessment. Five EVM systems besides the GMD and Targets and Countermeasures contractor EVM systems were rated as noncompliant by DCMA during the fiscal year, but these ratings did not lead GAO to conclude that the EVM data were not sufficiently reliable. In order to judge the reliability of the data, we reviewed the significance of any open corrective action request(s) that triggered a noncompliance rating and its impact on the contractor’s ability to judge cost and schedule performance against a baseline. During the course of our audit, we interviewed DCMA representatives at each of the contractor sites to understand the basis for the noncompliance determination and to gain information to help us assess the reliability of the data. For example, the EVM system of the STSS contractor, Northrop Grumman, was deemed noncompliant because of two low-level corrective action requests related to issues with other contracts that did not materially affect the performance baseline for the STSS contract we assessed. Also, the C2BMC contractor, Lockheed Martin Information Systems & Global Services, received a rating of noncompliant during 2009 because of a corrective action request stating that major subcontractor efforts were not specifically identified, assigned, or tracked in the organizational breakdown structure. However, after the noncompliant rating was given, DCMA reversed its decision and decided to close the corrective action without requiring the contractor to change its methods. 
In addition, although DCMA was unable to assess two EVM systems during 2009, for Lockheed Martin Mission Systems and Sensors under the Aegis BMD weapon system contract and for Lockheed Martin Space Systems Company under the two THAAD contracts, we determined that the reasons for the unassessed ratings did not lead to issues with data reliability. According to the DCMA EVM specialist responsible for monitoring the Aegis BMD weapon system, the Aegis BMD weapon system contractor was unassessed because some of the accounting guidelines could not be assessed in time for the compliance rating. In addition, the THAAD contractor was not assessed because, according to DCMA, although the contractor had addressed the open corrective action requests, DCMA did not have the resources to review and document the effectiveness of those actions in order to close these items before the end of the rating assessment period. However, subsequent to the closing of the rating assessment period, the contractor’s actions were deemed sufficient by DCMA to fix the unresolved issues, and the corrective action requests were closed. BMDS Prime Contractors Aggregate Analysis Not Appropriate Due to Data Reliability Issues We are unable in this year’s report to aggregate total projected underruns or overruns in our analysis of the remaining 12 prime contracts because we had to exclude the GMD and Targets and Countermeasures programs due to data reliability issues. The GMD and Targets and Countermeasures prime contracts’ budgeted costs at completion total nearly $16 billion, or half of the total 14 contracts’ budgeted cost at completion. Because such a large portion of data had to be removed from our analysis, we determined that it would be inappropriate to perform any aggregate analysis. More detail on the cost and schedule performance of each of the remaining 12 BMDS contracts and their contractors is provided in appendix II. Nine of the remaining 12 contracts experienced cost overruns for fiscal year 2009. Most of the overruns were because of issues with maturing technologies, immature designs, or other technical issues. For example, the ABL contractor experienced a failure in some of the system’s optics, which required it to develop and procure new high-power optics, delaying the test schedule and increasing program cost. In addition, the THAAD development contractor expended more funds than expected for redesigns on the missile’s divert and attitude control system assembly, correcting issues with its boost motor, and making changes to the design of its optical block—a safety system to prevent inadvertent launches. Also, the contractor experienced cost overruns on extended testing and redesigns for its prime power unit in the radar portion of the contract. Contractors were able to perform within their fiscal year 2009 budgeted costs for three contracts—the Aegis BMD SM-3 contract for a fourth lot of 27 SM-3 Block IA missiles, the contract for another lot of 24 SM-3 Block IA missiles, and the BMDS radars contract. The Aegis BMD SM-3 contractor attributed underruns in both of these lots of Block IA missiles to production efficiencies, since the contractor has been building Aegis BMD SM-3 Block I and IA missiles for nearly 6 years. The BMDS radars contractor improved cost performance during the fiscal year through efficiencies in software development and systems engineering. 
Conclusions Because MDA has not established cost baselines, prime contractor EVM data provides one of the only tools to understand MDA’s cost and schedule progress, particularly for purposes of external oversight. At present that tool cannot be used effectively for two major contractors because their data are not sufficiently reliable. While MDA is taking action to stabilize its programs and thereby improve the reliability of its EVM data, any additional delays into fiscal year 2011 could affect future fiscal years’ oversight. Moreover, until the data are sufficiently reliable, MDA, GAO and Congress lose the valuable insights into contractor performance that EVM provides, including an understanding of significant drivers to performance, the ability to forecast future cost and schedule performance, and the ability to develop constructive corrective action plans based on these results to get programs that have encountered problems back on track. Recommendation for Executive Action We recommend the Secretary of Defense direct MDA to resolve prime contractor data reliability issues by the beginning of fiscal year 2011 and, if MDA has not resolved the data reliability problems, determine the barriers preventing resolution and provide a report to Congress on: the steps MDA is taking to make its contractor data sufficiently reliable, how the data reliability issues affect MDA’s ability to provide oversight of its contractors, and the effect these issues have on MDA’s ability to report contractor progress to others, including Congress. Agency Comments and Our Evaluation DOD provided written comments on a draft of this report. These comments are reprinted in appendix I. DOD also provided technical comments, which were incorporated as appropriate. DOD concurred with our recommendation to resolve prime contractor EVM data reliability issues by 2011; however, DOD stated that MDA considers its fiscal year 2009 prime contractor performance data to be reliable. It should be noted that, while MDA has undertaken extra measures to gain insight into and compensate for the program instability effects on its EVM data, the insights gained by MDA are not available to external organizations which depend on the EVM data to analyze and forecast trends. Without the benefit of MDA's extra measures and added reviews, we maintain that the prime contractor fiscal year 2009 EVM data are not sufficiently reliable for analysis. Although we agree that MDA will likely have better insight into the reliability of its contractor performance data once it completes its comprehensive Integrated Baseline Review process and verifies data reliability through joint surveillance of the contractor’s EVM system as stated in the DOD response, we are retaining the recommendation to ensure that these corrective steps are implemented in time to improve the reliability of the EVM data by the beginning of the next fiscal year. We are sending copies of this report to the Secretary of Defense, the Director, MDA, and Office of Management and Budget. The report also is available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. 
Appendix I: Comments from the Department of Defense Appendix II: BMDS Prime Contractor Fiscal Year 2009 Cost and Schedule Performance To show whether it is executing the planned work within the funds and time budgeted, each prime contractor provides monthly reports detailing cost and schedule performance. The contractor tracks earned value management (EVM) by making comparisons that inform the program as to whether the contractor is completing work at the cost budgeted and whether the work scheduled is being completed on time, and then reports this information in Contract Performance Reports. For example, if the contractor was able to complete more work than scheduled and for less cost than budgeted, the contractor reports a positive schedule and cost variance, or “underrun”. Alternatively, if the contractor was not able to complete the work in the scheduled time period and spent more than budgeted, the contractor reports both a negative schedule and cost variance, or “overrun”. The results can also be mixed by, for example, completing the work ahead of schedule (a positive schedule variance) but spending more than budgeted to do so (a negative cost variance). We provide two kinds of variances in our individual contract assessments: overruns or underruns measured cumulatively over the life of the contract and overruns or underruns measured during the fiscal year. Cumulative variances are the overruns or underruns the contractor has accumulated since the contract began. In order to calculate fiscal year variances, we determined the contractor’s cumulative variances at the end of September 2008 and subtracted them from the cumulative variances at the end of September 2009. Fiscal year 2009 variances give us an idea of the contractor’s performance trends during the fiscal year. A contractor may have cumulative overruns but underrun its fiscal year budgeted cost or schedule by improving its cost performance over the course of the fiscal year. In our graphs, positive fiscal year variances (underrunning cost or schedule) are indicated by increasing performance trend lines, and negative fiscal year variances (overrunning cost or schedule) are shown by decreasing performance trend lines. In our notional example in Figure 1, the positive slope of the cost variance line indicates that the contractor is underrunning fiscal year budgeted cost. Specifically, the contractor began the fiscal year with a negative cumulative cost variance of $7.0 million but ended the fiscal year with a negative cumulative cost variance of $1.0 million. That means that the contractor underran its fiscal year budgeted costs by $6.0 million and therefore has a positive $6.0 million fiscal year cost variance. Alternatively, the cumulative schedule variance decreased during the fiscal year, indicating that the contractor was unable to accomplish planned fiscal year work and therefore has a negative fiscal year schedule variance. In this case, the schedule performance declined during the fiscal year from $5.0 million down to $2.0 million. Therefore, the contractor was unable to accomplish $3.0 million worth of work planned during the fiscal year. The individual points on Figure 1 also show the cumulative performance over the entire contract up to each month. Points in a month that are above $0 million represent a positive cumulative variance (underrunning cost or schedule), and points below $0 million represent a negative cumulative variance (overrunning cost or schedule). 
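The fiscal year variance calculation described above is a simple subtraction of cumulative values, as the notional sketch below shows using the Figure 1 numbers. The function and variable names are notional; this is an illustration of the arithmetic, not code drawn from our analysis.

```python
# Illustration of the fiscal year variance calculation described above:
# fiscal year variance = cumulative variance at the end of the fiscal year
#                        minus cumulative variance at the start of the fiscal year.
def fiscal_year_variance(cumulative_at_start, cumulative_at_end):
    return cumulative_at_end - cumulative_at_start

# Notional Figure 1 values, in millions of dollars: the cumulative cost variance
# moved from -$7.0 million to -$1.0 million over the fiscal year, and the
# cumulative schedule variance moved from +$5.0 million to +$2.0 million.
fy_cost_variance = fiscal_year_variance(-7.0, -1.0)    # +6.0: underran fiscal year budgeted cost
fy_schedule_variance = fiscal_year_variance(5.0, 2.0)  # -3.0: fell behind on planned fiscal year work
print(f"Fiscal year cost variance: {fy_cost_variance:+.1f}M")
print(f"Fiscal year schedule variance: {fy_schedule_variance:+.1f}M")
```

The results match the notional example: a positive $6.0 million fiscal year cost variance and a negative $3.0 million fiscal year schedule variance.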
In our notional example, the contractor ended the fiscal year with a negative cumulative cost variance of $1.0 million. This means that since the contract's inception, the contractor is overrunning its budgeted cost by $1.0 million. In contrast, the contractor ended the fiscal year with a positive cumulative schedule variance of $2.0 million. That means that over the life of the contract, the contractor has been able to accomplish $2.0 million more worth of work than originally planned. Besides reporting cost and schedule variances, we also used contract performance report data as the basis for projections of the likely overrun or underrun of each prime contractor's budgeted cost at completion. Our projections of overruns or underruns to the budgeted cost at completion are based on the assumption that the contractor will continue to perform in the future as it has in the past. Our projections are based on the current budgeted costs at completion for each contract we assessed, which represent the total planned value of each contract as of September 2009. However, the budgeted costs at completion, in some cases, have grown significantly over time. For example, the Airborne Laser (ABL) contractor reported budgeted costs at completion totaling about $724 million in 1997, but that cost has since grown to about $3.7 billion. Our assessment only reveals the overrun or underrun since the latest adjustment to the budget at completion. It does not capture, as cost growth, the difference between the original and current budgeted costs at completion. As a result, comparing the underruns or overruns for Missile Defense Agency (MDA) programs with cost growth on other major defense acquisition programs is not appropriate because MDA has not developed the full scope of work and total cost baselines that other major defense acquisition programs have.
Aegis BMD Contractors Experienced Mixed Performance during the Fiscal Year
The Aegis Ballistic Missile Defense (BMD) program employs two prime contractors for its two main components—Lockheed Martin Mission Systems and Sensors for the Aegis BMD Weapon System and Raytheon for the Aegis BMD Standard Missile-3 (SM-3). During fiscal year 2009, the Aegis BMD SM-3 Block IA and IB missile technology development and engineering contract experienced declining cost and schedule performance; the Aegis BMD SM-3 contract for a fourth lot of 27 Block IA missiles had increasing cost and schedule performance; and the Aegis Weapon System contract and the Aegis BMD SM-3 contract for another lot of 24 SM-3 Block IA missiles experienced mixed performance.
Aegis BMD Weapon System
Although the Aegis Weapon System contractor overran fiscal year 2009 budgeted costs by $0.2 million, it was able to accomplish $1.7 million more worth of work than originally anticipated. The fiscal year 2009 cost overrun is attributed to unplanned complexity associated with developing radar software. During the fiscal year, the decline in cost performance and subsequent recovery are partially attributed to annual technical instruction baseline updates. These baseline updates occur over the course of a 60-day period during which reported performance varies. At the end of this period, there is a jump in performance as the contractor earns two months' worth of performance. Some of the cost savings from April through September 2009 are the result of a planned flight test being cancelled during the fiscal year and the contractor not spending intended funds on pre-flight test, flight test, and post-flight test activities.
The favorable schedule variance was driven by completion of some technical instruction efforts. Figure 2 shows cumulative variances at the beginning of fiscal year 2009 along with a depiction of the contractor's cost and schedule performance throughout the fiscal year. Considering prior performance on the Aegis Weapon System contract since it began performance in October 2003, the contractor is $0.2 million over budget and has been unable to accomplish $6.7 million worth of work. The small negative cost variance was driven primarily by radar software development issues, including a significant redesign not included in the original baseline. In addition, the engineering test and evaluation portion of the radar software is experiencing an increase in the lines of code, which also accounts for some of the budget overrun. The unfavorable $6.7 million in schedule variances is attributed to the engineering test and evaluation portion of the radar software, for which builds and capabilities are being delivered later than originally planned. If the contractor continues to perform as it did through September 2009, our analysis projects that at completion in June 2010, the work under the contract could cost about $0.2 million more than the budgeted cost of $1.5 billion.
Aegis BMD SM-3 for 27 Block IA Missiles
The Aegis BMD SM-3 contractor for a fourth lot of 27 Block IA missiles underran its budgeted fiscal year 2009 cost and schedule by $0.5 million and $5.8 million, respectively. The program attributed its cost and schedule underruns to efficiencies in producing Aegis BMD SM-3 Block I and IA missiles, since the contractor has been building these missiles for nearly 6 years. Additionally, the program reported that the contract incentivizes the contractor to deliver missiles ahead of schedule for the maximum incentive fee, which further encouraged the contractor to accomplish $5.8 million more worth of work than originally planned during the fiscal year. See figure 3 for an illustration of cumulative cost and schedule variances during the course of the fiscal year. Considering prior years' performance since the contract began in May 2007, the contractor is performing under budgeted cost with a favorable cumulative cost variance of $3.9 million but is behind schedule on $1.3 million worth of work. The cost underruns are primarily driven by implemented efficiencies, material transfers, and program management adjustments within the solid divert and attitude control system; a decrease in rework and more efficiencies realized with the seeker; and underruns in engineering efforts associated with the third stage rocket motor. The $1.3 million in schedule overruns is attributed to late delivery of parts as the result of some equipment failures. If the contractor continues to perform as it did through September 2009, our analysis projects that at completion in December 2011, the work under the contract could cost about $5.2 million less than the budgeted cost of $233.8 million.
Aegis BMD SM-3 for 24 Block IA Missiles
As of September 2009, the Aegis BMD SM-3 contractor for another lot of 24 Block IA missiles had underrun its fiscal year budget by $4.2 million and was behind in completing $3.7 million worth of work. The contractor attributes its cost underrun to efficiencies in program management and systems engineering because of its experience in building SM-3 Block I and IA missiles.
The $3.7 million in schedule overruns resulted from the contractor planning the baseline to a more aggressive schedule than the contractual missile delivery schedule requires. The contractor plans in this way because it is incentivized to deliver missiles 2 months ahead of schedule. As a result, negative schedule variances have occurred as the contractor is pushing to deliver missiles early. Figure 4 shows both cost and schedule trends during fiscal year 2009. Cumulatively, since the contract began in February 2008, the contractor is underrunning its contract’s budgeted cost by $1.4 million but is behind on $2.1 million worth of work. The contractor attributes the cost underrun to labor efficiencies and reduced manpower within the seeker design as well as a slower-than-planned ramp-up of some engineering efforts. The schedule delays are mainly driven by non-delivery of parts for the first stage rocket motor and late deliveries of parts associated with the third stage rocket motor. If the contractor continues to perform as it did through September 2009, our analysis projects that at completion in December 2011, the work under the contract could cost from $15.3 million less to $1.9 million more than the budgeted cost of $192.6 million. Aegis BMD SM-3 Block IA and IB Technical Development and Engineering For the majority of the fiscal year, the Aegis BMD SM-3 Block IA and IB Technical Development and Engineering contractor experienced a negative downward trend in cost and schedule performance. The program attributes its fiscal year cost overrun of $44.6 million to engineering development on its Aegis BMD SM-3 Block IB throttleable divert and attitude control system being more difficult than planned. The $29.4 million of unaccomplished work during the fiscal year was due to late receipt of materials that drove delays in some of the hardware testing. See figure 5 for trends in the contractor’s cost and schedule performance during the fiscal year. Cumulatively, since the contract began in December 2007, the program also has unfavorable cost and schedule variances of $51.2 million and $40.0 million, respectively. Drivers of the $51.2 million in cost overruns are throttleable divert and attitude control system engineering and hardware major submaterial price increases in support of design reviews and demonstration unit. In addition, quality issues added to cost overruns as the contractor experienced unanticipated design changes to the nozzle resulting from foreign object debris issues. The $40.0 million worth of work that the contractor was unable to achieve was driven by several issues, including late receipt of hardware and late production-level drawings. In addition, delays in testing for attitude control system thrusters and a quality issue that led to the contractor receiving nonconforming hardware also contributed to unaccomplished work. If the contractor continues to perform as it did through September 2009, our analysis projects that at completion in December 2010, the work under the contract could cost from $94.0 million to $194.8 million more than the budgeted cost of $588.9 million. ABL Cost and Schedule Performance Declined during Fiscal Year 2009 The ABL contractor, Boeing, experienced cost growth and schedule delays throughout the fiscal year. The contractor overran budgeted fiscal year 2009 cost and schedule by $10.2 million and $14.9 million respectively. 
The major drivers of fiscal year negative variances were technical issues and the addition of some testing that was not originally anticipated. For example, a fire suppression system failed to meet performance requirements for the laser flight test, which limited the scope of the testing, added an unscheduled ground test and flight tests to ensure that the system worked properly, and increased costs. In addition, the contractor experienced a failure in some of the system's optics, which required it to develop and procure new high-power optics and ultimately delayed the test schedule and increased program cost. Lastly, because of issues discovered during beam control/fire control flights, the program scheduled additional unplanned beam control flights to accomplish the necessary objectives. The contractor experienced a continuing cost and schedule performance decline, as seen in figure 6. Since the contract began in November 1997, the contractor is cumulatively $95.0 million over budget and behind schedule on $38.5 million worth of work. The program attributes these variances to optics issues that have affected delivery and installation and caused test program delays. If the contractor continues to perform as it did through September 2009, our analysis projects that at completion in February 2010, the work under the contract could cost from $98.0 million to $116.8 million more than the budgeted cost of $3.7 billion.
C2BMC Overrunning Cumulative Cost and Schedule
The Command and Control, Battle Management, and Communications (C2BMC) contractor, Lockheed Martin Information Systems & Global Services, has overrun the agreement's budgeted costs by $29.5 million since it began performance in February 2002 and has an unfavorable cumulative schedule variance of $4.2 million. According to program officials, the main drivers of the cumulative variances are associated with the Part 4 and Part 5 portions of the agreement. The Part 4 effort, which began in January 2006 and finished in December 2007, was for the completion of several spiral capabilities, the upgrade for spiral suites, and implementation of initial global engagement capabilities at its operations center. The Part 5 effort, which began in January 2008 and is still ongoing, covers operations and sustainment support for fielded C2BMC; deliveries of spiral hardware, software, and communications; and the initiation of development of initial global engagement capabilities. MDA and the contractor anticipate being able to cover cost overruns on the agreement with the nearly $39 million in management reserve set aside by the contractor. Part 5 accounts for nearly $10.4 million of the $29.5 million in negative cumulative cost variances. These budgeted cost overruns are driven by the increased technical complexity of Spiral 6.4 development and by more support being needed than planned to address requests from the warfighter for software modifications. The $4.2 million of unaccomplished work on the agreement is driven by efforts in the Part 5 portion of the agreement, including delays in system-level tests, late completion of C2BMC interface control document updates, and unexpected complexity of algorithm development and network design. See figure 7 for an illustration of cumulative cost and schedule performance during fiscal year 2009. The contractor overran its fiscal year 2009 budgeted cost by $5.2 million but is $2.9 million ahead of schedule.
The drivers of the unfavorable fiscal year cost variance of $5.2 million are complexities associated with Spiral 6.4 development, additional design excursions, and additional costs to address system modifications requested by the warfighter. The contractor achieved a favorable fiscal year schedule variance largely because of gains in the month of September 2009. During this month, the contractor performed a replan of its work content and a future spiral’s scope was removed from the Part. This replan eliminated approximately $10 million in schedule variances for labor and materials because the work was no longer to be performed. If the contractor continues to perform as it did through September 2009, our analysis projects that at completion in December 2011, the work under the agreement could cost from $26.5 million to $33.1 million more than the budgeted cost of $1.0 billion. Sensors Contractor Experienced Mixed Performance during the Fiscal Year This year we are reporting on three contracts under the Sensors program—the Ballistic Missile Defense System (BMDS) Radars contract on which we have reported in prior years, the Terminal High Altitude Area Defense (THAAD) fire unit radar #7 contract, and the Thule radar contract. During fiscal year 2009, the Sensors’ contractor, Raytheon, experienced declining cost and schedule performance on the Thule radar and Army Navy/Transportable Radar Surveillance—Model 2 (AN/TPY-2) radar #7 contracts, but had favorable cost and schedule performance on the BMDS Radars contract. BMDS Radars Throughout fiscal year 2009, the BMDS Radars contractor exhibited improved cost and schedule performance. The contractor was able to perform $5.8 million under budgeted cost and $3.5 million ahead of schedule for the fiscal year. The drivers of the contractor’s improved cost performance are efficiencies in the software development and systems engineering. The contractor reports that the improved schedule performance is due to software schedule improvement as well as completion of manufacturing and integration testing on one of the radars. The variances, depicted in figure 8, represent the BMDS Radars contractor’s cumulative cost and schedule performance over fiscal year 2009. Since the contract began in March 2003, the BMDS Radars contractor is under budget by $27.8 million but is behind on accomplishing $6.1 million worth of work. The favorable cost variance of $27.8 million is driven by the use of less manpower than planned and the benefit of lessons learned from previous radar software builds. The unfavorable $6.1 million of unaccomplished work was driven by the late start on restructuring the latest software release and rework and subcomponent delays with one of the radars. If the contractor continues to perform as it did through September 2009, our analysis projects that at completion in August 2010, the work under the contract could cost from $31.3 million to $43.0 million less than the budgeted cost of $1.2 billion. AN/TPY-2 #7 Radar The AN/TPY-2 radar #7 contractor experienced unfavorable fiscal year 2009 cost and schedule variances of $4.3 million and $15.2 million, respectively. As of September 2009, the AN/TPY-2 radar #7 contract had overrun its budgeted cost by $1.9 million but was ahead in completing $9.0 million worth of work. Contributors to the cumulative cost overruns included supplier quality issues that required an increase in supplier quality support that was not in the original baseline. 
In addition, the program’s prime power unit purchase orders were over budgeted cost because the budgeted cost for four of the prime power units was prematurely established before the design of the first prime power unit was finalized. These delays caused some uncertainty in the final production costs until the design was finalized. As of August 2009, the contractor was working to develop a cost model and establish a true unit cost price per prime power unit. Trends in cost and schedule performance during the fiscal year are depicted in figure 9. Cumulatively, since the contract began in February 2007, the AN/TPY-2 Radar #7 contractor has completed $9.0 million worth of work ahead of schedule on this contract by executing work ahead of the contract baseline plan in some areas, including obtaining materials for equipment supporting radar operation. If the contractor continues to perform as it did through September 2009, our analysis projects that at completion in April 2010, the work under the contract could cost from $0.3 million less to $36.9 million more than the budgeted cost of $172.5 million. Thule Radar The Thule radar contractor overran fiscal year 2009 budgeted costs by $0.4 million and was unable to accomplish $0.8 million worth of work. The contractor attributes the cost overruns to exceeding planned engineering efforts in order to proactively work on issues prior to equipment delivery and ship readiness. The unfavorable schedule performance is due to the contractor expending some if its positive schedule variance in 2008 and from being behind schedule on the implementation of information assurance requirements. Figure 10 shows cumulative variances at the beginning of fiscal year 2009 along with a depiction of the contractor’s cost and schedule performance throughout the fiscal year. The Thule radar contractor, since it began performance in April 2006, is underrunning budgeted costs by $2.5 million and overrunning schedule by $0.2 million. Underruns in hardware, manufacturing, and facility design, construction, and installation drove the $2.5 million in cost underruns. If the contractor continues to perform as it did through September 2009, our analysis projects that at completion in September 2010, the work under the contract could cost from $1.4 million to $2.8 million less than the budgeted cost of $101.9 million. STSS Maintained Schedule Performance, but Cost Performance Continued to Decline during the Fiscal Year During fiscal year 2009, the Space Tracking and Surveillance System (STSS) contractor, Northrop Grumman, was able to accomplish $0.1 million more worth of work than originally anticipated, but overran budgeted costs by $72.6 million. The contractor reports that the favorable schedule variances are due to completed space vehicle 1 and 2 shipment, setups and validations, and launch. In addition, the contractor overran budgeted fiscal year costs because of additional support required to support launch operations including addressing hardware anomalies, payload integration, procedure development, and launch site activities. Additional support was also required to support the delays to the launch date beyond the original plan. See figure 11 for an illustration of the cumulative cost and schedule variances during fiscal year 2009. Despite the small gains in schedule variances during the fiscal year, the contractor maintains cumulative negative cost and schedule variances of $391.8 million and $17.7 million respectively from the contract’s inception in August 2002. 
Drivers of the $391.8 million in contract cost overruns include labor resources exceeding planned levels and unanticipated difficulties related to space vehicle environment testing, hardware failures and anomalies, and program schedule extension. In addition, space vehicle-1 testing, rework, hardware issues, and sensor testing anomaly resolution, as well as space vehicle-2 anomalies and testing, have also contributed to the unfavorable cost variances. System test and operations and program management experienced cost overruns because of launch date schedule extensions. Lastly, ground labor resources exceeded planned levels because of the unanticipated need for a new ground software build and ground acceptance and verification report activities. The contractor has been unable to accomplish $17.7 million worth of work on the contract because of launch schedule delays, delays in verification of system requirements caused by late space segment deliveries, and tasks slipping in response to fiscal year 2009 funding reductions. If the contractor continues to perform as it did through September 2009, our analysis projects that at completion in September 2010, the work under the contract could cost from $620.9 million to $1.6 billion more than the budgeted cost of $1.6 billion.
THAAD Development Contract Overran Cost and Schedule While THAAD Fire Unit Fielding Production Contract Experienced Underruns
This year we report on two THAAD contracts—the development contract and the fire unit fielding production contract. As the contractor for both of these contracts, Lockheed Martin Space Systems Company was overrunning budgeted cost and schedule on the THAAD development contract but remained under cost and ahead of schedule on the THAAD fire unit fielding production contract.
THAAD Development
During fiscal year 2009, the THAAD development contractor overran its budgeted cost by $33.1 million but was ahead on completing $7.4 million worth of work. The fiscal year cost overruns are mainly in the missile, launcher, and radar portions of the contract. The missile experienced overruns in redesigning divert and attitude control system assemblies, correcting issues with its boost motor, and making changes to the design of its optical block—a safety system to prevent inadvertent launches. The contractor spent more than expected during the fiscal year on the launcher portion of the contract, investing in labor and overtime to recover schedule. Lastly, the prime power unit in the radar portion of the contract required extended testing and redesign, which also contributed to fiscal year costs. Despite fiscal year cost overruns, the contractor was able to accomplish $7.4 million more worth of work than originally anticipated, with the gains occurring in the missile and launcher portions of the contract. The schedule variance improved in the missile portion because of completion of missile qualification work. The contractor was also able to complete software activities and resolve hardware design and qualification issues in the launcher. See figure 12 for trends in the contractor's cost and schedule performance during the fiscal year. Although the contractor made some schedule gains during the fiscal year, overall, since it began performance in June 2000, the contractor is behind on $9.1 million worth of work. The radar portion's unfavorable schedule variance is driven by delays to THAAD flight test missions during fiscal year 2009.
In addition, the fire control’s software qualification testing had to be extended because of the number of software changes and because the welding on the fire control power distribution unit’s chassis failed weld inspection and was subsequently unusable which contributed to the unfavorable schedule variance. The launcher experienced design delays and quality issues that led to nonconformances in delivered hardware. This hardware subsequently required investigation and rework, which also added unexpected work to the schedule. Lastly, the program was unable to accomplish work in the missile component’s flight sequencing assembly component because qualification tests were delayed due to failures with the optical block switch. The unfavorable fiscal year cost variances added to the overall cost overruns of $261.9 million. The contractor attributes overruns to the missile, launcher, and radar portions of the contract. The missile’s unfavorable cost variance is driven by unexpected costs in electrical subsystems, propulsion, and divert and attitude control systems. Also contributing are issues associated with the optical block, range safety, communications systems, and boost motors. The launcher has experienced cost growth because of inefficiencies that occurred during hardware design, integration difficulties, quality issues leading to delivered hardware nonconformances, and ongoing software costs being higher than planned because of rework of software to correct testing anomalies. These problems resulted in schedule delays and higher labor costs to correct the problems. In addition, cooling and power issues with the radar have contributed to overruns with the prime power unit. Numerous fan motor control system redesigns and retrofits for the cooling system drove costs by the supplier. Inexperience with building a prime power unit and a limited understanding of the true complexity and risks associated with the system led to significant cost growth and delivery delays. If the contractor continues to perform as it did through September 2009, our analysis projects that at completion in January 2011, the work under the contract could cost from $267.2 million to $287.4 million more than the budgeted cost of $4.8 billion. THAAD Fire Unit Fielding Production The THAAD fire unit fielding production contractor overran fiscal year 2009 budgeted cost and schedule by $4.7 million and $10.7 million, respectively. The fiscal year cost and schedule overruns were caused primarily by the missile and fire control components. Unfavorable missile cost and schedule variances were the result of hardware failures associated with components of the inertial measurement unit, communications transponder, and the boost motor causing delays and rework. In addition, the fire control portion of the contract experienced overruns because of unplanned engineering design changes and labor associated with fire control hardware and issues identified during testing. These changes were made to the hardware and deliveries already completed. See figure 13 for an illustration of cumulative cost and schedule variances during the course of the fiscal year. Despite fiscal year overruns, the fire unit production contractor continues to underrun its total contract cost and schedule. The contractor, since it began performance in December 2006, is currently $6.1 million under budgeted costs and has completed $11.3 million more worth of work than originally anticipated. 
The cost underruns are primarily due to a slow start-up on fire unit fielding level of effort activities. Schedule variances are not reported on level of effort activities, so delaying these activities would save on costs without affecting reported schedule. However, these false positive cost variances will erode over time once the work is accomplished. When planned level of effort work is not performed, EVM metrics are distorted because they show cost savings for work that has not yet been accomplished. Once the work is finished, however, large unfavorable cost variances will be revealed since the program will need to expend funds to accomplish the work for which it has already received credit. In addition, the program reports that its favorable schedule variances are due to the transfer of excess interceptor hardware from the development contract to the fire unit fielding contract. Although the favorable schedule variance from this transfer of hardware is nearly $23.0 million, offsets occurred from delayed interceptor build activity driven by multiple supplier hardware issues and from schedule delays because of issues with the boost motor, including unplanned replacement of motor cases, delayed case fabrication, and slowed operations caused by a safety incident at a production facility. If the contractor continues to perform as it did through September 2009, our analysis projects that at completion in August 2011, the work under the contract could cost from $1.3 million to $17.9 million less than the budgeted cost of $604.4 million. However, it should be noted that the projection of the estimated cost at completion may also be overly optimistic because it is based on current cost performance that is inflated by level of effort activities and on schedule performance that is inflated by transfers of materials from another contract.
Appendix III: Scope and Methodology
To examine the progress Missile Defense Agency (MDA) prime contractors made in fiscal year 2009 in cost and schedule performance, we examined contractor performance on 14 Ballistic Missile Defense System (BMDS) element contracts. In assessing each contract, we examined contract performance reports from September 2008 through October 2009, including the format 1 variance data report, cost and schedule variance explanations included in the format 5, and format 2 organizational category variance totals where available. We performed extensive analysis on format 1 of the contract performance reports in order to aggregate the data and verify data reliability. To ensure data reliability, we performed a series of checks based on consultation with earned value experts and in accordance with GAO internal reliability standards. We began by reviewing certification documentation to track whether the earned value management (EVM) systems that produced the contract performance reports were compliant with American National Standards Institute standards in 2009. We received this documentation through the Defense Contract Management Agency (DCMA), which performs independent EVM surveillance of MDA contractors. We then reviewed the latest integrated baseline review out-briefs for the BMDS elements' contracts to examine the earned value-related risks that were identified during the review and followed up with the program office to see which, if any, risks were still open action items.
To further review the contract performance report format 1 data, we performed basic checks on the totals from contract performance report format 1 to ensure that they matched up with organizational totals from the contract performance report format 2, where available. This check enabled us to review whether the earned value data were consistent across the report. In addition, we obtained a spreadsheet tool from GAO internal earned value experts to perform a more extensive check of the data. Using this tool, we ran various analyses on the data we received to search for anomalies. We then followed up on these anomalies with the program offices that manage each of the 14 BMDS element contracts. We reviewed the responses with GAO EVM experts and further corroborated the responses with DCMA officials. We used contract performance report data in order to generate our estimated overrun or underrun of the contract cost completion by using formulas accepted by the EVM community and printed in the GAO Cost Estimating and Assessment Guide. We generated multiple formulas for the projected contract cost at completion that were based on how much of the contract had been completed up to September 2009. The ranges in the estimates at completion are driven by using different efficiency indices based on the program’s completion to adjust the remaining work according to the program’s past cost and schedule performance. The idea in using the efficiency index is that how a program has performed in the past will indicate how it will perform in the future. In close consultation with earned value experts, we reviewed the data included in the analysis and made adjustments for anomalous data where appropriate. We conducted this performance audit from February 2010 to July 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix IV: GAO Contact and Staff Acknowledgments Acknowledgments In addition to the contact named above, David Best, Assistant Director; Meredith Kimmett; LaTonya Miller; Karen Richey; Robert Swierczek; Alyssa Weir, and John A. Krump made key contributions to this report.
By law, GAO is directed to assess the annual progress the Missile Defense Agency (MDA) made in developing and fielding the Ballistic Missile Defense System (BMDS). GAO issued its latest assessment of MDA's progress covering fiscal year 2009 in February 2010. This report supplements that assessment to provide further insight into MDA's prime contractor performance for fiscal year 2009. Prime contractors track earned value management (EVM) by making comparisons that inform the program as to whether the contractor is completing work at the cost budgeted and whether the work scheduled is being completed on time. Our analysis of contractor EVM data included examining contract performance reports for 14 BMDS contracts, reviewing the latest integrated baseline reviews, performing extensive analysis of data anomalies, and conducting interviews with Defense Contract Management Agency (DCMA) officials--the independent reviewers of MDA contractor EVM systems. Unlike GAO's reports in previous years, GAO was unable to analyze the EVM data for all MDA contracts. GAO determined that the data for the Ground-based Midcourse Defense (GMD) and Targets and Countermeasures programs were not sufficiently reliable to include in our report because of instability in these programs' baselines. When the baseline on which the work is performed and measured against is no longer representative of the program of record, program managers and other decision makers lose the ability to develop constructive corrective action plans to get the program on track. Specifically, without reliable EVM data, GAO was unable to identify significant performance drivers or forecast future cost and schedule performance. Because the two contracts associated with these programs represent half of the budgeted cost at completion for the 14 contracts GAO reviewed, GAO also determined it was not appropriate in this report to aggregate total projected underruns or overruns of the remaining 12 prime contracts as GAO has in prior reports. The GMD prime contractor performance data was not sufficiently reliable to use as the basis for analysis because the contractor was unable to update its baseline to include numerous changes to the program and modifications to the contract. Despite three large restructures since 2007 totaling over $2 billion, the GMD program has not conducted an integrated baseline review since December 2006. DOD acquisition policy states that an integrated baseline review is to be conducted within 6 months after contract award, exercise of contract options, or a major modification to an existing contract. The Director, MDA has taken extra steps to gain insight into the contractor's performance. Further, he intends to report EVM information to Congress annually. Similarly, the EVM data for the Targets and Countermeasures contractor is also not sufficiently reliable to use in our analysis. DCMA identified several issues with the stability of the Targets and Countermeasures program baseline including a large amount of schedule and quantity changes to planned flight tests and over 20 contract changes to the scope of work or corrective actions to quality issues for one of the delivery orders over the course of a year. Because the contractor has not been able to update the established budget in the baseline, the cost performance reports do not reflect an appropriate baseline against which to measure cost and schedule progress. 
Nine of the remaining 12 contracts experienced cost overruns for fiscal year 2009, mostly because of maturing technologies, immature designs, or other technical issues. For example, the Airborne Laser contractor experienced a failure in some of the system's optics, which required it to develop and procure new high-power optics, delaying the test schedule and increasing program cost.
U.S. Postal Service: Little Progress Made in Addressing Persistent Labor-Management Problems
Mr. Chairman and Members of the Subcommittee: We are pleased to be here today to discuss our report on the efforts of the Postal Service, the four major labor unions, and the three management associations to improve employee working conditions and overall labor-management relations. Our recently issued report provides updated information related to our September 1994 report, which identified various labor-management relations problems in the Postal Service and made recommendations for addressing such problems. In our most recent report, we discussed the challenges that these eight organizations continue to face in attempting to improve labor-management relations. Specifically, this report provides information on three topics: (1) the extent to which the Service, the four unions, and the three management associations have progressed in addressing persistent labor-management relations problems since our 1994 report was issued; (2) the implementation of various improvement efforts, referred to in the report as initiatives, some of which were intended to help these eight organizations deal with the problems that we identified in our 1994 report; and (3) approaches that might help the eight organizations improve labor-management relations. I will also discuss the extent to which the Service was using a third party to serve as a facilitator in labor-management discussions, which we recommended in our 1994 report.
Little Progress Has Been Made in Improving Labor-Management Relations Problems
Since our 1994 report was issued, the Postal Service has improved its overall financial performance, as well as its delivery of First-Class Mail. However, little progress has been made in improving persistent labor-management relations problems. In many instances, such problems were caused by autocratic management styles, the sometimes adversarial relationships between postal management and union leadership at the local and national levels, and an inappropriate and inadequate performance management system. Labor-management problems make it more difficult for these organizations to work together to improve the Service's performance so it can remain competitive in today's dynamic and competitive communications market. In recent years, we have found that the sometimes adversarial relationships between postal management and union leadership at national and local levels have generally persisted, as characterized by (1) a continued reliance on arbitration by three of the four major unions to settle their contract negotiation impasses with the Service, also known as interest arbitration; (2) a significant rise not only in the number of grievances that have been appealed to higher levels but also in the number of those awaiting arbitration; and (3) until recently, the inability of the Service and the other seven organizations to convene a labor-management relations summit to discuss problems and explore solutions. According to various postal, union, and management association officials whom we interviewed, the problems persist primarily because the parties involved cannot agree on common approaches for addressing these problems. This, in turn, has prevented the Service and the other seven organizations from sustaining the intended benefits of specific improvement efforts that could help improve the postal workroom climate. I would now like to discuss these problems in more detail.
Regarding the use of interest arbitration, as discussed in our 1994 report, contract negotiations occur nationally between the Service and the four labor unions every 3 or 4 years. Since as far back as 1978, interest arbitration has sometimes been used to resolve bargaining deadlocks in contract negotiations by APWU, NALC, and Mail Handlers. The most recent negotiations occurred for contracts expiring in November 1994 for those three unions. The issues at stake were similar to those raised in previous negotiations, which included the unions' concerns about wage and benefit increases and job security and postal management's concerns about cost cutting and flexibility in hiring practices. According to a postal official, negotiations about old issues that keep resurfacing have at times been bitter and damaging to the relationship between the Service and the unions at the national level. Union officials also cited the Service's contracting out of various postal functions—also known as outsourcing—as a topic that has caused them a great deal of concern. Regarding the high volume of grievances, postal and union officials told us that their views had not changed significantly since we issued our 1994 report. Generally, the officials tended to blame each other for the high volume of grievances being filed and the large number of backlogged grievances. Finally, at the time our 1997 report was issued, the Postal Service and the other seven organizations had been unable to convene a labor-management relations summit. The Postmaster General (PMG) proposed the summit over 2 years ago to, among other things, address our recommendation to establish a framework agreement of common goals and approaches that could help postal, union, and management association officials improve labor-management relations and employee working conditions. Initially, the responses from the other seven organizations to the PMG's invitation were mixed. For instance, around January 1995, the leaders of the three management associations and the Rural Carriers union accepted the invitation to participate in the summit. However, at that time, the contracts for three unions—APWU, NALC, and Mail Handlers—had expired and negotiations had begun. The union leaders said they were waiting until contract negotiations were completed before making a decision on the summit. In April 1996, when negotiations had been completed, the three unions agreed to participate. Because of these initial difficulties in convening the summit, in February 1996, the Service asked the Director of the Federal Mediation and Conciliation Service (FMCS) to provide mediation services to help convene the summit. Also, in March 1996, Mr. Chairman, you encouraged the FMCS Director to assist the Service by providing such services. As discussed in our 1997 report, although various preliminary meetings had taken place to determine an agenda, the efforts to convene a summit were not successful. Recently, according to an FMCS official, a summit occurred on October 29, 1997, that was attended by various officials from the eight organizations, including the Postal Service, the four major unions, and the three management associations. We are encouraged by the fact that this meeting occurred. Such meetings can provide the participants a means of working toward reaching agreement on common approaches for addressing labor-management relations problems. We believe that such agreement is a key factor in helping these organizations sustain improvements in their relations and in the postal work environment.
Actions to Implement Initiatives Have Been Taken, but Little Information Was Available on Results
For example, union officials have not endorsed the testing of the revised processes developed under the Delivery Redesign initiative. At the national level, NALC officials told us that they believed that revisions to the processes by which city carriers sort and deliver mail should be established through the collective bargaining process. The Employee Opinion Survey (EOS) is an example of an initiative that was discontinued. The nationwide annual EOS, begun in 1992 and continued through 1995, was a voluntary survey designed to gather the opinions of all postal employees about the Service's strengths and shortcomings as an employer. Postal officials told us that such opinions have been useful in helping the Service determine the extent of labor-management problems throughout the organization and make efforts to address those problems. Efforts to continue implementing this initiative were hampered primarily by disagreements between the Service and the other involved participants over how best to use the initiative to help improve the postal work environment. Also, according to postal officials, a lack of union participation in this initiative largely led the Service to discontinue its use. According to some postal and union officials, the 1995 EOS was boycotted primarily because some unions believed that the Service inappropriately used the results of past surveys during the 1994 contract negotiations.
Continued Need to Improve Labor-Management Relations
As discussed in our report, we continue to believe that to sustain and achieve maximum benefits from any improvement efforts, it is important for the Service, the four major unions, and the three management associations to agree on common approaches for addressing labor-management relations problems. Our work has shown that there are no clear or easy solutions to these problems. However, continued adversarial relations could lead to escalating workplace difficulties and hamper efforts to achieve desired improvements. In our report, we identified some approaches that might help the Service, the unions, and the management associations reach consensus on strategies for dealing with persistent labor-management relations problems. Such approaches included the use of a third-party facilitator, the requirements of the Government Performance and Results Act, and the proposed Postal Employee-Management Commission. As I mentioned previously, with the assistance of FMCS, the Postal Service, the four major unions, and the three management associations recently convened a postal summit meeting. As discussed in our 1994 report, we believe that the use of FMCS as a third-party facilitator indicated that outside advice and assistance can be useful in helping the eight organizations move forward in their attempts to reach agreement on common approaches for addressing labor-management relations problems. In addition, the Government Performance and Results Act provides an opportunity for joint discussions. Under the Results Act, Congress, the Postal Service, its unions, and its management associations as well as other stakeholders with an interest in postal activities can discuss not only the mission and proposed goals for the Postal Service but also the strategies to be used to achieve desired results. These discussions can provide Congress and the other stakeholders a chance to better understand the Service's mission and goals.
Such discussions can also provide opportunities for the parties to work together to reach consensus on strategies for attaining such goals, especially those that relate to the long-standing labor-management relations problems that continue to challenge the Service. Another approach aimed at improving labor-management relations is the proposed establishment of an employee-management commission that was included in the postal reform legislation you introduced in June 1996 and reintroduced in January 1997. Under this proposed legislation, a temporary, presidentially appointed seven-member Postal Employee-Management Commission would be established. This Commission would be responsible for evaluating and recommending solutions to the workplace difficulties confronting the Service. The proposed Commission would prepare its first set of reports within 18 months and terminate after preparing its second and third sets of reports.
Comments From the Postal Service, Labor Unions, Management Associations, and FMCS
We received comments on a draft of our report from nine organizations—the Service, the four major unions, the three management associations, and FMCS. The nine organizations generally agreed with the report's basic message that little progress had been made in improving persistent labor-management relations problems, although they expressed different opinions as to why. Also, the nine organizations often had different views on such matters as the implementation of and results associated with the 10 initiatives; the likelihood of the organizations to reach consensus on the resolution of persistent labor-management relations problems; the desirability of having external parties, such as Congress, become involved in addressing such problems; and the comprehensiveness of our methodology, which we believed was reasonable and appropriate given the time and resources available. We believe that the diversity of opinions on these matters reinforces the overall message of our most recent report and provides additional insight on the challenges that lie ahead with efforts to try to improve labor-management relations problems in the Postal Service. In summary, the continued inability to reach agreement has prevented the Service, the four major unions, and the three management associations from implementing our recommendation to develop a framework agreement. We continue to believe that such an agreement is needed to help the Service, the unions, and the management associations reach consensus on the appropriate goals and approaches for dealing with persistent labor-management relations problems and improving the postal work environment. Although we recognize that achieving consensus may not be easy, we believe that without it, workplace difficulties could escalate and hamper efforts to bring about desired improvements. Mr. Chairman, this concludes my prepared statement. My colleague and I would be pleased to respond to any questions you may have.
GAO discussed its report on the efforts of the Postal Service, the four major labor unions, and the three management associations to improve employee working conditions and overall labor-management relations, focusing on: (1) the extent to which the Postal Service, the four unions, and the three management associations have progressed in addressing persistent labor-management relations problems since GAO's 1994 report was issued; (2) the implementation of various improvement efforts, or initiatives, some of which were intended to help these eight organizations deal with problems identified in the 1994 report; and (3) approaches that might help the eight organizations improve labor-management relations. GAO noted that: (1) little progress has been made in improving persistent labor-management relations problems at the Postal Service since 1994; (2) although the Postal Service, the four major unions, and the three management associations generally agreed that improvements were needed, they have been unable to agree on common approaches to solving such problems; (3) these parties have not been able to implement GAO's recommendation to establish a framework agreement that would outline common goals and strategies to set the stage for improving the postal work environment; (4) in a recent report, GAO described some improvement initiatives that many postal, union, and management association officials believed held promise for making a difference in the labor-management relations climate; (5) despite actions taken to implement such initiatives, little information was available to measure results, as some initiatives: (a) had only recently been piloted or implemented; or (b) were not fully implemented or had been discontinued because postal, union, and management association officials disagreed on the approaches used to implement the initiatives or on their usefulness in making improvements; (6) efforts to resolve persistent labor-management relations problems pose an enormous challenge for the Postal Service and its unions and management associations; (7) with assistance from a third-party facilitator, the Postal Service and leaders from the four unions and the three management associations convened a summit, aimed at providing an opportunity for all the parties to work toward reaching agreement on how best to address persistent labor-management relations problems; (8) another such opportunity involves the strategic plan required by the Government Performance and Results Act, which can provide a foundation for all major postal stakeholders to participate in defining common goals and identifying strategies to be used to achieve these goals; (9) a proposal was included in pending postal reform legislation to establish a presidentially appointed commission that could recommend improvements; (10) GAO continues to believe that it is important for the eight organizations to agree on appropriate strategies for addressing labor-management relations problems; (11) various approaches exist that can be used to help the organizations attain consensus; and (12) without such consensus, the ability to sustain lasting improvements in the postal work environment may be difficult to achieve.
Background The OMB bulletin establishes ForeignAssistance.gov as an official site for U.S. government foreign assistance data and requires U.S. agencies to report, among other things, funding and activity-level data by implementing mechanism (e.g., contract or grant), including activity purpose, description, and location. In addition, the bulletin indicates that agencies are required to provide transaction-level data for each activity. Transactions are individual financial records of obligations and disbursements in an agency’s accounting system. Data on the website are categorized under the nine U.S. foreign assistance framework categories (economic development; education and social services; health; peace and security; democracy, human rights, and governance; environment; humanitarian assistance; program management; and multisector). In general, as of July 2016, data were available from fiscal years 2006 to 2017. Users can view the data through graphic presentations, including maps; filter data by agency, country, and sector; and download data in a spreadsheet. Figure 1 shows a graphic presentation of data (funding, agencies, and fiscal year) available on ForeignAssistance.gov. Among other things, the United States is publishing data on ForeignAssistance.gov to meet international commitments and domestic data transparency initiatives. The website incorporates key elements necessary for the United States to meet its IATI commitment, such as frequency of reporting (quarterly), activity-level data, and publishing format. In addition, a 2015 State guidance document (toolkit) indicates that ForeignAssistance.gov is expected to meet key domestic data reporting requirements on U.S. government activities, including those in the Digital Accountability and Transparency Act of 2014 (DATA Act). The DATA Act aims to improve the transparency and quality of the federal spending data by requiring that agencies begin reporting data on all federal spending—including grants, contracts, and other types of financial assistance—using governmentwide data standards by May 2017 and publish these data in a computer-readable format by May 2018. The specific reporting guidelines for ForeignAssistance.gov are outlined in the bulletin, which also notes two existing reports on U.S. foreign assistance—the U.S. Overseas Loans and Grants: Obligations and Loans Authorizations (Greenbook) report to Congress and the U.S. Annual Assistance Report to the Organisation for Economic Co-operation and Development’s Development Assistance Committee (OECD/DAC). The bulletin states that USAID will be the lead agency for verifying the data and assembling these reports. USAID publishes Greenbook and OECD/DAC data on Foreign Aid Explorer. Table 1 describes the key characteristics of U.S. foreign assistance reporting, including lead agency, frequency, and type of data collected. The bulletin outlines a quarterly data collection process for agencies to submit data for ForeignAssistance.gov. The process begins with State providing agencies a data submission template to assist with the collection of agency data. The quarterly process outlined in the bulletin includes the following key steps: Agencies are required to submit data on their ongoing foreign assistance activities on a quarterly basis. State is responsible for working with each agency’s designated point of contact to coordinate on the data submitted for ForeignAssistance.gov and identify areas in which agencies may need to make corrections. 
State is responsible for publishing agency data on ForeignAssistance.gov. In addition, on a quarterly basis, State is responsible for developing the U.S. government IATI-formatted file and submitting it to the IATI Registry. State Has Established a Process for Collecting and Publishing ForeignAssistance.gov Data State Is Collecting and Publishing ForeignAssistance.gov Data for 10 Agencies Since 2013, State has collected and published data from 10 of the 22 agencies identified in the bulletin. State focused on these 10 agencies because they are responsible for providing 98 percent of U.S. foreign assistance, according to State. In addition, State officials told us that they prioritized improving data quality before collecting data from the 12 agencies that are not yet reporting data for ForeignAssistance.gov. Figure 2 illustrates State’s data collection and publishing process for the 10 agencies currently reporting data to ForeignAssistance.gov. The 10 agencies are the Department of Defense (DOD), the Department of Health and Human Services (HHS), the Inter-American Foundation (IAF), the Millennium Challenge Corporation (MCC), the Peace Corps, State, the Department of the Treasury (Treasury), the U.S. African Development Foundation (USADF), USAID, and the Department of Agriculture (USDA). The process consists of five key steps, which occur on a quarterly basis: 1. State’s ForeignAssistance.gov team reaches out to each of the agency points of contact to collect that quarter’s data and provides technical guidance documents, including the bulletin. 2. Each agency point of contact e-mails data to State in a spreadsheet or the extensible markup language (XML) format, which can contain as many as 189 data fields per activity. In 2014, State expanded the number of data fields to a total of 189 to align with the IATI Standard. State officials indicated that some of the fields (e.g., currency and U.S. government) are auto-populated by the ForeignAssistance.gov team and that not all data fields are relevant to every agency. State officials told us that 55 of the 189 data fields are the most relevant to users and can be downloaded from ForeignAssistance.gov in spreadsheet format (see fig. 3). In November 2015, State prioritized 37 data fields that are critical to the U.S. government’s foreign assistance reporting, according to State. According to State officials, on average, the 10 agencies reporting data for ForeignAssistance.gov submit quarterly data for 40 to 50 data fields. 3. After converting the agency-submitted spreadsheet data to XML format for agencies that do not have conversion capability, State checks the agency data to determine whether all required fields are populated. 4. State relays any missing values or possible data reporting errors to agencies and allows them to review and make corrections before it publishes the data on the public website. 5. Using the agency-corrected data, State creates and publishes downloadable data files on ForeignAssistance.gov. During this final step, State also simultaneously links the quarterly data files to the IATI Registry. In November 2015, State created a community of practice website to allow agency points of contact to engage online, clarify any issues, and share lessons learned. However, as of May 2016, agencies had not posted comments or questions on the website. 
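The conversion and completeness checks in steps 2 through 4 above can be illustrated with a short script. The sketch below is illustrative only, assuming a simplified comma-separated submission with hypothetical field names; it is not State's actual 189-field template, conversion tooling, or the full IATI activity schema.

    import csv
    import xml.etree.ElementTree as ET

    # Hypothetical subset of required fields; State's template defines many more.
    REQUIRED = ["award_title", "award_description", "obligation_amount", "recipient_country"]

    def check_and_convert(csv_path, xml_path):
        """Flag missing required values and emit a simple activity-per-row XML file."""
        errors = []                                  # (row number, missing field) pairs to relay back
        root = ET.Element("activities")
        with open(csv_path, newline="", encoding="utf-8") as f:
            for row_num, row in enumerate(csv.DictReader(f), start=2):   # row 1 is the header
                for field in REQUIRED:
                    if not (row.get(field) or "").strip():
                        errors.append((row_num, field))
                activity = ET.SubElement(root, "activity")
                for field, value in row.items():
                    ET.SubElement(activity, field).text = value or ""
        ET.ElementTree(root).write(xml_path, encoding="utf-8", xml_declaration=True)
        return errors                                # reviewed and corrected by the agency before publication

In the actual process, the list of missing values returned here corresponds to the possible reporting errors that State relays back to agencies before publishing the data (step 4).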
State Has Made Some Progress in Preparing the 12 Agencies Not Yet Reporting Data for ForeignAssistance.gov In 2015, State developed a process to prepare the 12 agencies that had not yet reported data for ForeignAssistance.gov. The 12 agencies are the Department of Commerce (Commerce), the Department of Energy (DOE), the Department of Homeland Security (DHS), the Department of the Interior (DOI), the Department of Justice (DOJ), the Department of Labor (DOL), the Department of Transportation (DOT), the Environmental Protection Agency (EPA), the Export-Import Bank of the United States (Ex-Im), the Federal Trade Commission (FTC), the Overseas Private Investment Corporation (OPIC), and the U.S. Trade and Development Agency (USTDA). In September 2015, officials from all 12 agencies told us that State had not reached out to them with specific reporting instructions. Between November 2015 and May 2016, State provided the toolkit to all 12 agencies and conducted information sessions with most of them on reporting data for ForeignAssistance.gov. The toolkit contains descriptions of the data fields and resources to support the five-phase data publishing process. It also includes a list of 37 priority data fields, which, according to State, will make it easier for agencies to identify where they should focus their data collection efforts. The five phases of the publishing process are the following: 1. Planning phase: Agencies collaborate with the ForeignAssistance.gov team to understand the general reporting requirements. 2. Discovery phase: Agencies review their systems to identify foreign assistance data and better understand the data sources and needs. 3. Preparation phase: Agencies create and submit data samples to the ForeignAssistance.gov team, which provides feedback. 4. Processing phase: Agencies submit a final dataset to the ForeignAssistance.gov team, which conducts quality assurance checks. 5. Execution phase: State uploads the final approved dataset to ForeignAssistance.gov, and agencies can issue press releases to external stakeholders. Between December 2015 and January 2016, State conducted two information sessions on the toolkit. Six agencies (DOE, DOJ, DOL, EPA, FTC, and OPIC) attended one or both sessions. As of May 2016, State had introduced all 12 agencies to the data publishing process, but none had published data on ForeignAssistance.gov. According to State, as of May 2016, the 12 agencies were at the following stages: Six—Commerce, DHS, DOI, DOT, EPA, and USTDA—were in the planning phase. These agencies had received the toolkit, and some were beginning to review their systems to identify foreign assistance data and better understand the data required. Two—DOJ and Ex-Im—were in the preparation phase. These agencies prepared or provided sample activity-level and organization-level datasets for State to review as part of toolkit implementation. Four—DOE, DOL, FTC, and OPIC—were in the processing phase. These agencies had submitted at least one quarterly dataset for the ForeignAssistance.gov team to review. State has identified limited staff resources as a key constraint in collecting and publishing ForeignAssistance.gov data. According to State officials, the ForeignAssistance.gov team would be challenged to manage data provided by the 12 agencies not yet reporting, while continuing to publish data and work on data quality improvements with the 10 agencies that are reporting. 
Agencies Identified Various Impediments to Collecting ForeignAssistance.gov Data Our analysis shows that agencies face impediments in collecting the required information. Most of the 10 agencies reporting data for ForeignAssistance.gov identified limitations in their information technology (IT) systems as a key impediment in collecting and reporting data, while most of the 12 agencies not yet reporting data identified lack of staff time as a key impediment that they anticipate facing. In November 2015, we surveyed officials at all 22 agencies to identify and rate key factors that may impede their ability to collect and report data for ForeignAssistance.gov. Agencies Reporting Data Identified Limited Information Technology Systems and Availability of Data as Key Impediments Based on our analysis of survey data, the top three factors most of the 10 agencies reporting data identified as presenting a moderate or great impediment to collecting and reporting data for ForeignAssistance.gov were (1) limitations in their IT systems, (2) lack of available data, and (3) having to adjust available data to fulfill ForeignAssistance.gov requirements (see table 2). More than half of the agencies reporting data also identified the lack of a single internal agency IT system from which to pull all ForeignAssistance.gov data, as well as the number of data fields required, as moderate or great impediments to collecting and reporting ForeignAssistance.gov data (see table 2). No agency reporting data identified a lack of a State point of contact or a lack of a governmentwide dedicated server as presenting a moderate or great impediment. Based on interviews conducted prior to the survey, most agency officials noted that their existing IT systems were limited in that they did not track data at the level of detail required by ForeignAssistance.gov. Several agencies whose main mission is not foreign assistance explained that updating their existing systems for these requirements was not a priority. Two agencies whose main mission is foreign assistance, State and USAID, undertook assessments of their current systems to understand and better align the capabilities of their systems with these reporting requirements. One agency, DOD, told us that it had plans to update its IT system to be able to report quarterly data, but that it would take some time. Most agency officials we interviewed also told us that they lacked an integrated system to track both the financial and project data required for ForeignAssistance.gov. To fill out the ForeignAssistance.gov spreadsheet data template provided by State, they said they had to collect data from key documents and multiple internal systems for accounting and project management. However, one agency—MCC—indicated in interviews that it already had an integrated system and was therefore able to consolidate its reporting. The agency noted recent updates to its existing IT system and attributed its upgraded system to being a newer agency. Agencies Not Yet Reporting Data Identified Lack of Staff Time as a Key Anticipated Impediment Based on our analysis of survey data, lack of staff time was the top factor identified as an anticipated impediment by most of the 12 agencies not yet reporting data for ForeignAssistance.gov (see table 3). More than half of the agencies not yet reporting data also identified lack of funding, the number of data fields, and limitations in their IT systems as moderate or great anticipated impediments to collecting and reporting data for ForeignAssistance.gov (see table 3). 
Based on interviews conducted prior to the survey, some of the agency officials noted that they would have to add ForeignAssistance.gov reporting duties to existing staff’s primary job functions, which could be burdensome, especially for agencies with smaller foreign assistance portfolios. Other agency officials said they anticipated that staff time would be an issue, because the collection process would involve many people throughout the agency. Furthermore, some of the agency officials also noted their limited capacity to provide data quarterly for the number of data fields that ForeignAssistance.gov requires. Officials from one agency explained that completing the annual requests for data for the Greenbook and OECD/DAC was already time-consuming, as it required sending a data call to subcomponents and field staff, compiling the data into one spreadsheet, and going through multiple layers of review. See appendix II for key factors identified by agencies reporting and not yet reporting data as impediments to their data collection process. ForeignAssistance.gov Is Not Transparent about Data Limitations, and Data Are Not Updated Annually to Ensure Quality Data from some agencies that report on their foreign assistance to ForeignAssistance.gov are incomplete at the aggregate level. We found that in the aggregate, 14 percent of obligations and 26 percent of disbursements for fiscal year 2014 were not published on the website, when compared to USAID’s verified data. We also found that for some high-priority data fields, information was missing or inconsistent with State’s definition for each data field. In addition, although ForeignAssistance.gov discloses that published data are incomplete, we found that the website is not fully transparent about these data limitations. Moreover, the data published on ForeignAssistance.gov are not annually updated against verified—complete and accurate—foreign assistance data, as required in the bulletin. Data Published on ForeignAssistance.gov Are Incomplete We analyzed fiscal year 2014 data downloaded from ForeignAssistance.gov to assess the completeness of aggregate funding data as well as the completeness and consistency of information in selected data fields with State’s definitions for those data fields. We found that data on ForeignAssistance.gov were incomplete at the aggregate funding level as well as for some disaggregated data at the transaction level. In addition, we found that data for selected high-priority data fields were inconsistent with State’s definitions. State officials told us that they rely on agencies to provide complete and accurate information for ForeignAssistance.gov. Fourteen Percent of Obligations and 26 Percent of Disbursements for Fiscal Year 2014 Were Not Published on ForeignAssistance.gov We found that in the aggregate, 14 percent of obligations and 26 percent of disbursements for fiscal year 2014 from the 10 agencies reporting data for ForeignAssistance.gov were not published on the website as of April 2016. Our comparison of funding data on ForeignAssistance.gov to funding data published on the Foreign Aid Explorer website showed that ForeignAssistance.gov did not reflect more than $10 billion in disbursements and about $6 billion in obligations provided by the 10 agencies in fiscal year 2014. Data on these two websites are generally comparable because both essentially use the same definition of foreign assistance, based on the Foreign Assistance Act of 1961, as amended. 
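The aggregate comparison described above can be approximated with a few lines of analysis code. The sketch below is a rough illustration, assuming simplified download files and column names (agency, obligations, disbursements) rather than the actual layouts of the two websites' downloadable files.

    import pandas as pd

    # Hypothetical file names for the two fiscal year 2014 downloads.
    fa_gov = pd.read_csv("foreignassistance_gov_fy2014.csv")
    explorer = pd.read_csv("foreign_aid_explorer_fy2014.csv")

    def totals_by_agency(df):
        return df.groupby("agency")[["obligations", "disbursements"]].sum()

    merged = totals_by_agency(explorer).join(
        totals_by_agency(fa_gov), lsuffix="_explorer", rsuffix="_fa_gov")

    # Share of verified funding not reflected on ForeignAssistance.gov, by agency.
    for measure in ("obligations", "disbursements"):
        gap = merged[f"{measure}_explorer"] - merged[f"{measure}_fa_gov"]
        merged[f"{measure}_pct_not_published"] = 100 * gap / merged[f"{measure}_explorer"]

    print(merged.round(1))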
The OMB bulletin indicates that USAID’s data are verified—checked for completeness and accuracy. The bulletin notes that USAID’s verification includes checking for common errors, comparing with third-party sources to identify gaps and more complex errors, accommodating negative entries, and taking other steps to ensure data quality. Additionally, according to USAID officials, as part of the verification process for the data published on Foreign Aid Explorer, they check to ensure that there are no anomalies, errors, duplicates, or missing values. Furthermore, USAID checks to ensure that data are consistent with those for prior years and verifies the data against official U.S. government documents. We selected fiscal year 2014 because all 10 agencies published data for that year on both websites and because it was the most recent year for which fully reported and verified USAID foreign assistance data were available at the time of our analysis. Based on our analysis of the fiscal year 2014 funding data for the 10 agencies (see table 4), the total obligations on ForeignAssistance.gov were $36.1 billion, almost $6 billion (14 percent) less than the $42 billion identified on Foreign Aid Explorer. The total disbursements on ForeignAssistance.gov were $29.9 billion, more than $10 billion (26 percent) less than the $40.4 billion on Foreign Aid Explorer in the same year. Specifically, fiscal year 2014 obligations and disbursements were about the same or identical for four agencies (IAF, MCC, the Peace Corps, and USADF) and had a difference of less than 10 percent for two agencies (Treasury and USAID). However, the differences in obligations or disbursements exceeded 10 percent for DOD, HHS, State, and USDA, with DOD’s and USDA’s data showing the largest differences. The four agencies whose fiscal year 2014 ForeignAssistance.gov funding data showed a difference of more than 10 percent from the Foreign Aid Explorer data for the same year provided the following explanations: DOD. According to DOD officials, two factors explain the discrepancies. First, the two websites attribute funding for a significant portion of U.S. security assistance differently: on ForeignAssistance.gov, the assistance—which State funds and DOD implements—is attributed to State, whereas on Foreign Aid Explorer, it is attributed to DOD. The second factor, according to DOD officials, is inconsistent reporting of fiscal year 2014 funding data for ForeignAssistance.gov: DOD reported obligations, but not disbursements, for some programs, and disbursements, but not obligations, for other programs. For example, DOD did not report $4.7 billion in fiscal year 2014 obligations or disbursements for the Afghanistan Security Forces Fund on ForeignAssistance.gov. HHS. HHS officials stated that the data the agency published on Foreign Aid Explorer more accurately reflect the agency’s foreign assistance portfolio than the data the agency published on ForeignAssistance.gov. HHS officials suggested that their agency data for fiscal years 2013, 2014, and 2015 on ForeignAssistance.gov should not be used until the quality of the data published on the website is improved. They did not explain the differences in the funding data on the two websites. State. State officials told us that to some extent the discrepancies came about because funding for peacekeeping and U.S. contributions to international organizations was not included in the fiscal year 2014 data on ForeignAssistance.gov. 
State officials also noted that data published on Foreign Aid Explorer are considered to be more fully reported because they are submitted to USAID by State’s bureaus, which manually enter detailed data; by comparison, State’s data for ForeignAssistance.gov are generated from the agency’s accounting system. In addition, State’s accounting system at present includes transactions reported by State’s main office in Washington, D.C., but does not include transactions of overseas locations. USDA. USDA officials told us that they reported incorrect fiscal year 2014 obligations for ForeignAssistance.gov because USDA misinterpreted State’s guidance. They also noted that USDA is working with State and USAID to ensure that USDA’s foreign assistance data are accurate and consistent on both websites. DOD, HHS, State, and USDA officials said that they were aware of the differences in their funding data published on the two websites and were working on improving the quality of the data published on ForeignAssistance.gov. State officials could not indicate when the 10 reporting agencies would be able to report complete funding data for ForeignAssistance.gov and stated that it is the agencies’ responsibility to report complete and accurate data. Information Was Missing or Inconsistent with Definitions for Some High-Priority Data Fields In addition to discrepancies in the aggregate funding data, we found that for some of the high-priority data fields the information was either missing or inconsistent with definitions that State developed based on the IATI Standard. We analyzed fiscal year 2014 data for six of the data fields prioritized by IATI—implementing organization, award title, award description, award status, award location, and award sector— to determine if agencies populated these data fields with information and if the information was consistent with State’s definitions. According to State, data fields prioritized by IATI should be populated because they provide key information necessary to track a specific activity. We analyzed the content of the six data fields using a probability sample of 106 transactions drawn from the fiscal year 2014 data. We found that for three data fields—implementing organization, award location, and award sector—information was provided and was consistent with State’s definition for a majority or all of the transactions (see table 5). For the other three data fields—award title, award description, and award status—the information was missing or inconsistent with the definitions for the majority of the transactions in the sample. For example, for award title, 82 percent of the transactions were either missing information or had information that was inconsistent with the definition for this data field. We also found that for award title, agencies often provided program or sector descriptors, and for award description, agencies routinely provided shorthand descriptions, acronyms, or terminology that could only be understood by officials at the agency that made the award. For example, an award description would contain “Train, Eval & Oth Related Act” or “AIDSTAR Sector II Award.” Only three transactions in our sample contained award descriptions that were somewhat consistent with the definition of a brief narrative that provided an understanding of the undertaking for which the award was funded, its objectives, and the hypothesis of the award’s development impact. 
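A first-pass screen of the sampled transactions can be automated before analysts compare each record against State's definitions by hand. The sketch below is a hypothetical heuristic, not GAO's actual review protocol: it flags blank award titles and award descriptions too short to be the brief narrative the definition calls for, such as the shorthand examples quoted above.

    def flag_description(text, min_length=60):
        """Flag blank or very short descriptions unlikely to be a brief narrative."""
        text = (text or "").strip()
        return len(text) < min_length

    def screen_sample(sample):
        """sample: list of dicts, one per sampled transaction (hypothetical field names)."""
        flagged = []
        for record in sample:
            if not (record.get("award_title") or "").strip():
                flagged.append((record.get("transaction_id"), "award_title"))
            if flag_description(record.get("award_description")):
                flagged.append((record.get("transaction_id"), "award_description"))
        return flagged

    example = [{"transaction_id": 1, "award_title": "AIDSTAR Sector II Award",
                "award_description": "Train, Eval & Oth Related Act"}]
    print(screen_sample(example))        # [(1, 'award_description')]

Records flagged by a screen like this would still require manual review, since a short description is not necessarily inconsistent with the definition.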
The lack of clarity for data fields, such as award description, could make it difficult for domestic and international users of ForeignAssistance.gov to understand the data. State officials told us that our findings were consistent with their observations of the data. They explained that the reporting agencies’ data systems were not currently capable of capturing and generating data that would fully meet IATI’s expectations for detailed, transaction-level information. Our analysis also shows that most of the reporting agencies identified impediments in collecting and reporting data, including limitations in their IT systems and lack of available data required by ForeignAssistance.gov. State Relies on Agencies to Provide Complete and Accurate Data State officials told us that they rely on agencies to provide complete and accurate data because, according to the bulletin, reporting agencies are responsible for the accuracy, completeness, and integrity of their data submissions. Additionally, State officials noted that the ForeignAssistance.gov team has neither the expertise nor the resources to check the data for completeness or accuracy on a quarterly basis. They explained that collecting and publishing data from 10 agencies on a quarterly basis is inherently challenging and can lead to trade-offs between quality and timeliness. Nevertheless, State’s ForeignAssistance.gov team conducts certain checks, such as ensuring that agency data are formatted properly and that dates are within valid ranges. (See table 6 in app. III for data field format values.) ForeignAssistance.gov Is Not Fully Transparent about Data Limitations Although ForeignAssistance.gov discloses—through a graphic summary and agency notes—that published data may not be complete, it is not fully transparent about data limitations. Specifically, we found that ForeignAssistance.gov does not clearly identify what data are missing or consistently identify and publish data limitations by agency. According to OMB’s Open Government Directive, which is Memorandum M-10-06, agencies should take specific actions to implement the principles of transparency, among other things, in publishing government information online. The memorandum indicates steps to improve transparency in publishing information, which include identifying high-value information not yet available for online publication. In addition, according to prior GAO work, useful practices that help foster transparency to the public and assist decision makers in understanding the quality of agency data include, among other things, discussion of data quality and limitations. ForeignAssistance.gov Is Not Clear about Missing Data Although ForeignAssistance.gov discloses that some data are not yet available, it does not clearly state what data are missing. State presents a graphic summary of data from the 10 reporting agencies published on the website (fig. 4). It uses the terms “full” and “partial” to characterize agency datasets, but does not explain what these terms mean. Specifically, as of July 2016, ForeignAssistance.gov indicated that six agencies—IAF, MCC, the Peace Corps, Treasury, USADF, and USAID—reported a full set of data and that four agencies—DOD, HHS, State, and USDA—reported a partial set of data (see fig. 4). In particular, it is not clear whether full reporting means that an agency reported data (1) reflecting all of its foreign assistance funding or (2) across all of the required data fields. 
While the agency notes published on ForeignAssistance.gov provide some information to help understand an agency’s foreign assistance activities, not all agencies report this information and, moreover, these notes do not clearly identify data limitations for reporting agencies. We found that 6 of the 10 reporting agencies (DOD, the Peace Corps, State, Treasury, USAID, and USDA) provided agency notes on ForeignAssistance.gov as of July 2016. All six agencies provided general descriptions of their foreign assistance programs and accounts. However, the agency notes for three of the four agencies characterized on ForeignAssistance.gov as reporting partial data (see fig. 4)—DOD, State, and USDA—did not identify data limitations for a given year. For example, they did not comment on gaps in obligation and disbursement amounts generally or by fiscal year. The fourth agency characterized on ForeignAssistance.gov as reporting partial data—HHS—did not post agency notes. In contrast, other publishers of U.S. data provide more detailed information on data limitations or changes to the data in a given year. For example, the 2014 Greenbook identified an agency that provided data for two additional programs that year, an agency that continued not to report data, and another agency that was unable to report on certain data in detail. Agency notes are not required by State or the bulletin, which states that agencies may provide supplemental narratives and can include data explanations and other clarifying information. Additionally, State’s most recent guidance (toolkit) on reporting to ForeignAssistance.gov does not mention agency notes or provide instructions for agencies to identify the limitations of their data. State officials told us that they rely on agencies to report data limitations. However, if State does not provide agencies with guidance to identify data limitations that State can clearly disclose on ForeignAssistance.gov, it may undermine the website’s goal of improving public knowledge and accountability of U.S. foreign assistance. ForeignAssistance.gov Data Are Not Updated Annually with Verified Foreign Assistance Data As of May 2016, State had not updated ForeignAssistance.gov data with verified—complete and accurate—annual foreign assistance data to improve the quality and ensure consistency in the reporting of U.S. foreign assistance. OMB indicates in its bulletin that ForeignAssistance.gov data should be updated at the end of each calendar year using verified data reported by USAID for the Greenbook and OECD/DAC to ensure consistency in published information. Additionally, the bulletin indicated that in 2014, USAID, State, OMB, the National Security Staff, and the Office of Science and Technology Policy were expected to undertake a review of the first 2 years’ experience to assess whether agencies whose data are published on ForeignAssistance.gov had demonstrated sufficient internal data quality control to graduate from the USAID verification process. However, as of May 2016, this interagency review had not taken place because, according to OMB officials, only 3 of the 10 reporting agencies were providing data to ForeignAssistance.gov of sufficient quality to meet the Greenbook and OECD/DAC reporting requirements. Since the majority of the agencies’ data were not yet of sufficient quality, OMB officials noted that a review to graduate agencies from USAID’s verification process was premature. 
State and USAID officials told us that they are unable to update ForeignAssistance.gov with USAID’s verified data because of differences between the two datasets. State officials cited three key differences: Number of data fields. ForeignAssistance.gov data includes up to 189 data fields; however, USAID verifies the information only for a subset of about 20 data fields. Frequency of data reported. ForeignAssistance.gov captures quarterly data; however, USAID uses annual data. Transaction-level data are stored differently. ForeignAssistance.gov captures transaction-level data for each activity; however, USAID aggregates the transaction-level data to the activity level. USAID noted that reconciling ForeignAssistance.gov data with verified Greenbook and OECD/DAC data would be problematic, especially for the seven agencies whose data do not meet the quality standards for Greenbook and OECD/DAC reporting. For these agencies, USAID (1) obtains missing information for some data fields (e.g., recipient country) directly from agency officials or units that report the information, and (2) assigns sector codes and other fields—which are not always provided in the data that agencies report for ForeignAssistance.gov—based on OECD/DAC statistical reporting directives and methodologies. OMB officials agreed that the bulletin’s requirement for annually updating data published on ForeignAssistance.gov with USAID-verified data has not been feasible. They also acknowledge that the quality of ForeignAssistance.gov data needs to be improved. Since State, USAID, and OMB recognize that a key step outlined in the bulletin to ensure data quality may not be feasible, and in the absence of the 2-year review on data verification or guidance on how to address the quality of the data on ForeignAssistance.gov, data will likely remain inconsistent across the range of U.S. foreign assistance reporting. Conclusions In response to domestic and international initiatives in the last decade, the U.S. government has increased the frequency and amount of foreign assistance data made available to the public. In 2011, the U.S. government made an international commitment to publishing more detailed and timely funding and activity data for users, including partner country governments, civil society organizations, and taxpayers. As the U.S. government’s lead agency for this reporting, State established ForeignAssistance.gov, with guidance from OMB, to provide detailed foreign assistance data on a quarterly basis. Given the magnitude and frequency of data collection, State prioritized collection and publishing of data for 10 agencies that account for the majority of U.S. foreign assistance. Facing trade-offs—which agencies recognize—between data quality and timeliness in reporting, State has experienced challenges in ensuring transparency and data quality on ForeignAssistance.gov. In particular, in the absence of guidance from State to reporting agencies to clearly identify their data limitations, State has not fully disclosed data limitations of ForeignAssistance.gov, thereby undermining the website’s goal of increasing the transparency of U.S. foreign assistance information. Moreover, because updating ForeignAssistance.gov with USAID verified data has not been feasible and the interagency assessment of the process to ensure sufficient quality control has not been done, gaps in data quality remain unaddressed, and users may risk using inaccurate or incomplete information for decision-making and accountability purposes. 
Recommendations for Executive Action To improve the transparency of ForeignAssistance.gov, we recommend that the Secretary of State provide guidance to agencies to identify data limitations that State can clearly disclose on the website. To improve the quality of the data published on ForeignAssistance.gov and help ensure consistency in published information, we recommend that the Secretary of State, in consultation with the Director of OMB and the USAID Administrator, take the following two actions: (1) undertake a review of the efforts to date on ensuring data quality and (2) develop additional guidance that takes into consideration current challenges to updating ForeignAssistance.gov with verified data. Agency Comments and Our Evaluation We provided a draft of this report to OMB and the 22 agencies reviewed in this report (Commerce, DHS, DOD, DOE, DOI, DOJ, DOL, DOT, EPA, Ex-Im, FTC, HHS, IAF, MCC, OPIC, PC, Treasury, State, USADF, USAID, USDA, and USTDA) for review and comment. In written comments, reprinted in appendixes IV and V, State and USAID agreed with our recommendations. OMB also agreed with our recommendation and stated in an e-mail that it will continue to work with State and USAID to help guide agencies in improving the quality and consistency of the data published on ForeignAssistance.gov. However, State expressed concern that the report did not provide specific, actionable recommendations to the other 20 agencies responsible for reporting data for ForeignAssistance.gov. As noted in the report, OMB Bulletin No. 12-01 provides overall guidance on data standards and requirements for the other 20 agencies. We made the recommendations to State, in consultation with OMB and USAID, because these agencies are responsible for improving guidance and reporting requirements that can help achieve the website’s goal of improving public knowledge and accountability of U.S. foreign assistance. DOD, EPA, FTC, HHS, MCC, State, USAID, and USDA provided technical comments that we incorporated as appropriate. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Directors of the Office of Management and Budget, Peace Corps, and U.S. Trade and Development Agency; the Administrators of the Environmental Protection Agency and U.S. Agency for International Development; the Secretaries of State, Agriculture, Commerce, Defense, Energy, Health and Human Services, Homeland Security, the Interior, Labor, Transportation, and the Treasury; the Attorney General of the United States; the General Counsel of the Federal Trade Commission; and the Chief Executive Officers of the Export-Import Bank, Inter-American Foundation, Millennium Challenge Corporation, Overseas Private Investment Corporation, and U.S. African Development Foundation. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-3149 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. 
Appendix I: Objectives, Scope, and Methodology To review the collection and reporting of data for ForeignAssistance.gov, this report examines (1) the Department of State’s (State) data collection and publishing process for ForeignAssistance.gov; (2) key impediments, if any, that agencies face in collecting and reporting data for ForeignAssistance.gov; and (3) the data published on ForeignAssistance.gov. To conduct this work, we analyzed key guidance documents relating to the data collection process: the 2012 Office of Management and Budget (OMB) Bulletin No. 12-01, Guidance on Collection of U.S. Foreign Assistance Data (Sept. 25, 2012) (bulletin), the 2009 OMB Memorandum M-10-06, Open Government Directive (Dec. 8, 2009), and State’s November 2015 Agency Reporting Toolkit (toolkit). We also reviewed key U.S. government plans and international agreements that outline steps for ensuring transparency in foreign assistance reporting, including the 2011 and 2013 U.S. Open Government National Action Plan, the 2005 Paris Declaration on Aid Effectiveness, and the 2011 Busan Outcome Agreement. To examine State’s data collection process for ForeignAssistance.gov, we conducted semistructured interviews with officials of 22 U.S. agencies— the Departments of Agriculture, Commerce, Defense, Energy, Health and Human Services, Homeland Security, Justice, the Interior, Labor, State, Transportation, and the Treasury; the Environmental Protection Agency; the Export-Import Bank of the United States; the Federal Trade Commission; the Inter-American Foundation; the Millennium Challenge Corporation; the Overseas Private Investment Corporation; the Peace Corps; the U.S. African Development Foundation; the U.S. Agency for International Development; and the U.S. Trade and Development Agency—concerning four areas of the data collection process, including (1) data collection and validation processes; (2) guidance provided by State on the data collection process; (3) resource needs to collect data, such as infrastructure, staff, and training; and (4) impediments that agencies may face in collecting and reporting data for ForeignAssistance.gov. We also interviewed OMB on its monitoring role for ForeignAssistance.gov. To identify the impediments that agencies face in collecting and reporting data for ForeignAssistance.gov, we surveyed the same 22 U.S. agencies to identify key impediments, if any, that agencies may face in collecting and reporting data for ForeignAssistance.gov, including agency reporting systems; required data fields in ForeignAssistance.gov; resources; and guidance provided that may impede an agency’s ability to collect and report data for ForeignAssistance.gov. For the 12 agencies that do not yet report data for ForeignAssistance.gov, we modified our survey instrument to probe the extent to which they anticipated impediments in collecting and reporting foreign assistance data to State. The survey was administered in November 2015, and agencies provided their responses between November 2015 and January 2016. All 22 agencies responded to the survey. To assess the data from the 10 reporting agencies published on ForeignAssistance.gov, we compared fiscal year 2014 funding (obligation and disbursement) data published on Foreign Assistance.gov to the data collected and published by the U.S. Agency for International Development’s (USAID) data on the Foreign Aid Explorer website (http://explorer.usaid.gov/). 
Foreign assistance data available on these two websites are based on essentially the same definition of foreign assistance. To determine the reliability of the Foreign Aid Explorer data, we interviewed USAID officials, reviewed documentation about the data, and examined the data published on USAID’s website. We determined that USAID’s verification processes for the Foreign Aid Explorer data include checks to identify potential anomalies, duplicates, missing values, and other errors. In addition, we found that USAID compares the agencies’ data submissions to other available sources as completeness checks. We determined that data published on Foreign Aid Explorer were sufficiently reliable to serve as a reasonable comparison for the ForeignAssistance.gov data for the purposes of our reporting objectives. However, it was beyond the scope of this engagement to independently verify agency source data. We downloaded fiscal year 2014 data from the two websites in April 2016. To examine the completeness of ForeignAssistance.gov data across data fields for the 10 reporting agencies, we analyzed the entire fiscal year 2014 dataset downloaded from ForeignAssistance.gov, which contained 176,651 transactions and 55 data fields, for missing values. We also conducted a more in-depth analysis of specific data fields using a random stratified sample of 106 transactions drawn from the fiscal year 2014 ForeignAssistance.gov data. We stratified the records by agency and allocated sample units to each agency’s stratum in proportion to its representation in the population of 176,651 transactions. Using this sample, we analyzed the information reported by the agencies in six data fields—implementing organization, award title, award description, award status, award location, and award sector. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95-percent confidence interval (e.g., plus-or-minus 10 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. We also interviewed State officials and contractors on the ForeignAssistance.gov team as well as key users of ForeignAssistance.gov—representatives from a consortium of nongovernmental organizations and the International Aid Transparency Initiative (IATI). We conducted this performance audit from June 2015 to August 2016 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Key Factors Identified by Agencies Reporting and Not Reporting ForeignAssistance.gov Data as Impediments to Their Data Collection Process In November 2015, we surveyed officials of the 22 agencies identified in the OMB Bulletin No. 12-01 required to collect and report data for ForeignAssistance.gov. The survey asked the respondents to rate factors that impede their ability to collect and report data as follows: not at all an impediment; slight impediment; moderate impediment; great impediment; or no basis to judge. 
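Tallying the survey responses described above amounts to counting, for each factor, how many agencies rated it a moderate or great impediment, the measure summarized in tables 2 and 3 and in the figures that follow. The sketch below assumes a simple hypothetical response structure rather than the actual survey instrument.

    from collections import Counter

    def tally_impediments(responses):
        """responses: {agency: {factor: rating}}, with ratings such as 'not at all',
        'slight', 'moderate', 'great', or 'no basis to judge'."""
        counts = Counter()
        for ratings in responses.values():
            for factor, rating in ratings.items():
                if rating in ("moderate", "great"):
                    counts[factor] += 1
        return counts.most_common()          # factors sorted by number of agencies

    example = {
        "Agency A": {"IT system limitations": "great", "lack of staff time": "moderate"},
        "Agency B": {"IT system limitations": "moderate", "lack of staff time": "slight"},
    }
    print(tally_impediments(example))         # [('IT system limitations', 2), ('lack of staff time', 1)]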
Figure 5 shows the factors that the 10 agencies reporting data for ForeignAssistance.gov identified as presenting a great impediment. Figure 6 shows the factors that the 12 agencies that have yet to report data for ForeignAssistance.gov anticipate as presenting a great impediment. Appendix III: Analysis of Fiscal Year 2014 Transaction-Level Data on ForeignAssistance.gov We analyzed fiscal year 2014 data that we downloaded from ForeignAssistance.gov in April 2016. This downloadable data file, in the comma-separated values format published by the Department of State (State) on ForeignAssistance.gov, contained 55 data fields and 176,651 transactions for 10 agencies reporting foreign assistance data. In the data file, each data field is represented by a column and each transaction by a row of data. According to State officials, these 55 data fields contain the most useful information about U.S. foreign assistance for website users and are a subset of the 189 data fields for which State collects foreign assistance data for ForeignAssistance.gov. A transaction is an individual financial record for each activity in an agency’s accounting system that has been processed in the given time period for program work with implementing partners and other administrative expenses. We found that 24 of the data fields had fully reported information for all transactions and that the remaining 31 data fields were missing information, including 17 data fields for which 50 to 100 percent of the transactions had no data. ForeignAssistance.gov does not explain the reasons for missing information. However, State and other agency officials told us that a data field without any data may not necessarily mean that the agency did not provide required information because (1) the data field may not be relevant to the agencies’ reporting of foreign assistance, or (2) such data are not yet available for U.S. foreign assistance. For example, the data field for award interagency transfer status may not be relevant for an agency if there are no interagency funds to report. Additionally, other data fields may not be reported by any agency because of the nature of U.S. foreign assistance. For example, data fields for the budget period may not be populated because U.S. agencies simply do not provide such information, according to State. Table 6 below provides the data field name, data field value format, and definition for each of the 55 data fields, as well as the number and percentage of transactions that contained no data for each data field. In the table, if all transactions for a data field were populated with data, then the number of transactions (and the percentage of transactions) with no data are zero. Appendix IV: Comments from the Department of State Appendix V: Comments from the U.S. Agency for International Development Appendix VI: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Mona Sehgal (Assistant Director), Farahnaaz Khakoo-Mausel, Sada Aksartova, Melissa Wohlgemuth, Bridgette Savino, Debbie Chung, Martin De Alteriis, Carl Ramirez, Alexander Welsh, David W. Hancock, Julie Hirshen, David Dayton, Neil Doherty, and Alexandra Jeszeck made key contributions to this report. Kevin Walsh, Peter DelToro, and Shirley Hwang provided technical assistance.
The overarching goal of ForeignAssistance.gov is to enable a variety of stakeholders—including U.S. citizens, civil society organizations, the U.S. Congress, federal agencies, partner countries, and other donors—to research and track U.S. government foreign assistance investments in an accessible and easily understood format. GAO was asked to review the collection and reporting of ForeignAssistance.gov data. GAO examined (1) State's data collection and publishing process; (2) key impediments, if any, that agencies face in collecting and reporting data to State; and (3) the data published on ForeignAssistance.gov. GAO reviewed agency documents, assessed ForeignAssistance.gov data for completeness by comparing them to USAID's Foreign Aid Explorer data on U.S. foreign assistance for fiscal year 2014, and conducted semistructured interviews with the 22 agencies on their data collection and verification processes. GAO also interviewed OMB officials. Since 2013, the Department of State (State) has collected and published quarterly data on ForeignAssistance.gov from 10 agencies that provide the majority of U.S. foreign assistance and more recently has initiated a process to prepare the 12 remaining agencies to collect and report data. The 2012 Office of Management and Budget (OMB) Bulletin No. 12-01 outlined requirements for collecting and publishing data from 22 agencies and designated State as the lead agency for implementing ForeignAssistance.gov. GAO's survey showed that most of the 10 agencies reporting data for ForeignAssistance.gov identified limitations in their information technology systems and data availability as key impediments in collecting and reporting data, while most of the 12 agencies not yet reporting data identified lack of staff time as a potential key impediment. GAO found that the data on ForeignAssistance.gov were incomplete and that State was not fully transparent about such limitations on the website. In addition, State has not updated ForeignAssistance.gov with verified annual data to ensure quality. GAO's analysis of fiscal year 2014 data showed that ForeignAssistance.gov did not report over $10 billion in disbursements and about $6 billion in obligations provided by the 10 reporting agencies, compared to U.S. Agency for International Development (USAID)-verified data (see fig.). A 2009 OMB memorandum requires agencies to improve transparency in published information, which includes identifying high-value information not yet available online. However, State, as the publisher of this information, does not provide agencies with guidance to identify data limitations that it can clearly disclose on the website and noted that it relies on agencies to report these. The absence of clear information on data limitations may undermine the goal of ForeignAssistance.gov to improve public knowledge and accountability of U.S. foreign assistance. Moreover, State, as the lead agency, has not updated ForeignAssistance.gov with verified data even though OMB Bulletin No. 12-01 indicates that these data should be updated annually using USAID-verified data. State and USAID officials told GAO that they are unable to update ForeignAssistance.gov with verified data because of differences in their datasets. OMB also noted that a review to assess whether agencies had sufficient internal data quality controls did not take place, although it was required by the bulletin. 
In the absence of a review or additional guidance to address the quality of the data on ForeignAssistance.gov, data will likely remain incomplete.
GAO_GAO-03-935
Background DOD is historically the federal government’s largest purchaser of services. Between 2001 and 2002, DOD’s reported spending for services contracting increased almost 18 percent, to about $93 billion. In addition to the sizeable sum of dollars involved, DOD contracts for a wide and complex range of services, such as professional, administrative, and management support; construction, repair, and maintenance; information technology; research and development; medical care; operation of government-owned facilities; and transportation, travel, and relocation. In each of the past 5 years, DOD has spent more on services than on supply and equipment goods (which includes weapon systems and other military items) (see fig. 1). Despite this huge investment in buying services, our work—and the work of the DOD Inspector General—has found that DOD’s spending on services could be more efficient and more effectively managed. In fact, we have identified DOD’s overall contract management as a high-risk area, most recently in our Performance and Accountability and High-Risk Series issued this past January. Responsibility for acquiring services is spread among individual military commands, weapon system program offices, or functional units in various defense organizations, with limited visibility or control at the DOD- or military-department level. Our reports on DOD’s contract management have recommended that DOD use a strategic approach to improve acquisition of services. Our work since 2000 at leading companies found that taking a more strategic approach to acquiring services enabled each company to stay competitive, reduce costs, and in many cases improve service levels. Pursuing such a strategic approach clearly pays off. Studies have reported some companies achieving savings of 10 to 20 percent of their total procurement costs, which include savings in the procurement of services. These leading companies reported achieving or expecting to achieve billions of dollars in savings as a result of taking a strategic approach to procurement. For example, table 1 summarizes the savings reported by the companies we studied most recently. The companies we studied did not follow exactly the same approach in the manner and degree to which they employed specific best practices, but the bottom line results were the same—substantial savings and, in many cases, service improvements. Figure 2 elaborates on the four broad principles and practices of leading companies that are critical to successfully carrying out the strategic approach. These principles and practices largely reflect a common sense approach, yet they also represent significant changes in the management approach companies use to acquire services. Companies that have been successful in transforming procurement generally begin with a corporate decision to pursue a more strategic approach to acquiring services, with senior management providing the direction, vision, and clout necessary to obtain initial buy-in and acceptance of procurement reengineering. When adopting a strategic, best-practices approach for changing procurement business processes, companies begin with a spend analysis to examine purchasing patterns to see who is buying what from whom. By arming themselves with this knowledge, they identify opportunities to leverage their buying power, reduce costs, and better manage their suppliers. 
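A spend analysis of the kind these companies begin with can be sketched in a few lines: aggregate purchase records by service category to see who is buying what from whom, then rank the categories where consolidated buying might pay off. The example below is purely illustrative and assumes a hypothetical purchase-record layout, not DOD's actual contracting data.

    import pandas as pd

    # Hypothetical columns: buying_unit, category, supplier, amount
    purchases = pd.read_csv("service_purchases.csv")

    by_category = (purchases.groupby("category")
                   .agg(total_spend=("amount", "sum"),
                        buying_units=("buying_unit", "nunique"),
                        suppliers=("supplier", "nunique"))
                   .sort_values("total_spend", ascending=False))

    # High spend spread across many buying units and suppliers suggests fragmented
    # purchasing and a candidate for a consolidated, leveraged buy.
    print(by_category.head(10))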
Companies also institute a series of structural, process, and role changes aimed at moving away from a fragmented acquisition process to a more efficient and effective corporate process. These changes include adjustments to procurement management structure and processes such as instituting companywide purchasing of specific services; reshaping a decentralized process to follow a more coordinated, strategic approach; and increasing the involvement of the corporate procurement organization, including working across units to help identify service needs, select providers, and better manage contractor performance. DOD Has Made Limited Progress Reforming Management Structure and Improving Knowledge of Service Spending DOD has made limited progress in its overall implementation of section 801, particularly with respect to establishing a management structure to oversee a more strategic approach to the acquisition of services, as envisioned by the legislative history of this provision. While DOD’s leaders express support for a strategic approach in this area, they have not translated that support into broad-based reforms. The experience of leading companies offers particularly relevant insights into the nature of long-term changes in management structure and business processes. Long-term changes will be needed if the military departments and the defense agencies are to be successful in adopting a more strategic approach to acquiring services and achieving substantial savings and other benefits. Private sector experience demonstrates the need to change how services are acquired—by modernizing management structure and business processes—and setting performance goals, including savings, and establishing accountability for achieving them. Such changes are needed to move DOD and the military departments from a fragmented approach to doing business to one that is more coordinated and strategically oriented. The end goal is to institute a departmentwide perspective—one that will ensure that the organization is getting the best overall value. Industry has found that several ingredients are critical to the successful adoption of a strategic approach. For example, senior management must provide continued support for common services acquisitions processes beyond the initial impetus. Another example is to cut across traditional organizational boundaries that contributed to the fragmented approach by restructuring procurement management and assigning a central or corporate procurement organization greater responsibility and authority for strategic planning and oversight of the companies’ service spending. Companies also involve business units in this coordinated approach by designating commodity managers to oversee key services and making extensive use of cross-functional commodity teams to make sure they have the right mix of knowledge, technical expertise, and credibility. Finally, companies extensively use metrics to measure total savings and other financial and nonfinancial benefits, to set realistic goals for improvement, and to document results over time. To date, DOD has not significantly transformed its management structure in response to the 2002 national defense authorization requirements, and its crosscutting effort to improve oversight will focus on only a portion of military department spending for services. 
Specifically, the Under Secretary of Defense for Acquisition, Technology, and Logistics and each of the military departments now have policies in place for a management structure and a process for reviewing major (i.e., large-dollar or program- critical) services acquisitions for adherence to performance-based, competition and other contracting requirements. (See app. I for a descriptive comparison of DOD and military department policies.) DOD modeled its review process for acquiring services after the review process for acquiring major weapons systems; the policy is intended to elevate high-dollar value services to the same level of importance and oversight. DOD intends that the new program review structure provide oversight before it commits the government to a major acquisition to ensure that military departments and defense agencies’ buying strategies are adequately planned, performance-based, and competed. The new policy similarly establishes a high-dollar threshold of $500 million or more for selecting which service acquisitions must move forward from lower-level field activities, commands, and program offices to the military department headquarters (and possibly to DOD) for advance review and approval. We expect that this new policy will lead to very few service acquisition strategies and a small portion of overall service spending being subjected to central oversight at the military department headquarters level or at DOD headquarters. DOD officials acknowledge that most service acquisitions cost less than the $500 million threshold required for headquarters-level reviews, and the total value of the few contract actions likely to be forwarded under that threshold will amount to a small portion of DOD’s total spending on services, which is approaching $100 billion each year. DOD’s review criteria indicate that the central reviews that do take place will be focused on approving individual acquisitions rather than coordinating smaller, more fragmented requirements for service contracts to leverage buying power and assessing how spending could be more effective. Our discussions with procurement policy officials in the various military departments confirmed that they expect no more than a few acquisitions to be reviewed at the DOD or military department headquarters level each year. While the new process complies with the act’s requirements to improve oversight of major service acquisitions, it has not led to centralized responsibility, visibility, or accountability over the majority of contracting for services. In response to the legislative requirement to develop an automated system to collect and analyze data, DOD has started a spend analysis pilot that views spending from a DOD-wide perspective and identifies large-scale savings opportunities. However, the scope of the pilot is limited to a test of a few service categories. Thirteen months after Congress directed that DOD create an automated system to support management decisions for the acquisition of services, the Deputy Secretary of Defense tasked a new team to carry out the pilot. In May 2003, DOD hired a vendor to support the team by performing an initial spend analysis and developing strategic sourcing business cases for only 5 to 10 service categories. 
Efforts to extract data for the pilot spend analysis will be restricted to information taken from centrally available databases on services contract actions (excluding research and development) in excess of $25,000, a limitation due to the 90-day time frame established for completing the spend analysis. Pilot projects and associated efforts will be completed by September 2004, so it is too early to tell how DOD will make the best use of the results. DOD Does Not Have a Strategic Plan for Integrating Early Initiatives Even though DOD’s senior leadership called for dramatic changes to current practices for acquiring services about 2 years ago, and proposed various initiatives and plans to transform business processes, DOD’s early initiatives have not moved forward quickly, expanded or broadened in scope, or been well coordinated. The experience of leading companies we studied in our prior work indicates that successfully addressing service acquisition challenges requires concerted action and sustained top-level attention, efforts that must be reinforced by a sound strategic plan. Moreover, section 801 required DOD to issue guidance on how the military departments should carry out their management responsibilities for services contracting. To date, the only guidance that DOD has issued involves review of individual major service acquisitions for adherence to performance-based, competition, and other acquisition strategy requirements. DOD has not established a strategic plan that provides a road map for transforming its services contracting process and recognizes the integrated nature of services contracting management problems and their related solutions. Air Force, Army, and Navy headquarters procurement organizations have initiatives underway to better manage the acquisition of services, but they are in the early stages of development and unconnected to each other. Limited progress has taken place on key efforts to coordinate responsibility and leverage purchasing power, even in the pursuit of key goals such as reducing unnecessary spending and redirecting funds to higher priorities such as modernization and readiness. Information we obtained on the military departments’ early efforts suggests that military department leaders understand the value of a strategic approach in this area, but they have not yet translated that understanding into broad-based reforms to meet comprehensive performance goals, including savings. Although the Air Force, Army, and Navy initiatives that follow seek to include the basic principles of the framework used by leading companies when they acquire services, the initiatives are still under study, or in the early stages of implementation. At a January 2003 symposium, Air Force participants from headquarters and major commands discussed a vision for transforming contracting for services and taking a strategic, departmentwide approach based on commercial best practices. At this event, the Deputy Assistant Secretary for Contracting called for rethinking business processes, noting that the Air Force spends over half of its discretionary dollars on services, yet most of the attention goes to managing goods. To move forward on this initiative, staff from acquisition headquarters and major commands are to work together on an 18-month project to capture, analyze, and use spend analysis data and develop an Air Force strategic sourcing plan for services acquisitions. 
Another key initiative participants considered was the establishment by the Air Force of a management council for services contracting. No time frame has been set for when the Air Force would activate such a council. However, the deputy assistant secretary’s vision for adopting a best practices approach to contracting for services calls for radically transforming business processes within 5 years and establishing cross-functional, Air Force-wide councils to consolidate market knowledge and carry out strategic sourcing projects. In July 2003, in the first such effort to take advantage of its overall buying power, the Air Force formed a commodity council responsible for developing departmentwide strategies for buying and managing information technology products. According to an Air Force official involved with this council, the lessons learned and best practices of this council will be carried forward to other commodity councils that will be established by the Air Force. Another category that the Air Force is considering for a future commodity council is construction services. In 2001, top Army leadership approved a consolidation of Army contracting activities that focuses on the areas of installation management and general-purpose information technology. This initiative covers only a portion of the Army’s service spending, and it involved the establishment of the Army Contracting Agency in October 2002 to centralize much installation-support contracting under a corporate management structure and called for consolidating similar and common use requirements to reduce costs. This central agency will be fully responsible for Army-wide purchases of general information technology and electronic commerce purchases and for large installation management contracting actions over $500,000 that were previously decentralized. The agency’s key anticipated benefit will be its ability to centralize large buys that are common Army-wide, while continuing to provide opportunities for small businesses to win contracts. To have an early demonstration of the value of this approach, the agency plans an October 2003 spend analysis of several services that could offer easy savings, including security guards, furniture refinishing, telecommunications, building demolition, and photocopying. The agency has yet to set a time frame for carrying out the consolidated purchases, which could be national or regional in scope. The agency’s organizational structure assigns regional executive responsibility for managing services contracting, and includes a high-level council in headquarters for overseeing more strategic approaches to buying Army installation support services. The Navy is considering pilot tests of a more strategic approach for services spending in a few categories. Senior Navy leadership began a study in September 2002 to recommend business process changes in the Navy’s acquisition program. A Navy official conducting the preliminary spend analysis of Navy purchasing data estimated opportunities to save $115 million through taking a more strategic, coordinated approach to buying $1.5 billion in support services (engineering; logistics; program, general, and facilities management; and training). The Navy official said that, sometime this year, senior Navy leadership is expected to approve the study’s recommendations to pilot-test consolidated acquisition for support services. 
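The Navy estimate just described implies a savings rate of a little under 8 percent of the support services spending studied. A one-line arithmetic check of that implied rate, using only the figures cited above, is shown below.

```python
# Implied savings rate from the Navy estimate cited above (figures in dollars).
estimated_savings = 115_000_000
support_services_spend = 1_500_000_000
print(f"Implied savings rate: {estimated_savings / support_services_spend:.1%}")  # about 7.7%
```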
To lead these innovative management approaches, the Secretary of the Navy earlier this year approved a new position for a Director of Program Analysis and Business Transformation within the Office of the Deputy Assistant Secretary for Acquisition Management. A Navy procurement policy official involved with the ongoing effort told us that the Navy’s pilot tests are likely to be affected by DOD’s spend analysis pilot that is testing DOD-wide strategic sourcing strategies for 5 to 10 services. Since Navy procurement policy officials are also involved in DOD’s pilot, he anticipates having to coordinate the Navy’s pilot as both initiatives move forward. A strategic plan could help DOD ensure that these early initiatives successfully lead to lower costs and improved acquisition of services. Such a plan would identify, coordinate, and prioritize these initiatives; integrate the military departments’ services contracting management structures; ensure comprehensive coverage of services spending; promote and support collaboration; and establish accountability, transparency, and visibility for tracking performance and achieving results. However, some of the procurement policy officials we interviewed have expressed skepticism that broad-based reforms to foster a more strategic approach are necessary or beneficial, or that DOD could fully adopt private sector strategies in view of its current decentralized acquisition environment and other constraints. Conclusions Given the federal government’s critical budget challenges, DOD’s transformation of its business processes is more important than ever if the department is to get the most from every dollar spent. Senior leadership has for 2 years expressed a commitment to improving the department’s acquisition of services. Nonetheless, DOD and the military departments remain in the early stages of developing new business processes for the strategic acquisition of services. DOD’s leaders have made a commitment to adopt best practices and make dramatic changes. Translating that commitment into specific management improvements will allow DOD to take on the more difficult tasks of developing a reliable and accurate picture of spending on services across DOD; determining what structures, mechanisms, and metrics can be employed to foster a strategic approach; and tailoring those structures to meet DOD’s unique requirements. Given that DOD’s spending on services contracts is approaching $100 billion annually, the potential benefits for enhancing visibility and control of services spending are significant. Recommendations for Executive Action To achieve significant improvements across the range of services DOD purchases, we recommend that the Secretary of Defense direct the Under Secretary of Defense for Acquisition, Technology, and Logistics to work with the military departments and the defense agencies to further strengthen the management structure. This structure, established in response to section 801, should promote the use of best commercial practices such as centralizing key functions, conducting spend analyses, expanding the use of cross-functional commodity teams, achieving strategic orientation, achieving savings by reducing purchasing costs and other efficiencies, and improving service contracts’ performance and outcomes. 
We also recommend that the Secretary of Defense direct the Under Secretary to develop a strategic plan with guidance for the military departments and the defense agencies on how to carry out their responsibilities for managing acquisition of services. Key elements of this guidance should address improving knowledge of services spending by collecting and analyzing data about services procurements across DOD and within military departments and defense agencies, promoting collaboration across DOD and within military departments and defense agencies by establishing cross-functional teams to carry out coordinated purchasing of services, and establishing strategic savings and performance goals, measuring results, and ensuring accountability by assigning high-level responsibility for monitoring those results. Agency Comments In commenting on a draft of this report, DOD concurred in principle with the recommendation to further strengthen the management structure established in response to section 801 and partially concurred with the recommendation to develop a plan with guidance to the military departments on carrying out their strategic and centralized responsibilities for the acquisition of services. DOD expects that various initiatives being pursued to enhance services acquisition management structures and processes—such as the management structure for reviewing individual service acquisitions valued at more than $500 million and the spend analysis pilot assessed in this report—will ultimately provide the information with which to decide what overarching joint management and business process changes are necessary. DOD cites these initiatives as demonstrating a full commitment to improving acquisition of services. DOD further states that these efforts—such as collecting and enhancing data, performing spend analyses, and establishing commodity teams—are similar to industry best practices and have already had significant impacts on the manner in which services are acquired. We agree that the initiatives are positive steps in the right direction to improve acquisition of services. However, it is too early to tell if these early efforts will lead DOD and the military departments to make the type of long-term changes that are necessary to achieve significant results in terms of savings and service improvements. Moreover, according to DOD, factors such as unusual size, organizational complexity, and restrictive acquisition environment mean that DOD cannot adhere strictly to the commercial best practices described in the report. Yet, none of the companies we studied followed exactly the same approach in employing specific best practices. Likewise, DOD and the military departments need to work together and determine how these practices can be adapted to fit their unique needs, challenges, and complexities. Significant bottom-line results in terms of savings and service improvements are likely with adequate follow-through on the various initiatives. DOD’s strategic plan should be explicit about how and when appropriate follow-through actions will take place so that significant, long-lasting performance improvements and cost savings are achieved. DOD’s comments can be found in appendix II. Scope and Methodology Section 801 of the National Defense Authorization Act for Fiscal Year 2002 requires DOD to establish a management structure and a program review structure and to collect and analyze data on purchases in order to improve management of the acquisition of services.
As described in the legislative history, these requirements provide tools with which the department can promote the use of best commercial practices to reform DOD’s services procurement management and oversight and to achieve significant savings. Section 801 also directed us to assess DOD’s compliance with the requirements and to report to congressional armed services committees on the assessment. To conduct this work, we interviewed officials—including those responsible for Defense Procurement and Acquisition Policy, and Acquisition Resources and Analysis—in the Office of the Secretary of Defense and the Office of the Under Secretary of Defense for Acquisition, Technology, and Logistics. We also interviewed officials responsible for service acquisition policy and management in the Air Force, the Army, and the Navy. We interviewed both DOD’s and the various services’ officials about policy memoranda and related actions taken to implement section 801 requirements, including the evolving nature of implementation actions over several months. We also discussed comparisons between DOD’s and the military departments’ services acquisition management reforms and leading companies’ best practices for taking a strategic approach, which were identified in our previous work and promoted by the legislation. To assess compliance with the policy and guidance requirements for the management and program review structures, we reviewed internal memoranda and policy documents issued by the Under Secretary of Defense and the military departments. For background on DOD’s contract spending on services, we analyzed computer-generated data extracted from the Defense Contract Action Data System. We did not independently verify the information contained in the database. There are known data reliability problems with this data source, but we determined that the data are sufficient to provide general trend information for background reporting purposes. We conducted our review from November 2002 to July 2003 in accordance with generally accepted government auditing standards. We are sending copies of this report to other interested congressional committees; the Secretary of Defense; the Deputy Secretary of Defense; the Secretaries of the Army, Navy, and Air Force; and the Under Secretaries of Defense (Acquisition, Technology, and Logistics) and (Comptroller). We will also provide copies to others on request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Should you have any questions on matters discussed in this report, please call me at (202) 512-4841. Other contacts and staff acknowledgments are listed in appendix III. Appendix I: Comparison of Selected Program Review Structure Requirements In response to 2002 national defense authorization requirements, the Under Secretary of Defense for Acquisition, Technology, and Logistics and the military departments developed and implemented policies for a program review structure to oversee large-dollar and program-critical services acquisitions. The review process, modeled after DOD’s review process for major weapons systems, seeks to ensure major service acquisition strategies are adequately planned, performance-based, competed, and address socioeconomic goals. In most cases, an acquisition must be valued at $500 million or more to prompt review at the headquarters level for DOD and the military departments. 
Table 2 compares selected aspects of the legislation’s requirements with DOD and military department policies and shows the implementation status of those policies. Appendix II: Comments from the Department of Defense Appendix III: GAO Contacts and Staff Acknowledgments GAO Contacts Acknowledgments In addition to those named above, Cordell Smith, Bob Swierczek, and Ralph White made key contributions to this report.
The Department of Defense's (DOD) spending on service contracts approaches $100 billion annually, but recent legislation directs DOD to manage its services procurement more effectively. Leading companies transformed management practices and achieved major savings after they analyzed spending patterns and coordinated procurement. This report evaluates DOD's implementation of the legislation in light of congressional interest in promoting the use of best commercial practices for acquiring services. DOD and the military departments each have a management structure in place for reviewing individual services acquisitions valued at $500 million or more, but that approach does not provide a departmentwide assessment of how spending for services could be more effective. Greater attention is needed by DOD management to promote a strategic orientation by setting performance goals, including savings goals, and ensuring accountability for achieving them. To support management decisions and improve visibility over spending on service contracts, DOD is developing an automated system to collect and analyze data by piloting a spend analysis. The analysis views spending from a DOD-wide perspective and identifies large-scale savings opportunities, but its scope is limited, and it is too early to tell how the department can make the best use of its results. The military departments are in the early stages of separate initiatives that may lead them to adopt a strategic approach to buying services, but DOD lacks a plan that coordinates these initiatives or provides a road map for future efforts.
GAO_GAO-13-538SP
Background Limited Federal Role in Administering Elections The administration of federal elections is a massive enterprise, conducted primarily at the state and local level, under applicable state and federal voting laws. Responsibility for holding elections and ensuring that each voter has the ability to fully participate in the electoral process—including registering to vote, accessing polling places or alternative voting methods, and casting a vote—primarily rests with state and local governments. While federal elections are generally conducted under state laws and policies, several federal laws apply to voting and some provisions specifically address accessibility issues for voters with disabilities, including the Americans with Disabilities Act of 1990 (ADA) and HAVA. Americans with Disabilities Act of 1990 Titles II and III of the ADA contain provisions that help increase the accessibility of voting for individuals with disabilities. Specifically, Title II and its implementing regulations require that people with disabilities have access to basic public services, including the right to vote. The ADA requires that public entities make reasonable modifications in policies, practices, or procedures to avoid discrimination against people with disabilities. Moreover, no person with a disability may, by reason of disability, be excluded from participating in or be denied the benefits of any public program, service, or activity. State and local governments may generally comply with ADA accessibility requirements in a variety of ways, such as reassigning services to accessible buildings or alternative accessible sites. Title III of the ADA generally covers commercial facilities and places of public accommodation that may also be used as polling places. Public accommodations must make reasonable modifications in policies, practices, or procedures to facilitate access for people with disabilities. These facilities are also required to remove physical barriers in existing buildings when it is “readily achievable” to do so; that is, when the removal can be done without much difficulty or expense, given the entity’s resources. Help America Vote Act of 2002 HAVA, which contains a number of provisions to help increase voting accessibility for people with disabilities, establishes the Election Assistance Commission (EAC) and grants the Attorney General enforcement authority. In particular, section 301(a) of HAVA outlines minimum standards for voting systems used in federal elections. This section specifically states that the voting system must be accessible for people with disabilities, including nonvisual accessibility for the blind and visually impaired, in a manner that provides the same opportunity for access and participation as is provided for other voters. To satisfy this requirement, each polling place must have at least one direct recording electronic or other voting system equipped for people with disabilities. HAVA also established the EAC as an agency with wide-ranging duties to help improve state and local administration of federal elections, including providing voluntary state guidance on implementing HAVA provisions. The EAC also has authority to make grants for the research and development of new voting equipment and technologies and the improvement of voting systems.
Additionally, HAVA vests enforcement authority with the Attorney General to bring a civil action against any state or jurisdiction as may be necessary to carry out specified uniform and nondiscriminatory election technology and administration requirements under HAVA. Characteristics of Long-term Care Facility Residents As the proportion of older Americans in the country increases, the number of voters residing in long-term care facilities who may face challenges voting at polling places on Election Day due to their physical and mental condition could also increase. By 2030, those aged 65 and over are projected to grow to more than 72 million individuals and represent a quarter of the voting age population. Older voters, who consistently vote in higher proportions than other voters, may face challenges exercising their right to vote because disability increases with age. Moreover, it is estimated that 70 percent of people over age 65 will require some long-term care services at some point in their lives, such as residing in a nursing home or assisted living facility. The physical and cognitive impairments of many long-term care facility residents may make it more difficult for them to independently drive, walk, or use public transportation to get to their designated polling place. Once at the polling place, they may face challenges finding accessible parking, reaching the ballot area, and casting a ballot privately and independently. Recent GAO Election Reports We recently issued two reports on elections in which the findings may have implications for voters with disabilities. Specifically, in 2012, we issued a report examining state laws addressing voter registration and voting on or before Election Day. In the report, we found that states had been active in the past 10 years in amending their election codes, regulations, and procedures, not only to incorporate requirements mandated by HAVA, but also to make substantive changes to their laws in the areas of voter identification, early voting, and requirements for third-party voter registration organizations. We found that states had a variety of identification requirements for voters when they register to vote, vote at the polls on Election Day, and seek to cast an absentee ballot by mail that were in effect for the November 2012 election. Specifically, while voter identification requirements varied in flexibility, the number and type of documents allowed, and alternatives available for verifying identity, 31 states had requirements for all eligible voters to show identification at the polls on Election Day. We also found that most states had established alternatives for voters to cast a ballot other than at the polls on Election Day. Thirty-five states and the District of Columbia provided an opportunity for voters to cast a ballot prior to the election without an excuse, either by no-excuse absentee voting by mail or in-person early voting, or both. States also regulated the process by which voters registered to vote and had a variety of requirements that address third-party voter registration organizations that conduct voter registration drives. In addition, in 2012, we issued a report looking at the potential implementation of weekend voting and similar alternative voting methods. In the report, we found that in the 2010 general election, 35 states and the District provided voters at least one alternative to casting their ballot on Election Day through in-person early voting, no-excuse absentee voting, or voting by mail.
However, state and local election officials we interviewed identified challenges they would anticipate facing in planning and conducting Election Day activities on weekends—specifically, finding poll workers and polling places, and securing ballots and voting equipment—and expected cost increases. Specifically, officials in 14 of the 17 jurisdictions and the District expected that at least some of the polling places they used in past elections—such as churches—would not be available for a weekend election, and anticipated difficulty finding replacements. Additionally, officials in 5 of the 7 states and the District that conducted early voting and provided security over multiple days explained that the level of planning needed for overnight security for a weekend election would far surpass that of early voting due to the greater number and variety of Election Day polling places. For example, officials in one state said that for the 2010 general election, the state had fewer than 300 early voting sites—which were selected to ensure security—compared to more than 2,750 polling places on Election Day, which are generally selected based on availability and proximity to voters. The Proportion of Polling Places Without Potential Impediments Increased Between the 2000 and 2008 Elections In comparison to our findings in 2000, the proportion of polling places with no potential impediments increased in 2008. In 2008, we estimated that 27 percent of polling places had no potential impediments in the path from the parking area to the voting area—up from 16 percent in 2000. Specifically, polling places with four or more potential impediments decreased significantly—from 29 percent in 2000 to 16 percent in 2008 (see fig. 1). Potential impediments included a lack of accessible parking and obstacles en route from the parking area to the voting area. Figure 2 shows some key polling place features that we examined in our 2008 review of polling places. These features primarily affect individuals with mobility impairments, in particular voters using wheelchairs. Similar to our findings in 2000, the majority of potential impediments at polling places in 2008 occurred outside of or at the building entrance, although improvements were made in some areas. In particular, the percentage of polling places with potential impediments at the building entrance dropped sharply—from 59 percent in 2000 to 25 percent in 2008. In addition, polling places made significant gains in providing designated parking for people with disabilities: the share of polling places with no designated parking decreased from 32 percent in 2000 to only 3 percent in 2008 (see fig. 3). Remaining potential impediments included a lack of ramps or curb cuts in the parking area, unpaved or poor surfaces in the path from the parking lot or route to the building entrance, and door thresholds exceeding ½ inch in height. We did not assess polling places’ legal compliance with HAVA accessible voting system requirements. For our 2008 Election Day data collection instrument, we compiled a list of commonly known accessible voting machines by consulting with disability experts and others.
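The 2000 and 2008 percentages cited in this section are estimates derived from samples of polling places, so each carries a margin of sampling error. As a rough illustration only, the sketch below computes a sample proportion and a 95 percent margin of error under a simple random sampling assumption; the sample counts are hypothetical and do not reproduce GAO's actual sample design or weighting.

```python
# Rough sketch: estimating the proportion of polling places with no potential
# impediments, plus a 95 percent margin of error (normal approximation).
# Sample counts are hypothetical, not GAO's actual data.
import math

sample_size = 730        # hypothetical number of polling places observed
no_impediments = 197     # hypothetical count with no potential impediments

p_hat = no_impediments / sample_size
margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / sample_size)

print(f"Estimated proportion: {p_hat:.0%} +/- {margin:.1%} (simple random sampling assumed)")
```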
Justice Assessed States’ Implementation of HAVA Requirements for the 2006 Deadline, but Its Oversight Had Some Gaps Justice’s Outreach, Guidance, and Oversight From shortly after the passage of HAVA until 2006, Justice officials provided educational outreach and guidance on polling place accessibility and conducted an initial assessment of states’ compliance with HAVA’s January 2006 deadline for accessible voting systems. Justice provided guidance on the new HAVA voting system requirements while the EAC was being formed. During this time, Justice officials said they made a considerable effort to educate state and local election officials and national organizations representing election officials and people with disabilities on HAVA voting system requirements. As part of these early efforts, Justice provided guidance to poll workers on how to assess and create a physically accessible polling place. Specifically, in 2004, Justice published the Americans with Disabilities Act: ADA Checklist for Polling Places, which provided information to voting officials on key accessibility features needed by most voters with disabilities to go from the parking area to the voting area. According to our survey, 34 states found the checklist to be moderately to very helpful. While the checklist provides limited guidance on accessibility features within the voting area, it does not provide information about the configuration of the voting system. In addition to early guidance, Justice also conducted an initial assessment of states’ progress toward meeting the January 2006 deadline for compliance with HAVA voting system requirements. For example, in 2003, Justice sent letters to state election officials summarizing HAVA voting system requirements. Justice later followed up with letters in 2005 and 2006, which outlined HAVA voting system requirements, and asked states to respond to a series of questions to help gauge whether every polling place in the state had at least one accessible voting machine and whether poll workers were trained in the machine’s operation. Finally, with the full implementation of HAVA in 2006, the EAC took over Justice’s state educational outreach and guidance efforts. By 2009, Justice’s limited oversight of HAVA voting system requirements and polling place accessibility left gaps in ensuring voting accessibility for people with disabilities. For example, Justice supervised polling place observations for federal elections on Election Day 2008, primarily to assess compliance with the Voting Rights Act of 1965. However, Justice did not systematically assess the physical accessibility of the polling places or the level of privacy and independence provided to people with disabilities by the accessible voting system, which limited the department’s ability to identify potential accessibility issues facing voters with disabilities. In addition, Justice initiated a small number of annual community assessments—called Civic Access assessments—of ADA compliance in public buildings, including buildings designated as polling places, but these assessments included a small portion of polling places nationwide and were generally not conducted on Election Day. According to Justice, these assessments could be resource-intensive, which, in part, may have limited the number that the department could complete in a given year. Justice initiated Civic Access assessments for three communities in calendar year 2008.
When onsite reviews identified physical barriers and impediments for people with disabilities, Justice generally negotiated and entered into a settlement agreement with the election jurisdiction. Between 2000 and 2008, Justice entered into 69 Civic Access settlement agreements containing one or more recommendations aimed at polling place provisions, but given the small number of Civic Access assessments conducted annually, they did not provide a national perspective on polling place accessibility. In addition, since these assessments were not conducted during elections, they did not assess any special features of voting areas and accessible voting systems that are set up only on Election Day. Implementation of Recommended Monitoring and Oversight Would Reduce Potential Voting Impediments and Other Challenges In our 2009 report on polling place accessibility, we recommended that the Department of Justice look for opportunities to expand its monitoring and oversight of the accessibility of polling places for people with disabilities in a cost-effective manner. This effort might include: working with states to use existing state oversight mechanisms and using other resources, such as organizations representing election officials and disability advocacy organizations, to help assess and monitor states’ progress in ensuring polling place accessibility, similar to the effort used to determine state compliance with HAVA voting system requirements by the 2006 deadline; expanding the scope of Election Day observations to include an assessment of the physical access to the voting area and the level of privacy and independence being offered to voters with disabilities by accessible voting systems; and expanding the Americans with Disabilities Act: ADA Checklist for Polling Places to include additional information on the accessibility of the voting area and guidance on the configuration of the accessible voting system to provide voters with disabilities with the same level of privacy and independence as is afforded to other voters. Justice generally agreed with this recommendation in commenting on the draft report, and when we reached out for an update in preparation for this testimony, Justice indicated it has taken steps toward addressing the recommendation. For example, Justice noted that it has entered into settlements—with Philadelphia, Pennsylvania, in 2009 and Flint, Michigan, in 2012—to resolve allegations of inaccessible polling places. In addition, Justice stated that it has expanded the scope of Election Day observations to include an assessment of the physical accessibility of polling places, citing its monitoring of 240 polling places in about 28 jurisdictions for the 2012 general election. However, Justice did not indicate whether its expanded Election Day observations include assessing privacy and independence provided by accessible voting systems. Further, it does not appear at this time that Justice has taken action to expand the scope of the ADA Checklist for Polling Places to include additional information on the accessibility of the voting area and guidance on the configuration of the accessible voting system. We believe that taking these additional steps would build upon Justice’s efforts to date and could further reduce voting impediments and other challenges for voters with disabilities. Mr. Chairman, this concludes my statement. I would be pleased to respond to any questions you or other Members of the Council may have.
GAO Contact and Staff Acknowledgments For further information about this statement, please contact Barbara Bovbjerg at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Other key contributors to this statement include: Brett Fallavollita, Assistant Director; David Lin; Ryan Siegel; and Amber Yancey-Carroll. Additional contributions were made by David Alexander, Orin Atwater, Rebecca Gambler, Alex Galuten, Tom Jessor, Kathy Leslie, Mimi Nguyen, Barbara Stolz, Janet Temko, Jeff Tessin, and Walter Vance. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
Voting is fundamental to our democracy and federal law generally requires polling places to be accessible to all eligible voters, including those with disabilities and the elderly. However, during the 2000 federal election, GAO found that only 16 percent of polling places had no potential impediments to voting access for people with disabilities. To address these and other issues, Congress enacted the Help America Vote Act of 2002 (HAVA), which required each polling place to have an accessible voting system by 2006. Congress asked GAO to reassess voting access on Election Day 2008, and also to study voter accessibility at long-term care facilities. This statement focuses on (1) progress made from 2000 to 2008 to improve voter accessibility in polling places, including relevance to long-term care facilities, and (2) steps the Department of Justice (Justice) has taken to enforce HAVA voting access provisions. To prepare this statement, GAO relied primarily on its prior products on polling place accessibility (GAO-09-941) and voting in long-term care facilities (GAO-10-6). Compared to 2000, the proportion of polling places in 2008 without potential impediments increased and almost all polling places had an accessible voting system as states and localities made various efforts to help facilitate accessible voting. In 2008, based upon GAO's survey of polling places, GAO estimated that 27 percent of polling places had no potential impediments in the path from the parking area to the voting area--up from 16 percent in 2000; 45 percent had potential impediments but offered curbside voting; and the remaining 27 percent had potential impediments and did not offer curbside voting. All but one polling place GAO visited had an accessible voting system--typically, an electronic machine in a voting station--to facilitate private and independent voting for people with disabilities. However, 46 percent of polling places had an accessible voting system that could pose a challenge to certain voters with disabilities, such as voting stations that were not arranged to accommodate voters using wheelchairs. In GAO's 2008 state survey, 43 states reported that they set accessibility standards for polling places, up from 23 states in 2000. Further, 31 states reported that ensuring polling place accessibility was challenging. Localities GAO surveyed in 2008 reported providing voting services directly to long-term care facility residents who may face challenges voting in a polling place. For example, close to one-third of localities GAO surveyed reported designating long-term care facilities as Election Day polling places. From shortly after the passage of HAVA until 2006, Justice provided guidance on polling place accessibility and conducted an initial assessment of states' compliance with HAVA's January 2006 deadline for accessible voting systems. After implementation of HAVA, Justice's oversight of HAVA's access requirements was part of two other enforcement efforts, but gaps remained. While Justice provided guidance on polling place accessibility, this guidance did not address accessibility of the voting area itself. In 2009, Justice conducted polling place observations for federal elections that identified whether accessible voting systems were in place, but it did not systematically assess the physical accessibility of polling places or the level of privacy and independence provided to voters with disabilities.
Justice also conducted a small number of annual community assessments of Americans with Disabilities Act compliance of public buildings, which included buildings designated as polling places. However, these assessments did not provide a national perspective on polling place accessibility or assess any special features of the voting area and the accessible voting system that are set up only on Election Day.
GAO_AIMD-98-77
Significant Matters This section discusses significant matters that we considered in performing our audit and in forming our conclusions. These matters include (1) six material weaknesses in IRS’ internal controls, (2) one reportable condition representing a significant weakness in IRS’ internal controls, (3) one instance of noncompliance with laws and regulations and noncompliance with the requirements of FFMIA, and (4) two other significant matters that represent important issues that should be brought to the attention of IRS management and other users of IRS’ Custodial Financial Statements and other reported financial information. Material Weaknesses During our audit of IRS’ fiscal year 1997 Custodial Financial Statements, we identified six material weaknesses that adversely affected IRS’ ability to safeguard assets from material loss, assure material compliance with relevant laws and regulations, and assure that there were no material misstatements in the financial statements. These weaknesses relate to IRS’ inadequate general ledger system, supporting subsidiary ledger for unpaid assessments, supporting documentation for unpaid assessments, controls over refunds, revenue accounting and reporting, and computer security. These material weaknesses were consistent in all significant respects with the material weaknesses cited by IRS in its fiscal year 1997 FIA report. Although we were able to apply substantive audit procedures to verify that IRS’ fiscal year 1997 Custodial Financial Statements were reliable, the six material weaknesses discussed in the following sections significantly increase the risk that future financial statements and other IRS reports may be materially misstated. IRS’ General Ledger Does Not Support the Preparation of Financial Statements The IRS’ general ledger system is not able to routinely generate reliable and timely financial information for internal and external users. The IRS’ general ledger does not capture or otherwise produce the information to be reported in the Statement of Custodial Assets and Liabilities; classify revenue receipts activity by type of tax at the detail transaction level to support IRS’ Statement of Custodial Activity and to make possible the accurate distribution of excise tax collections to the appropriate trust funds; use the standard federal accounting classification structure to produce some of the basic documents needed for the preparation of financial statements in the required formats, such as trial balances; and provide a complete audit trail for recorded transactions. As a result of these deficiencies, IRS is unable to rely on its general ledger to support its financial statements, which is a core purpose of a general ledger. These problems also prevent IRS from producing financial statements on a monthly or quarterly basis as a management tool, which is standard practice in private industry and some federal entities. The U.S. Government Standard General Ledger (SGL) establishes the general ledger account structure for federal agencies as well as the rules for agencies to follow in recording financial events. Implementation of the SGL is called for by the Core Financial System Requirements of the Joint Financial Management Improvement Program (JFMIP), and is required by the Office of Management and Budget (OMB) in its Circular A-127, Financial Management Systems. Implementation of financial management systems that comply with the SGL at the transaction level is also required by FFMIA. 
However, because of the problems discussed above, IRS’ general ledger does not comply with these requirements. As we previously reported, IRS’ general ledger was not designed to support financial statement preparation. To compensate for this deficiency, IRS utilizes specialized computer programs to extract information from its master files—its only detailed database of taxpayer information—to derive amounts to be reported in the financial statements. However, the amounts produced by this approach needed material audit adjustments to the Statement of Custodial Assets and Liabilities to produce reliable financial statements. Although we were able to verify that the adjusted balances were reliable as of and for the fiscal year ended September 30, 1997, this approach cannot substitute for a properly designed and implemented general ledger as a tool to account for and report financial transactions on a routine basis throughout the year. IRS Lacks a Subsidiary Ledger for Unpaid Assessments As we have reported in our previous financial audits, IRS does not have a detailed listing, or subsidiary ledger, which tracks and accumulates unpaid assessments on an ongoing basis. To compensate for the lack of a subsidiary ledger, IRS runs computer programs against its master files to identify and classify the universe of unpaid assessments. However, this approach required numerous audit adjustments to produce reliable balances. The lack of a detailed subsidiary ledger impairs IRS’ ability to effectively manage the unpaid assessments. For example, IRS’ current systems precluded it from ensuring that all parties liable for certain assessments get credit for payments made on those assessments. Specifically, payments made on unpaid payroll tax withholdings for a troubled company, which can be collectible from multiple individuals, are not always credited to each responsible party to reflect the reduction in their tax liability. In 53 of 83 cases we reviewed involving multiple individuals and companies, we found that payments were not accurately recorded to reflect the reduction in the tax liability of each responsible party. In one case we reviewed, three individuals had multimillion dollar tax liability balances, as well as liens placed against their property, even though the tax had been fully paid by the company. While we were able to determine that the amounts reported in the fiscal year 1997 financial statements pertaining to taxes receivable, a component of unpaid assessments, were reliable, this was only after significant adjustments totaling tens of billions of dollars were made. The extensive reliance IRS must place on ad hoc procedures significantly increases the risk of material misstatement of unpaid assessments and/or other reports issued by IRS in the future. A proper subsidiary ledger for unpaid assessments, as recommended by the JFMIP Core Financial Systems Requirements, is necessary to provide management with complete, up-to-date information about the unpaid assessments due from each taxpayer, so that managers will be in a position to make informed decisions about collection efforts and collectibility estimates. This requires a subsidiary ledger that makes readily available to management the amount, nature, and age of all unpaid assessments outstanding by tax liability and taxpayer, and that can be readily and routinely reconciled to corresponding general ledger balances for financial reporting purposes. 
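The payment-crediting problem described above, in which a payment on a jointly owed payroll tax assessment is not reflected on every responsible party's account, is essentially a record-linkage issue that a subsidiary ledger is well suited to solve. The sketch below shows one hypothetical way such a linkage could work, with a single shared assessment and several responsible parties; it is an illustration of the concept, not IRS' actual or planned system design.

```python
# Hypothetical subsidiary-ledger sketch: one shared assessment, several
# responsible parties, and payments credited against the common balance.

class SharedAssessment:
    def __init__(self, assessment_id, amount, responsible_parties):
        self.assessment_id = assessment_id
        self.amount = amount
        self.payments = 0.0
        self.responsible_parties = list(responsible_parties)

    def apply_payment(self, amount):
        self.payments += amount

    def balance_due(self):
        return max(self.amount - self.payments, 0.0)

# A company's unpaid payroll withholding, also collectible from two officers
assessment = SharedAssessment("TFRP-001", 100_000.00, ["Company X", "Officer A", "Officer B"])
assessment.apply_payment(100_000.00)  # the company pays the liability in full

# Because the parties are linked to the same assessment record, every linked
# party now shows a zero balance for this liability.
for party in assessment.responsible_parties:
    print(f"{party}: balance due on {assessment.assessment_id} = "
          f"${assessment.balance_due():,.2f}")
```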
Such a system should also track and make available key information necessary to assess collectibility, such as account status, payment and default history, and installment agreement terms. Documentary Support for Unpaid Assessments Is Inadequate In our audit of IRS’ fiscal year 1996 Custodial Financial Statements, we reported that IRS could not locate sufficient supporting documentation to (1) enable us to evaluate the existence and classification of unpaid assessments or (2) support its classification of reported revenue collections and refunds paid. During our fiscal year 1997 audit, IRS was able to locate and provide sufficient supporting documentation for fiscal year 1997 revenue and refund transactions we tested. However, IRS continued to experience significant problems locating and providing supporting documentation for unpaid assessments, primarily due to the age of the items. Documentation for transactions we reviewed, such as tax returns or installment agreements, had often been destroyed in accordance with IRS record retention policies or could not be located. In addition, the documentation IRS provided did not always include useful information, such as appraisals, asset searches, and financial statements. For example, estate case files we reviewed generally did not include audited financial statements or an independent appraisal of the estate’s assets, information that would greatly assist in determining the potential collectibility and potential underreporting of these cases. Additionally, the lack of documentation made it difficult to assess the classification and collectibility of unpaid assessments reported in the financial statements as federal tax receivables. Through our audit procedures, we were able to verify the existence and proper classification of unpaid assessments and obtain reasonable assurance that reported balances were reliable. However, this required material audit adjustments to correct misstated unpaid assessment balances identified by our testing. Weaknesses Exist in Controls Over Refunds IRS did not have sufficient preventive controls over refunds to assure that inappropriate payments for tax refunds are not disbursed. Such inappropriate payments have taken the form of refunds improperly issued or inflated, which IRS did not identify because of flawed verification procedures, or fraud by IRS employees. For example, we found three instances where refunds were paid for inappropriate amounts. This occurred because IRS does not compare tax returns to the attached W-2s (Wage and Tax Statements) at the time the returns are initially processed, and consequently did not detect a discrepancy with pertinent information on the tax return. As we have reported in prior audits, such inconsistencies generally go undetected until such time as IRS completes its document matching program, which can take as long as 18 months. In addition, during fiscal year 1997, IRS identified alleged employee embezzlement of refunds totaling over $269,000. IRS is also vulnerable to issuance of duplicate refunds made possible by gaps in IRS’ controls. IRS reported this condition as a material weakness in its fiscal year 1997 FIA report. The control weaknesses over refunds are magnified by significant levels of invalid Earned Income Credit (EIC) claims. IRS recently reported that during the period January 1995 through April 1995, an estimated $4.4 billion (25 percent) in EIC claims filed were invalid. This estimate does not reflect actual disbursements made for refunds involving EIC claims. 
However, it provides an indication of the magnitude of IRS’ and the federal government’s exposure to losses resulting from weak controls over refunds. While we were able to substantiate the amounts disbursed as refunds as reported on the fiscal year 1997 Custodial Financial Statements, IRS needs to have effective preventive controls in place to ensure that the federal government does not incur losses due to payment of inappropriate refunds. Once an inappropriate refund has been disbursed, IRS is compelled to expend both the time and expense to attempt to recover it, with dubious prospect of success. Revenue Accounting and Reporting Does Not Meet User Needs IRS is unable to currently determine the specific amount of revenue it actually collected for the Social Security, Hospital Insurance, Highway, and other relevant trust funds. As we previously reported, the primary reason for this weakness is that the accounting information needed to validate the taxpayer’s liability and record the payment to the proper trust fund is not provided at the time that taxpayers remit payments. Information is provided on the tax return, which can be received as late as 9 months after a payment is submitted. However, the information on the return only pertains to the amount of the tax liability, not the distribution of the amounts previously collected. As a result, IRS cannot report actual revenue collected for Social Security, Hospital Insurance, Highway, and other trust funds on a current basis nor can it accurately report revenue collected for individuals. Because of this weakness, IRS had to report Federal Insurance Contributions Act (FICA) and individual income tax collections in the same line item on its Statement of Custodial Activity for fiscal year 1997. However, requirements for the form and content of governmentwide financial statements require separate reporting of Social Security, Hospital Insurance, and individual income taxes collected. Beginning in fiscal year 1998, federal accounting standards will also require this reporting. Taxes collected by IRS on behalf of the federal government are deposited in the general revenue fund of the Department of the Treasury (Treasury), where they are subsequently distributed to the appropriate trust funds. Amounts representing Social Security and Hospital Insurance taxes are distributed to their respective trust funds based on information certified by the Social Security Administration (SSA). In contrast, for excise taxes, IRS certifies the amounts to be distributed based on taxes assessed, as reflected on the relevant tax forms. However, by law, distributions of excise taxes are to be based on taxes actually collected. The manner in which both FICA and excise taxes are distributed creates a condition in which the federal government’s general revenue fund subsidizes the Social Security, Hospital Insurance, Highway, and other trust funds. The subsidy occurs primarily because a significant number of businesses that file tax returns for Social Security, Hospital Insurance, and excise taxes ultimately go bankrupt or otherwise go out of business and never actually pay the assessed amounts. Additionally, with respect to Social Security and Hospital Insurance taxes, a significant number of self-employed individuals also do not pay the assessed amounts. While the subsidy is not necessarily significant with respect to excise taxes, it is significant for Social Security and Hospital Insurance taxes. 
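The subsidy described above arises from a simple arithmetic gap: the trust funds are credited on the basis of amounts assessed, while the general fund ultimately bears the difference when some of those assessments are never collected. The figures in the sketch below are invented solely to show the mechanics; they are not actual certification amounts.

```python
# Hypothetical illustration: crediting a trust fund on assessed amounts
# versus amounts actually collected.

assessed_excise_tax = 10_000_000.00   # reported on tax returns (hypothetical)
collected_excise_tax = 9_200_000.00   # actually deposited (hypothetical)

certified_to_trust_fund = assessed_excise_tax     # basis described in this report
supported_by_collections = collected_excise_tax   # basis required by law

general_fund_subsidy = certified_to_trust_fund - supported_by_collections
print(f"General fund effectively subsidizes the trust fund by "
      f"${general_fund_subsidy:,.2f} for this period.")
```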
At September 30, 1997, the estimated amount of unpaid taxes and interest in IRS’ unpaid assessments balance was approximately $44 billion for Social Security and Hospital Insurance, and approximately $1 billion for excise taxes. While these totals do not include amounts no longer in the unpaid assessments balance due to the expiration of the statutory collection period, they nevertheless give an indication of the cumulative amount of the subsidy. Controls Over Computer Security Are Inadequate IRS places extensive reliance on computer systems to process tax returns, maintain taxpayer data, calculate interest and penalties, and generate refunds. Consequently, it is critical that IRS maintain adequate internal controls over these systems. We previously reported that IRS had serious weaknesses in the controls used to safeguard its computer systems, facilities, and taxpayer data. Our review of these controls as part of our audit of IRS’ fiscal year 1997 Custodial Financial Statements found that although many improvements have been made, overall controls continued to be ineffective. IRS’ controls over automated systems continued to exhibit serious weaknesses in (1) physical security, (2) logical security, (3) data communications management, (4) risk analysis, (5) quality assurance, (6) internal audit and security, and (7) contingency planning. Weaknesses in these areas can allow unauthorized individuals access to critical hardware and software where they may intentionally or inadvertently add, alter, or delete sensitive data or programs. IRS recognized these weaknesses in its fiscal year 1997 FIA report and has corrected a significant number of the computer security weaknesses identified in our previous reports. Additionally, IRS has centralized responsibility for security and privacy issues and added staff in this area. IRS is implementing plans to mitigate the remaining weaknesses by June 1999. In our fiscal year 1997 audit, we were able to verify the accuracy of the financial statement balances and disclosures originating in whole or in part from automated systems primarily through review and testing of supporting documentation. However, the absence of effective internal controls over IRS’ automated systems makes IRS vulnerable to losses, delays or interruptions in service, and compromising of the sensitive information entrusted to IRS by taxpayers. Reportable Condition In addition to the material weaknesses discussed above, we identified one reportable condition that although not a material weakness, represents a significant deficiency in the design or operation of internal controls and could adversely affect IRS’ ability to meet the internal control objectives described in this report. This condition concerns weaknesses in IRS’ controls over its manually processed tax receipts. Vulnerabilities Exist in Controls Over Manual Tax Receipts IRS’ controls over the receipt of cash and checks it manually receives from taxpayers are not adequate to assure that these payments will be properly credited to taxpayer accounts and deposited in the Treasury. To ensure that appropriate security over these receipts is maintained, IRS requires that lock box depositories receiving payments on its behalf use a surveillance camera to monitor staff when they open mail containing cash and checks. However, we found that payments received at the four IRS service centers where we tested controls over manual cash receipts were not subject to comparable controls. 
We found at these locations that (1) IRS allowed individuals to open mail unobserved, and relied on them to accurately report amounts received, and (2) payments received were not logged or otherwise recorded at the point of receipt to immediately establish accountability and thereby deter and detect diversion. In addition, at one service center, we observed payments being received by personnel who should not have been authorized to accept receipts. As a result of these weaknesses, IRS is vulnerable to losses of cash and checks received from taxpayers in payment of taxes due. In fact, between 1995 and 1997, IRS identified instances of actual or alleged employee embezzlement of receipts totaling about $4.6 million. These actual and alleged embezzlements underscore the need for effective internal controls over IRS' service center receipts process. Noncompliance With Laws and Regulations and FFMIA Requirements Our tests of compliance with selected provisions of laws and regulations disclosed one instance of noncompliance that is reportable under generally accepted government auditing standards and OMB Bulletin 93-06, Audit Requirements for Federal Financial Statements. This finding concerns IRS' noncompliance with a provision of the Internal Revenue Code governing certification of excise taxes. We also noted that IRS' financial management systems do not substantially comply with the requirements of FFMIA, which is reportable under OMB Bulletin 98-04. IRS' Certification of Excise Taxes Did Not Comply With Legal Requirements IRS policies and procedures for certification to Treasury of the distribution of the excise tax collections to the designated trust funds do not comply with the Internal Revenue Code. The Code requires IRS to certify the distribution of these excise tax collections to the recipient trust funds based on actual collections. However, as we have reported previously, and as discussed earlier in this report, IRS based its certifications of excise tax amounts to be distributed to specific trust funds on the assessed amount, or amount owed, as reflected on the tax returns filed by taxpayers. IRS has studied various options to enable it to make final certifications of amounts to be distributed based on actual collections and to develop the underlying information needed to support such certifications. IRS was in the process of finalizing its proposed solution at the conclusion of our fiscal year 1996 audit; however, through the end of our fiscal year 1997 audit, IRS still had not implemented its proposed solution. For example, in December 1997, IRS certified the third quarter of fiscal year 1997 based on assessments rather than collections. IRS' Systems Did Not Comply With FFMIA Requirements As the auditor of IRS' Custodial Financial Statements, we are reporting under FFMIA on whether IRS' financial management systems substantially comply with the Federal Financial Management System Requirements (FFMSR), applicable federal accounting standards, and the SGL at the transaction level. As indicated by the material weaknesses we discussed earlier, IRS' systems do not substantially comply with these requirements. For example, as noted previously, IRS does not have a general ledger that conforms with the SGL. Additionally, IRS lacks a subsidiary ledger for its unpaid assessments, and lacks an effective audit trail from its general ledger back to transaction source documents. These are all requirements under FFMSR.
The other three material weaknesses we discussed above—controls over refunds, revenue accounting and reporting, and computer security—also are conditions indicating that IRS' systems do not comply with FFMSR. In addition, the material weaknesses we noted above mean that IRS' systems cannot produce reliable financial statements and related disclosures that conform with applicable federal accounting standards. Since IRS' systems do not comply with FFMSR, applicable federal accounting standards, and the SGL, they also do not comply with OMB Circular A-127, Financial Management Systems. We have previously reported on many of these issues and made recommendations for corrective actions. IRS has drafted a plan of action intended to incrementally improve its financial reporting capabilities, which is scheduled to be fully implemented during fiscal year 1999. This plan is intended to bring IRS' general ledger into conformance with the SGL and would be a step toward compliance with FFMSR. However, the plan falls short of fully meeting FFMSR requirements. For example, the plan will not provide for (1) full traceability of information through its systems (i.e., an audit trail), (2) a subsidiary ledger to assist in distinguishing federal tax receivables from other unpaid assessments, and (3) reporting of revenue by tax type. As discussed later in this report, the latter example has implications for IRS' ability to meet certain federal accounting standards required to be implemented in fiscal year 1998. IRS also has a longer-range plan to address the financial management system deficiencies noted in prior audits and in IRS' own self-assessment. During future audits, we will monitor IRS' implementation of these initiatives, and assess their effectiveness in resolving the material weaknesses discussed in this report. Other Significant Matters In addition to the material weaknesses and other reportable conditions and noncompliance with laws and regulations and FFMIA requirements discussed in the previous sections, we identified two other significant matters that we believe should be brought to the attention of IRS management and other users of IRS' financial statements and other financial reports. These concern (1) the composition and collectibility of IRS' unpaid assessments and (2) the importance of IRS successfully preparing its automated systems for the year 2000. Most Unpaid Assessments Are Not Receivables and Are Largely Uncollectible As reflected in the supplemental information to IRS' fiscal year 1997 Custodial Financial Statements, the unpaid assessments balance was about $214 billion as of September 30, 1997. This unpaid assessments balance has historically been referred to as IRS' taxes receivable or accounts receivable. However, a significant portion of this balance is not considered a receivable. Also, a substantial portion of the amounts considered receivables is largely uncollectible. Under federal accounting standards, unpaid assessments require taxpayer or court agreement to be considered federal taxes receivable. Assessments not agreed to by taxpayers or the courts are considered compliance assessments and are not considered federal taxes receivable. Assessments with little or no future collection potential are called write-offs. Figure 1 depicts the components of the unpaid assessments balance as of September 30, 1997. [Figure 1, dollars in billions: write-offs ($76), compliance assessments ($48), taxes receivable estimated to be uncollectible ($62), and taxes receivable estimated to be collectible ($28).] Of the $214 billion balance of unpaid assessments, $76 billion represents write-offs.
Write-offs principally consist of amounts owed by bankrupt or defunct businesses, including many failed financial institutions resolved by the Federal Deposit Insurance Corporation (FDIC) and the former Resolution Trust Corporation (RTC). As noted above, write-offs have little or no future collection potential. In addition, $48 billion of the unpaid assessments balance represents amounts that have not been agreed to by either the taxpayer or a court. Due to the lack of agreement, these compliance assessments are likely to have less potential for future collection than those unpaid assessments that are considered federal taxes receivable. The remaining $90 billion of unpaid assessments represent federal taxes receivable. About $62 billion (70 percent) of this balance is estimated to be uncollectible due primarily to the taxpayer's economic situation, such as individual taxpayers who are unemployed or have other financial problems. However, IRS may continue collection action for 10 years after the assessment or longer under certain conditions. Thus these accounts may still ultimately have some collection potential if the taxpayer's economic condition improves. About $28 billion, or about 30 percent, of federal taxes receivable is estimated to be collectible. Components of the collectible balance include installment agreements with estates and individuals, as well as relatively newer amounts due from individuals and businesses who have a history of compliance. It is also important to note that of the unpaid assessments balance, about $136 billion (over 60 percent) represents interest and penalties, as depicted in figure 2, which are largely uncollectible. [Figure 2: interest and penalties ($136 billion) as a share of the $214 billion unpaid assessments balance.] Interest and penalties are such a high percentage of the balance because IRS continues to accrue them through the 10-year statutory collection date, regardless of whether an account meets the criteria for financial statement recognition or has any collection potential. For example, interest and penalties continue to accrue on write-offs, such as FDIC and RTC cases, as well as on exam assessments where the taxpayers have not agreed to the validity of the assessments. The overall growth in unpaid assessments during fiscal year 1997 was wholly attributable to the accrual of interest and penalties. Success of IRS' Year 2000 Efforts Is Critical It is critical that IRS successfully prepare its automated systems in order to overcome the potential problems associated with the year 2000. The Year 2000 problem is rooted in the way dates are recorded and calculated in many computer systems. For the past several decades, systems have typically used two digits to represent the year in order to conserve on electronic data storage and reduce operating costs. With this two-digit format, however, the year 2000 is indistinguishable from the year 1900. As a result, system or application programs that use dates to perform calculations, comparisons, or sorting may generate incorrect results when working with years after 1999. IRS has underway one of the largest conversion efforts in the civilian sector. IRS has established a schedule to renovate its automated systems in five segments, with all renovation efforts scheduled for completion by January 1999 in order to allow a full year of operational testing. However, with less than 2 years remaining until the year 2000 arrives, the task of completing the conversion on time is formidable.
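To illustrate the nature of this date problem, the following is a minimal sketch in Python; it is not IRS code, and the two-digit assessment-year field and the interest-period calculation are assumptions made purely for illustration.

# Illustrative sketch only, not IRS code. The two-digit "assessment year"
# field and the interest-period calculation are assumed for the example.

def interest_years_two_digit(assessed_yy, current_yy):
    # Pre-2000 convention: only the last two digits of the year are stored.
    return current_yy - assessed_yy

def interest_years_four_digit(assessed_yyyy, current_yyyy):
    # Four-digit years remove the ambiguity between 1900 and 2000.
    return current_yyyy - assessed_yyyy

# An assessment made in 1997 with interest accruing through 2000:
print(interest_years_two_digit(97, 0))        # -97: "00" is read as 1900, so the result is nonsense
print(interest_years_four_digit(1997, 2000))  # 3: correct

A system that compares or sorts dates this way would, for example, treat a date falling in "00" as earlier than one falling in "99", which is the kind of incorrect calculation, comparison, and sorting result described above.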
If IRS is unable to make its automated systems Year 2000 compliant, IRS could be rendered unable to properly process tax returns, issue refunds, correctly calculate interest and penalties, effectively collect taxes, or prepare accurate financial statements and other financial reports. We are working with the Congress and the executive branch to monitor progress made by federal agencies and identify specific recommendations for resolving the Year 2000 problem, which we reported as a governmentwide high risk area and which the President has designated as a priority management objective. In addition to the weaknesses discussed above, we noted other, less significant matters involving IRS’ system of accounting controls and its operations which we will be reporting separately to IRS. Opinion on Custodial Financial Statements The Custodial Financial Statements, including the accompanying notes, present fairly, in all material respects, and in conformity with a comprehensive basis of accounting other than generally accepted accounting principles, as described in note 1, IRS’ custodial assets and liabilities and custodial activity. Although the weaknesses described above precluded IRS’ internal controls from achieving the internal control objectives discussed previously, we were nevertheless able to obtain reasonable assurance that the Custodial Financial Statements were reliable through the use of substantive audit procedures. However, misstatements may nevertheless occur in other financial information reported by IRS as a result of the internal control weaknesses described above. As discussed in the notes to the fiscal year 1997 Custodial Financial Statements, IRS has attempted, to the extent practical, to implement early the provisions of Statement of Federal Financial Accounting Standards (SFFAS) No. 7, Accounting for Revenue and Other Financing Sources and Concepts for Reconciling Budgetary and Financial Accounting. SFFAS No. 7 is not effective until fiscal year 1998. However, the requirement that this standard be fully implemented in fiscal year 1998 has significant implications for IRS and its fiscal year 1998 Custodial Financial Statements. The significant internal control and system weaknesses discussed earlier may affect IRS’ ability to implement this standard until corrective actions have fully resolved these weaknesses. For example, as discussed earlier, IRS currently does not capture information at the time of receipt of payments from the taxpayer on how such payments are to be applied to the various trust funds. Consequently, IRS is presently unable to report collections of tax revenue by specific tax type as envisioned in SFFAS No. 7 and OMB’s Format and Instructions for the Form and Content of the Financial Statements of the U.S. Government (September 2, 1997). Other provisions of SFFAS No. 7 will also be difficult for IRS to implement in the short term until the significant internal control and systems issues reported in prior audits and discussed above are resolved. 
Opinion on Management's Assertion About the Effectiveness of Internal Controls We evaluated IRS management's assertion about the effectiveness of its internal controls designed to safeguard assets against loss from unauthorized acquisition, use, or disposition; assure the execution of transactions in accordance with laws governing the use of budget authority and other laws and regulations that have a direct and material effect on the Custodial Financial Statements or are listed in OMB audit guidance and could have a material effect on the Custodial Financial Statements; and properly record, process, and summarize transactions to permit the preparation of reliable financial statements and to maintain accountability for assets. IRS management asserted that except for the material weaknesses in internal controls presented in the agency's fiscal year 1997 FIA report on compliance with the internal control and accounting standards, internal controls provided reasonable assurance that the above internal control objectives were satisfied during fiscal year 1997. Management made this assertion based upon criteria established under FIA and OMB Circular A-123, Management Accountability and Control. Our internal control work would not necessarily disclose material weaknesses not reported by IRS. However, we believe that IRS' internal controls, taken as a whole, were not effective in satisfying the control objectives discussed above during fiscal year 1997 because of the severity of the material weaknesses in internal controls described in this report, which were also cited by IRS in its fiscal year 1997 FIA report. Compliance With Laws and Regulations Except as noted above, our tests of compliance with selected provisions of laws and regulations disclosed no other instances of noncompliance that we consider to be reportable under generally accepted government auditing standards or OMB Bulletin 93-06. Under FFMIA and OMB Bulletin 98-04, our tests disclosed, as discussed above, that IRS' financial management systems do not substantially comply with the requirements for the following: federal financial management systems, applicable federal accounting standards, and the U.S. Government Standard General Ledger at the transaction level. However, the objective of our audit was not to provide an opinion on overall compliance with laws, regulations, and FFMIA requirements tested. Accordingly, we do not express such an opinion. Consistency of Other Information IRS' overview and supplemental information contain various data, some of which are not directly related to the Custodial Financial Statements. We do not express an overall opinion on this information. However, we compared this information for consistency with the Custodial Financial Statements and, based on our limited work, found no material inconsistencies. Objectives, Scope, and Methodology IRS management is responsible for preparing the annual Custodial Financial Statements in conformity with the basis of accounting described in note 1; establishing, maintaining, and assessing internal controls to provide reasonable assurance that the broad control objectives of FIA are met; and complying with applicable laws and regulations and FFMIA requirements.
We are responsible for obtaining reasonable assurance about whether (1) the Custodial Financial Statements are reliable (free of material misstatements and presented fairly, in all material respects, in conformity with the basis of accounting described in note 1), and (2) management’s assertion about the effectiveness of internal controls is fairly stated, in all material respects, based upon criteria established under the Federal Managers’ Financial Integrity Act of 1982 and OMB Circular A-123, Management Accountability and Control. We are also responsible for testing compliance with selected provisions of laws and regulations, for reporting on compliance with FFMIA requirements, and for performing limited procedures with respect to certain other information appearing in these annual Custodial Financial Statements. In order to fulfill these responsibilities, we examined, on a test basis, evidence supporting the amounts and disclosures in the Custodial Financial Statements; assessed the accounting principles used and significant estimates made by management in the preparation of the Custodial Financial Statements; evaluated the overall presentation of the Custodial Financial Statements; obtained an understanding of internal controls related to safeguarding assets, compliance with laws and regulations, including execution of transactions in accordance with budget authority and financial reporting; tested relevant internal controls over safeguarding, compliance, and financial reporting and evaluated management’s assertion about the effectiveness of internal controls; tested compliance with selected provisions of the following laws and regulations: Internal Revenue Code (appendix I), Debt Collection Act, as amended {31 U.S.C. § 3720A}, Government Management Reform Act of 1994 {31 U.S.C. § 3515, 3521 (e)-(f)}, and Federal Managers’ Financial Integrity Act of 1982 {31 U.S.C. § 3512(d)}; tested whether IRS’ financial management systems substantially comply with the requirements of the Federal Financial Management Improvement Act of 1996, including Federal Financial Management Systems Requirements, applicable federal accounting standards, and the U.S. Government Standard General Ledger at the transaction level. We did not evaluate all internal controls relevant to operating objectives as broadly defined by FIA, such as those controls relevant to preparing statistical reports and ensuring efficient operations. We limited our internal control testing to those controls necessary to achieve the objectives outlined in our opinion on management’s assertion about the effectiveness of internal controls. As the auditor of IRS’ Custodial Financial Statements, we are reporting under FFMIA on whether the agency’s financial management systems substantially comply with the Federal Financial Management Systems Requirements, applicable federal accounting standards, and the U.S. Government Standard General Ledger at the transaction level. In making this report, we considered the implementation guidance for FFMIA issued by OMB on September 9, 1997. The IRS’ Custodial Financial Statements do not reflect the potential impact of any excess of taxes due in accordance with the Internal Revenue Code, over taxes actually assessed by IRS, often referred to as the “tax gap.” SFFAS No. 7 specifically excludes the “tax gap” from financial statement reporting requirements. Consequently, the Custodial Financial Statements do not consider the impact of the tax gap. 
We performed our work in accordance with generally accepted government auditing standards and OMB Bulletin 93-06. Agency Comments and Our Evaluation In commenting on a draft of this report, IRS stated that it generally agreed with the findings and conclusions in the report. IRS acknowledged the internal control weaknesses and noncompliance with laws and regulations we cited, and discussed initiatives underway to address many of the issues raised in the report. We will evaluate the effectiveness of IRS' corrective actions as part of our audit of IRS' fiscal year 1998 Custodial Financial Statements. However, we do not agree with IRS' assertion that it needs a change in legislation to obtain information from taxpayers at the time of remittance to properly allocate excise tax payments to the various trust funds. We recognize that resolution of many of these issues could take several years. IRS agreed with our conclusion that its financial management systems do not comply with the Federal Financial Management Systems Requirements and the U.S. Government Standard General Ledger requirements of the Federal Financial Management Improvement Act of 1996. However, IRS believes that its current accounting and financial reporting process complies with applicable federal accounting standards. OMB's September 9, 1997, memorandum on implementation guidance for FFMIA specifies two indicators that must be present to indicate compliance with federal accounting standards. First, the agency generally should receive an unqualified opinion on its financial statements. Second, there should be no material weaknesses in internal controls that affect the agency's ability to prepare auditable financial statements and related disclosures. As we reported, IRS received an unqualified opinion on its financial statements. However, as discussed in this report, we identified six material weaknesses in IRS' internal controls. As a result of these weaknesses, IRS' financial management systems are unable to produce reliable financial statements and related disclosures without extensive ad hoc procedures and tens of billions of dollars in adjustments. Consequently, IRS' financial management systems are not in compliance with applicable federal accounting standards requirements. IRS' written comments are included in appendix II.
Principal Financial Statements
Provisions of the Internal Revenue Code Tested for the Fiscal Year 1997 Audit
Comments From the Internal Revenue Service
Major Contributors to This Report: Accounting and Information Management Division, Washington, D.C.; Atlanta Field Office; Dallas Field Office; San Francisco Field Office; Los Angeles Field Office; Seattle Field Office; Office of General Counsel (Thomas Armstrong, Assistant General Counsel; Andrea Levine, Attorney).
Pursuant to a legislative requirement, GAO examined the Internal Revenue Service's (IRS) custodial financial statements for the fiscal year (FY) ended September 30, 1997. GAO noted that: (1) the IRS custodial financial statements were reliable in all material respects; (2) IRS management's assertion about the effectiveness of internal controls stated that except for the material weaknesses in internal controls presented in the agency's FY 1997 Federal Managers' Financial Integrity Act (FIA) report, internal controls were effective in satisfying the following objectives: (a) safeguarding assets from material loss; (b) assuring material compliance with laws governing the use of budget authority and with other relevant laws and regulations; and (c) assuring that there were no other material misstatements in the custodial financial statements; (3) however, GAO found that IRS' internal controls, taken as a whole, were not effective in satisfying these objectives; (4) due to the severity of the material weaknesses in IRS' financial accounting and reporting controls, all of which were reported in IRS' FY 1997 FIA report, extensive reliance on ad hoc programming and analysis was needed to develop financial statement line item balances, and the resulting amounts needed material audit adjustments to produce reliable custodial financial statements; (5) GAO's tests disclosed one reportable instance of noncompliance with the selected provisions of laws and regulations it tested; and (6) IRS' financial management systems do not substantially comply with the requirements of the Federal Financial Management Improvement Act of 1996.
GAO_GAO-10-1063T
Background The National Flood Insurance Act of 1968 established NFIP as an alternative to providing direct assistance after floods. NFIP, which provides government-guaranteed flood insurance to homeowners and businesses, was intended to reduce the federal government’s escalating costs for repairing flood damage after disasters. FEMA, which is within the Department of Homeland Security (DHS), is responsible for the oversight and management of NFIP. Since NFIP’s inception, Congress has enacted several pieces of legislation to strengthen the program. The Flood Disaster Protection Act of 1973 made flood insurance mandatory for owners of properties in vulnerable areas who had mortgages from federally regulated lenders and provided additional incentives for communities to join the program. The National Flood Insurance Reform Act of 1994 strengthened the mandatory purchase requirements for owners of properties located in special flood hazard areas (SFHA) with mortgages from federally regulated lenders. Finally, the Bunning-Bereuter-Blumenauer Flood Insurance Reform Act of 2004 authorized grant programs to mitigate properties that experienced repetitive flooding losses. Owners of these repetitive loss properties who do not mitigate face higher premiums. To participate in NFIP, communities agree to enforce regulations for land use and new construction in high-risk flood zones and to adopt and enforce state and community floodplain management regulations to reduce future flood damage. Currently, more than 20,000 communities participate in NFIP. NFIP has mapped flood risks across the country, assigning flood zone designations based on risk levels, and these designations are a factor in determining premium rates. NFIP offers two types of flood insurance premiums: subsidized and full-risk. The National Flood Insurance Act of 1968 authorizes NFIP to offer subsidized premiums to owners of certain properties. These subsidized premium rates, which represent about 35 to 40 percent of the cost of covering the full risk of flood damage to the properties, account for about 22 percent of all NFIP policies as of September 2010. To help reduce or eliminate the long-term risk of flood damage to buildings and other structures insured by NFIP, FEMA has used a variety of mitigation efforts such as elevation, relocation, and demolition. Despite these efforts, the inventories of repetitive loss properties and policies with subsidized premium rates have continued to grow. In response to the magnitude and severity of the losses from the 2005 hurricanes, Congress increased NFIP’s borrowing authority from the Treasury to $20.775 billion. As of August 2010, FEMA owed Treasury $18.8 billion, and the program as currently designed will likely not generate sufficient revenues to repay this debt. NFIP’s Financial Challenges Have Increased the Federal Government’s and U.S. Taxpayers’ Financial Exposure from Flood Losses By design, NFIP is not an actuarially sound program, in part because it does not operate like many private insurance companies. As a government program, its primary public policy goal is to provide flood insurance in flood-prone areas to property owners who otherwise would not be able to obtain it. Yet NFIP is also expected to cover its claims losses and operating expenses with the premiums it collects, much like a private insurer. In years when flooding has not been catastrophic, NFIP has generally managed to meet these competing goals. 
In years of catastrophic flooding, however, and especially during the 2005 hurricane season, it has not. NFIP's operations differ from those of most private insurers in a number of ways. First, it operates on a cash-flow basis and has the authority to borrow from Treasury. As of August 2010, NFIP owed approximately $18.8 billion to Treasury, primarily as a result of loans that the program received to pay claims from the 2005 hurricane season. NFIP will likely not be able to meet its interest payments in most years, and the debt may continue to grow because the program may need to borrow both to make interest payments in some years and to cover potential future flood losses. Also unlike private insurance companies, NFIP assumes all the risk for the policies it sells. Private insurers typically retain only part of the risk that they accept from policyholders, ceding a portion of the risk to reinsurers (insurance for insurers). This mechanism is particularly important in the case of insurance for catastrophic events, because the availability of reinsurance allows an insurer to limit the possibility that it will experience losses beyond its ability to pay. NFIP's lack of reinsurance, combined with the lack of a structure to build a capital surplus, transfers much of the financial risk of flooding to Treasury and ultimately the taxpayer. NFIP is also required to accept virtually all applications for insurance, unlike private insurers, which may reject applicants for a variety of reasons. For example, FEMA cannot deny insurance on the basis of frequent losses. As a result, NFIP is less able to offset the effects of adverse selection—that is, the phenomenon that those who are most likely to purchase insurance are also the most likely to experience losses. Adverse selection may lead to a concentration of policyholders in the riskiest areas. This problem is further compounded by the fact that those at greatest risk are required to purchase insurance from NFIP if they have a mortgage from a federally regulated lender. Finally, by law, FEMA is prevented from raising rates on each flood zone by more than 10 percent each year. While most states regulate premium prices for private insurance companies on other lines of insurance, they generally do not set limits on premium rate increases, instead focusing on whether the resulting premium rates are justified by the projected losses and expenses. NFIP's Premium Rates Do Not Reflect the Full Risk of Flooding As we have seen, NFIP does not charge rates that reflect the full risk of flooding. NFIP could be placed on a sounder fiscal footing by addressing several elements of its premium structure. For example, as we have pointed out in previous reports, NFIP provides subsidized and grandfathered rates that do not reflect the full risk of potential flood losses to some property owners, operates in part with unreliable and incomplete data on flood risks that make it difficult to set accurate rates, and has not been able to overcome the challenge of repetitive loss properties. Subsidized rates, which are required by law, are perhaps the best-known example of premium rates that do not reflect the actual risk of flooding. These rates, which were authorized when the program began, were intended to help property owners during the transition to full-risk rates. But today, nearly one out of four NFIP policies continues to be based on a subsidized rate.
These rates allow policyholders with structures that were built before floodplain management regulations were established in their communities to pay premiums that represent about 35 to 40 percent of the actual risk premium. Moreover, FEMA estimates that properties covered by policies with subsidized rates experience as much as five times more flood damage than compliant new structures that are charged full-risk rates. As we have pointed out, the number of policies receiving subsidized rates has grown steadily in recent years and without changes to the program will likely continue to grow, increasing the potential for future NFIP operating deficits. Further, potentially outdated and inaccurate data about flood probabilities and damage claims, as well as outdated flood maps, raise questions about whether full-risk premiums fully reflect the actual risk of flooding. First, some of the data used to estimate the probability of flooding have not been updated since the 1980s. Similarly, the claims data used as inputs to the model may be inaccurate because of incomplete claims records and missing data. Further, some of the maps FEMA uses to set premium rates remain out of date despite recent modernization efforts. For instance, as FEMA continues these modernization efforts, it does not account for ongoing and planned development, making some maps outdated shortly after their completion. Moreover, FEMA does not map for long-term erosion, further increasing the likelihood that data used to set rates are inaccurate. FEMA also sets flood insurance rates on a nationwide basis, failing to account for many topographic factors that are relevant to flood risk for individual properties. Some patterns in historical claims and premium data suggest that NFIP's rates may not accurately reflect individual differences in properties' flood risk. Not accurately reflecting the actual risk of flooding increases the risk that full-risk premiums may not be sufficient to cover future losses and adds to concerns about NFIP's financial stability. As mentioned earlier, we are currently reviewing FEMA's flood mapping program. Specifically, we are trying to determine the extent to which FEMA ensures that flood maps accurately reflect flood risk and the methods FEMA uses to promote community acceptance of flood maps. We plan to issue this report in December 2010. Further contributing to NFIP's financial challenges, FEMA made a policy decision to allow certain properties remapped into riskier flood zones to keep their previous lower rates. Like subsidized rates, these "grandfathered" rates do not reflect the actual risk of flooding to the properties and do not generate sufficient premiums to cover expected losses. FEMA officials told us that the decision to grandfather rates was based on considerations of equity, ease of administration, and goals of promoting floodplain management. However, FEMA does not collect data on grandfathered properties or measure their financial impact on the program. As a result, it does not know how many such properties exist, their exact location, or the volume of losses they generate. FEMA officials stated that beginning in October 2010 they would indicate on all new policies whether or not they were grandfathered. However, they would still be unable to identify grandfathered properties among existing policies.
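Both subsidized and grandfathered rates leave a gap between the premium collected and the full-risk premium needed to cover expected losses. To make that gap concrete, the following is a minimal sketch in Python; the dollar amount is a hypothetical assumption chosen for illustration, and only the 35 to 40 percent relationship between subsidized and full-risk premiums comes from the estimates cited above.

# Hypothetical illustration of the subsidy gap. The full-risk premium amount
# is an assumption; only the 35 to 40 percent ratio comes from the report.
full_risk_premium = 1000.00   # assumed annual full-risk premium for one policy
for ratio in (0.35, 0.40):
    subsidized_premium = full_risk_premium * ratio
    shortfall = full_risk_premium - subsidized_premium
    print(f"at {ratio:.0%} of full risk: policyholder pays ${subsidized_premium:,.2f}, "
          f"leaving an expected shortfall of ${shortfall:,.2f} per policy per year")

Because nearly one in four NFIP policies is subsidized, shortfalls of this kind accumulate across the portfolio and surface as operating deficits in years with significant flood losses.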
As FEMA continues its efforts to modernize flood maps across the country, it has continued to face resistance from communities and homeowners when remapping properties into higher-risk flood zones with higher rates. As a result, FEMA has often grandfathered in previous premium rates that are lower than the remapped rates. However, homeowners who are remapped into high-risk areas and do not currently have flood insurance may be required to purchase it at the full-risk rate. In reauthorizing NFIP in 2004, Congress noted that repetitive loss properties—those that have had two or more flood insurance claims payments of $1,000 or more over 10 years—constituted a significant drain on NFIP resources. These properties account for about 1 percent of all policies but are estimated to account for up to 30 percent of all NFIP losses. Not all repetitive loss properties are part of the subsidized property inventory, but a high proportion receive subsidized rates, further contributing to NFIP's financial risks. While Congress has made efforts to target these properties, the number of repetitive loss properties has continued to grow, making them an ongoing challenge to NFIP's financial stability. Despite Its Financial Challenges, NFIP Has Experienced Some Positive Developments According to FEMA, expanded marketing efforts through its FloodSmart campaign have contributed to an increase in NFIP policies. This program was designed to educate and inform partners, stakeholders, property owners, and renters about insuring their homes and businesses against flood damage. Since the start of the FloodSmart campaign in 2004, NFIP has seen policy growth of more than 24 percent, and as of June 2010, had 5.6 million policies in force. Moreover, according to FEMA, despite the economic downturn, both policy sales and retention have grown. In addition, NFIP's collected premiums have risen 24 percent from December 2006 to June 2010. This increase, combined with a relatively low loss experience in recent years, has enabled FEMA to make nearly $600 million in payments to Treasury with no additional borrowing since March 2009. FEMA has also adjusted its expense reimbursement formula. While these are all encouraging developments, FEMA is still unlikely to ever pay off its current $18.8 billion debt. FEMA's Operational and Management Issues May Further Limit Progress in Achieving NFIP Goals We have identified a number of operational issues that affect NFIP, including weaknesses in FEMA's oversight of write-your-own (WYO) insurers and shortcomings in its oversight of other contractors, as well as new issues from ongoing work. For example, we found that FEMA does not systematically consider actual flood insurance expense information when determining the amount it pays WYO insurers for selling and servicing flood insurance policies and adjusting claims. Instead, FEMA has used proxies, such as average industry operating expenses for property insurance, to determine the rates at which it pays these insurers, even though their actual flood insurance expense information has been available since 1997. Because FEMA does not systematically consider these data when setting its payment rates, it cannot effectively estimate how much insurers are spending to carry out their contractual obligations to FEMA. Further, FEMA does not compare the WYO insurers' actual expenses to the payments they receive each year and thus cannot determine whether the payments are reasonable in terms of expenses and profits.
When GAO compared payments FEMA made to six WYO insurers to their actual expenses for calendar years 2005 through 2007, we found that the payments exceeded actual expenses by $327.1 million, or 16.5 percent of total payments made. By considering actual expense information, FEMA could provide greater transparency and accountability over payments to the WYO insurers and potentially save taxpayers’ funds. FEMA also has not aligned its bonus structure for WYO insurers with NFIP goals such as increasing penetration in low-risk flood zones and among homeowners in all zones that do not have mortgages from federally regulated lenders. FEMA uses a broad-based distribution formula that primarily rewards companies that are new to NFIP, and can relatively easily increase their percentage of net policies from a small base. We also found that most WYO insurers generally offered flood insurance when it was requested but did not strategically market the product as a primary insurance line. FEMA has set only one explicit marketing goal—to increase policy growth by 5 percent each year—and does not review the WYO insurers’ marketing plans. It therefore lacks the information needed to assess the effectiveness of either the WYO insurers’ efforts to increase participation or the bonus program itself. For example, FEMA does not know the extent to which sales increases may reflect external factors such as flood events or its own FloodSmart marketing campaign rather than any effort on the part of the insurers. Having intermediate targeted goals could also help expand program participation, and linking such goals directly to the bonus structure could help ensure that NFIP and WYO goals are in line with each other. Finally, FEMA has explicit financial control requirements and procedures for the WYO program but has not implemented all aspects of its Financial Control Plan. FEMA’s Financial Control Plan provides guidance for WYO insurers to help ensure compliance with the statutory requirements for NFIP. It contains several checks and balances to help ensure that taxpayers’ funds are spent appropriately. For an earlier report, we reviewed 10 WYO insurers and found that while FEMA performed most of the required biennial audits and underwriting and claims reviews required under the plan, it rarely or never implemented most of the required audits for cause, reviews of state insurance department audits, or marketing, litigation, and customer service operational reviews. In addition, FEMA did not systematically track the outcomes of the various audits, inspections, and reviews that it performed. We also found that multiple units had responsibility for helping ensure that WYO insurers complied with each component of the Financial Control Plan; that FEMA did not maintain a single, comprehensive monitoring system that would allow it to ensure compliance with all components of the plan; and that there was no centralized access to all of the documentation produced. Because FEMA does not implement all aspects of the Financial Control Plan, it cannot ensure that WYOs are fully complying with program requirements. In another review, we found that weak internal controls impaired FEMA’s ability to maintain effective transaction-level accountability with WYO insurers from fiscal years 2005 through 2007, a period that included the financial activity related to the 2005 Gulf Coast hurricanes. NFIP had limited assurance that its financial data for fiscal years 2005 to 2007 were accurate. 
This impaired data reliability resulted from weaknesses at all three levels of the NFIP transaction accountability and financial reporting process. First, at the WYO insurer level, claims loss files did not include the documents necessary to support the claims, and some companies filed reports late, undermining the reliability of the data they did report. Second, contractor-level internal control activities were ineffective in verifying the accuracy of the data that WYO insurers submitted, such as names and addresses. Lastly, at the agency level, financial reporting process controls were not based on transaction-level data. Instead, FEMA relied primarily on summary data compiled using error-prone manual data entry. FEMA's Oversight of Non-WYO Contractor Activities Is Also Lacking Also in a previous report, we pointed out that FEMA lacked records of monitoring activities for other contractors, inconsistently followed its procedures for monitoring these contractors, and did not coordinate contract monitoring responsibilities for the two major contracts we reviewed. At FEMA, a Contracting Officer's Technical Representative (COTR) and staff (referred to as "monitors") are responsible for, respectively, ensuring compliance with contract terms and regularly monitoring and reporting on the extent to which NFIP contractors meet standards in performance areas specified in the contracts. Internal control standards for the federal government state that records should be properly managed and maintained. But FEMA lacked records for the majority of the monitoring reports we requested and did not consistently follow the monitoring procedures for preparing, reviewing, and maintaining monitoring reports. Further, FEMA offices did not coordinate information and actions relating to contractors' deficiencies and payments, and in some cases key officials were unaware of decisions on contractors' performance. In particular, our review of monitoring reports for one contract revealed a lack of coordination between the COTR and the contracting officer. As a result, FEMA could not ensure that the contractor had adhered to the contract's requirements and lacked information critical to effective oversight of key NFIP data collection, reporting, and insurance functions. Given NFIP's reliance on contractors, it is important that FEMA have in place adequate controls that are consistently applied to all contracts. Consistent with our findings in prior work, the DHS inspector general has also identified weaknesses in FEMA's internal controls and financial reporting related to the NFIP. Our ongoing work reviewing FEMA's management of NFIP identifies a number of steps that FEMA has taken that are designed to improve the agency's oversight of contractors. These efforts include the implementation of an acquisition review board and the creation of a handbook for COTRs. While these are positive steps, not enough time has passed to evaluate their effectiveness. FEMA Continues to Lack an Effective System to Manage Flood Insurance Policy and Claims Data To manage the flood policy and claims information that it obtains from insurance companies, NFIP's Bureau and Statistical Agent (BSA) relies on a flood insurance management system from the 1980s that is difficult and costly to sustain and that does not adequately support NFIP's mission needs. This system consists of over 70 interfaced applications that utilize monthly tape and batch submissions of policy and claims data from insurance companies.
The system also provides limited access to NFIP data. Further, identifying and correcting errors in submission requires between 30 days and 6 months and the general claims processing cycle itself is 2 to 3 months. To address the limitations of this system, NFIP launched a program in 2002 to acquire and implement a modernization and business improvement system, known as NextGen. As envisioned, NextGen was to accelerate updates to information obtained from insurance companies, identify errors before flood insurance policies went into effect, and enable FEMA to expedite business transactions and responses to NFIP claims when policyholders required urgent support. As such, the system would support the needs of a wide range of NFIP stakeholders, including FEMA headquarters and regional staff, WYO insurers, vendors, state hazard mitigation officers, and NFIP state coordinators. As part of our ongoing review of FEMA’s management of NFIP, we found that despite having invested roughly $40 million over 7 years, FEMA has yet to implement NextGen. Initial versions of NextGen were first deployed for operational use in May 2008. However, shortly thereafter system users reported major problems with the system, including significant data and processing errors. As a result, use of NextGen was halted, and the agency returned to relying exclusively on its mainframe-based legacy system while NextGen underwent additional testing. In late 2009, after this testing showed that the system did not meet user needs and was not ready to replace the legacy system, further development and deployment of NextGen was stopped, and FEMA’s Chief Information Officer began an evaluation to determine what, if anything, associated with the system could be salvaged. This evaluation is currently under way, and a date for completing it has yet to be established. DHS and the Office of Management and Budget recently designated this effort as high-risk. Our ongoing review of FEMA’s management of NFIP includes identifying lessons learned about how NextGen was defined, developed, tested, and deployed, including weaknesses in requirements development and management, test management, risk management, executive oversight, and program office staffing that have collectively contributed to NextGen’s failure. In completing its evaluation and deciding how to proceed in meeting its policy and claims processing needs, FEMA could benefit by correcting these weaknesses. In the interim, the agency continues to rely on its outdated legacy system, and thus does not have the kind of robust analytical support and information needed to help address the reasons that NFIP remains on GAO’s high-risk list of federal programs. Addressing NFIP’s Challenges Would Require Actions from FEMA and Congress To address the challenges NFIP faces, FEMA would have to address its own operational and management challenges. Further, legislative reform would be needed to address structural issues. However, as you know, addressing many of these issues involves public policy trade-offs that would have to be made by Congress. In July 2010 the House of Representatives passed the Flood Insurance Reform Priorities Act, which if enacted would make a number of changes to NFIP. Moreover, part of this process requires determining whether NFIP is or should be structured as an insurance program and how much liability the government can and is willing to accept. 
For example, if Congress wants to structure NFIP as an insurance company and limit borrowing from Treasury in future high- or catastrophic loss years, NFIP would have to build a capital surplus fund. Our prior work has shown that building such a fund would require charging premium rates that, in some cases, could be more than double or triple current rates and would take a number of years without catastrophic losses to implement. Additionally, while private insurers generally use reinsurance to hedge their risk of catastrophic losses, it is unclear whether the private reinsurance market would be willing to offer coverage to NFIP. In the absence of reinsurance and a surplus fund, Treasury will effectively continue to act as the reinsurer for NFIP and be the financial backstop for the program. Premium Rates Could Be Made More Reflective of Flood Risk Making premium rates more reflective of flood risk would require actions by FEMA and Congress. Because subsidized premium rates are required by law, addressing their associated costs would require congressional action. As previously reported, two potential options would be to eliminate or reduce the use of subsidies over time, or target them based on need. However, these options involve trade-offs. For example, eliminating or reducing the subsidies would help ensure that premium rates more accurately reflect the actual risk of loss and could encourage mitigation efforts. But the resulting higher premiums could lead some homeowners to discontinue or not purchase coverage, thus reducing participation in NFIP and potentially increasing the costs to taxpayers of providing disaster assistance in the event of a catastrophe. Targeting subsidies based on need is an approach used by other federal programs and could help ensure that those needing the subsidy would have access to it and retain their coverage. Unlike other agencies that provide—and are allocated funds for—traditional subsidies, NFIP does not receive an appropriation to pay for shortfalls in collected premiums caused by its subsidized rates. However, one option to maintain the subsidies but improve NFIP’s financial stability would be to rate all policies at the full-risk rate and to appropriate subsidies for qualified policyholders. In this way, the cost of such subsidies would be more transparent, and policyholders would be better informed of their flood risk. Depending on how such a program was implemented, NFIP might be able to charge more participants rates that more accurately reflect their risk of flooding. However, raising premium rates for some participants could also decrease program participation, and low-income property owners and renters could be discouraged from participating in NFIP if they were required to prove that they met the requirements for a subsidy. FEMA might also face challenges in implementing this option in the midst of other ongoing operational and management challenges. NFIP’s rate-setting process for full-risk premiums may not ensure that those premium rates reflect the actual risk of flooding and therefore may increase NFIP’s financial risk. Moreover, FEMA’s rate-setting process for subsidized properties depends, in part, on the accuracy of the full-risk rates, raising concerns about how subsidized rates are calculated as well. To address these concerns, we have identified actions that FEMA could take. 
For example, we recommended that FEMA take steps to help ensure that its rate-setting methods and the data it uses to set rates result in full-risk premium rates that accurately reflect the risk of losses from flooding. In particular, we pointed out that these steps should include verifying the accuracy of flood probabilities, damage estimates, and flood maps, and reevaluating the practice of aggregating risks across zones. Similarly, because NFIP allows grandfathered rates for those remapped into high-risk flood zones, FEMA is also in a position to address some of the challenges associated with this practice. FEMA could end grandfathered rates, but it decided to allow grandfathering after consulting with Congress, its oversight committees, and other stakeholders and considering issues of equity, fairness, and the goal of promoting floodplain management. We recommended that the agency take steps both to ensure that information was collected on the location, number, and losses associated with existing and newly created grandfathered properties in NFIP and to analyze the financial impact of these properties on the flood insurance program. With such information, FEMA and Congress will be better informed on the extent to which these rates contribute to NFIP's financial challenges. Another statutory requirement that could be revisited is the 10-percent cap on rate increases. As with all the potential reform options, determining whether such action is warranted would necessitate weighing the law's benefits—including limiting financial hardship to policyholders—against the benefits that increasing or removing such limits would provide to NFIP, Treasury, and ultimately the taxpayer. However, as long as caps on rate increases remain, FEMA will continue to face financial challenges. Solutions for addressing the impact of repetitive loss properties would also require action by both FEMA and Congress. For example, we have reported that one option for Congress would be to substantially expand mitigation efforts and target these efforts toward the highest-risk properties. Mitigation criteria could be made more stringent, for example, by requiring all insured properties that have filed two or more flood claims (even for small amounts) to mitigate, denying insurance to property owners who refuse or do not respond to a mitigation offer, or some combination of these approaches. While these actions would help reduce losses from flood damage and could ultimately limit costs to taxpayers by decreasing the number of subsidized properties, they would require increased funding for FEMA's mitigation programs to elevate, relocate, or demolish the properties, would be costly to taxpayers, and could take years to complete. Congress could also consider changes to address loopholes in mitigation and repurchase requirements that allow policyholders to avoid mitigating by simply not responding to FEMA's requests that they do so. FEMA could be required to either drop coverage for such properties or use eminent domain to seize them if owners fail to respond to FEMA's mitigation requests. Moreover, Congress could streamline the various mitigation grant programs to make them more efficient and effective. FEMA Could Take Further Actions to Help Address Operational and Management Challenges Over the last several years, we have made many recommendations for actions that FEMA could take to improve its management of NFIP.
FEMA has implemented some recommendations, including among other things, introducing a statistically valid method for sampling flood insurance claims for review, establishing a regulatory appeals process for policyholders, and ensuring that WYO insurance agents meet minimum education and training requirements. FEMA has also taken steps to make analyzing the overall results of claims adjustments easier after future flood events. The efforts will help in determining the number and type of claims adjustment errors made and deciding whether new, cost-efficient methods for adjusting claims that were introduced after Hurricane Katrina are feasible to use after other flood events. However, as mentioned previously, many of our other previous recommendations have not yet been implemented. For example, we have recommended that FEMA: Address challenges to oversight of the WYO program, specifically the lack of transparency of and accountability for the payments FEMA makes to WYO insurers, by determining in advance the amounts built into the payment rates for estimated expenses and profit, annually analyzing the amounts of actual expenses and profit in relation to the estimated amounts used in setting payment rates, and by immediately reassessing the practice of paying WYO insurers an additional 1 percent of written premiums for operating expenses. Take steps to better oversee WYO insurers and ensure that they are in compliance with statutory requirements for NFIP and that taxpayers’ funds are spent appropriately by consistently following the Financial Control Plan and ensuring that each component is implemented; ensuring that any revised Financial Control Plan covers oversight of all functions of participating WYO insurers, including customer service and litigation expenses; systematically tracking insurance companies’ compliance with and performance under each component of the Financial Control Plan; and ensuring centralized access to all audits, reviews, and data analyses performed for each WYO insurer under the Financial Control Plan. Improve NFIP’s transaction-level accountability and assure that financial reporting is accurate and that insurance company operations conform to program requirements by augmenting NFIP policies to require contractors to develop procedures for analyzing financial reports in relation to the transaction-level information that WYO insurers submit for statistical purposes; revising required internal control activities for contractors to provide for verifying and validating the reliability of WYO-reported financial information based on a review of a sample of the underlying transactions or events; and obtaining verification that these objectives have been met through independent audits of the WYO insurers. 
Address contract and management oversight issues that we have identified in previous reports, including determining the feasibility of integrating and streamlining numerous existing NFIP financial reporting processes to reduce the risk of errors inherent in the manual recording of accounting transactions into multiple systems; establishing and implementing procedures that require the review of available information, such as the results of biennial audits, operational reviews, and claim reinspections to determine whether the targeted audits for cause should be used; establishing and implementing procedures to schedule and conduct all required operational reviews within the prescribed 3-year period; and establishing and implementing procedures to select statistically representative samples of all claims as a basis for conducting reinspections of claims by general adjusters. Address challenges to oversight of contractor activities, including implementing processes to ensure that monitoring reports are submitted on time and systematically reviewed and maintained by the COTR and the Program Management Office; ensuring that staff clearly monitor each performance standard the contractor is required to meet in the specified time frames and clearly link monitoring reports and performance areas; implementing written guidance for all NFIP-related contracts on how to consistently handle the failure of a contractor to meet performance standards; establishing written policies and procedures governing coordination among FEMA officials and offices when addressing contractor deficiencies; and ensuring that financial disincentives are appropriately and consistently applied. Building on our prior work and these recommendations, we are in the process of conducting a comprehensive review of FEMA’s overall management of NFIP that could help FEMA develop a roadmap for identifying and addressing many of the root causes of its operational and management challenges. This review focuses on a wide range of internal management issues including acquisition, contractor oversight, information technology (NextGen), internal controls, human capital, budget and resources, document management, and financial management. While our work is ongoing, we have observed some positive developments in the agency’s willingness to begin to acknowledge its management issues and the need to address them. FEMA has also taken steps to improve our access to key NFIP staff and information by providing us with an on-site office at one of FEMA’s locations, facilitating our ability to access and review documents. In addition, in April 2010 FEMA staff initiated a meeting with GAO to discuss all outstanding recommendations related to NFIP and the actions they planned to take to address them. We are in the process of obtaining and evaluating documentation related to these actions. Recent Proposals Could Provide Some Benefits but Also Raise Concerns As part of our past work, we have also evaluated other proposals related to NFIP. Each of those proposals has potential benefits as well as challenges. In a previous report, we discussed some of the challenges associated with implementing a combined federal flood and wind insurance program. While such a program could provide coverage for wind damage to those unable to obtain it in the private market and simplify the claims process for some property owners, it could also pose several challenges. 
For example, FEMA would need to determine wind hazard prevention standards, adapt existing programs to accommodate wind coverage, create a new rate-setting process, raise awareness of the program, enforce new building codes, and put staff and procedures in place. FEMA would also need to determine how to pay claims in years with catastrophic losses, develop a plan to respond to potential limited participation and adverse selection, and address other trade-offs, including the potential for delays in reimbursing participants, litigation, lapses in coverage, underinsured policyholders, and larger-than-expected losses. As we have previously reported, private business interruption coverage for flood damage is expensive and is generally purchased only by large companies. Adding business interruption insurance to NFIP could help small businesses obtain coverage that they could not obtain in the private market, but NFIP currently lacks resources and expertise in this area. Adding business interruption insurance could increase NFIP's existing debt and potentially amplify its ongoing management and financial challenges. Insurers told us that underwriting this type of coverage, properly pricing the risk, and adjusting claims were complex. Finally, we have reported that creating a catastrophic loss fund to pay larger-than-average annual losses would be challenging for several reasons. For example, NFIP's debt to Treasury would likely prevent NFIP from ever being able to contribute to such a fund. Further, such a fund might not eliminate NFIP's need to borrow for larger-than-expected losses that occurred before the fund was fully financed. Building a fund could also require significant premium rate increases, potentially reducing participation in NFIP. Closing Comments FEMA faces a number of ongoing challenges in managing and administering NFIP that, if not addressed, will continue to work against improving the program's long-term financial condition. As you know, improving NFIP's financial condition involves a set of highly complex, interrelated issues that are likely to involve many trade-offs and have no easy solutions, particularly when the solutions to problems involve balancing the goals of charging rates that reflect the full risk of flooding and encouraging broad participation in the program. In addition, addressing NFIP's current challenges will require the cooperation and participation of many stakeholders. As we noted when placing NFIP on the high-risk list in 2006, comprehensive reform will likely be needed to address the financial challenges facing the program. In addressing these financial challenges, FEMA will also need to address a number of operational and management challenges before NFIP can be eligible for removal from the high-risk list. Our previous work has identified many of the necessary actions that FEMA should take, and preliminary observations from our ongoing work have revealed additional operational and management issues. By addressing both the financial challenges and the operational and management issues, NFIP will be in a much stronger position to achieve its goals and ultimately to reduce its burden on the taxpayer. Chairman Dodd and Ranking Member Shelby, this concludes my prepared statement. I would be pleased to respond to any of the questions you or other members of the Committee may have at this time. 
GAO Contact and Staff Acknowledgments Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. For further information about this testimony, please contact Orice Williams Brown at (202) 512-8678 or [email protected]. This statement was prepared under the direction of Patrick Ward. Key contributors were Tania Calhoun, Emily Chalmers, Nima Patel Edwards, Elena Epps, Christopher Forys, Randy Hite, Tonia Johnson, and Shamiah Kerney. Related GAO Products National Flood Insurance Program: Continued Actions Needed to Address Financial and Operational Issues. GAO-10-631T. Washington, D.C.: April 21, 2010. Financial Management: Improvements Needed in National Flood Insurance Program's Financial Controls and Oversight. GAO-10-66. Washington, D.C.: December 22, 2009. Flood Insurance: Opportunities Exist to Improve Oversight of the WYO Program. GAO-09-455. Washington, D.C.: August 21, 2009. Results-Oriented Management: Strengthening Key Practices at FEMA and Interior Could Promote Greater Use of Performance Information. GAO-09-676. Washington, D.C.: August 17, 2009. Information on Proposed Changes to the National Flood Insurance Program. GAO-09-420R. Washington, D.C.: February 27, 2009. High-Risk Series: An Update. GAO-09-271. Washington, D.C.: January 2009. Flood Insurance: Options for Addressing the Financial Impact of Subsidized Premium Rates on the National Flood Insurance Program. GAO-09-20. Washington, D.C.: November 14, 2008. Flood Insurance: FEMA's Rate-Setting Process Warrants Attention. GAO-09-12. Washington, D.C.: October 31, 2008. National Flood Insurance Program: Financial Challenges Underscore Need for Improved Oversight of Mitigation Programs and Key Contracts. GAO-08-437. Washington, D.C.: June 16, 2008. Natural Catastrophe Insurance: Analysis of a Proposed Combined Federal Flood and Wind Insurance Program. GAO-08-504. Washington, D.C.: April 25, 2008. National Flood Insurance Program: Greater Transparency and Oversight of Wind and Flood Damage Determinations Are Needed. GAO-08-28. Washington, D.C.: December 28, 2007. Natural Disasters: Public Policy Options for Changing the Federal Role in Natural Catastrophe Insurance. GAO-08-7. Washington, D.C.: November 26, 2007. Federal Emergency Management Agency: Ongoing Challenges Facing the National Flood Insurance Program. GAO-08-118T. Washington, D.C.: October 2, 2007. National Flood Insurance Program: FEMA's Management and Oversight of Payments for Insurance Company Services Should Be Improved. GAO-07-1078. Washington, D.C.: September 5, 2007. National Flood Insurance Program: Preliminary Views on FEMA's Ability to Ensure Accurate Payments on Hurricane-Damaged Properties. GAO-07-991T. Washington, D.C.: June 12, 2007. Coastal Barrier Resources System: Status of Development That Has Occurred and Financial Assistance Provided by Federal Agencies. GAO-07-356. Washington, D.C.: March 19, 2007. National Flood Insurance Program: New Processes Aided Hurricane Katrina Claims Handling, but FEMA's Oversight Should Be Improved. GAO-07-169. Washington, D.C.: December 15, 2006. Federal Emergency Management Agency: Challenges for the National Flood Insurance Program. GAO-06-335T. Washington, D.C.: January 25, 2006. Federal Emergency Management Agency: Improvements Needed to Enhance Oversight and Management of the National Flood Insurance Program. GAO-06-119. Washington, D.C.: October 18, 2005. Determining Performance and Accountability Challenges and High Risks. GAO-01-159SP. 
Washington, D.C.: November 2000. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
The National Flood Insurance Program (NFIP), established in 1968, provides policyholders with insurance coverage for flood damage. The Federal Emergency Management Agency (FEMA) within the Department of Homeland Security is responsible for managing NFIP. Unprecedented losses from the 2005 hurricane season and NFIP's periodic need to borrow from the U.S. Treasury to pay flood insurance claims have raised concerns about the program's long-term financial solvency. Because of these concerns and NFIP's operational issues, NFIP has been on GAO's high-risk list since March 2006. As of August 2010, NFIP's debt to Treasury stood at $18.8 billion. This testimony discusses (1) NFIP's financial challenges, (2) FEMA's operational and management challenges, and (3) actions needed to address these challenges. In preparing this statement, GAO relied on its past work on NFIP and GAO's ongoing review of FEMA's management of NFIP, particularly data management and contractor oversight issues. While Congress and FEMA intended that NFIP be funded with premiums collected from policyholders rather than with tax dollars, the program is, by design, not actuarially sound. NFIP cannot do some of the things that private insurers do to manage their risks. For example, NFIP is not structured to build a capital surplus, is likely unable to purchase reinsurance to cover catastrophic losses, cannot reject high-risk applicants, and is subject to statutory limits on rate increases. In addition, its premium rates do not reflect actual flood risk. For example, nearly one in four property owners pay subsidized rates, "full-risk" rates may not reflect the full risk of flooding, and NFIP allows "grandfathered" rates that allow some property owners to continue paying rates that do not reflect reassessments of their properties' flood risk. Further, NFIP cannot deny insurance on the basis of frequent losses and thus provides policies for repetitive loss properties, which represent only 1 percent of policies but account for 25 to 30 percent of claims. NFIP's financial condition has improved slightly due to an increase in the number of policyholders and moderate flood losses, and since March 2009, FEMA has taken some encouraging steps toward improving its financial position, including making $600 million in payments to Treasury without increasing its borrowings. However, it is unlikely to pay off its full $18.8 billion debt, especially if it faces catastrophic loss years. Operational and management issues may also limit efforts to address NFIP's financial challenges and meet program goals. Payments to write-your-own (WYO) insurers, which are key to NFIP operations, represent one-third to two-thirds of the premiums collected. But FEMA does not systematically consider actual flood insurance expense information when calculating these payments and has not aligned its WYO bonus structure with NFIP goals or implemented all of its financial controls for the WYO program. GAO also found that FEMA did not consistently follow its procedures for monitoring non-WYO contractors or coordinate contract monitoring responsibilities among departments on some contracts. Some contract monitoring records were missing, and no system was in place that would allow departments to share information on contractor deficiencies. In ongoing GAO work examining FEMA's management of NFIP, some similar issues are emerging. 
For example, FEMA still lacks an effective system to manage flood insurance policy and claims data, despite investing roughly 7 years and $40 million in a new system whose development has been halted. However, FEMA has begun to acknowledge its management challenges and develop a plan of action. Addressing the financial challenges facing NFIP would likely require actions by both FEMA and Congress that involve trade-offs, and the challenges could be difficult to remedy. For example, reducing subsidies could increase collected premiums but reduce program participation. At the same time, FEMA must address its operational and management issues. GAO has recommended a number of actions that FEMA could take to improve NFIP operations, and ongoing work will likely identify additional issues. In past work, GAO recommended, among other things, that FEMA take steps to help ensure that premium rates are more reflective of flood risks; strengthen its oversight of NFIP and insurance companies responsible for selling and servicing flood policies; and strengthen its internal controls and data quality.
GAO_GAO-11-382T
Background The State OIG, as currently constituted, was established by the Omnibus Diplomatic Security and Antiterrorism Act of 1986, which expanded on the 1985 amendments to the Inspector General Act of 1978 (IG Act), as an independent office to prevent and detect fraud, waste, abuse, and mismanagement in the department's programs and operations; conduct and supervise audits and investigations; and recommend policies to promote economy, efficiency, and effectiveness. The State OIG is unique among federal inspectors general in its history and responsibilities due to a statutory requirement for the OIG to provide inspections of the department's bureaus and posts worldwide. From 1906 until 1957, inspections were to be carried out at least once every 2 years and were viewed as a management function and not a function of an independent inspector general. In 1957, the State Department administratively established an Inspector General of Foreign Service, which was the first inspector general office within the State Department to conduct inspections. Congress enacted legislation in 1961 and in 1980 creating statutory inspectors general who were tasked with performing inspections on certain State Department activities. In 1978, GAO reviewed the IG's inspection reports, questioned the independence of Foreign Service officers who were temporarily detailed to the IG's office, and recommended the elimination of this requirement. The 1980 legislation, section 209(a) of the Foreign Service Act, required the State IG to inspect every foreign service post, bureau, or other operating unit in the State Department at least once every 5 years. In 1982, we reviewed the IG's operations and noted that the 5-year inspection cycle led to problems with the IG's effectiveness by limiting the ability to do other work. In addition, we continued to question the use of Foreign Service officers and other persons from operational units within the department to staff the IG office. In 1986, reacting to concerns similar to those expressed in our 1982 report, Congress made the State IG a presidentially appointed inspector general subject to the Inspector General Act and prohibited a career member of the Foreign Service from being appointed as the State IG. Starting in 1996 and continuing until today, Congress, in the Department of State appropriations acts, annually waives the 5-year statutory requirement for inspections. However, while the inspection requirement is waived annually by Congress, the State IG continues to conduct inspections as part of its plan for oversight of the department. The State OIG's inspection responsibilities encompass a wide range of objectives, which include reviewing whether department policy goals are achieved and whether the interests of the United States are represented and advanced effectively. In addition, the State OIG is assigned responsibility for specialized security inspections in support of the department's mission to provide effective protection to its personnel, facilities, and sensitive intelligence information. Inspections are defined by the Council of the Inspectors General on Integrity and Efficiency (CIGIE) as a process that evaluates, reviews, studies, and analyzes the programs and activities of an agency for the purposes of providing information to managers for decision making; making recommendations for improvements to programs, policies, or procedures; and identifying where administrative action may be necessary. 
There are fundamental differences between inspections and audits. Inspections and audits are typically conducted under separate standards with different basic requirements. That is, IGs are required by the IG Act to conduct audits in accordance with Government Auditing Standards (also known as generally accepted government auditing standards). In contrast, the IGs follow CIGIE's Quality Standards for Inspection and Evaluation when conducting inspections as required by law. By design, audits performed under Government Auditing Standards are subject to more in-depth requirements for levels of evidence and documentation supporting the findings than are inspections performed under CIGIE's inspection standards. Also, auditing standards require external quality reviews of audit organizations (peer reviews) on a 3-year cycle, while inspection standards do not require such external reviews. According to CIGIE, inspections provide the benefits of a flexible mechanism for optimizing resources, expanding agency coverage, and using alternative review methods and techniques. However, as reported by a recent peer review performed by the National Aeronautics and Space Administration (NASA) IG, the State OIG's Middle East Regional Office did not always provide audits consistent with generally accepted government auditing standards (GAGAS). Consequently, because these audits were not performed in accordance with GAGAS, they were reclassified by the OIG as inspections. Importance of Auditor and IG Independence Independence is a fundamental principle of the auditing profession and the most critical element for IG effectiveness. Without independence, an audit organization cannot conduct independent audits in compliance with generally accepted government auditing standards. Likewise, an IG who lacks independence cannot effectively fulfill the full range of requirements of the office. Lacking this critical attribute, an audit organization's work might be classified as studies, research reports, consulting reports, or reviews, rather than independent audits. The Quality Standards for Federal Offices of Inspector General adopted by CIGIE include requirements for IG independence. Specifically, IGs and their staff must be free both in fact and appearance from personal, external, and organizational impairments to their independence. The IGs and their staff have a responsibility to maintain independence so that opinions, conclusions, judgments, and recommendations will be impartial and viewed as impartial by knowledgeable third parties. Likewise, Government Auditing Standards states: "in all matters relating to the audit work, the audit organization and the individual auditor, whether government or public, must be free from personal, external, and organizational impairments to independence and must avoid the appearance of such impairments to independence. Auditors and audit organizations must maintain independence so that their opinions, findings, conclusions, judgments, and recommendations will be impartial and viewed as impartial by objective third parties with knowledge of the relevant information." Personal independence applies to individual auditors at all levels of the audit organization, including the head of the organization. Personal independence refers to the auditor's ability to remain objective and maintain an independent attitude in all matters relating to the audit, as well as the auditor's ability to be recognized by others as independent. 
The auditor is to have an independent and objective state of mind that does not allow personal bias or the undue influence of others to override the auditor's professional judgments. This attitude is also referred to as intellectual honesty. The auditor must also be free from direct financial or managerial involvement with the audited entity or other potential conflicts of interest that might create the perception that the auditor is not independent. The IG's personal independence and appearance of independence to knowledgeable third parties are critical to IG decision making related to the nature and scope of audit and investigative work to be performed by the IG office. The IG's personal independence must be maintained when conducting any audit and investigative work and when making decisions to determine the type of work to pursue and the nature and scope of the individual audits themselves. External independence refers to both the auditor's and the audit organization's freedom to make independent and objective judgments free from external influences or pressures. Examples of impairments to external independence include restrictions on access to records, government officials, or other individuals needed to conduct the audit; external interference over the assignment, appointment, compensation, and promotion of audit personnel; restrictions on funds or other resources provided to the audit organization that adversely affect the audit organization's ability to carry out its responsibilities; or external authority to overrule or to inappropriately influence the auditors' judgment as to appropriate reporting content. The IG Act provides the IGs with protections against impairments to external independence by providing that IGs have access to all agency documents and records, prompt access to the agency head, and the authority to independently (1) select and appoint IG staff, (2) obtain services of experts, and (3) enter into contracts. The IGs may choose whether to exercise the act's specific authority to obtain access to information that is denied by agency officials. In addition, the IG Act granted the IGs additional insulation from impairment of external independence by requiring that IGs report the results of their work in semiannual reports to Congress without alteration by their respective agencies, and that these reports generally are to be made available to the general public. The IG Act also directed the IGs to keep their agency heads and Congress fully and currently informed of any deficiencies, abuses, fraud, or other serious problems relating to the administration of programs and operations of their agencies. Also, the IGs are required to report particularly serious or flagrant problems, abuses, or deficiencies immediately to their agency heads, who are required to transmit the IG's report to Congress within 7 calendar days. Organizational independence refers to the audit organization's placement in relation to the activities being audited. Professional auditing standards have different criteria for organizational independence for external and internal audit organizations. The IGs, in their statutory role of providing oversight of their agencies' operations, represent a unique hybrid including some characteristics of both external and internal reporting responsibilities. For example, the IGs have external-reporting requirements outside their agencies, such as to the Congress, which are consistent with the reporting requirements for external auditors. 
At the same time the IGs are part of their respective agencies and must also keep their agency heads, as well as the Congress, concurrently informed. The IG Act provides specific protections to the IGs’ organizational independence including the requirement that IGs report only to their agency heads and not to lower-level management. The head of the agency may delegate supervision of the IG only to the officer next below in rank, and is prohibited from preventing the IG from initiating, carrying out, or completing any audit or investigation. In addition, IGs in large federal departments and agencies, such as the State Department, are appointed by the President and confirmed by the Senate. Only the President has the authority to remove these IGs and can do so only after explaining the reasons to the Congress 30 days before taking action. The Inspector General Reform Act of 2008 provided additional enhancements to overall IG independence that included establishing CIGIE by statute to continually address areas of weakness and vulnerability to fraud, waste, and abuse in federal programs and operations; requiring that IGs have their own legal counsel or use other specified counsel; and requiring that the budget amounts requested by the IGs for their operations be included in the overall agency-budget requests to the President and the Congress. Independence and Effectiveness Concerns We Reported in 2007 Concerns Regarding the State OIG’s Independence In March 2007, we reported on two areas of continuing concern regarding the independence of the State OIG. These concerns involved the appointment of management officials to head the State OIG in an acting capacity for extended periods of time and the use of Foreign Service staff to lead State OIG inspections. These concerns were similar to independence issues we reported in 1978 and 1982 regarding Foreign Service officers temporarily detailed from program offices to the IG’s office and inspection staff reassigned to and from management offices within the department. In response to concerns about personal impairments to the State IG’s independence, the act that created the current IG office prohibits a career Foreign Service official from becoming an IG of the State Department. Nevertheless, our 2007 review found that during a period of approximately 27 months, from January 2003 through April 2005, four management officials from the State Department served as an acting State IG. All four of these officials had served in the Foreign Service in prior management positions, including political appointments as U.S. ambassadors to foreign countries. In addition, we also found that three of the officials returned to significant management positions in the State Department after serving as acting IGs. We found that acting IG positions continue to be used and are filled by officials with prior management positions at the department. Independence concerns surrounding such acting appointments are additionally troublesome when the acting IG position is held for such prolonged periods. (See table 1.) Another independence concern discussed in our March 2007 report is the use of Foreign Service officers to lead inspections of the department’s bureaus and posts. We found it was State OIG policy for inspections to be led by ambassador-level Foreign Service officers. These Foreign Service officers frequently move through the OIG on rotational assignments. 
As Foreign Service officers, they are expected to help formulate, implement, and defend government policy, which now, as team leaders for the IG's inspections, they are expected to review. These officers may return to Foreign Service positions in the department after their rotation through the OIG, which could be viewed as compromising the OIG's independence. Specifically, the appearance of objectivity is severely limited by this potential impairment to independence, with a detrimental effect on the quality of the inspection results. Reliance on Inspections Limited Effectiveness due to Gaps in Oversight In our 2007 audit, we found that the State OIG's emphasis on inspections limited its effectiveness because it resulted in gaps in the audit coverage of the State Department's high-risk areas and management challenges. These critical areas were covered almost exclusively through OIG inspections that were not subject to the same level of scrutiny that would have been the case if covered by audits. Specifically, we found gaps in OIG audit coverage in key State Department programs and operations such as (1) information security, (2) human resources, (3) counterterrorism and border security, and (4) public diplomacy. In these areas the State OIG was relying on inspections rather than audits for oversight. In the 10 inspections that we examined, we found that the State OIG inspectors relied heavily on unvalidated agency responses to questionnaires completed by the department staff at each inspected bureau or post. We did not find any additional testing of evidence or sampling of agency responses to determine the relevance, validity, and reliability of the evidence as would be required under auditing standards. In addition, we found that for 43 of the 183 recommendations contained in the 10 inspections we reviewed, the related inspection files did not contain any documented support beyond written report summaries of the findings and recommendations. Inspections by the OIG's Office of Information Technology Were Not Included in Quality Reviews In our 2007 report we also found that inspections by the OIG's Office of Information Technology were not included in the internal quality reviews that the OIG conducts of its own work. Information security is a high-risk area and management challenge for the State Department, and the OIG relied almost exclusively on inspections for oversight of this area. Therefore, the quality of these inspections is key to the OIG's oversight effectiveness. In addition, CIGIE's standards for inspections require that IG inspections be part of a quality-control mechanism that provides an assessment of the inspection work. Lack of Coordination of Investigations between the State OIG and the Bureau of Diplomatic Security We found in 2007 that there was inadequate assurance that the investigative efforts of the State Department were coordinated to avoid duplication or to ensure that independent OIG investigations of the department would be performed. Specifically, as part of its worldwide responsibilities for law enforcement and security operations, the department's Bureau of Diplomatic Security (DS) performed investigations that included passport and visa fraud, both externally and within the department; these investigations were not coordinated with the OIG investigators. The IG Act, as amended, authorizes the State IG to conduct and supervise independent investigations and prevent and detect fraud, waste, abuse, and mismanagement throughout the State Department. 
DS performs its investigations as a function of management, reporting to the State Department Undersecretary for Management. In contrast, the State OIG is required by the IG Act to be independent of the offices and functions it investigates. We reported in 2007 that without a formal agreement to outline the responsibilities of both DS and the State OIG regarding these investigations, there was inadequate assurance that this work would be coordinated to avoid duplication or that independent OIG investigations of the department would be performed. The State Department's OIG Has Actions Under Way or Completed to Address Most of Our Recommendations Recommendations from Our 2007 Report To address the concerns we raised in our March 2007 report, we made five recommendations. To help ensure the independence of the IG office, which also affects the effectiveness of the office, we recommended that the IG work with the Secretary of State to (1) develop a succession-planning policy for the appointment of individuals to head the State IG office in an acting capacity that provides for independent coverage between IG appointments and also prohibits career Foreign Service officers and other department managers from heading the State OIG in an acting capacity, and (2) develop options to ensure that State OIG inspections are not led by career Foreign Service officials or other staff who rotate to assignments within State Department management. We also made the following three recommendations to the State IG to address the effectiveness of the OIG: (1) to help ensure that the State IG provides the appropriate breadth and depth of oversight of the State Department's high-risk areas and management challenges, reassess the proper mix of audit and inspection coverage for these areas; (2) to provide for more complete internal quality reviews of inspections, include inspections performed by the State IG's Office of Information Technology in the OIG's internal quality review process; and (3) develop a formal written agreement with the Bureau of Diplomatic Security to coordinate departmental investigations in order to provide for more independent investigations of State Department management and to prevent duplicative investigations. Progress Has Been Made in Addressing Our Prior Recommendations In response to a draft of our 2007 report, the State OIG has implemented two recommendations and has taken actions related to the remaining three recommendations. Although the State OIG has not fully addressed a recommendation regarding the independence of the State OIG's inspections that has been the subject of GAO recommendations since our 1978 report, there has also been some progress in this area. The OIG implemented our recommendation to include inspections performed by the Office of Information Technology in its internal quality review process in June 2008, by abolishing the State OIG's Office of Information Technology and transferring staff into either the Office of Audits or the Office of Inspections. As a result, the OIG's information technology inspections are now included in the Office of Inspections' internal quality-review process. The OIG has implemented our recommendation that the office work with the Secretary of State and the Bureau of Diplomatic Security (DS) to develop a formal written agreement that delineates the areas of responsibility for State Department investigations. 
In December 2010, the State IG’s investigative office completed an agreement with the bureau’s Assistant Director of Domestic Operations to address the coordination of investigative activities. This agreement, when fully implemented, should help to ensure proper coordination of these offices in their investigations. Regarding a succession plan for filling acting IGs positions, the State Deputy IG stated that he issued a memo to abolish the deputy IG for Foreign Service position to help ensure that any future deputy IG moving into an acting IG position would not be a Foreign Service officer. The Deputy IG stated that he is currently working with the department to update the Foreign Affairs Manual to reflect this change. Furthermore, the elimination of this position helps to strengthen the independence of the OIG. We believe the State IG’s changes are responsive to the recommendation made in our 2007 report. Nevertheless, the State Department has relied on acting IGs to provide oversight for over 5 of the last 8 years since January 2003. (See table 1.) This use of temporarily assigned State Department management staff to head the State OIG can affect the perceived independence of the entire office in its oversight of the department’s operations, and the practice is questionable when compared to the independence requirements of Government Auditing Standards and other professional standards followed by the IGs. Further, career members of the Foreign Service are prohibited by statute from being appointed as State IG. This exclusion helps to protect against the personal impairments to independence that could result when a Foreign Service officer reviews the bureaus and posts of fellow Foreign Service officers and diplomats. Regarding our recommendation to reassess the mix of audits and inspections for the appropriate breadth and depth of oversight coverage, especially in high-risk areas and management challenges, we noted gaps in audit coverage. Specifically, in both fiscal years 2009 and 2010, the OIG had gaps in the audit coverage of management challenges in the areas of (1) coordinating foreign assistance, (2) public diplomacy, and (3) human resources. However, the State OIG has made progress in planning for and providing additional audit coverage. Since 2007 the State OIG’s resources have increased, providing the opportunity to augment its audit oversight of the department. Specifically, the OIG’s total on board staff increased to 227, from 191 in at the end of fiscal year 2005. Also, the OIG’s audit staff increased to 64 compared to 54 at the end of fiscal year 2005. In addition, the Office of Audits and the Middle East Regional Office are planning to merge resulting in the OIG’s largest component. In January 2010, the State OIG reorganized the focus of the Office of Audits and began to align its oversight efforts with the department’s growing global mission and strategic priorities. The newly reorganized Office of Audit consists of six functional divisions and an audit operations division to address (1) contracts and grants, (2) information technology, (3) financial management, (4) international programs, (5) human capital and infrastructure, (6) security and intelligence, and (7) audit operations, which includes quality assurance. These audit areas are intended to develop expertise and address the department’s management challenges. 
According to the Office of Audits' Fiscal Year 2011 Performance Plan, the office will target high-cost programs, key management challenges, and vital operations to provide managers with information that will assist them in making operational decisions. The 2011 plan includes new areas such as global health, food security, climate change, democracy and governance, and human resource issues within the State Department. In addition, with the assistance of an independent public accountant, the State OIG has completed an audit of a major issue in coordinating foreign assistance, the Global HIV/AIDS Initiative related to the President's Emergency Plan for AIDS Relief. Regarding our recommendation concerning the use of career Foreign Service officials to lead inspection teams, the State OIG's inspections handbook requires that the team leaders for inspections be Foreign Service officers at the rank of ambassador. We also stated in our 2007 report that experience and expertise are important on inspection teams, but the expert need not be the team leader. However, the Deputy IG stated that having Foreign Service officers with the rank of ambassador as team leaders is critical to the effectiveness of the inspection teams. OIG officials stated that there are currently six Foreign Service officers at the ambassador level serving as the team leaders for inspections, four of whom are rehired annuitants working for the State OIG. To address independence impairments, the State OIG relies on a recusal policy under which Foreign Service officers must self-report whether they have worked in a post or embassy that is subject to an inspection and therefore presents a possible impairment. Further, State OIG officials noted that the team leaders report to a civil service Assistant IG and the inspection teams include other members of the civil service. We continue to believe that the State OIG's use of management staff who have the possibility of returning to management positions, even if they are rehired annuitants or currently report to civil service employees in the OIG, presents at least an appearance of impaired independence and is not fully consistent with professional standards. Closing Observations The mission of the State OIG is critical to providing independent and objective oversight of the State Department and identifying mismanagement of taxpayer dollars. While the IG Act provides each IG with the ability to exercise judgment in the use of protections to independence specified in the act, the ultimate success or failure of an IG office is largely determined by the individual IG placed in that office and that person's ability to maintain personal, external, and organizational independence both in fact and appearance, while reporting the results of the office's work to both the agency head and the Congress. An IG who lacks independence cannot effectively fulfill the full range of requirements for this office. The State OIG has either implemented or is in the process of implementing the recommendations from our 2007 report, with the exception of our recommendation to discontinue the use of Foreign Service officers as team leaders for inspections. We remain concerned about the independence issues that can arise from such an arrangement. In addition, we remain concerned that a permanent IG has not been appointed at the State Department for almost 3 years. 
We commend the OIG for the steps it is taking to build and strengthen its audit practice, and we are re-emphasizing our 2007 recommendation for the OIG to reassess its mix of audit and inspections to achieve effective oversight of the department’s areas of high risk and management challenges. Madam Chairman Ros-Lehtinen, Ranking Member Berman, and Members of the Committee, this concludes my prepared statement. I would be happy to respond to any questions you or other Members of the Committee might have at this time. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In 2007 GAO reported on concerns with the independence and effectiveness of the Department of State Inspector General (State OIG). GAO was asked to provide testimony on the issues we raised and the status of recommendations made to the State OIG in that report. This testimony focuses on the importance of auditor and IG independence, GAO's prior concerns with the State OIG's independence and effectiveness, and the status of OIG actions to address GAO's recommendations. The testimony is primarily based on GAO's 2007 report conducted in accordance with generally accepted government auditing standards, as well as the activities conducted to follow up on the status of our previous recommendations. The State Department Office of Inspector General (State OIG) has a critical responsibility in preventing and detecting fraud, waste, abuse, and mismanagement; and in providing independent audits and investigations of the department's programs and operations. In addition, the Foreign Service Act of 1980 requires the State OIG to perform inspections of the department's bureaus and posts, which is a unique requirement for an IG office. Independence is critical to the quality and credibility of an IG's work under the IG Act, is fundamental to professional auditing standards, and is an essential element of IG effectiveness. An IG must be independent and free from personal, external, and organizational impairments to independence in order to effectively fulfill the full range of requirements for the office. GAO's 2007 report identified areas of concern regarding the State OIG's independence and effectiveness. Specifically, the appointment of management and Foreign Service officials to head the State OIG in an acting capacity for extended periods of time is not consistent with professional standards for independence. In addition, GAO reported that the use of Foreign Service officers at the ambassador level to lead OIG inspections resulted in, at a minimum, the appearance of independence impairment. GAO also reported that inspections, by design, are conducted under less in-depth requirements and do not provide the same level of assurance as audits. However, the OIG relied on inspections rather than audits to provide oversight coverage, resulting in gaps in the audit oversight of the department. GAO also reported that inspections performed by the OIG's Office of Information Technology (IT) were not part of an internal quality review process, and that the State OIG and the department's Bureau of Diplomatic Security (DS) lacked an agreement to coordinate their investigative activities. The State OIG implemented two of GAO's five recommendations and has actions under way related to the remaining three. Specifically, the OIG now includes IT-related inspections in its internal quality-review process and has completed an agreement to coordinate investigations with DS. Also, the OIG is implementing a change to the succession planning for acting IG positions to exclude Foreign Service officers and is in the process of increasing the level of audit coverage through the distribution of staff and audit planning. In addition, the State OIG continues to assign Foreign Service officers at the ambassador level as team leaders for inspections; however, four of the six officers are rehired annuitants unlikely to rotate to State Department Foreign Service positions. 
GAO remains concerned, however, about the OIG's use of Foreign Service officers and the State Department's need to rely on acting IGs for extended periods of time. GAO continues to reaffirm its recommendations, and encourages the State OIG, with the assistance of the Secretary, to fully address these recommendations to enhance the effectiveness of the OIG's oversight of the State Department's programs and operations. In the 2007 report, GAO recommended that the IG work with the Secretary of State to address two recommendations regarding concerns about the State OIG's independence, and to reassess the mix of audits and inspections to help provide effective audit coverage of the department. In addition, GAO recommended that the IG include inspections performed by the OIG's Office of Information Technology in its internal quality review process and that it work with the department's Bureau of Diplomatic Security (DS) on an agreement to coordinate their investigative efforts.
GAO_GAO-08-852
Background Risk management has been endorsed by Congress, the President, and the Secretary of DHS as a way to direct finite resources to those areas that are most at risk of terrorist attack under conditions of uncertainty. The purpose of risk management is not to eliminate all risks, as that is an impossible task. Rather, given limited resources, risk management is a structured means of making informed trade-offs and choices about how to use available resources effectively and monitoring the effect of those choices. Thus, risk management is a continuous process that includes the assessment of threats, vulnerabilities, and consequences to determine what actions should be taken to reduce or eliminate one or more of these elements of risk. To provide a basis for examining efforts at carrying out risk management, GAO developed a framework for risk management based on best practices and other criteria. The framework is divided into five phases: (1) setting strategic goals and objectives, and determining constraints; (2) assessing the risks; (3) evaluating alternatives for addressing these risks; (4) selecting the appropriate alternatives; and (5) implementing the alternatives and monitoring the progress made and the results achieved (see fig. 1). Because we have imperfect information for assessing risks, there is a degree of uncertainty in the information used for risk assessments (e.g., what the threats are and how likely they are to be realized). As a result, it is inevitable that assumptions and policy judgments must be used in risk analysis and management. It is important that key decision-makers understand the basis for those assumptions and policy judgments and their effect on the results of the risk analysis and the resource decisions based on that analysis. DHS has used an evolving risk-based methodology to identify the urban areas eligible for HSGP grants and the amount of funds states and urban areas receive (see fig. 2). For example, the risk analysis model used from fiscal year 2001 through 2003 largely relied on measures of population to determine the relative risk of potential grant recipients, and evolved to measuring risk as the sum of threat, critical infrastructure, and population density calculations in fiscal years 2004 and 2005. The fiscal year 2006 process introduced assessments of threat, vulnerability, and consequences of a terrorist attack in assessing risk. In addition to modifications to its risk analysis model, DHS adopted an effectiveness assessment for fiscal year 2006 to determine the anticipated effectiveness of the various risk mitigation investments proposed by urban areas, which affected the final amount of funds awarded to eligible areas. For the fiscal year 2007 allocation process, DHS defined Risk as the product of Threat times Vulnerability and Consequences, or "R = T * (V & C)," and applied a three-step risk-based allocation methodology that incorporates analyses of risk and effectiveness to select eligible urban areas and allocate UASI and SHSP funds (see fig. 3). The three steps include: 1. Implementation of a Risk Analysis model to calculate scores for states and urban areas, defining relative Risk as the product of Threat, Vulnerability, and Consequences; 2. Implementation of an Effectiveness Assessment, including a process where state and urban area representatives acting as peer reviewers assess and score the effectiveness of the proposed investments submitted by the eligible applicants. This process is also known as peer review. 3. 
Calculation of a Final Allocation of funds based on states' and urban areas' risk scores as adjusted by their effectiveness scores. The Post-Katrina Emergency Management Reform Act places responsibility for allocating and managing DHS grants with the Federal Emergency Management Agency (FEMA). While FEMA is responsible for implementing the above three-step process, FEMA relies on other DHS components such as the National Protection and Programs Directorate (NPPD) and the Office of Intelligence and Analysis (I&A) in the development of the risk analysis model, which we will discuss in greater detail below. Risk Analysis Model DHS employs a risk analysis model to assign relative risk scores to all states and urban areas under the SHSP and UASI grant programs. These relative risk scores are also used to differentiate which urban areas are eligible for UASI funding. These eligible areas are divided into two tiers: Tier 1 UASI grantees and those eligible for Tier 2. In fiscal year 2007, 45 candidates were eligible to apply for funding under the UASI program, and eligible candidates were grouped into two tiers according to relative risk. Tier 1 included the six highest-risk areas; Tier 2 included the other 39 candidate areas. Figure 4 provides an overview of the factors that are included in the risk analysis model for fiscal year 2007 and their relative weights. The maximum relative risk score possible for a given area was 100. The Threat Index accounted for 20 percent of the total risk score; the Vulnerability and Consequences Index accounted for 80 percent. The Threat Index was calculated by assessing threat information for multiple years (generally, from September 11, 2001, forward) for all candidate urban areas and categorizing urban areas into different threat tiers. According to DHS officials, the agency's Office of Intelligence and Analysis (I&A) calculated the Threat Index by (1) collecting qualitative threat information with a nexus to international terrorism, (2) analyzing the threat information to create threat assessments for states and urban areas, (3) empanelling intelligence experts to review the threat assessments and reach consensus as to the number of threat tiers, and (4) assigning threat scores. This process, according to DHS officials, relied upon analytical judgment and interaction with the Intelligence Community, as opposed to the use of total counts of threats and suspicious incidents to calculate the Threat Index for the 2006 grant cycle. The final threat assessments are approved by the Intelligence Community—the Federal Bureau of Investigation, Central Intelligence Agency, National Counterterrorism Center, and the Defense Intelligence Agency—along with the DHS Under Secretary for Intelligence and Analysis and the Secretary of DHS, according to DHS officials. The Vulnerability and Consequences Index accounts for 80 percent of the total risk score. Because DHS considered most areas of the country equally vulnerable to a terrorist attack given freedom of movement within the nation, DHS assigns vulnerability a constant value of 1.0 in the formula across all states and urban areas. Therefore, DHS's measurement of vulnerability and consequences is mainly a function of the seriousness of the consequences of a successful terrorist attack, represented by four indices: a Population Index, an Economic Index, a National Infrastructure Index, and a National Security Index. Population Index (40 percent). 
This index included nighttime population and military dependent populations for states and urban areas, based upon U.S. Census Bureau and Department of Defense data. For urban areas, factors such as population density, estimated number of daily commuters, and estimated annual visitors were also included in this variable using data from private entities. DHS calculated the Population Index for urban areas by identifying areas with a population greater than 100,000 persons and cities that reported threat data during the past year, then combining cities or adjacent urban counties with shared boundaries to form single jurisdictions, and drawing a 10-mile buffer zone around identified areas. Economic Index (20 percent). This index comprises the economic value of the goods and services produced in either a state or an urban area. For states, this index was calculated using U.S. Department of Commerce data on their percentage contribution to Gross Domestic Product. For UASI urban areas, a parallel calculation of Gross Metropolitan Product was incorporated. National Infrastructure Index (15 percent). This index focused on over 2,000 critical infrastructure/key resource (CIKR) assets identified by DHS's Office of Infrastructure Protection. These particular critical infrastructure assets, which, if destroyed or disrupted, could cause significant casualties, major economic losses, or widespread or long-term disruptions to national well-being and governance capacity, are divided into two tiers. The Tier 2 CIKR assets include the nationally significant and high-consequence assets and systems across 17 sectors. Tier 1 assets are a small subset of the Tier 2 list that include assets and systems certain to produce at least two of four possible consequences if disrupted or destroyed: (1) prompt fatalities greater than 5,000; (2) first-year economic impact of at least $75 billion; (3) mass evacuations with prolonged (6 months or more) absence; and (4) loss of governance or mission execution disrupting multiple regions or critical infrastructure sectors for more than a week, resulting in a loss of necessary services to the public. Tier 1 assets were weighted using an average value three times as great as Tier 2 assets. The National Security Index (5 percent). This index considered three key national security factors: whether military bases are present in the state or urban area; how many critical defense industrial base facilities are located in the state or urban area; and the total number of people traversing international borders. Information on these inputs comes from the Department of Defense and DHS. Effectiveness Assessment In addition to determining relative risk using the risk analysis model, DHS added an effectiveness assessment process in fiscal year 2006 to assess and score the effectiveness of the proposed investments submitted by grant applicants. To assess the anticipated effectiveness of the various risk mitigation investments that states and urban areas proposed, DHS required states and urban areas to submit investment justifications as part of their grant applications. The investment justifications included up to 15 "investments" or proposed solutions to address homeland security needs, which were identified by the states and urban areas through their strategic planning process. DHS used state and urban area representatives as peer reviewers to assess these investment justifications. 
The criteria reviewers used to score the investment justifications included the following categories: relevance to national, state, and local plans and policies, such as the National Preparedness Guidance and states' and urban areas' homeland security plans; anticipated impact; sustainability; regionalism; and the applicants' planned implementation of each proposed investment. Reviewers on each panel assigned scores for these investment justifications, which, according to DHS officials, were averaged to determine a final effectiveness score for each state and urban area applicant. In fiscal year 2007, DHS provided states and urban areas the opportunity to propose investment justifications that included regional collaboration to support the achievement of outcomes that could not be accomplished if a state or urban area tried to address them independently. States and urban areas could choose to submit multi-state or multi-urban area investment justifications which outlined shared investments between two or more states or between two or more urban areas. Such investments were eligible for up to 5 additional points on their final effectiveness score, or up to 8 more effectiveness points for additional proposed investments, although these additional points would not enable a state's or urban area's total effectiveness score to exceed 100 points. These proposed investments were reviewed by one of two panels established specifically to consider multi-applicant proposals. Points were awarded based on the degree to which multi-applicant investments showed collaboration with partners and demonstrated value or outcomes from the joint proposal that could not be realized by a single state or urban area. Final Allocation Process DHS allocated funds based on the risk scores of states and urban areas, as adjusted by their effectiveness scores. DHS officials explained that while allocations are based first upon area risk scores, the effectiveness scores are then used to determine adjustments to states' and urban areas' allocations based on an "effectiveness multiplier." States and urban areas with high effectiveness scores received an additional percentage of their risk-based allocations, while states and urban areas with low effectiveness scores had their risk-based allocations lowered by a percentage. In addition to determining funding by risk score as adjusted by an effectiveness multiplier, urban areas that received funds through the UASI grant program were subject to an additional tiering process that affected funding allocation. For example, in fiscal year 2007, the 45 eligible urban area candidates were grouped into two tiers according to relative risk. The Tier 1 UASI grantees included the 6 highest-risk areas; Tier 2 UASI grantees included the other 39 candidate areas ranked by risk. The 6 Tier 1 UASI grantees were allocated fifty-five percent of the available funds, or approximately $410.8 million, while the 39 Tier 2 UASI grantees received the remaining forty-five percent of available funds, or approximately $336.1 million. Shifting to Urban Area Boundaries Defined by MSA was the Primary Change to DHS's Risk-Based Methodology in 2008 DHS's risk-based methodology had few changes from fiscal year 2007 to 2008. DHS changed the definition it used to identify the UASI areas included in the risk analysis model in 2008 from an urban area's center city plus a ten-mile radius to metropolitan statistical areas (MSAs) as defined by the Census Bureau.
DHS made this change in response to the 9/11 Act requirement to perform a risk assessment for the 100 largest MSAs by population. Because the change in definition generally expanded the geographic area of each potential UASI grant recipient, the change had an effect on the data used to assess threat and consequences, and it may also have resulted in the use of more accurate data in the risk analysis model. The change to the use of MSA data in fiscal year 2008 also resulted in changes in the relative risk rankings of some urban areas. As a result, DHS officials expanded the eligible urban areas in fiscal year 2008 to a total of 60 UASI grantees, in part, to address the effects of this change to MSA data, as well as to ensure that all urban areas that received fiscal year 2007 funding also received funding for fiscal year 2008, according to DHS officials. Changing the boundaries had an effect on the data by which risk is calculated because the change in boundaries resulted in changes in the population and critical assets within the new boundaries. Figure 3 below uses the Chicago, IL urban area to illustrate this change. One benefit of the change to MSAs was that the UASI boundaries align more closely with the boundaries used to collect some of the economic and population data used in the model. Consequently, the fiscal year 2008 model may have resulted in more accurate data. Because the 2007 boundaries were based on distance, areas inside the boundaries may have included partial census tracts or partial counties, each of which would have required DHS to develop rules as to how to handle the partial areas. By contrast, the MSAs are based on counties and allow DHS to use standard census data instead of developing an estimated population within the defined boundaries. Additional information describing the boundaries of UASI urban areas for fiscal year 2007 versus fiscal year 2008 is presented in Appendix II. DHS calculated the Population Index of MSAs by: (1) using census data to determine the population and population density of each census tract; (2) calculating a Population Index for each individual census tract by multiplying the census tract's population and population density figures; and (3) adding together the population indices of all of the census tracts making up the MSA. DHS did not use average population density because using an average resulted in losing information about how the population is actually distributed among the tracts. Using averages for population density over census tracts with dissimilar densities could have yielded very misleading results, according to DHS officials. The change to MSAs for fiscal year 2008 resulted in an increase of almost 162,000 square miles across the total area of urban area footprints. While 3 urban areas actually lost square mileage because of the change, the other areas all increased their square mileage footprint by almost 2,700 square miles on average. The increased size of urban areas' footprints increased the number of critical infrastructure assets that were counted within them. We analyzed the number of Tier 1 and Tier 2 critical infrastructure assets associated with UASI areas between fiscal year 2007 and 2008, and found a higher number of total Tier 1 and Tier 2 critical infrastructure assets assigned to urban areas in 2008, and, individually, almost all urban areas increased the number of assets assigned to them.
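The census-tract calculation just described can be illustrated with a short sketch. The Python fragment below follows the three steps DHS describes (multiply each tract's population by its population density, then sum across the tracts that make up the MSA) and uses invented tract figures rather than Census Bureau or DHS data.

    # Illustrative sketch of the MSA Population Index calculation described above.
    # Each census tract contributes (population x population density), and the MSA's
    # index is the sum over its tracts. The tract figures below are hypothetical.

    def tract_index(population, density_per_sq_mile):
        """Population Index contribution of a single census tract."""
        return population * density_per_sq_mile

    def msa_population_index(tracts):
        """Sum the per-tract indices for all tracts making up an MSA."""
        return sum(tract_index(pop, density) for pop, density in tracts)

    # Hypothetical MSA with three tracts: (population, density in persons per square mile).
    example_tracts = [(4_000, 8_000), (12_000, 2_500), (900, 15_000)]
    print(msa_population_index(example_tracts))  # 75,500,000 for these made-up tracts

Summing the per-tract products, rather than applying one average density to the MSA's total population, preserves the information about how the population is distributed among the tracts, which is the rationale DHS officials gave for avoiding averages.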
This change to the use of MSAs also resulted in changes in urban areas' rankings, including increases in the relative risk scores for such urban areas as Albany, Syracuse, and Rochester, NY, and Bridgeport, CT. As a result, DHS officials expanded the eligible urban areas in fiscal year 2008 to a total of 60, with the seven highest-risk areas comprising UASI Tier 1 grantees and the other 53 risk-ranked areas comprising UASI Tier 2 grantees. As in fiscal year 2007, the top seven UASI Tier 1 grantee areas will receive fifty-five percent of the available funds, or approximately $429.9 million, and the remaining 53 UASI Tier 2 grantees will receive forty-five percent of the available funds, or approximately $351.7 million. According to DHS officials, the decision to expand the eligible urban areas to a total of sixty was a policy decision largely driven by two factors: the 9/11 Act requirement that FEMA use MSAs; and the desire to continue to fund urban areas already receiving funding. DHS's Risk-based Methodology is Generally Reasonable, But the Vulnerability Element of the Risk Analysis Model Has Limitations that Reduce Its Value The risk-based methodology DHS uses to allocate HSGP grant dollars is generally reasonable. It includes and considers the elements of risk assessment—Threat, Vulnerability, and Consequences—and, as DHS's risk-based methodology has evolved, its results have become less sensitive to changes in the key assumptions and weights used in the risk analysis model. Furthermore, the indices that DHS uses to calculate the variable constituting the greatest portion of the risk analysis model—Consequences—are reasonable. However, limitations such as the absence of a method for measuring variations in vulnerability reduce the vulnerability element's value. Although DHS recognized and described the significance of Vulnerability in its FY 2006 model, the model DHS used for fiscal years 2007 and 2008 used a constant value of 1.0 in its formula, rather than measuring variations in vulnerability across states and urban areas. DHS's Risk Analysis Model is Reasonable Because it Contains the Key Elements of Risk Assessment, Relies on Reasonable Indices to Measure Consequences, and is Less Sensitive to Changes in Variables One measure of the reasonableness of DHS's risk-based methodology is the extent to which DHS's risk analysis model provides a consistent method to assess risk. Risk assessment helps decision makers identify and evaluate potential risks facing key assets or missions so that countermeasures can be designed and implemented to prevent or mitigate the effects of the risks. In a risk management framework, risk assessment is a function of Threat, Vulnerability, and Consequences, and the product of these elements is used to develop scenarios and help inform actions that are best suited to prevent an attack or mitigate vulnerabilities to a terrorist attack. Threat is the probability that a specific type of attack will be initiated against a particular target/class of targets, and analysis of threat-related data is a critical part of risk assessment. The Vulnerability of an asset is the probability that a particular attempted attack will succeed against a particular target or class of targets. It is usually measured against some set of standards, such as availability/predictability, accessibility, countermeasures in place, and target hardness (the material construction characteristics of the asset).
The Consequences of a terrorist attack are the adverse effects of a successful attack and may take many forms, such as the loss of human lives, economic costs, and adverse impacts on national security. The risk analysis model used by DHS is reasonable because it attempts to capture data on threats, vulnerabilities, and consequences—the three types of information used in evaluating risk. Because DHS considered most areas of the country equally vulnerable to a terrorist attack given freedom of movement within the nation, DHS assigns vulnerability a constant value of 1.0 in the formula across all states and urban areas. Therefore, DHS's measurement of vulnerability and consequences is mainly a function of the seriousness of the consequences of a successful terrorist attack. Because the risk analysis model is consequences-driven, another measure of the model's overall reasonableness is the extent to which the indices used to calculate the consequences component of the model are reasonable. As previously described, the consequences component of the model is comprised of four indices – a Population Index, an Economic Index, a National Infrastructure Index, and a National Security Index – each assigned a different weight. These indices are generally reasonable. Both the population and economic indices are calculated from data derived from reliable sources that are also publicly available, providing additional transparency for the model. For example, according to DHS officials, the fiscal year 2008 analysis used Gross Metropolitan Product (GMP) estimates prepared by the consulting firm Global Insight for the United States Conference of Mayors and the Council for the New American City that were published in January 2007, and reported on the GMP for 2005. In addition, the National Infrastructure Index focused on over 2,000 Tier 1 and Tier 2 critical infrastructure/key resource assets identified by DHS's Office of Infrastructure Protection (IP). For both fiscal years 2007 and 2008, DHS used a collaborative, multi-step process to create the Tier 2 CIKR list. First, IP works with sector-specific agencies to develop criteria used to determine which assets should be included in the asset lists. Second, these criteria are vetted with the private sector through sector-specific councils, which review the criteria and provide feedback to IP. Third, IP finalizes the criteria and provides them to the sector-specific agencies and State and Territorial Homeland Security Advisors (HSAs). Fourth, IP asks states to nominate assets within their jurisdiction that match the criteria. Fifth, assets nominated by states are reviewed by both the sector-specific agencies and IP to decide which assets should comprise the final Tier 2 list. For example, to identify the nation's critical energy assets, IP will work with the Department of Energy to determine which assets and systems in the energy sector would generate the most serious economic consequences to the Nation should they be destroyed or disrupted. Further, in the fiscal year 2008 process, IP added an additional step to allow for the resubmission of assets for reconsideration if they are not initially selected for the Tier 2 list. In addition, the National Security Index comprises only a small fraction of the model – 5 percent – and has also evolved to include more precision, such as counting the number of military personnel instead of simply the presence or absence of military bases.
To identify the nation's critical defense industrial bases, the Department of Defense analyzes the impact of possible facility disruption or destruction on current warfighting capabilities, recovery and reconstitution, threat, vulnerability, consequences, and other aspects. DHS's approach to calculating threat, which accounts for the remaining 20 percent of the model, also represents a measure of the model's overall reasonableness. DHS uses analytical judgments to categorize urban areas' threat, which ultimately determines the relative threat for each state and urban area. DHS has used written criteria to guide these judgments, and DHS provided us with the criteria used in both of these years for our review. The criteria are focused on threats from international terrorism derived from data on credible plots, planning, and threats from international terrorist networks, their affiliates, and those inspired by such networks. The criteria provided guidance for categorizing areas based on varying levels of both the credibility and the volume of threat reporting, as well as the potential targets of threats. Results of this process are shared with the DHS Undersecretary for Intelligence and Analysis, the FBI, and the National Counterterrorism Center, all of whom are afforded the opportunity to provide feedback on the placements. Additionally, DHS develops written threat assessments that indicate whether states are "high," "medium," or "low" threat states. States can provide threat information that they have collected to DHS, but in order for that information to affect a state's tier placement and threat level, the information must be relevant to international terrorism, according to DHS officials. We reviewed several examples of these assessments from 2007, which included key findings describing both identified and potential threats to the state. The classified assessments addressed potential terrorist threats to critical infrastructure in each of the 56 states and territories. However, DHS shared assessments only with state officials who had appropriate security clearances. According to DHS officials, states without officials with sufficient clearances will receive an unclassified version of their state's assessment for the fiscal year 2009 grant process. DHS is also developing a process by which it can share the threat assessments with UASI areas, including those UASI areas whose boundaries cross state lines; however, currently the assessments are transmitted only to the DHS state representatives and state officials, and the states and representatives are responsible for sharing the information with the UASI areas, according to DHS officials. Another measure of the overall reasonableness of DHS's risk analysis model is the extent to which the model's results change when the assumptions and values built into the model, such as weights of variables, change. A model is sensitive when it produces materially different results in response to small changes in its assumptions. Ideally, a model that accurately and comprehensively assesses risk would not be sensitive, and a model exhibiting little sensitivity can be said to be more robust than one that is more sensitive to changes in its underlying assumptions. A robust calculation or estimation model provides its users greater confidence in the reliability of its results.
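The kind of weight-sensitivity check discussed here and in the findings that follow can be pictured with a brief sketch. The Python code below adopts a simplified, additive reading of the published weights (Threat 20 percent; Population 40, Economic 20, National Infrastructure 15, and National Security 5 percent, with vulnerability held constant at 1.0), assigns invented index values to three hypothetical areas, and tests whether shifting weight between two indices changes their relative ranking. It illustrates the idea of a sensitivity analysis, not DHS's actual model, marginal weights, or data.

    # Illustrative weight-sensitivity check on a consequences-driven risk score.
    # Weights mirror the report's published shares of a 100-point score; all index
    # values are invented and normalized to the range 0-1.

    BASE_WEIGHTS = {"threat": 0.20, "population": 0.40, "economic": 0.20,
                    "infrastructure": 0.15, "security": 0.05}

    AREAS = {  # hypothetical areas, not actual UASI candidates
        "Area A": {"threat": 0.9, "population": 0.8, "economic": 0.7,
                   "infrastructure": 0.6, "security": 0.2},
        "Area B": {"threat": 0.6, "population": 0.9, "economic": 0.8,
                   "infrastructure": 0.4, "security": 0.1},
        "Area C": {"threat": 0.7, "population": 0.5, "economic": 0.6,
                   "infrastructure": 0.9, "security": 0.8},
    }

    def risk_score(indices, weights):
        """Weighted sum on a 100-point scale; vulnerability is treated as a constant 1.0."""
        return 100 * sum(weights[k] * indices[k] for k in weights)

    def ranking(weights):
        """Areas ordered from highest to lowest risk score under the given weights."""
        return sorted(AREAS, key=lambda area: risk_score(AREAS[area], weights), reverse=True)

    print(ranking(BASE_WEIGHTS))                                              # ['Area A', 'Area B', 'Area C']
    # A small shift (5 points from Population to Infrastructure) leaves the order unchanged...
    print(ranking(dict(BASE_WEIGHTS, population=0.35, infrastructure=0.20)))  # same order
    # ...while a larger shift (15 points) reorders the lower-ranked areas.
    print(ranking(dict(BASE_WEIGHTS, population=0.25, infrastructure=0.30)))  # ['Area A', 'Area C', 'Area B']

In this toy example, the top-ranked area is stable under both perturbations while the ordering below it changes only under the larger shift, which parallels the pattern GAO describes for Tier 1 versus Tier 2 areas.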
For both fiscal years 2007 and 2008, substantial changes had to be made to the weights of any of the indices used in the risk model to calculate state and urban area risk scores before there was any movement in or out of the top 7 (or Tier 1) ranked UASI areas. In other words, the model provides DHS with a level of assurance that the highest at-risk areas have been appropriately identified. While Tier 1 UASI areas were similarly robust in both FY 2007 and FY 2008, the sensitivity of Tier 2 UASI areas to changes in the weights of indices used to calculate risk scores was significant in FY 2007, but improved in FY 2008. In FY 2007, very small changes in the weights for the indices used to quantify risk (for Tier 2 UASI areas at the eligibility cut point) resulted in changes in eligibility; however, FY 2008 results are more robust, as eligibility of urban areas is much less sensitive to changes in the index weights in the FY2008 model than it was in the FY2007 model. Appendix III provides an in-depth description of the sensitivity of the model to specific changes in the relative weights of each index for Tier 1 and Tier 2 UASI areas. Vulnerability Element of the Risk Analysis Model Has Limitations that Reduce Its Value Although the methodology DHS uses is reasonable, the vulnerability element of the risk analysis model—as currently calculated by DHS—has limitations that reduce its value for providing an accurate assessment of risk. DHS considered most areas of the country equally vulnerable to a terrorist attack in the risk analysis model used for fiscal years 2007 and 2008 and assigned a constant value to vulnerability, which ignores geographic differences in the social, built, and natural environments across states and urban areas. Although DHS recognized and described the significance of vulnerability in its FY 2006 model, the model used for fiscal years 2007 and 2008 did not attempt to measure vulnerability. Instead, DHS considered most areas of the country equally vulnerable to a terrorist attack due to the freedom of individuals to move within the nation. As a result, DHS did not measure vulnerability, but assigned it a constant value of 1.0 across all states and urban areas. Last year we reported that DHS measured the vulnerability of an asset type as part of its FY2006 risk analysis. DHS used internal subject matter experts who analyzed the general attributes of an asset type against various terrorist attack scenarios by conducting site vulnerability analyses on a sample of sites from the asset type in order to catalog attributes for the generic asset. These experts evaluated vulnerability by attack scenario and asset type pairs and assigned an ordinal value to the pair based on 10 major criteria. In describing its FY 2006 methodology, DHS acknowledged that because all attack types are not necessarily applicable to all infrastructures, the values for threat must be mapped against vulnerability to represent the greatest likelihood of a successful attack. DHS also acknowledged that vulnerability of an infrastructure asset was also a function of many variables and recognized that it did not have sufficient data on all infrastructures to know what specific vulnerabilities existed for every infrastructure, what countermeasures had been deployed, and what impact on other infrastructures each asset had. 
At that time, DHS noted it would require substantial time and resource investment to fully develop the capability to consistently assess and compare vulnerabilities across all types of infrastructure. Vulnerability is a crucial component of risk assessment. An asset may be highly vulnerable to one mode of attack but have a low level of vulnerability to another, depending on a variety of factors, such as countermeasures already in place. According to our risk management framework, the vulnerability of an asset is the probability that a particular attempted attack will succeed against a particular target or class of targets. It is usually measured against some set of standards, such as availability/predictability, accessibility, countermeasures in place, and target hardness (the material construction characteristics of the asset). Each of these four elements can be evaluated based on a numerical assignment corresponding to the conditional probability of a successful attack. Additionally, other research has developed methods to measure vulnerability across urban areas. For example, one study described a quantitative methodology to characterize the vulnerability of U.S. urban centers to terrorist attack for the potential allocation of national and regional funding to support homeland security preparedness and response in U.S. cities. This study found that vulnerability varied across the country, especially in urban areas. The study noted that "place matters," and a one-size-fits-all strategy ignores geographic differences in the social, built, and natural environments. Furthermore, in February 2008 the Secretary of DHS said that "as we reduce our vulnerabilities, the vulnerabilities change as well." However, while earlier iterations of the risk analysis model attempted to measure vulnerability, DHS's risk analysis model now considers the states and urban areas of the country equally vulnerable to a terrorist attack and assigns a constant value to vulnerability, which ignores geographic differences. Conclusions In fiscal year 2008, DHS will distribute approximately $1.6 billion to states and urban areas through its Homeland Security Grant Program – a program that has already distributed approximately $20 billion over the past six years – to prevent, protect against, respond to, and recover from acts of terrorism or other catastrophic events. Given that risk management has been endorsed by the federal government as a way to direct finite resources to those areas that are most at risk of terrorist attack under conditions of uncertainty, it is important that DHS use a reasonable risk-based allocation methodology and risk analysis model as it allocates those limited resources. DHS's risk-based allocation methodology and risk analysis model are generally reasonable tools for measuring relative risk within a given fiscal year, considering the use of a generally accepted risk calculation formula, the decreased sensitivity of key model results to incremental changes in the assumptions related to Tier 1 UASI grantees or eligibility for Tier 2 UASI funding, the reliability of the indices that make up the consequences component, and the adoption of MSAs to calculate urban area footprints. However, the element of vulnerability in the risk analysis model could be improved to more accurately reflect risk.
Vulnerability is a crucial component of risk assessment, and our work shows that DHS needs to measure vulnerability as part of its risk analysis model to capture variations in vulnerability across states and urban areas. Recommendations To strengthen DHS’s methodology for determining risk, we are recommending that the Secretary of DHS take the following action: Instruct FEMA, I&A, and NPPD - DHS components each responsible for aspects of the risk-based methodology used to allocate funds under the Homeland Security Grant Program - to formulate a method to measure vulnerability in a way that captures variations across states and urban areas, and apply this vulnerability measure in future iterations of this risk-based grant allocation model. Agency Comments We requested comments on a draft of this report from the Secretary of Homeland Security, FEMA, I&A, and NPPD, or their designees. In email comments on the draft report, FEMA and I&A concurred with our recommendation that they formulate a method to measure vulnerability in a way that captures variations across states and urban areas and apply this vulnerability measure in future iterations of the risk-based grant allocation model. FEMA, I&A, and NPPD also provided technical comments, which we incorporated as appropriate. We are sending copies of this correspondence to the appropriate congressional committees, and the Secretary of Homeland Security. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. For further information about this report, please contact William Jenkins, Jr., Director, GAO Homeland Security and Justice Issues Team, at (202)-512-8777 or at [email protected]. GAO staff members who were major contributors to this report are listed in appendix IV. Appendix I: Briefing for Congressional Committees, February 11-25, 2008 For the third consecutive year, GAO has been mandated as part of DHS’s annual appropriation to review and assess the HSGP’s risk analysis model and risk-based allocation methodology for determining risk and distributing funds. We responded to the mandate in February 2008 by briefing the staffs of congressional committees on the results of this review. During the course of our engagement, we had ongoing dialog with DHS officials regarding the extent to which written criteria were used in the development of the Threat Index. At that time, officials from DHS’s Office of Intelligence and Analysis stated that the criteria were not documented. As a result, we noted in the accompanying presentation slides that DHS’s approach to measuring threat did not include specific, written criteria to use when determining the threat tiers into which states and urban areas are placed. As part of GAO’s agency protocols, we convened an exit conference with DHS officials which occurred on April 14, 2008. We provided them with a statement of facts to reflect the information gathered during our engagement. At this exit conference an official from the Office of Intelligence and Analysis said DHS had used criteria in 2007 and 2008 for categorizing cities and states based on threat, and in further discussions with DHS we were able to independently review these documents and confirm that such criteria were used in the development of the Threat Index, which is reflected in the letter above. However, we did not modify the accompanying presentation contained in this appendix. 
Homeland Security Grant Program Introduction According to the Department of Homeland Security (DHS), in fiscal year 2007 DHS provided approximately $1.7 billion to states and urban areas through its Homeland Security Grant Program (HSGP) to prevent, protect against, respond to, and recover from acts of terrorism or other catastrophic events. DHS plans to distribute about $1.6 billion for these grants in fiscal year 2008. The HSGP risk-based allocation process is used for the State Homeland Security Program (SHSP) and Urban Area Security Initiative (UASI). In addition, DHS used this same approach to allocate $655 million in fiscal year 2007 under the Infrastructure Protection Program. Objectives In response to a legislative mandate and discussions with relevant congressional staff, we addressed the following questions: 1. What methodology did DHS use to allocate HSGP funds for fiscal years 2007 and 2008, including any changes DHS made to the eligibility and allocation processes for fiscal year 2008 and the placement of states and urban areas within threat tiers, and why? 2. How reasonable is DHS's methodology? Scope and Methodology We analyzed DHS documents, including the FY2007 and FY2008 risk analysis models, grant guidance, and presentations, and interviewed DHS officials about: the HSGP grant determination process in FY07 and any changes to the process for FY08; the process by which DHS's risk analysis model is used to estimate relative risk (Risk = Threat*(Vulnerability & Consequences)); how the effectiveness assessment process is conducted; how final allocation decisions are made; and DHS's methodology for ranking grantees by tiered groups and the impact of this ranking on funding allocations. We did our work from September 2007 through February 2008 in accordance with generally accepted government auditing standards (GAGAS). Background: We've reviewed this program for the last 3 years. Over that period, DHS has adopted a process of "continuous improvement" to its methods for estimating risk and measuring applicants' effectiveness. Inherent uncertainty is associated with estimating risk of terrorist attack, requiring the application of policy and analytic judgments. The use of sensitivity analysis can help to gauge what effects key sources of uncertainty have on outcomes. Sensitivity of the risk analysis In FY 2007, DHS had developed a greater understanding of the sensitivity of the risk model as a result of its changes to the model. GAO's analysis of the FY 2007 model: It takes sizable changes to the weights of these indices used to quantify risk to change the areas that compose the Tier 1 list. For those urban areas ranked near the bottom of the Tier 2 list, very small changes in the weights for the indices used to quantify risk can result in changes in eligibility. According to DHS officials, there were a number of changes in the rankings, and these changes were driven by the required change in FY2008 to use MSAs. Effectiveness Assessment For fiscal year 2007, DHS assessed the applications submitted by states and eligible urban areas. DHS used a peer-review process to assess and score the effectiveness of proposed investments by: engaging the states in identifying and selecting peer reviewers; having peer reviewers individually score investments; and assigning peer reviewers to panels to make final effectiveness score determinations.
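A minimal sketch of how reviewer scores might be averaged into an effectiveness score and then applied as an allocation multiplier is shown below. The averaging of individual reviewers' scores and the notion of an "effectiveness multiplier" come from the process described earlier in this report; the adjustment band, reviewer scores, and dollar amount are invented for illustration and do not reflect DHS's actual factors.

    # Illustrative sketch: average peer-review scores into an effectiveness score and
    # apply it as a multiplier to a risk-based allocation. The +/- 10 percent band and
    # all numbers are hypothetical; the report does not specify DHS's actual adjustment factors.

    def effectiveness_score(panel_scores):
        """Average the individual reviewers' scores (0-100 scale)."""
        return sum(panel_scores) / len(panel_scores)

    def adjusted_allocation(risk_based_dollars, effectiveness, midpoint=50.0, max_swing=0.10):
        """Raise or lower the risk-based allocation by up to max_swing, depending on
        whether the effectiveness score falls above or below the midpoint."""
        multiplier = 1.0 + max_swing * (effectiveness - midpoint) / midpoint
        return risk_based_dollars * multiplier

    scores = [72, 68, 80, 75]            # hypothetical reviewer scores for one applicant
    eff = effectiveness_score(scores)    # 73.75
    print(round(adjusted_allocation(10_000_000, eff)))  # 10,475,000 for these inputs

Under this toy scheme, an applicant scoring above the midpoint receives slightly more than its risk-based share and one scoring below receives slightly less, which is the directional behavior the report attributes to DHS's multiplier.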
FY 2007 Effectiveness Assessment. Final Allocation Process – FY 2007 Grants Based on Both Risk and Effectiveness Scores DHS allocated funds based on the risk scores of states and urban areas, as adjusted by their effectiveness scores. SHSP provided a minimum allocation, ensuring no state or territory's allocation falls below the minimum levels established by the USA PATRIOT Act. For UASI, DHS established maximum and minimum allocations to minimize variations in some urban areas' final allocations between years. The minimum was 0.375% of all funds appropriated for SHSP and UASI. Final Allocation Process – Ranking UASI Grantees by Tiered Groups In fiscal year 2007, 45 eligible candidates were grouped into two tiers according to relative risk. Tiering was established based on a policy judgment by DHS leadership, according to DHS grant officials. Tier I included the 6 highest risk areas; Tier II included the other 39 candidate areas ranked by risk. FY 2007 Tier I Urban Areas = 6 Urban Areas, $410,795,000 allocated (55 percent of available funds). FY 2007 Tier II Urban Areas = 39 Urban Areas, $336,105,000 allocated (45 percent of available funds). 60 eligible UASI areas in FY 2008: Tier I = 7 highest risk areas and eligible for 55 percent of available funds -- $429,896,500. Tier II = 53 areas (14 more than FY 2007) and eligible for 45 percent of available funds -- $351,733,500. According to DHS officials, the expansion to 60 eligible UASI areas for FY2008 is a policy decision largely driven by two factors: 1. The new requirement that FEMA use MSAs; 2. The desire to remain consistent with prior-year funding by continuing to fund urban areas that were already receiving UASI funds. Observations on the Reasonableness of the HSGP Grant Distribution Methodology DHS could take an additional step to evaluate the reliability and validity of the peer review process. One way to effectively assess the potential for human bias is to have a sample of the same applications independently rated by multiple panels to provide a measure of inter-rater reliability. DHS identified resource constraints as a reason for not measuring inter-rater reliability. Appendix II: Identifying Eligible Urban Areas As we reported in 2007, DHS first had to determine the geographic boundaries or footprint of candidate urban areas within which data were collected to estimate risk in order to determine the urban areas that were eligible to receive UASI grants. In fiscal year 2005, the footprint was limited to city boundaries (and did not include the 10-mile buffer zone). DHS chose to further redefine the footprint for fiscal year 2006, on the basis of comments from state and local governments. DHS took several steps to identify this footprint; these included: Identifying areas with population greater than 100,000 persons and areas (cities) that had any reported threat data during the past year. For fiscal year 2006, DHS started with a total of 266 cities. Combining cities or adjacent urban counties with shared boundaries to form single jurisdictions. For fiscal year 2006, this resulted in 172 urban areas. Drawing a buffer zone around identified areas. A 10-mile buffer was then drawn from the border of that city/combined entity to establish candidate urban areas. This area was used to determine what information was used in the risk analysis, and represents the minimum area that had to be part of the state's or urban area's defined grant application area.
According to DHS, for fiscal year 2006, it considered other alternatives such as a radius from a city center, although such a solution created apparent inequities among urban areas. DHS incorporated buffer zones at the suggestion of stakeholders, although this action resulted in making the analysis more difficult, according to a DHS official. In addition, DHS officials told us the steps taken to determine the footprint were based on the "best fit," as compared with other alternatives. DHS did not provide details on what criteria this comparison was based on. A principal change between fiscal year 2007 and 2008 was the method used to identify the footprint, or boundaries, of UASI areas for the purposes of calculating relative risk. In fiscal year 2008, DHS used Metropolitan Statistical Areas (MSAs) from the Census Bureau, as required under the Implementing Recommendations of the 9/11 Commission Act of 2007. Table 1 below provides additional information, listing for each urban area the geographic area captured by its prior footprint and the area captured by its MSA. Appendix III: DHS's Model is Robust for Tier 1 UASI Areas Population Index: Neither maximizing nor minimizing the weight of the Population Index resulted in the movement of an area into or out of Tier 1 for either FY 2007 or FY 2008. Economic Index: In FY 2007, minimizing the weight of the Economic Index had no effect on Tier 1 placement, but increasing the weight of the Economic Index by 12.8% resulted in a new area moving into Tier 1, displacing an area that had previously been ranked in the top 7. In FY 2008, lowering the weight of the Economic Index by 15.25% resulted in a new area moving into the top 7 ranked areas, displacing an area that had been previously ranked as Tier 1, but maximizing the weight of the Economic Index had no effect on Tier 1 placement. National Infrastructure Index: In FY 2007, maximizing the weight of the National Infrastructure Index did not result in any change in those areas designated Tier 1, but lowering the National Infrastructure Index by 5.53% resulted in a new area moving into the Tier 1 areas, displacing an area that had been previously ranked as Tier 1. In FY 2008, increasing the weight of the National Infrastructure Index by 4.68% resulted in a new area moving into the top 7 ranked areas, displacing an area that had been previously ranked as Tier 1. Similarly, lowering the National Infrastructure Index by 15% resulted in a new area moving into the Tier 1 areas. National Security Index: In FY 2007, minimizing the weight of the National Security Index also did not result in any change in those areas designated Tier 1, but increasing the National Security Index by 7.5% resulted in a new area moving into Tier 1, displacing an area that had been previously ranked as Tier 1. In FY 2008, lowering the weight of the National Security Index by 3.73% resulted in a new area moving into the top 7 ranked areas, displacing an area that had been previously ranked as Tier 1. Increasing the National Security Index by 10% resulted in a new area moving into Tier 1, also displacing an area that had been previously ranked as Tier 1. Urban Area Sensitivity to Changes in Consequence Index Weights is Reduced in FY 2008 for Funding Eligibility While Tier 1 areas were similarly robust in both FY 2007 and FY 2008, the sensitivity of Tier 2 areas to changes in the weights of indices used to calculate risk scores was significant in FY 2007, but improved in FY 2008.
In FY 2007, very small changes in the weights for the indices used to quantify risk for Tier 2 urban areas at the eligibility cut point resulted in changes in eligibility; however, FY 2008 results are more robust, as eligibility of urban areas is much less sensitive to changes in the index weights in the FY2008 model than it was in the FY2007 model. Population Index: In FY 2007, decreasing the weight of the Population Index by 0.4% or increasing the weight of the Population Index by 4% resulted in one area displacing another area with regard to eligibility. However, neither maximizing nor minimizing the Population Index resulted in one area displacing another area with regard to eligibility in FY 2008. Economic Index: In FY 2007, lowering the weight of the Economic Index by 0.24% or increasing the weight of the Economic Index by 2.4% resulted in one area displacing another area with regard to eligibility. By contrast, in FY 2008 an increase in the weight of the Economic Index of 12.33% or a decrease of 10.48% was required for one area to displace another area with regard to eligibility. National Infrastructure Index: In FY 2007, changing the weight for the National Infrastructure Index by 1.58% (either increase or decrease) resulted in one area displacing another area with regard to eligibility, while the FY 2008 National Infrastructure Index required an increase in the weight of 5.67% or a decrease in the weight of 4.54% to result in one area displacing another area with regard to eligibility. National Security Index: In FY 2007, increasing the weight for the National Security Index by 0.08% resulted in one area displacing another area with regard to eligibility, but FY 2008 required an increase in the weight for the National Security Index by 2.34% or a decrease in the weight of the National Security Index by 1.37% to result in one area displacing another area with regard to eligibility. Appendix IV: Contacts and Staff Acknowledgments For further information about this statement, please contact William O. Jenkins Jr., Director, Homeland Security and Justice Issues, at (202) 512-8777 or [email protected]. In addition to the contact named above, the following individuals also made major contributions to this report: GAO Homeland Security and Justice Issues Team—Chris Keisling, Assistant Director; John Vocino, Analyst-in-Charge; Orlando Copeland and Michael Blinde, Analysts; Linda Miller and Adam Vogt, Communications Analysts. Other major contributors to this report include: GAO Applied Methodology and Research Team—Chuck Bausell, Jr., Economist, and Virginia Chanley; and GAO Office of General Counsel—Frances Cook.
Since 2002, the Department of Homeland Security (DHS) has distributed almost $20 billion in funding to enhance the nation's capabilities to respond to acts of terrorism or other catastrophic events. In fiscal year 2007, DHS provided approximately $1.7 billion to states and urban areas through its Homeland Security Grant Program (HSGP) to prevent, protect against, respond to, and recover from acts of terrorism or other catastrophic events. As part of the Omnibus Appropriations Act of 2007, GAO was mandated to review the methodology used by DHS to allocate HSGP grants. This report addresses (1) the changes DHS has made to its risk-based methodology used to allocate grant funding from fiscal year 2007 to fiscal year 2008 and (2) whether the fiscal year 2008 methodology is reasonable. To answer these questions, GAO analyzed DHS documents related to its methodology and grant guidance, interviewed DHS officials about the grant process used in fiscal year 2007 and changes made to the process for fiscal year 2008, and used GAO's risk management framework based on best practices. For fiscal year 2008 HSGP grants, DHS is primarily following the same methodology it used in fiscal year 2007, but incorporated metropolitan statistical areas (MSAs) within the model used to calculate risk. The methodology consists of a three-step process--a risk analysis of urban areas and states based on measures of threat, vulnerability and consequences, an effectiveness assessment of applicants' investment justifications, and a final allocation decision. The principal change in the risk analysis model for 2008 is in the definition of the geographic boundaries of eligible urban areas. In 2007, the footprint was defined using several criteria, which included a 10-mile buffer zone around the center city. Reflecting the requirements of the Implementing Recommendations of the 9/11 Commission Act of 2007, DHS assessed risk for the Census Bureau's 100 largest MSAs by population in determining its 2008 Urban Areas Security Initiative (UASI) grant allocations. This change altered the geographic footprint of the urban areas assessed, aligning them more closely with the boundaries used by government agencies to collect some of the economic and population data used in the model. This may have resulted in DHS using data in its model that more accurately estimated the population and economy of those areas. The change to the use of MSA data in fiscal year 2008 also resulted in changes in the relative risk rankings of some urban areas. As a result, DHS officials expanded the eligible urban areas in fiscal year 2008 to a total of 60 UASI grantees, in part, to address the effects of this change to MSA data, as well as to ensure that all urban areas receiving fiscal year 2007 funding continued to receive funding in fiscal year 2008, according to DHS officials. Generally, DHS has constructed a reasonable methodology to assess risk and allocate funds within a given fiscal year. The risk analysis model DHS uses as part of its methodology includes empirical risk analysis and policy judgments to select the urban areas eligible for grants (all states are guaranteed a specified minimum percentage of grant funds available) and to allocate State Homeland Security Program (SHSP) and UASI funds. However, our review found that the vulnerability element of the risk analysis model has limitations that reduce its value. 
Measuring vulnerability is considered a generally-accepted practice in assessing risk; however, DHS's current risk analysis model does not measure vulnerability for each state and urban area. Rather, DHS considered all states and urban areas equally vulnerable to a successful attack and assigned every state and urban area a vulnerability score of 1.0 in the risk analysis model, which does not take into account any geographic differences. Thus, as a practical matter, the final risk scores are determined by the threat and consequences scores.
Background Although the specific duties police officers perform may vary among police forces, federal uniformed police officers are generally responsible for providing security and safety to people and property within and sometimes surrounding federal buildings. There are a number of federal uniformed police forces operating in the Washington MSA, of which 13 had 50 or more officers as of September 30, 2001. Table 1 shows the 13 federal uniformed police forces included in our review and the number of officers in each of the police forces as of September 30, 2002. On November 25, 2002, the Homeland Security Act of 2002 was enacted into law. The act, among other things, restructured parts of the executive branch of the federal government to better address the threat to the United States posed by terrorism. The act established a new Department of Homeland Security (DHS), which includes two uniformed police forces within the scope of our review—the Federal Protective Service and the Secret Service Uniformed Division. These police forces were formerly components of the General Services Administration and the Department of the Treasury, respectively. Another component of DHS is the TSA, which protects the nation’s transportation systems. TSA, which was formerly a component of the Department of Transportation, includes the Federal Air Marshal Service, which is designed to provide protection against hijacking and terrorist attacks on domestic and international airline flights. The Federal Air Marshal Program increased significantly after the September 11, 2001, terrorist attacks, resulting in the need for TSA to recruit many Air Marshals during fiscal year 2002. By fiscal year 2003, the buildup in the Federal Air Marshal Program had been substantially completed. Federal Air Marshals are not limited to the grade and pay step structure of the federal government’s General Schedule. As a result, TSA has been able to offer air marshal recruits higher compensation and more flexible benefit packages than many other federal police forces. Federal uniformed police forces operate under various compensation systems. Some federal police forces are covered by the General Schedule pay system and others are covered by different pay systems authorized by various laws. Since 1984, all new federal employees have been covered by the Federal Employees Retirement System (FERS). Federal police forces provide either standard federal retirement benefits or federal law enforcement retirement benefits. Studies of employee retention indicate that turnover is a complex and multifaceted problem. People leave their jobs for a variety of reasons. Compensation is often cited as a primary reason for employee turnover. However, nonpay factors, such as age, job tenure, job satisfaction, and job location, may also affect individuals’ decisions to leave their jobs. During recent years, the federal government has implemented many human capital flexibilities to help agencies attract and retain sufficient numbers of high-quality employees to complete their missions. Human capital flexibilities can include actions related to areas such as recruitment, retention, competition, position classification, incentive awards and recognition, training and development, and work-life policies. We have stated in recent reports that the effective, efficient, and transparent use of human capital flexibilities must be a key component of agency efforts to address human capital challenges. 
The tailored use of such flexibilities for recruiting and retaining high-quality employees is an important cornerstone of our model of strategic human capital management. Scope and Methodology To address our objectives, we identified federal uniformed police forces with 50 or more officers in the Washington MSA—13 in all. Specifically, we reviewed OPM data to determine the executive branch federal uniformed police forces with 50 or more police officers in the Washington MSA. We reviewed a prior report issued by the Department of Justice’s Bureau of Justice Statistics and our prior reports to determine the judicial and legislative branches’ federal uniformed police forces with 50 or more police officers in the Washington MSA. In addressing each of the objectives, we interviewed officials responsible for human capital issues at each of the 13 police forces and obtained documents on recruitment and retention issues. Using this information, we created a survey and distributed it to the 13 police forces to obtain information on (1) entry-level officer pay and benefits, types of officer duties, and minimum entry-level officer qualifications; (2) officer turnover rates and the availability and use of human capital flexibilities to retain officers; and (3) difficulties in recruiting officers, and the availability and use of human capital flexibilities to improve recruiting. We reviewed and analyzed the police forces’ responses for completeness and accuracy and followed-up on any missing or unclear responses with appropriate officials. Where possible, we verified the data using OPM’s Central Personnel Data File. In reviewing duties performed by police officers at the 13 police forces, we relied on information provided by police force officials and did not perform a detailed analysis of the differences in duties and responsibilities. Additionally, due to resource limitations, we did not survey officers who separated from the police forces to determine their reasons for leaving. We obtained this information from officials at the police forces. Although some of the police forces have police officers detailed at locations throughout the country, the data in this report are only for officers stationed in the Washington MSA. Therefore, these data are not projectable nationwide. Entry-Level Pay and Benefits Varied among the Police Forces Entry-level pay and retirement benefits varied widely across the 13 police forces. Annual pay for entry-level police officers ranged from $28,801 to $39,427, as of September 30, 2002. Officers at 4 of the 13 police forces received federal law enforcement retirement benefits, while officers at the remaining 9 police forces received standard federal employee retirement benefits. According to officials, all 13 police forces performed many of the same types of general duties, such as protecting people and property and screening people and materials entering and/or exiting buildings under their jurisdictions. Eleven of the 13 police forces had specialized teams and functions, such as K-9 and/or SWAT. The minimum qualification requirements and the selection processes were generally similar among most of the 13 police forces. At $39,427 per year, the U.S. Capitol Police, Library of Congress Police, and Supreme Court Police forces had the highest starting salaries for entry-level officers, while entry-level officers at the NIH Police and Federal Protective Service received the lowest starting salaries at $28,801 per year. 
The salaries for officers at the remaining 8 police forces ranged from $29,917 to $38,695. Entry-level officers at 5 of the 13 police forces received an increase in pay, ranging from $788 to $1,702, upon successful completion of basic training. Officers at 4 of the 13 police forces received federal law enforcement retirement benefits and were among the highest paid, with starting salaries ranging from $37,063 to $39,427. Figure 1 provides a comparison of entry-level officer pay and retirement benefits at the 13 police forces. Entry-level officers at 12 of the 13 police forces (all but the U.S. Postal Service Police) received increases in their starting salaries between October 1, 2002, and April 1, 2003. Entry-level officers at three of the four police forces (FBI Police, Federal Protective Service, and NIH Police) with the lowest entry-level salaries as of September 30, 2002, received raises of $5,584, $4,583, and $4,252, respectively, during the period from October 1, 2002, through April 1, 2003. In addition, entry-level officers at both the U.S. Capitol Police and Library of Congress Police—two of the highest paid forces—also received salary increases of $3,739 during the same time period. These pay raises received by entry-level officers from October 1, 2002, through April 1, 2003, narrowed the entry-level pay gap for some of the 13 forces. For example, as of September 30, 2002, entry-level officers at the FBI Police received a salary $8,168 less than that of an entry-level officer at the U.S. Capitol Police. However, as of April 1, 2003, the pay gap between entry-level officers at the two forces had narrowed to $6,323. Figure 2 provides information on pay increases that entry-level officers received from October 1, 2002, through April 1, 2003, along with entry-level officer pay rates as of April 1, 2003. The Secret Service noted that the Uniformed Division has full police powers in Washington, D.C., and that it further has the authority to perform its protective duties throughout the United States. Although there are similarities in the general duties, there were differences among the police forces with respect to the extent to which they performed specialized functions. Table 3 shows that 11 of the 13 police forces reported that they performed at least one specialized function; 2 police forces (Government Printing Office Police and U.S. Postal Service Police) reported that they did not perform specialized functions. The minimum qualification requirements and the selection processes were generally similar among most of the 13 police forces. As part of the selection process, all 13 police forces required new hires to have successfully completed an application, an interview(s), a medical examination, a background investigation, and a drug test. Each force also had at least one additional requirement, such as a security clearance or physical fitness evaluation. The U.S. Postal Service Police was the only force that did not require a high school diploma or prior law enforcement experience. For additional information on qualification requirements and the selection process for the 13 police forces, see appendix IV. Sizable Differences in Turnover Rates among the 13 Police Forces Total turnover at the 13 police forces nearly doubled from fiscal years 2001 to 2002. Additionally, during fiscal year 2002, 8 of the 13 police forces experienced their highest annual turnover rates over the 6-year period, from fiscal years 1997 through 2002.
There were sizable differences in turnover rates among the 13 police forces during fiscal year 2002. NIH Police reported the highest turnover rate at 58 percent. The turnover rates for the remaining 12 police forces ranged from 11 percent to 41 percent. Of the 729 officers who separated from the 13 police forces in fiscal year 2002, about 82 percent (599) voluntarily separated, excluding retirements. About 53 percent (316) of the 599 officers who voluntarily separated from the police forces in fiscal year 2002 went to TSA. Additionally, about 65 percent of the officers who voluntarily separated from the 13 police forces during fiscal year 2002 had fewer than 5 years of service on their police forces. The total number of separations at all 13 police forces nearly doubled (from 375 to 729) between fiscal years 2001 and 2002. Turnover increased at all but 1 of the police forces (Library of Congress Police) over this period. The most significant increases in turnover occurred at the Bureau of Engraving and Printing Police (200 percent) and the Secret Service Uniformed Division (about 152 percent). In addition, during fiscal year 2002, 8 of the 13 police forces experienced their highest annual turnover rates over the 6-year period, from fiscal years 1997 through 2002. Figure 3 displays the total number of separations for the 13 police forces over the 6-year period. The turnover rates at the 13 police forces ranged from 11 percent at the Library of Congress Police to 58 percent at the NIH Police in fiscal year 2002. In addition to the NIH Police, 3 other police forces had turnover rates of 25 percent or greater during fiscal year 2002. The U.S. Mint Police reported the second highest turnover rate at 41 percent, followed by the Bureau of Engraving and Printing Police at 27 percent and the Secret Service Uniformed Division at 25 percent. Table 4 shows that at each of the 13 police forces, turnover was overwhelmingly due to voluntary separations; about 18 percent (130) of the separations were due to retirements, disability, and involuntary separations. There was no clear pattern evident between employee pay and turnover rates during fiscal year 2002. For example, while some police forces with relatively highly paid entry-level officers such as the Library of Congress Police (11 percent) and the Supreme Court Police (13 percent) had relatively low turnover rates, other police forces with relatively highly paid entry-level officers such as the U.S. Mint Police (41 percent), Bureau of Engraving and Printing Police (27 percent), and Secret Service Uniformed Division (25 percent) experienced significantly higher turnover rates. Additionally, turnover varied significantly among the 5 police forces with relatively lower paid entry-level officers. For example, while entry-level officers at the Federal Protective Service (19 percent) and the NIH Police (58 percent) both received the lowest starting pay, turnover differed dramatically. Likewise, no clear pattern existed regarding turnover among police forces receiving federal law enforcement retirement benefits and those receiving traditional federal retirement benefits. For example, entry-level officers at the Library of Congress Police, U.S. Capitol Police, and Supreme Court Police all received equivalent pay in fiscal year 2002.
However, the Library of Congress (11 percent) had a lower turnover rate than the Capitol Police (13 percent) and Supreme Court Police (16 percent), despite the fact that officers at the latter 2 police forces received federal law enforcement retirement benefits. In addition, while officers at both the Park Police (19 percent) and Secret Service Uniformed Division (25 percent) received law enforcement retirement benefits, these forces experienced higher turnover rates than some forces such as U.S. Postal Service Police (14 percent) and FBI Police (17 percent), whose officers did not receive law enforcement retirement benefits and whose entry-level officers received lower starting salaries. More than half (316) of the 599 officers who voluntarily separated from the police forces in fiscal year 2002 went to TSA—nearly all (313 of 316) to become Federal Air Marshals where they were able to earn higher salaries, federal law enforcement retirement benefits, and a type of pay premium for unscheduled duty equaling 25 percent of their base salary. The number (316) of police officers who voluntarily separated from the 13 police forces to take positions at TSA nearly equaled the increase in the total number of separations (354) that occurred between fiscal years 2001 and 2002. About 25 percent (148) of the voluntarily separated officers accepted other federal law enforcement positions, excluding positions at TSA, and about 5 percent (32 officers) took nonlaw enforcement positions, excluding positions at TSA. Furthermore, about 9 percent (51) of the voluntarily separated officers took positions in state or local law enforcement or separated to, among other things, continue their education. Officials were unable to determine where the remaining 9 percent (52) of the voluntarily separated officers went. Table 5 provides a summary of where officers who voluntarily separated in fiscal year 2002 went. Figure 4 shows a percentage breakdown of where the 599 officers who voluntarily separated from the 13 police forces during fiscal year 2002 went. Although we did not survey individual officers to determine why they separated from these police forces, officials from the 13 forces reported a number of reasons that officers had separated, including to obtain better pay and/or benefits at other police forces, less overtime, and greater responsibility. Without surveying each of the 599 officers who voluntarily separated from their police forces in fiscal year 2002, we could not draw any definitive conclusions about the reasons they left. For additional details on turnover at the 13 police forces, see appendix II. The use of human capital flexibilities to address turnover varied among the 13 police forces. For example, officials at 4 of the 13 police forces reported that they were able to offer retention allowances, which may assist the forces in retaining experienced officers, and 3 of these police forces used this tool to retain officers in fiscal year 2002. The average retention allowances paid to officers in fiscal year 2002 were about $1,000 at the Pentagon Force Protection Agency, $3,500 at the Federal Protective Service, and more than $4,200 at the NIH Police. 
The police forces reported various reasons for not making greater use of available human capital flexibilities in fiscal year 2002, including lack of funding for human capital flexibilities, lack of awareness among police force officials that the human capital flexibilities were available, and lack of specific requests for certain flexibilities such as time-off awards or tuition reimbursement. The limited use of human capital flexibilities by many of the 13 police forces and the reasons provided for the limited use are consistent with our governmentwide study of the use of such authorities. In December 2002, we reported that federal agencies have not made greater use of such flexibilities for reasons such as agencies’ weak strategic human capital planning, inadequate funding for using these flexibilities given competing priorities, and managers’ and supervisors’ lack of awareness and knowledge of the flexibilities. We further stated that the insufficient or ineffective use of flexibilities can significantly hinder the ability of agencies to recruit, hire, retain, and manage their human capital. Additionally, in May 2003, we reported that OPM can better assist agencies in using human capital flexibilities by, among other things, maximizing its efforts to make the flexibilities more widely known to agencies through compiling, analyzing, and sharing information about when, where, and how the broad range of flexibilities are being used, and should be used, to help agencies meet their human capital management needs. For additional information on human capital flexibilities at the 13 police forces, see appendix III. Most Forces Experienced Recruitment Difficulties Nine of the 13 police forces reported difficulties recruiting officers to at least a little or some extent. Despite recruitment difficulties faced by many of the police forces, none of the police forces used important human capital recruitment flexibilities, such as recruitment bonuses and student loan repayments, in fiscal year 2002. Some police force officials reported that the human capital recruitment flexibilities were not used for various reasons, such as limited funding or that the flexibilities themselves were not available to the forces during the fiscal year 2002 recruiting cycle. Officials at 4 of the 13 police forces (Bureau of Engraving and Printing Police, the FBI Police, Federal Protective Service, and NIH Police) reported that they were having a great or very great deal of difficulty recruiting officers. In addition, officials at 5 police forces reported that they were having difficulty recruiting officers to a little or some extent or to a moderate extent. Among the reasons given for recruitment difficulties were: low pay; the high cost of living in the Washington, D.C., metropolitan area; difficulty completing the application/background investigation process; better retirement benefits at other law enforcement agencies. Conversely, officials at 4 of the 13 police forces (Library of Congress Police, the Supreme Court Police, U.S. Mint Police, and U.S. Postal Service Police) reported that they were not having difficulty recruiting officers. Library of Congress officials attributed their police force’s lack of difficulty recruiting officers to attractive pay and working conditions and the ability to hire officers at any age above 20 and who also will not be subject to a mandatory retirement age. 
Supreme Court officials told us that their police force had solved a recent recruitment problem by focusing additional resources on recruiting and emphasizing the force’s attractive work environment to potential recruits. U.S. Postal Service officials reported that their police force was not experiencing a recruitment problem because it hired its police officers from within the agency. Table 6 provides a summary of the level of recruitment difficulties reported by the 13 police forces. Although many of the police forces reported facing recruitment difficulties, none of the police forces used human capital recruitment tools, such as recruitment bonuses and student loan repayments, in fiscal year 2002. For more information on human capital flexibilities, see appendix III. Conclusions Without surveying each of the 599 officers who voluntarily separated from their police forces in fiscal year 2002, we could not draw any definitive conclusions about the reasons they left. However, officials at the 13 police forces included in our review reported that officers separated from their positions for such reasons as to (1) obtain better pay and/or benefits at other police forces, (2) work less overtime, and (3) assume greater responsibility. The number of separations across the 13 police forces included in our review increased by 354 between fiscal years 2001 and 2002. This increase almost equaled the number (316) of officers who voluntarily separated from their forces to join TSA. Given that TSA’s Federal Air Marshal Program has now been established, and the buildup in staffing has been substantially completed, the increase in turnover experienced in fiscal year 2002 at 12 of the 13 police forces may have been a one-time occurrence. Additionally, the recent pay increases received by officers at 12 of the 13 police forces, along with the potential implementation of various human capital flexibilities, might also help to address recruitment and retention issues experienced by many of the police forces. Agency Comments We requested comments on a draft of this report from each of the 13 federal uniformed police forces included in our review. We received written comments from 12 of the 13 police forces (the Federal Protective Service did not provide comments). Of the 12 police forces that commented, 11 either generally agreed with the information presented or did not express an overall opinion about the report. In its comments, the U.S. Secret Service raised four main issues relating to the pay, retirement benefits, and job responsibilities information. First, it suggested that we expand our review to include information on the compensation packages offered to separating officers, particularly those moving to TSA. However, our objective was to provide information on pay, retirement benefits, types of duties, turnover, and the use of human capital flexibilities at 13 federal uniformed police forces in the Washington, D.C. area. Our aim was not to compare the officers’ previous and new job pay, benefits, responsibilities, or training requirements. Second, the U.S. Secret Service suggested that we report that a pattern existed between employee turnover and pay. However, our discussions with human capital officials in the 13 police forces found that separating officers provided them with a variety of reasons why they chose to leave their police forces, including increased pay, additional benefits, greater job satisfaction, and personal reasons. 
We did not contact separating officers to determine why they decided to move to other jobs or whether their new jobs were comparable in pay, benefits, and job responsibilities. Nevertheless, with the information we obtained, we were unable to discern any clear patterns between employee turnover and pay. That is, turnover varied significantly among police forces that had similar pay for entry-level officers. Third, the U.S. Secret Service suggested that we calculate the differences in retirement benefits that would accrue to officers in the different forces. We noted in our report that different forces had different retirement plans with significant differences in benefits. However, calculating the retirement benefits of a hypothetical police officer at each of the forces was beyond the scope of our review. Finally, the U.S. Secret Service noted that fundamental differences exist among the agencies' authorities, responsibilities, duties, and training requirements, and that this could account for differences in compensation. We agree that differences exist among the 13 agencies, and we captured many of these differences in the report. However, we did not attempt to determine the extent to which these differences accounted for differences in police officer compensation. We also requested and received comments from OPM. OPM was concerned that the data provided in our report would lead to unintended conclusions, citing what it considered to be a lack of substantive analysis and comparisons of the pay systems involved. OPM further commented that the data and information we report must not serve as a basis for modifying the pay structure, salaries, or retirement system of any of the police forces. Our report provides information on the 13 federal uniformed police forces that had not previously been compiled in one place and that is useful in comparing entry-level pay, retirement benefits, types of duties, turnover rates, and the use of human capital flexibilities. In preparing this report, we worked closely with these police forces to obtain reliable information on these items, as well as the conditions and challenges confronting their operations. Nevertheless, we agree that more comprehensive information would be useful in deciding how best to deal with pay, benefit, and retention issues. As the executive branch agency responsible for establishing human capital policies and monitoring their implementation, OPM is in a good position to perform the additional analysis it believes would be useful to draw conclusions on such issues. Most of the police forces and OPM provided technical comments, which were incorporated into the report where appropriate. The Department of the Interior (U.S. Park Police), NIH, OPM, and the U.S. Supreme Court provided formal letters, and the U.S. Secret Service provided an internal memorandum, which are included in appendixes V through IX. We are sending copies of this report to the Attorney General, the Secretary of the Treasury, the Secretary of Defense, the Secretary of Homeland Security, the Secretary of the Interior, the Chair of the Capitol Police Board, the Librarian of Congress, the Public Printer, the Marshal of the Supreme Court, the Postmaster General, the Under Secretary of Transportation for Security, and the Directors of NIH, OPM, and the Pentagon Force Protection Agency. We will also provide copies of this report to the directors of each of the 13 police forces, relevant congressional committees, and Members of Congress.
We will make copies of this report available to other interested parties upon request. In addition, the report will be available at no charge on GAO's Web site at http://www.gao.gov. If you have any questions, please contact me at (202) 512-8777 or at [email protected] or Weldon McPhail, Assistant Director, at (202) 512-8644 or at [email protected]. See appendix X for additional GAO contacts and staff acknowledgments. Appendix I: Counties and Cities Included in the Washington Metropolitan Statistical Area Appendix II: Selected Turnover Data for the 13 Police Forces Table 7 shows, among other things, that during fiscal year 2002, 12 of the 13 police forces experienced increased turnover from the prior fiscal year, while 8 of the 13 police forces experienced their highest turnover rates over the 6-year period from fiscal years 1997 through 2002. Table 8 shows that officers with fewer than 5 years of experience on their forces accounted for about 65 percent of the voluntary separations in fiscal year 2002. Figure 5 shows that officers with fewer than 2 years of service on their forces accounted for about 35 percent of the voluntary separations in fiscal year 2002, and officers with 2 to 5 years of service comprised an additional 30 percent. Table 9 shows that approximately half (316) of the 599 police officers who voluntarily separated from their forces in fiscal year 2002 went to TSA. Of the 316 officers who went to TSA, about 53 percent (166) had fewer than 5 years of experience on their forces. An additional 19 percent (59) had 5 years to less than 10 years of experience on their forces. Table 10 shows that about 25 percent (148) of the 599 police officers who voluntarily separated from their forces in fiscal year 2002 took other federal law enforcement positions. Officers with fewer than 5 years of experience on their forces accounted for about 79 percent (117) of the separations to other federal law enforcement positions, and officers with 5 years to less than 10 years of experience accounted for an additional 16 percent (23). Table 11 shows that 11 of the 13 police forces surveyed reported problems retaining officers in the Washington MSA, ranging in severity from a little or some extent to a very great extent. Of these 11 police forces, 4 characterized their retention difficulties as being of a very great extent. Two police forces, the Government Printing Office Police and the Library of Congress Police, reported no difficulty with retention. Police forces reporting difficulties indicated a number of commonalities in terms of why officers had left the forces. Among the reasons given were better pay at other agencies; better benefits, including law enforcement retirement, at other agencies; better morale at other agencies; more challenging work at other agencies; promotional opportunities at other agencies; too much overtime at their police forces; and retirements from their police forces. Library of Congress Police officials attributed their low turnover rate to pay, working conditions, and the fact that the force does not have any age restrictions, which allows it to hire older, more experienced officers. Each of the forces with retention difficulties reported steps taken to address the problem, including providing retention allowances, improving training, and improving working conditions.
Additionally, officials from several police forces reported that they were considering providing increases in retention allowances and student loan repayments to address their retention difficulties. Only two police forces, the Pentagon Force Protection Agency and the Supreme Court Police, reported that the measures they had taken had solved the retention problem to a great extent; the remaining police forces indicated either that the measures taken had had little or no effect or that it was too early to determine whether the measures taken would solve the retention problem. Appendix III: Use of Human Capital Flexibilities Table 12 illustrates the use of human capital flexibilities by the 13 police forces included in our review. While agency officials reported that a variety of human capital flexibilities were available across the agencies, there was variation among agencies both in terms of the specific flexibilities available and in the frequency of their use. For instance, only 3 of the 13 agencies reported the availability of recruitment bonuses, and none were given in fiscal year 2002. Ten of the 13 agencies reported the availability of performance-based cash awards, and 9 of these agencies made such awards, with average award amounts ranging from $109 to $2,500. Appendix IV: Recruiting Strategies and New Hire Selection Process Appendix V: Comments from the Department of the Interior Appendix VI: Comments from the National Institutes of Health Appendix VII: Comments from the Office of Personnel Management Appendix VIII: Comments from the United States Secret Service Appendix IX: Comments from the Supreme Court of the United States Appendix X: GAO Contacts and Acknowledgments GAO Contacts Staff Acknowledgments In addition to the persons named above, Leo M. Barbour, Susan L. Conlon, Evan Gilman, Kimberley Granger, Geoffrey Hamilton, Laura Luo, Michael O'Donnell, Doris Page, George Scott, Lou V.B. Smith, Edward H. Stephenson, Jr., Maria D. Strudwick, Mark Tremba, and Gregory H. Wilmoth made key contributions to this report.
Officials at several federal uniformed police forces in the Washington, D.C., metropolitan area have raised concerns that disparities in pay and retirement benefits have caused their police forces to experience difficulties in recruiting and retaining officers. These concerns have increased during the past year with the significant expansion of the Federal Air Marshal Program, which has created numerous relatively high-paying job opportunities for existing federal uniformed police officers and reportedly has lured many experienced officers from their uniformed police forces. GAO's objectives were to (1) determine the differences that exist among selected federal uniformed police forces regarding entry-level pay, retirement benefits, and types of duties; (2) provide information on the differences in turnover rates among these federal uniformed police forces, including where officers who separated from the police forces went and the extent to which human capital flexibilities were available and used to address turnover; and (3) provide information on possible difficulties police forces may have faced recruiting officers and the extent to which human capital flexibilities were available to help these forces recruit officers. During fiscal year 2002, entry-level police officer salaries varied by more than $10,000 across the 13 police forces, from a high of $39,427 per year to a low of $28,801 per year. Four of the 13 police forces received federal law enforcement retirement benefits. Between October 1, 2002, and April 1, 2003, 12 of the 13 police forces received pay increases, which narrowed the pay gap for entry-level officers at some of the 13 forces. Officials at the 13 police forces reported that while officers performed many of the same types of duties, the extent to which they performed specialized functions varied. Total turnover at the 13 police forces nearly doubled (from 375 to 729) between fiscal years 2001 and 2002. Additionally, during fiscal year 2002, 8 of the 13 police forces experienced their highest annual turnover rates over the 6-year period, from fiscal years 1997 through 2002. Sizable differences existed in the turnover rates among the 13 federal uniformed police forces during fiscal year 2002. The availability and use of human capital flexibilities to retain employees, such as retention allowances, varied. GAO found that the increase in the number of separations (354) across the 13 police forces between fiscal years 2001 and 2002 almost equaled the number of officers (316) who left their forces to join the Transportation Security Administration (TSA). Given that the buildup in staffing for TSA's Federal Air Marshal Program has been substantially completed, the increase in turnover experienced in fiscal year 2002 at 12 of the 13 police forces may have been a one-time occurrence. Officials at 9 of 13 police forces reported at least some difficulty recruiting officers. However, none of the police forces used important human capital flexibilities, such as recruitment bonuses and student loan repayments, during fiscal year 2002.
GAO/T-HEHS-97-216
Background Charter schools are public schools that operate under a state charter (or contract) specifying the terms under which the schools may operate. They are established under state law, do not charge tuition, and are nonsectarian. State charter school laws and policies vary widely with respect to the degree of autonomy provided to the schools, the number of charter schools that may be established, the qualifications required for charter school applicants and teachers, and the accountability criteria that charter schools must meet. Since 1991, 29 states and the District of Columbia have enacted laws authorizing charter schools. In school year 1996-97, over 100,000 students were enrolled in nearly 500 charter schools in 16 states and the District of Columbia. Most charter schools are newly created. According to the Department of Education, of the charter schools operating as of January 1996, about 56 percent were newly created, while about 33 percent were converted from preexisting public schools and about 11 percent were converted from preexisting private schools. Appendix II shows the states that have enacted charter laws, and the number of charter schools in operation during the 1996-97 school year, by state. Both the Congress and the administration have shown support for charter schools. For example, in amending the Elementary and Secondary Education Act in 1994, the Congress established a grant program to support the design and implementation of charter schools. In addition, under the Goals 2000: Educate America Act, states are allowed to use federal funds to promote charter schools. The administration proposed doubling the roughly $50 million made available under the new charter school grant program in fiscal year 1997 to $100 million for fiscal year 1998. Finally, in his 1997 State of the Union Address, the President called for the establishment of 3,000 charter schools nationwide by the next century. Recent congressional hearings focused on the development of charter schools in various states. Concerns were raised during the hearings by charter school operators and others about whether charter schools were receiving equitable allocations of federal categorical grant funds. Recent research conducted by the Department of Education and by the Hudson Institute, a private, not-for-profit public policy research organization, raised similar concerns. Although dozens of financial aid programs exist for public elementary and secondary schools, two programs—title I and IDEA—are by far the largest federal programs. Title I Program Title I is the largest federal elementary and secondary education aid program. The Department of Education administers title I, which received over $7 billion in federal funding in fiscal year 1997. Under the program, grants are provided to school districts—or local education agencies (LEA), as defined in federal statute and regulations—to assist them in educating disadvantaged children, that is, children with low academic achievement attending schools serving relatively low-income areas. The program is designed to provide increasing levels of assistance to schools that have higher numbers of poor children. Nationwide, the Department of Education makes available to LEAs an annual average of about $800 for each child counted in the title I allocation formula. Under title I, the federal government awards grants to LEAs through state education agencies (SEA). SEAs are responsible for administering the grants and distributing the funds to LEAs.
About 90 percent of the funds the Congress appropriates is distributed in the form of basic grants, while about 10 percent is distributed as concentration grants, which are awarded to LEAs serving relatively higher numbers of children from low-income families. Roughly 90 percent of LEAs nationwide receive basic grants. To be eligible for basic grants, LEAs generally must have enrolled at least 10 children from low-income families, and these children must constitute more than 2 percent of their school-aged population. To be eligible for concentration grants, LEAs generally must have enrolled more than 6,500 children from low-income families, or more than 15 percent of their students must be from low-income families. An LEA that receives title I funds and has more than one school within its district has some discretion in allocating these funds to individual schools. The LEA must rank its schools according to the proportion of enrolled children in each school who come from low-income families. LEAs must use the same measure of poverty in ranking all their schools, but LEAs have some discretion in choosing a particular measure. LEAs must allocate title I funds or provide title I services first to schools that have more than 75 percent of their students coming from low-income families. After providing funds or services to these schools, LEAs have the option of serving schools that do not meet the 75-percent criterion with remaining funds. Although an LEA is not required to allocate the same per-child amount to each school in its district, it may not allocate a higher amount per child to schools with lower poverty rates than to schools with higher poverty rates. IDEA Program IDEA, part B, is a federal grant program administered by the Department of Education that is designed to assist states in paying the costs of providing an education to children aged 3 to 21 with disabilities. The act requires, among other things, that states provide a free appropriate public education to all children with disabilities and that these children be served in the least restrictive environment possible. The Congress appropriated $3.5 billion for the program in fiscal year 1997. These funds were expected to provide, on average, about $625 of services for each of 577,000 eligible preschool children, and $536 of services for each of 5.8 million eligible elementary and secondary school students. The amount of IDEA funds each state receives is based on the number of children with disabilities it served during the preceding fiscal year, the national average per-pupil expenditure, and the amount appropriated by the Congress for the program. The per-disabled-pupil amount that can be allocated for IDEA services is capped at 40 percent of the national average per-pupil expenditure. States use their own formulas to allocate funds. States must provide at least 75 percent of the IDEA funds they receive to eligible LEAs or other public authorities, and they may reserve the rest for statewide programs. Before the 1997 IDEA reauthorization, an LEA entitled to an allotment of less than $7,500 could not receive funding directly, according to federal statutory provisions. Instead, the LEA had to either rely on the state for services or join with other LEAs to collectively meet the $7,500 threshold and receive funds to serve eligible students. In reauthorizing IDEA, the Congress removed the $7,500 threshold. As a result, LEAs, including charter schools that are treated as LEAs, are no longer required to join with other LEAs in order to meet that threshold. Each state has different procedures for allocating special education aid to LEAs. Some states use census information to allocate a fixed amount per eligible student.
Other states allocate funds on the basis of reimbursement rates for allowable expenses. Still other states allocate funding to LEAs on the basis of the severity and types of students' disabilities. Charter Schools' Federal Funding Arrangements Vary States use several arrangements to provide funds to charter schools. In general, states allocate title I funds, and IDEA funds or services, to charter schools using one of three approaches. The seven states in our review used all three. Under the first approach, states allocate funds directly to charter schools, treating each school as its own LEA. Under the second approach, states allocate title I funds and IDEA funds or services to charter schools' parent LEAs. Charter schools, along with other public schools in the district, then receive their share of funds or services from their parent LEAs. The third approach for allocating funds to charter schools involves a mixture of the first and second approaches. In general, a charter school in a state using this approach receives federal funds directly from the SEA—and thus is treated as an LEA—if the school was chartered by a state agency, or through a parent LEA, if the school was chartered by a district or substate agency. States using this model include Arizona, Michigan, and Texas. Regardless of which of the three approaches states use, individual charter schools are generally allocated funds on the basis of whether they are treated as (1) an independent LEA, or school district (independent model), or as (2) a dependent of an LEA—that is, as a public school component of a preexisting school district (dependent model). Throughout my testimony, I refer to these two methods of allocating funds to charter schools as the (1) independent model and the (2) dependent model, respectively. Charter Schools' LEA Status Dictates Minimum Criteria Used to Determine Funding Eligibility Under title I and IDEA, the Department of Education is responsible for allocating funds to SEAs, which are required to allocate funds to LEAs. LEAs, in turn, may allocate funds to individual schools in their districts. While charter schools operating under the independent model are considered LEAs, charter schools operating under the dependent model are not. Because LEAs are allowed some discretion in allocating funds to individual schools within their districts, whether a charter school is treated as an LEA or as a dependent of an LEA is important. Under the title I program, SEAs distribute funds directly to eligible LEAs. To be eligible for funds, LEAs—including charter schools operating under the independent model—must meet the minimum statutory eligibility criteria of having enrolled at least 10 children from low-income families and having their low-income children constitute more than 2 percent of their school-aged population. No further distribution of funds needs to occur when an LEA has only one school, as is the case when an individual charter school is treated as an LEA under the independent model. LEAs that have more than one school—including those with charter schools operating under the dependent model—are responsible for allocating title I funds among their several schools. The federal statute and regulations lay out a complex set of criteria and conditions that LEAs use in deciding how to allocate funds to their schools. The intent of the statute is to shift title I funds received by LEAs to individual schools with relatively higher numbers and percentages of students from low-income families.
Individual schools—including charter schools—within a multiple-school LEA, therefore, must potentially meet higher eligibility thresholds than they would if they were each considered an independent LEA. As a result, some charter schools that would have received title I funds under the independent model may not receive such funds because they are components of LEAs. Under the IDEA program, states have greater latitude than under title I to develop systems of their own to distribute program funds or special education services to schools and school districts. Given this latitude, the manner in which charter and other public schools receive these funds varies by state. For example, Arizona is currently in the process of allocating IDEA funds to charter schools on a pro-rata (per-eligible-student) basis. In Minnesota, the state reimburses charter schools for IDEA-eligible expenses. Yet another state—California—allocates its share of funds to so-called "special education local plan areas." Special education local plan areas are typically composed of adjacent school districts that jointly coordinate special education programs and finances in that state. Schools within these areas generally receive special education services, rather than grant funds. Charter Schools Report Mixed Results in Receiving Federal Funds Overall, slightly more than two-fifths of the charter schools we surveyed received title I funds. Survey results indicated that slightly less than one-half of charter schools operating under the independent model, and one-half of the schools operating under the dependent model, received title I funds for the 1996-97 school year. Table 1 shows the number of charter schools surveyed that received title I funds, by funding model. About one-third of the charter schools we surveyed did not apply for title I funds. Charter school officials who did not apply cited reasons such as that they (1) did not have time to do so, (2) knew they were ineligible for funds, or (3) found that applying for these funds would cost more than they would receive. Of those that applied for title I funds, two-thirds, or 14 of 21, reported receiving them. Title I funding for these schools ranged from $96 to $941 per eligible student; the average was $499 per eligible student, and the median was $435. The difference in per-student funding is related to the allocation formulas, which take into account the number and proportion of low-income children in the school, district, and county. Title I funds received by these schools represented between 0.5 percent and 10 percent of their total operating budgets. For all but three schools, funds received represented 5 percent or less of the schools' total operating budgets. With regard to the IDEA program, one-half of our survey respondents received funds or IDEA-funded services. Among the charter schools surveyed, two-fifths of the schools operating under the independent model received funds or IDEA-funded services, while two-thirds of those operating under the dependent model received funds or services. Table 2 shows the number of charter schools surveyed that received IDEA funds or IDEA-funded services, by funding model. IDEA funds received by these schools represented between 0.08 percent and 2.5 percent of their total operating budgets.
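To make the title I minimum eligibility criteria and the per-student amounts discussed above concrete, the sketch below applies the two statutory minimums cited in this testimony (at least 10 children from low-income families, constituting more than 2 percent of the school-aged population) to a hypothetical single-school LEA. The enrollment figures are invented for illustration, the $499 figure is simply the average per-eligible-student amount our respondents reported, and the sketch is not the actual statutory allocation formula.

```python
# Minimal sketch of the title I minimum eligibility screen described in this
# testimony; the example school below is hypothetical.
def meets_title1_minimums(low_income_children: int, school_age_population: int) -> bool:
    """At least 10 low-income children, who must exceed 2 percent of the population."""
    return (low_income_children >= 10
            and low_income_children > 0.02 * school_age_population)

# Hypothetical charter school treated as its own LEA (independent model).
low_income_children = 25
enrollment = 300

if meets_title1_minimums(low_income_children, enrollment):
    # Respondents reported $96 to $941 per eligible student; $499 was the average.
    estimate = low_income_children * 499
    print(f"Meets the minimums; roughly ${estimate:,} at the average per-student amount")
else:
    print("Does not meet the title I minimum eligibility criteria")
```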
Most Charter School Operators Surveyed Believed They Received an Equitable Share of Title I and IDEA Funds Regardless of funding model, more than two-thirds of charter school operators expressing an opinion believed that they received an equitable share of both title I and IDEA funding. About one-fourth of the charter school operators we surveyed told us that they had no basis on which to form an opinion or did not answer the question. (See tables 3 and 4.) With regard to IDEA funding or IDEA-funded services, however, as many survey respondents under the independent funding model believed that they received an equitable share as believed that they did not receive an equitable share. For charter schools under the dependent model, on the other hand, almost five times as many survey respondents believed that their schools received an equitable share as believed that they did not receive an equitable share. (See table 4.) Some operators told us that they had little understanding of the allocation formulas and did not know if funds were equitably allocated. One official told us that she believed her school did not receive an equitable share of funds because the school's parent district used its discretion in allocating higher funding levels to another school in the district. Another official told us he believed funding formulas were biased towards larger schools and school districts, which had the effect of reducing the amount of funds available for smaller schools like his. Yet another charter school operator told us that he believed title I funds were not equitably allocated because funds are not distributed on a per-capita, or per-eligible-student, basis. Charter School Officials Cited Several Barriers to Receiving Title I and IDEA Funds On the basis of our preliminary work, charter schools do not appear to be at a disadvantage in terms of how federal funds are allocated. However, our survey has identified a variety of barriers that made it difficult for charter school operators to apply for and receive title I and IDEA funds. For example, three officials told us that because they had no prior year's enrollment or student eligibility data, they were not eligible under state guidelines for federal funds. In its July 1997 report, the Hudson Institute also found that title I funds were typically allotted on the basis of the previous year's population of title I-eligible children, "leaving start-up charters completely stranded for their first year." Two of our three respondents for whom lack of prior year's enrollment data was a problem were newly created schools, while the third was converted from a formerly private institution. Start-up eligibility issues are not always limited to a school's first year of operations. Some officials noted that their schools are incrementally increasing the number of grades served as the original student body progresses. For example, one school official told us that while the school currently serves grades 9 and 10, the school will eventually serve grades 9 through 12. Each grade the school adds will bring in about 40 newly enrolled students. But because of the time lag in reporting data, the school will have to wait until the following year for the additional funds. Over time, as enrollment stabilizes, these issues will pose fewer problems for school officials. Charter schools that were converted from traditional public schools generally do not have this problem when current enrollment is at or near full capacity and title I eligibility has previously been established.
Moreover, some school officials reported difficulty obtaining the student eligibility data required to receive title I funds. In some states, school officials themselves must collect data on students' family incomes in order to establish eligibility for federal funds. Some officials told us that because of privacy concerns, some families are reluctant to return surveys sent home with students that ask for the income levels of students' households. An official told us that he believed parents may not understand that such data are used to qualify children for free and reduced-price lunches at schools that operate such programs, as well as for establishing the schools' eligibility for federal grant funds. In other cases, charter school officials must take additional steps, over and above those required of their traditional public school counterparts, to establish their eligibility for title I funds. For example, in one state, charter school officials must manually match their student enrollment records against state and local Aid to Families With Dependent Children records to verify student eligibility. The business administrator for a charter school with an enrollment of about 1,000 students told us that it takes him and another staff person approximately 2 full days to complete this process. He said that while this procedure is accomplished electronically for traditional public schools, city officials told him that he had no such option. Another charter school official told us that timing issues prevented her from being able to access federal funds. For example, she said that her school's charter was approved after the deadline had passed for the state allocation of title I funds to public schools. The same school official said that her lack of awareness of what was required to obtain IDEA funds led her to underestimate the time required to prepare and submit applications, and she was thus unable to submit them on time. Other officials described the time and effort needed to work with district officials to access these funds. For example, the business administrator at a charter school we visited told us that it took numerous visits and phone calls to district officials to understand the allocation processes and procedures, as well as to negotiate what he thought was an equitable share of federal funding for his school. District officials we spoke with noted that because their school district had approved and issued several charters to individual schools with varying degrees of fiscal autonomy, working out allocation issues has taken some time. District officials noted that they have limited time and resources to use in developing new policies and procedures for charter schools, especially because the number of charter schools and their student populations constitute a very small portion of their overall operations. In some cases, charter school officials noted that they did not receive funds because they failed to meet federal or district qualifying requirements. For example, current federal requirements mandate that LEAs—including charter schools operating under the independent model—have at least 10 children from low-income families enrolled and that such children constitute more than 2 percent of their school-aged population. Of 32 schools responding to our survey, 9 had fewer than 10 students who were eligible for title I funds.
Schools operating under the dependent funding model may face more barriers than do schools operating under the independent funding model because dependent-model schools must go through an intermediary—or school district—in accessing federal funds, rather than receiving funds directly from the state. One charter school operator told us that she believed that her school's parent LEA unfairly used its discretion in allocating funds to schools within its district. She said that other schools in the district received higher funding levels than did her school. Even though state officials told her that it was within the LEA's discretion to allocate funds the way it did, she believes that district officials were singling her school out for disparate treatment because it is a charter school. Another charter school operator told us that uncooperative district officials were an obstacle to accessing federal funds because they were unwilling to provide assistance in obtaining funding for her school. In addition, as noted earlier, an LEA (including a charter school treated as an LEA) entitled to an IDEA allotment of less than $7,500 has had to join with other LEAs to meet the $7,500 threshold and collectively file a joint application. Given recent federal IDEA appropriations, a school district, or group of districts, is required, in effect, to have enrolled approximately 20 to 25 eligible students to meet the $7,500 threshold. Of the charter schools responding to our survey, 17 enrolled 20 or fewer IDEA-eligible students. Two survey respondents told us that the requirement for schools to join consortiums to access IDEA funds discouraged or prevented them from pursuing these funds. Moreover, some charter school officials have philosophical differences with IDEA requirements and forego IDEA funding because it does not accommodate their educational methods, according to a charter school technical assistance provider we visited. She said that IDEA requires schools to develop written individualized education programs (IEP) for disabled children, and requires schools to follow specified processes in developing these IEPs. In order to receive IDEA funds, schools must have prepared these IEPs for disabled students. In contrast to preparing IEPs for disabled children only, she said, some charter schools' approach to education treats all children as having special needs. Accordingly, these schools develop a unique education plan for each child, stressing individualized instruction. The Hudson Institute, in conducting its study, visited charter schools and spoke with school officials and parents who said they preferred that their children not be "labeled" and did not want their educational needs met in "cumbersome, standardized ways." One charter school official told us that she did not know her school had to join with other LEAs in order to receive IDEA funding until the state informed her of the requirement. Another official told us that although she had contacted a local school district that was willing to jointly file an application with her school, a lack of time to prepare the application and the small amount of funds to which her school would be entitled led her to decide not to pursue the funds. In our discussions with them, several charter school officials emphasized that they had very little time and resources available to devote to accessing title I and IDEA funds. These officials often played multiple roles at their schools, including principal, office manager, nurse, and janitor. One operator told us that it would not be stretching the truth much to say that if all he was required to do was to sign on a dotted line, stuff an envelope, and lick a stamp, he would not have time.
Another operator told us that if she receives anything in the mail with the words “title I” on it, she throws it away because she has so little time to attend to such matters. This operator also added that she found the costs of accessing federal funds excessive since she would be restricted in terms of how she could use these funds. She said that it was more reasonable for her to determine how such funds should be spent than for federal and state regulations to dictate these decisions. Charter School Officials Report a Variety of Factors Facilitate Their Ability to Access Federal Funds Charter school operators reported that outreach and technical assistance were key factors that facilitated their ability to access federal funds. Other factors cited by school officials included the use of consolidated program applications, the use of computerized application forms and processes, and the ability to rely on sponsoring district offices for grants administration. Outreach and Technical Assistance Most Frequently Cited as Facilitating Factors Charter school officials most frequently cited receiving information about the availability of federal funds and how much their schools would be eligible for as facilitating factors in accessing title I and IDEA monies. Officials cited a number of sources from which they had obtained such information, including their own states’ departments of education and local school district officials. In addition, charter school officials credited training and technical assistance provided by these sources with helping them to access federal funds. On the basis of our conversations with school officials, it appears that some states are doing more than others to provide assistance to charter schools. In particular, survey respondents in Arizona reported nearly unanimous praise for the amount and availability of assistance provided by the state department of education. They noted that the state has actively informed them of funding opportunities and offered them technical assistance on many occasions. A respondent in another state cited the use of consolidated applications as a facilitating factor in accessing funds. Under the title I program, SEAs may allow LEAs to submit one application for several federally funded programs. Another respondent told us that her SEA’s use of the Internet, over which she could obtain and submit her school’s title I application, facilitated her access to these funds. Still another respondent told us that being able to rely on his charter school’s parent LEA for federal grants administration relieved him of the burden of administering the grant and thus facilitated his access to federal funds. Finally, some respondents told us that their schools employed consultants to assist in applying for federal and state funds, which enabled them to focus their time and effort on other matters. Conclusion In conclusion, our preliminary work suggests that the barriers that charter schools face in accessing federal funds appear to be unrelated to whether charter schools are treated as school districts or as members of school districts. Rather, other barriers, many of which are not related to the path federal funds take, have had a more significant effect on charter schools’ ability to access title I and IDEA funds. 
These other barriers include state systems that base funding allocations on the prior year's enrollment and student eligibility data, the costs of accessing funds relative to the amounts that schools would receive, and the significant time constraints that prevent charter school operators from pursuing funds. Despite these barriers, most charter school operators who expressed an opinion believe that title I and IDEA funds are equitably allocated to charter schools. This concludes my statement, Mr. Chairman. I would be happy to answer any questions you or Members of the Subcommittee may have. Charter Schools Operating in School Year 1996-97 in Selected States, Included in Our Sample, and Responding to Our Survey (appendix table). Charter schools are also located in Alaska, Delaware, the District of Columbia, Florida, Georgia, Hawaii, Illinois, Louisiana, New Mexico, and Wisconsin. Charter School States and Number of Schools Operating in School Year 1996-97 (appendix table).
GAO discussed charter schools' ability to access categorical education grant funds, focusing on: (1) how federal title I of the Elementary and Secondary Education Act and the Individuals With Disabilities Education (IDEA) funds are distributed to charter schools, and the opinions of charter school operators on whether the distribution is equitable; and (2) what factors appear to be facilitating and impeding charter schools in accessing these funds. GAO noted that: (1) title I and IDEA funds are allocated to schools that meet established federal, state, and local demographic criteria; (2) these criteria relate to the number of enrolled children from low-income families and the number of enrolled children with disabilities that require special education services; (3) although most public schools receive funding under these programs, some public schools, including some charter schools, do not meet eligibility criteria and, as a result, do not receive funding; (4) GAO's preliminary work suggests that states are allocating federal funds to charter schools in much the same manner as they allocate funds to traditional public schools; (5) in general, states either treat charter schools as individual school districts or as components of existing districts; (6) although charter schools treated as school districts avoid having to meet additional criteria used to distribute funds beyond the district level, GAO's survey results thus far indicate that these schools were no more likely to have received title I and IDEA funds for the 1996-97 school year than were charter schools treated as components of existing school districts; (7) most charter school operators surveyed who expressed an opinion told GAO that they believe they received an equitable share of federal title I and IDEA funds; (8) while charter schools do not appear to be at a disadvantage in terms of how federal funds are allocated, GAO's survey has revealed a variety of barriers that have made it difficult for charter schools to access title I and IDEA funds; (9) these factors include a lack of enrollment and student eligibility data to submit to states before funding allocation decisions are made and the time required and the costs involved in applying for such funds, given the amount of funds available; (10) in addition, some charter schools have failed to meet statutory eligibility requirements for receiving federal funds; (11) charter school operators most often cited training and technical assistance as factors that facilitated their accessing title I and IDEA funds; (12) on the basis of survey responses, some states appear to be making a comprehensive effort to inform charter schools of the availability of federal funds and how to apply for them; and (13) charter school operators in other states told GAO that they received technical assistance from local school districts, while other charter school operators employed consultants to assist them.
GAO-17-758T
FEMA Has Developed and Documented Misconduct Policies and Procedures for Most Employees, but Not its Entire Workforce FEMA has developed a policy and procedures regarding misconduct investigations that apply to all FEMA personnel and has also documented policies and procedures regarding options to address misconduct and appeal rights for Title 5 and CORE employees. However, FEMA has not documented complete misconduct policies and procedures for Surge Capacity Force members or Reservists. DHS issued the Surge Capacity Force Concept of Operations in 2010, which outlines FEMA’s base implementation plan for the Surge Capacity Force. However, the document does not address any elements pertaining to Surge Capacity Force human capital management, specifically misconduct and disciplinary policies and procedures. According to the FEMA Surge Capacity Force Coordinator, despite the lack of documentation, any incidents of misconduct would likely be investigated by FEMA’s OCSO, which would then refer the completed report of investigation to the employee’s home component for adjudication and potential disciplinary action. However, although no allegations of misconduct were made at the time, the Federal Coordinating Officer in charge of one of the Hurricane Sandy Joint Field Offices said he had not seen anything in writing or any formal guidance that documents or explains how the process would work and stated that he would have had to contact FEMA headquarters for assistance in determining how to address any misconduct. Without documented guidance, FEMA cannot ensure that Surge Capacity Force misconduct is addressed adequately in a timely and comprehensive manner. Therefore, in our July 2017 report we recommended that the FEMA Administrator document policies and procedures to address potential Surge Capacity Force misconduct. DHS concurred and stated that FEMA is developing a Human Capital plan for the Surge Capacity Force and will include policies and procedures relating to potential misconduct. DHS estimated that this effort would be completed by June 30, 2018. This action, if fully implemented, should address the intent of the recommendation. Additionally, we found that FEMA’s Reservist Program Manual lacks documented policies and procedures on disciplinary options to address misconduct and appeal rights for Reservists. Both LER and PLB officials told us that, in practice, disciplinary actions for Reservists are limited to reprimands and termination. According to these officials, FEMA does not suspend Reservists because they are an intermittent, at-will workforce deployed as needed to respond to disasters. Federal Coordinating Officers and cadre managers have the authority to demobilize Reservists and remove them from a Joint Field Office if misconduct occurs, which may be done in lieu of suspension. Furthermore, LER and PLB officials also told us that, in practice, FEMA grants Reservists the right to appeal a reprimand or termination to their second-level supervisor. However, these actions are not documented in the Reservist Program Manual. Without documented Reservist disciplinary options and appeals policies, supervisors and Reservist employees may not be aware of all aspects of the disciplinary and appeals process. Thus, in our July 2017 report, we recommended that FEMA document Reservist disciplinary options and appeals that are currently in practice at the agency. 
DHS concurred and stated that FEMA will update its Reservist program directive to include procedures for disciplinary actions and appeals currently in practice at the agency. DHS estimated that this effort would be completed by December 31, 2017. This action, if fully implemented, should address the intent of the recommendation. We also reported in our July 2017 report that FEMA does not communicate the range of offenses and penalties to its entire workforce. In 2015, FEMA revised its employee disciplinary manual for Title 5 employees and, in doing so, eliminated the agency's table of offenses and penalties. Tables of offenses and penalties are used by agencies to provide guidance on the range of penalties available when formal discipline is taken. They also make employees aware of the penalties that may be imposed for misconduct. Since revising the manual and removing the table, FEMA no longer communicates possible punishable offenses to its entire workforce. Instead, information is now communicated to supervisors and employees on an individual basis. Specifically, LER specialists currently use a "comparators" spreadsheet with historical data on previous misconduct cases to determine a range of disciplinary or adverse actions for each specific misconduct case. The information used to determine the range of penalties is shared with the supervisor on a case-by-case basis; however, LER specialists noted that due to privacy protections they are the only FEMA officials who have access to the comparators spreadsheet. Because information about offenses and penalties is not universally shared with supervisors and employees, FEMA management is limited in its ability to set expectations about appropriate conduct in the workplace and to communicate consequences of inappropriate conduct. We recommended that FEMA communicate the range of penalties for specific misconduct offenses to all employees and supervisors. DHS concurred and stated that FEMA is currently drafting a table of offenses and penalties and will take steps to communicate those penalties to employees throughout the agency once the table is finalized. DHS estimated that this effort would be completed by December 31, 2017. This action, if fully implemented, should address the intent of the recommendation. FEMA Records Data on Employee Misconduct Cases and Their Outcomes, but Could Improve the Quality and Usefulness of These Data to Identify and Address Trends Multiple FEMA Offices Collect Misconduct Data; FEMA OCSO Recorded Approximately 600 Misconduct Complaints from January 2014 through September 30, 2016 The three offices on the AID Committee involved in investigating and adjudicating employee misconduct complaints each maintain separate case tracking spreadsheets with data on employee misconduct to facilitate their respective roles in the misconduct review process. We analyzed data provided by OCSO in its case tracking spreadsheet and found that there were 595 complaints from January 2014 through September 30, 2016. The complaints involved alleged offenses of employee misconduct that may or may not have been substantiated over the course of an investigation. Based on our analysis, the 595 complaints contained approximately 799 alleged offenses from January 2014 through September 30, 2016. As shown in figure 1 below, the most common types of alleged offenses were integrity and ethics violations (278), inappropriate comments and conduct (140), and misuse of government property or funds (119). 
For example, one complaint categorized as integrity and ethics involved allegations that a FEMA employee at a Joint Field Office was accepting illegal gifts from a FEMA contractor and a state contractor. Another complaint categorized as inappropriate comments and conduct involved allegations that a FEMA employee's supervisor and other employees had bullied and cursed at the employee, creating an unhealthy work environment. Finally, a complaint categorized as misuse of government property or funds involved allegations that a former FEMA employee was terminated but did not return a FEMA-owned laptop. Aspects of FEMA's Data Limit Their Usefulness for Identifying and Addressing Trends in Employee Misconduct OCSO, LER, and PLB collect data on employee misconduct and outcomes, but limited standardization of fields and entries within fields, limited use of unique case identifiers, and a lack of documented guidance on data entry restrict their usefulness for identifying and addressing trends in employee misconduct. FEMA employee misconduct data are not readily accessible and cannot be verified as accurate and complete on a timely basis. These limitations restrict management's ability to process the data into quality information that can be used to identify and address trends in employee misconduct. For example, an OCSO official stated that senior OCSO officials recently requested employee misconduct information based on employee type, such as the number of Reservists. However, the data are largely captured in narrative fields, making this information difficult to extract without manual review. In our July 2017 report we recommended that FEMA improve the quality and usefulness of the misconduct data it collects by implementing quality control measures, such as adding additional drop-down fields with standardized entries, adding unique case identifier fields, developing documented guidance for data entry, or considering the adoption of database software. In addition, we recommended that FEMA conduct routine reporting on employee misconduct trends once the quality of the data is improved. DHS concurred and stated that FEMA is working with the DHS OIG to develop a new case management system. The system will use drop-down fields with standardized entries and provide tools for trend analysis. Once the new system is implemented, DHS stated that FEMA will be able to routinely identify and address emerging trends of misconduct. DHS estimated that these efforts would be completed by March 31, 2018. These actions, if fully implemented, should address the intent of the recommendations. 
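To illustrate the kind of quality control measures at issue, the brief sketch below shows how a case record with a unique case identifier and an enumerated (drop-down style) offense field differs from a free-text narrative entry. The field names, offense categories, and identifiers are hypothetical assumptions for illustration only; they are not drawn from FEMA's case tracking spreadsheets or the planned case management system.

```python
# Minimal, hypothetical sketch of a standardized misconduct case record.
# Field names, categories, and IDs are illustrative assumptions only and
# do not reflect FEMA's actual spreadsheets or case management system.
from dataclasses import dataclass
from enum import Enum
from collections import Counter

class OffenseCategory(Enum):  # analogous to a drop-down field with standardized entries
    INTEGRITY_AND_ETHICS = "integrity and ethics"
    INAPPROPRIATE_CONDUCT = "inappropriate comments and conduct"
    MISUSE_OF_PROPERTY = "misuse of government property or funds"
    OTHER = "other"

@dataclass
class CaseRecord:
    case_id: str        # unique case identifier shared across offices
    employee_type: str  # e.g., "Title 5", "CORE", "Reservist"
    offenses: list      # list of OffenseCategory values, not free text

def offense_counts(cases):
    """Tally alleged offenses by category -- the kind of trend reporting
    that free-text narrative fields make difficult without manual review."""
    return Counter(off for case in cases for off in case.offenses)

# Example: three hypothetical complaints
cases = [
    CaseRecord("2016-0001", "Title 5", [OffenseCategory.INTEGRITY_AND_ETHICS]),
    CaseRecord("2016-0002", "Reservist", [OffenseCategory.INAPPROPRIATE_CONDUCT]),
    CaseRecord("2016-0003", "CORE", [OffenseCategory.MISUSE_OF_PROPERTY,
                                     OffenseCategory.INTEGRITY_AND_ETHICS]),
]
print(offense_counts(cases))
```

With records structured this way, counts by offense category or by employee type can be produced directly, rather than through the manual review of narrative fields described above.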
Within FEMA, these regular meetings and status reports provide officials from key personnel management offices opportunities to communicate and share information about employee misconduct. FEMA also provides DHS OIG with information on employee misconduct cases on a regular basis through monthly reports on open investigations. FEMA’s Procedures for Tracking DHS OIG Referred Cases Need Improvement We found that OCSO has not established effective procedures to ensure that all cases referred to FEMA by DHS OIG are accounted for and subsequently reviewed and addressed. As discussed earlier, OCSO sends a monthly report of open investigations to DHS OIG. However, while these reports provide awareness of specific investigations, according to OCSO officials, neither office reconciles the reports to a list of referred cases to ensure that all cases are addressed. We reviewed a non-generalizable random sample of 20 fiscal year 2016 employee misconduct complaints DHS OIG referred to FEMA for review and found that FEMA missed 6 of the 20 complaints during the referral process and had not reviewed them at the time of our inquiry. As a result of our review, FEMA subsequently took action to review the complaints. The AID Committee recommended that OCSO open inquiries in 3 of the 6 cases to determine whether the allegations were against FEMA employees, assigned 2 cases to LER for further review, and closed 1 case for lack of information. According to an OCSO official, OCSO subsequently determined that none of the allegations in the 3 cases they opened involved FEMA employees and the cases were closed. The remaining 2 cases were open as of April 2017. The results from our sample cannot be generalized to the entire population of referrals from DHS OIG to FEMA; however, they raise questions as to whether there could be additional instances of misconduct complaints that FEMA has not reviewed or addressed. Therefore, in our July 2017 report we recommended that FEMA develop reconciliation procedures to consistently track referred cases. DHS concurred and stated that once the new case management system described above is established and fully operational, FEMA will be able to upload all DHS OIG referrals into a single, agency-wide database. Additionally, FEMA will work with DHS OIG to establish processes and procedures that will improve reconciliation of case data. DHS estimated that these efforts would be completed by March 31, 2018. These actions, if fully implemented, should address the intent of the recommendation. Chairman Perry, Ranking Member Correa, Members of the Subcommittee, this concludes my prepared testimony. I would be pleased to answer any questions that you may have at this time. GAO Contact and Staff Acknowledgments If you or your staff members have any questions concerning this testimony, please contact me at 404-679-1875 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this statement. Other individuals who made key contributions to this testimony include Sarah Turpin, Kristiana Moore, Steven Komadina, and Ben Atwater. This is a work of the U.S. government and is not subject to copyright protection in the United States. The published product may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
This testimony summarizes the information contained in GAO's July 2017 report, entitled Federal Emergency Management Agency: Additional Actions Needed to Improve Handling of Employee Misconduct Allegations (GAO-17-613). The Federal Emergency Management Agency (FEMA) has developed and documented misconduct policies and procedures for most employees, but not its entire workforce. Specifically, FEMA has developed policies and procedures regarding misconduct investigations that apply to all FEMA personnel and has also documented options to address misconduct and appeal rights for Title 5 (generally permanent employees) and Cadre of On-Call Response/Recovery Employees (temporary employees who support disaster-related activities). However, FEMA has not documented misconduct policies and procedures for Surge Capacity Force members, who may augment FEMA's workforce in the event of a catastrophic disaster. Additionally, FEMA's Reservist (intermittent disaster employees) policies and procedures do not outline disciplinary actions or the appeals process currently in practice at the agency. As a result, supervisors and Reservist employees may not be aware of all aspects of the process. Clearly documented policies and procedures for all workforce categories could help to better prepare management to address misconduct and mitigate perceptions that misconduct is handled inconsistently. FEMA records data on misconduct cases and their outcomes; however, aspects of these data limit their usefulness for identifying and addressing trends. GAO reviewed misconduct complaints recorded by FEMA's Office of the Chief Security Officer (OCSO) from January 2014 through September 30, 2016, and identified 595 complaints involving 799 alleged offenses, the most common of which were integrity and ethics violations. FEMA reported 546 disciplinary actions related to misconduct from calendar year 2014 through 2016. In addition to OCSO, two other FEMA offices involved in investigating and adjudicating misconduct also record data. However, limited standardization of data fields and entries within fields, limited use of unique case identifiers, and a lack of documented guidance on data entry across all three offices restrict the data's usefulness for identifying and addressing trends in employee misconduct. Improved quality control measures could help the agency use the data to better identify potential problem areas and opportunities for training. FEMA shares misconduct case information internally and with the Department of Homeland Security Office of Inspector General (DHS OIG) on a regular basis; however, FEMA does not have reconciliation procedures in place to track DHS OIG referred cases to ensure that they are reviewed and addressed. GAO reviewed a random sample of 20 cases DHS OIG referred to FEMA in fiscal year 2016 and found that FEMA missed 6 of the 20 complaints during the referral process and had not reviewed them at the time of GAO's inquiry. As a result of GAO's review, FEMA took action to review the complaints and opened inquiries in 5 of the 6 cases (1 case was closed for lack of information). In 3 of these cases, officials determined that the complaints did not involve FEMA employees. The 2 remaining cases were open as of April 2017. While the results from this review are not generalizable to the entire population of referrals from DHS OIG to FEMA, they raise questions as to whether there could be additional instances of misconduct complaints that FEMA has not reviewed or addressed. 
Procedures to reconcile referred cases across FEMA and DHS OIG records could help ensure that FEMA accounts for all complaints.
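One way to picture the reconciliation procedures recommended above is a routine comparison of DHS OIG's referral list against the cases recorded in FEMA's tracking system, flagging any referral without a matching record. The sketch below is a hypothetical illustration under assumed identifier formats; it does not depict FEMA's or DHS OIG's actual systems, data, or procedures.

```python
# Hypothetical sketch of reconciling DHS OIG referrals against FEMA case records.
# The identifiers and data structures are assumptions for illustration only.

def reconcile(referred_ids, reviewed_ids):
    """Return referrals with no matching FEMA case record (potentially missed)."""
    return sorted(set(referred_ids) - set(reviewed_ids))

# Example: 20 hypothetical fiscal year 2016 referrals, 6 of which were never recorded
oig_referrals = [f"OIG-2016-{n:03d}" for n in range(1, 21)]
fema_cases = [r for r in oig_referrals if r not in
              {"OIG-2016-003", "OIG-2016-007", "OIG-2016-010",
               "OIG-2016-012", "OIG-2016-015", "OIG-2016-019"}]

missed = reconcile(oig_referrals, fema_cases)
print(f"{len(missed)} of {len(oig_referrals)} referrals lack a FEMA case record:", missed)
```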
GAO_GAO-02-859T
Relevant Professional Standards for Ombudsmen Through the impartial and independent investigation of citizens’ complaints, federal ombudsmen help agencies be more responsive to the public, including people who believe that their concerns have not been dealt with fully or fairly through normal channels. Ombudsmen may recommend ways to resolve individual complaints or more systemic problems, and may help to informally resolve disagreements between the agency and the public. While there are no federal requirements or standards specific to the operation of federal ombudsman offices, the Administrative Conference of the United States recommended in 1990 that the President and the Congress support federal agency initiatives to create and fund an external ombudsman in agencies with significant interaction with the public. In addition, several professional organizations have published relevant standards of practice for ombudsmen. Both the recommendations of the Administrative Conference of the United States and the standards of practice adopted by various ombudsman associations incorporate the core principles of independence, impartiality (neutrality), and confidentiality. For example, the ABA’s standards define these characteristics as follows: Independence—An ombudsman must be and appear to be free from interference in the legitimate performance of duties and independent from control, limitation, or penalty by an officer of the appointing entity or a person who may be the subject of a complaint or inquiry. Impartiality—An ombudsman must conduct inquiries and investigations in an impartial manner, free from initial bias and conflicts of interest. Confidentiality—An ombudsman must not disclose and must not be required to disclose any information provided in confidence, except to address an imminent risk of serious harm. Records pertaining to a complaint, inquiry, or investigation must be confidential and not subject to disclosure outside the ombudsman’s office. Relevant professional standards contain a variety of criteria for assessing an ombudsman’s independence, but in most instances, the underlying theme is that an ombudsman should have both actual and apparent independence from persons who may be the subject of a complaint or inquiry. According to ABA guidelines, for example, a key indicator of independence is whether anyone subject to the ombudsman’s jurisdiction can (1) control or limit the ombudsman’s performance of assigned duties, (2) eliminate the office, (3) remove the ombudsman for other than cause, or (4) reduce the office’s budget or resources for retaliatory purposes. Other factors identified in the ABA guidelines on independence include a budget funded at a level sufficient to carry out the ombudsman’s responsibilities; the ability to spend funds independent of any approving authority; and the power to appoint, supervise, and remove staff. The Ombudsman Association’s standards of practice define independence as functioning independent of line management; they advocate that the ombudsman report to the highest authority in the organization. According to the ABA’s recommended standards, “the ombudsman’s structural independence is the foundation upon which the ombudsman’s impartiality is built.” One aspect of the core principle of impartiality is fairness. According to an article published by the U.S. 
Ombudsman Association on the essential characteristics of an ombudsman, an ombudsman should provide any agency or person being criticized an opportunity to (1) know the nature of the criticism before it is made public and (2) provide a written response that will be published in whole or in summary in the ombudsman’s final report. In addition to the core principles, some associations also stress the need for accountability and a credible review process. Accountability is generally defined in terms of the publication of periodic reports that summarize the ombudsman’s findings and activities. Having a credible review process generally entails having the authority and the means, such as access to agency officials and records, to conduct an effective investigation. The ABA recommends that an ombudsman issue and publish periodic reports summarizing the findings and activities of the office to ensure its accountability to the public. Similarly, recommendations by the Administrative Conference of the United States regarding federal ombudsmen state that they should be required to submit periodic reports summarizing their activities, recommendations, and the relevant agency’s responses. Federal agencies face legal and practical constraints in implementing some aspects of these standards because the standards were not designed primarily with federal agency ombudsmen in mind. However, ombudsmen at the federal agencies we reviewed for our 2001 report reflected aspects of the standards. We examined the ombudsman function at four federal agencies in addition to EPA and found that three of them—the Federal Deposit Insurance Corporation, the Food and Drug Administration, and the Internal Revenue Service—had an independent office of the ombudsman that reported to the highest level in the agency, thus giving the ombudsmen structural independence. In addition, the ombudsmen at these three agencies had functional independence, including the authority to hire, supervise, discipline, and terminate their staff, consistent with the authority granted to other offices within their agencies. They also had control over their budget resources. The exception was the ombudsman at the Agency for Toxic Substances and Disease Registry, who did not have a separate office with staff or a separate budget. This ombudsman reported to the Assistant Administrator of the agency instead of the agency head. Issues Raised by EPA’s Reorganization of the Ombudsman Function In our July 2001 report, we recommended, among other things, that EPA modify its organizational structure so that the function would be located outside of the Office of Solid Waste and Emergency Response, whose activities the national ombudsman was charged with reviewing. EPA addresses this recommendation through its placement of the national ombudsman within the OIG, where the national ombudsman will report to a newly-created position of Assistant Inspector General for Congressional and Public Liaison. OIG officials also told us that locating the national ombudsman function within the OIG offers the prospect of additional resources and enhanced investigative capability. According to the officials, the national ombudsman will likely have a small permanent staff but will also be able to access OIG staff members with expertise in specific subject matters, such as hazardous waste or water pollution, on an as-needed basis. 
Further, OIG officials anticipate that the ombudsman will adopt many of the office’s existing recordkeeping and reporting practices, which could help address the concerns we noted in our report about accountability and fairness to the parties subject to an ombudsman investigation. Despite these aspects of EPA’s reorganization, several issues merit further consideration. First and foremost is the question of intent in establishing an ombudsman function. The term “ombudsman,” as defined within the ombudsman community, carries with it certain expectations. The role of an ombudsman typically includes program operating responsibilities, such as helping to informally resolve program-related issues and mediating disagreements between the agency and the public. Assigning these responsibilities to an office within the OIG would conflict with statutory restrictions on the Inspector General’s activities. Specifically, the Inspector General Act, as amended, prohibits an agency from transferring any function, power, or duty involving program responsibilities to its OIG. However, if EPA omits these responsibilities from the position within the OIG, then it will not have established an “ombudsman” as the function is defined within the ombudsman community. In our April 2001 report, we noted that some federal experts in dispute resolution were concerned that among the growing number of federal ombudsman offices there are some individuals or activities described as “ombuds” or “ombuds offices” that do not generally conform to the standards of practice for ombudsmen. A related issue is that ombudsmen generally serve as a key focal point for interaction between the government, or a particular government agency, and the general public. By placing the national ombudsman function within its OIG, EPA appears to be altering the relationship between the function and the individuals that make inquiries or complaints. Ombudsmen typically see their role as being responsive to the public, without being an advocate. However, EPA’s reorganization signals a subtle change in emphasis: OIG officials see the ombudsman function as a source of information regarding the types of issues that the OIG should be investigating. Similarly, rather than issue reports to complainants, OIG officials expect that the national ombudsman’s reports will be addressed to the EPA Administrator, consistent with the reporting procedures for other OIG offices. The officials told us that their procedures for the national ombudsman function, which are still being developed, could provide for sending a copy of the final report or a summary of the investigation to the original complainant along with a separate cover letter when the report is issued to the Administrator. Based on the preliminary information available from EPA, the reorganization raises other issues regarding the consistency of the agency’s ombudsman function with relevant professional standards. For example, under EPA’s reorganization, the national ombudsman will not be able to exercise independent control over budget and staff resources, even within the general constraints that are faced by federal agencies. According to OIG officials, the national ombudsman will have input into the hiring, assignment, and supervision of staff, but overall authority for staff resources and the budget allocation rests with the Assistant Inspector General for Congressional and Public Liaison. 
OIG officials pointed out that the issue our July 2001 report raised about control over budget and staff resources was closely linked to the ombudsman’s placement within the Office of Solid Waste and Emergency Response. The officials believe that once the national ombudsman function was relocated to the OIG, the inability to control resources became much less significant as an obstacle to operational independence. They maintain that although the ombudsman is not an independent entity within the OIG, the position is independent by virtue of the OIG’s independence. Despite the OIG’s argument, we note that the national ombudsman will also lack authority to independently select and prioritize cases that warrant investigation. According to EPA, the Inspector General has the overall responsibility for the work performed by the OIG, and no single staff member—including the ombudsman—has the authority to select and prioritize his or her own caseload independent of all other needs. Decisions on whether complaints warrant a more detailed review will be made by the Assistant Inspector General for Congressional and Public Liaison in consultation with the national ombudsman and staff. EPA officials are currently reviewing the case files obtained from the former ombudsman, in part to determine the anticipated workload and an appropriate allocation of resources. According to OIG officials, the national ombudsman will have access to other OIG resources as needed, but EPA has not yet defined how decisions will be made regarding the assignment of these resources. Under the ABA guidelines, one measure of independence is a budget funded at a level sufficient to carry out the ombudsman’s responsibilities. However, if both the ombudsman’s budget and workload are outside his or her control, then the ombudsman would be unable to assure that the resources for implementing the function are adequate. Ombudsmen at other federal agencies must live within a budget and are subject to the same spending constraints as other offices within their agencies, but they can set their own priorities and decide how their funds will be spent. EPA has also not yet fully defined the role of its regional ombudsmen or the nature of their relationship with the national ombudsman in the OIG. EPA officials told us that the relationship between the national and regional ombudsmen is a “work in progress” and that the OIG will be developing procedures for when and how interactions will occur. Depending on how EPA ultimately defines the role of its regional ombudsmen, their continued lack of independence could remain an issue. In our July 2001 report, we concluded that the other duties assigned to the regional ombudsmen—primarily line management positions within the Superfund program—hamper their independence. Among other things, we cited guidance from The Ombudsman Association, which states that an ombudsman should serve “no additional role within an organization” because holding another position would compromise the ombudsman’s neutrality. According to our discussions with officials from the Office of Solid Waste and Emergency Response and the OIG, the investigative aspects of the ombudsman function will be assigned to the OIG, but it appears that the regional ombudsmen will respond to inquiries and have a role in informally resolving issues between the agency and the public before they escalate into complaints about how EPA operates. For the time being, EPA officials expect the regional ombudsmen to retain their line management positions. 
Finally, including the national ombudsman function within the Office of the Inspector General raises concerns about the effect on the OIG, even if EPA defines the ombudsman's role in a way that avoids conflict with the Inspector General Act. By having the ombudsman function as a part of the OIG, the Inspector General could no longer independently audit and investigate that function, as is the case at other federal agencies where the ombudsman function and the OIG are separate entities. As we noted in a June 2001 report on certain activities of the OIG at the Department of Housing and Urban Development, under applicable government auditing standards the OIG cannot independently and impartially audit and investigate activities it is directly involved in. A related issue concerns situations in which the national ombudsman receives an inquiry or complaint about a matter that has already been investigated by the OIG. For example, OIG reports are typically transmitted to the Administrator after a review by the Inspector General. A process that requires the Inspector General to review an ombudsman-prepared report that is critical of, or could be construed as reflecting negatively on, previous OIG work could pose a conflict for the Inspector General. OIG officials are currently working on detailed procedures for the national ombudsman function, including criteria for opening, prioritizing, and closing cases, and will have to address this issue as part of their effort. In conclusion, Mr. Chairman, we believe that several issues need to be considered in EPA's reorganization of its ombudsman function. The first is perhaps the most fundamental—that is, the need to clarify the intent. We look forward to working with members of the Committee as you consider the best way of resolving these issues.
The Environmental Protection Agency's (EPA) hazardous waste ombudsman was established as a result of the 1984 amendments to the Resource Conservation and Recovery Act. Recognizing that the ombudsman provides a valuable service to the public, EPA retained the ombudsman function as a matter of policy after its legislative authorization expired in 1988. Over time, EPA expanded the national ombudsman's jurisdiction to include Superfund and other hazardous waste programs, and, by March 1996, EPA had designated ombudsmen in each of its ten regional offices. In November 2001, the agency announced that the national ombudsman would be relocated from the Office of Solid Waste and Emergency Response to the Office of the Inspector General (OIG) and would address concerns across the spectrum of EPA programs, not just hazardous waste programs. Although there are no federal requirements or standards specific to the operation of ombudsman offices, several professional organizations have published standards of practice relevant to ombudsmen who deal with public inquiries. If EPA intends to have an ombudsman function consistent with the way the position is typically defined in the ombudsman community, placing the national ombudsman within the OIG does not achieve that objective. The role of the ombudsman typically includes program operating responsibilities, such as helping to informally resolve program-related issues and mediating disagreements between the agency and the public. Including these responsibilities within the OIG would likely conflict with the Inspector General Act, which prohibits the transfer of program operating responsibilities to the Inspector General; yet, omitting these responsibilities would result in establishing an ombudsman that is not fully consistent with the function as defined within the ombudsman community.
GAO_GAO-04-697
Background The national security space sector is composed primarily of military and intelligence activities. The U.S. Strategic Command, one of the combatant commands, is responsible for establishing overall operational requirements for space activities, and the military services are responsible for satisfying these requirements to the maximum extent practicable. The Air Force is DOD's primary procurer and operator of space systems and spends the largest share of defense space funds. The Air Force Space Command is the major component providing space forces for the U.S. Strategic Command. The Army controls a defense satellite communications system and operates ground mobile terminals. The Army Space and Missile Defense Command conducts space operations and provides planning, integration, and control and coordination of Army forces and capabilities in support of the U.S. Strategic Command. The Navy operates several space systems that contribute to surveillance and warning and is responsible for acquiring the Mobile User Objective System, the next-generation ultrahigh frequency satellite communications system. The Marine Corps uses space to provide the warfighter with intelligence, communications, and position navigation. The National Reconnaissance Office designs, procures, and operates space systems dedicated to national security activities and depends on personnel from each of the services' space cadres to execute its mission. Due to continuing concerns about DOD's management of space activities, in October 1999 Congress chartered a commission—known as the Space Commission—to assess the United States' national security space management and organization. In its January 2001 report, the Space Commission made recommendations to DOD to improve coordination, execution, and oversight of the department's space activities. One issue the Space Commission identified was the need to create and maintain a highly trained and experienced cadre of space professionals who could master highly complex technology, as well as develop new concepts of operations for offensive and defensive space operations. The Space Commission noted that the defense space program had benefited from world-class scientists, engineers, and operators, but many experienced personnel were retiring and the recruitment and retention of space-qualified personnel was a problem. Further, the commission concluded that DOD did not have a strong military space culture, which would include focused career development and education and training. In October 2001, the Secretary of Defense issued a memorandum directing the military services to draft specific guidance and plans for developing, maintaining, and managing a cadre of space-qualified professionals. A DOD directive in June 2003 designated the Secretary of the Air Force as the DOD Executive Agent for Space, with the Executive Agent responsibilities delegated to the Under Secretary of the Air Force. The directive stated that the Executive Agent shall develop, coordinate, and integrate plans and programs for space systems and the acquisition of DOD major space programs to provide operational space force capabilities. Further, the directive required the Executive Agent to lead efforts to synchronize the services' space cadre activities and to integrate the services' space personnel into a cohesive joint force to the maximum extent practicable. 
The directive also makes the military services responsible for developing and maintaining a cadre of space-qualified professionals in sufficient quantities to represent the services' interests in space requirements, acquisition, and operations. We have identified strategic human capital management as a governmentwide high-risk area and provided tools intended to help federal agency leaders manage their people. Specifically, we identified a lack of a consistent strategic approach to marshal, manage, and maintain the human capital needed to maximize government performance and ensure its accountability. In our exposure draft on a model of strategic human capital management, we identified four cornerstones of human capital planning in which weaknesses have undermined agency effectiveness: leadership; strategic human capital planning; acquiring, developing, and retaining talent; and results-oriented organizational cultures. We also cited critical success factors for strategic human capital planning, including integration and data-driven human capital decisions. Furthermore, we reported that many federal agencies had not put in place a strategic human capital planning process for determining critical organizational capabilities, identifying gaps in these capabilities and resources needed, and designing evaluation methods. DOD Issued a Space Human Capital Strategy but Has No Implementation Plan DOD's space human capital strategy, which we believe is a significant first step, promotes the development and integration of the military services' space cadres; however, DOD has not developed a plan to implement actions to achieve the strategy's goals and objectives. A strategy and a plan to implement the strategy are central principles of a results-oriented management framework. DOD's space human capital strategy establishes direction for the future, includes goals for integrating the services' space cadres and developing space-qualified personnel, and identifies approaches and objectives to meet the strategy's goals. An implementation plan for the strategy could include specific actions, responsibilities, time frames, and evaluation measures. DOD has begun to implement some of the key actions identified in the strategy. Management Framework Would Include a Strategy and an Implementation Plan A results-oriented management framework provides an approach that DOD could use to develop and manage the services' space cadres, including a strategy and a plan to implement the strategy. Sound general management tenets, embraced by the Government Performance and Results Act of 1993, require agencies to pursue results-oriented management, whereby program effectiveness is measured in terms of outcomes or impact, rather than outputs, such as activities and processes. Management principles and elements can provide DOD and the military services with a framework for strategic planning and effectively implementing and managing programs. Table 1 describes the framework and its principles and elements. DOD's Space Human Capital Strategy Established Direction for the Future In February 2004, DOD issued its space human capital strategy that established direction for the future and included overall goals for developing and integrating space personnel. To develop the strategy, the DOD Executive Agent for Space established a joint working group composed of representatives from the Office of the Secretary of Defense, each of the military services, the National Reconnaissance Office, and various other defense organizations. 
The Office of the Secretary of Defense and the military services reviewed the strategy, and the DOD Executive Agent for Space approved it. The space human capital strategy's goals flow from the goals in DOD's Personnel and Readiness Strategic Plan, which is the integrated strategic plan that includes the major goals that directly support the mission of the Office of the Under Secretary of Defense for Personnel and Readiness. Two of these goals are (1) integrating active and reserve component military personnel, civilian employees, and support contractors into a diverse, cohesive total force and (2) providing appropriate education, training, and development of the total force to meet mission requirements. The six goals for space professional management identified in the space human capital strategy are to ensure the services develop space cadres to fulfill their unique missions; synchronize the services' space cadre activities to increase efficiency and reduce unnecessary redundancies; improve the integration of space capabilities for joint war fighting and intelligence; assign the best space professionals to critical positions; increase the number of skilled, educated, and experienced space professionals; and identify critical positions and personnel requirements for them. The strategy also described approaches designed to accomplish DOD's long-term goals. The approaches provided general direction for departmentwide actions in areas identified as key to the long-term success of the strategy, such as establishing policy concerning human capital development and a professional certification process for space personnel and identifying and defining critical positions and education overlaps and gaps. In addition, the strategy recognized external factors that should be considered departmentwide and by the services in developing implementation actions. Such factors include increasing reliance on space for critical capabilities in the future, the need for more space-qualified people, and the need to develop new systems and technologies to sustain the United States as a world leader in space. The space human capital strategy also identified objectives necessary to achieve the strategy's goals in the areas of leadership, policy, career development, education, training, data collection, management, and best practices. The strategy places responsibility for achieving the objectives with each service and component. The objectives include, among others, promoting the development of a cadre of space professionals within each service, enhancing space education and training, creating management processes to meet future programmatic needs, and identifying and implementing best practices. Table 2 shows the strategy's objectives. DOD Has Not Developed an Implementation Plan for Its Strategy DOD has not developed a detailed implementation plan for the key actions in its space human capital strategy that could include more specific implementing actions, identify responsibilities, set specific time frames for completion, and establish performance measures. As previously mentioned, a results-oriented management framework would include a plan with detailed implementation actions and performance measurements, in addition to incorporating performance goals, resources needed, performance indicators, and an evaluation process. 
DOD’s strategic approach, as outlined in its strategy, identifies key actions to meet the space human capital strategy’s objectives and indicates three time phases for implementing the actions. However, DOD has not started to develop an implementation plan for its strategy. A DOD official said the department plans to complete an implementation plan by November 2004, while it is implementing the key actions that have been identified in the strategy. Until an implementation plan is developed, the DOD Executive Agent for Space plans to hold meetings of the working group that developed the strategy to discuss space cadre initiatives and integration actions. Before developing an implementation plan, DOD plans to collect information from the services to establish a baseline on their current space cadres, according to a DOD official. Some of the information to be collected includes size, skills, and competencies of the personnel in the services’ space cadres; numbers of space positions and positions that are vacant; promotion and retention rates for space personnel; and retirement eligibility and personnel availability projections. The strategy indicates that collecting this information was one of the key actions in the first phase of the strategy’s implementation and was to have been completed by April 2004. However, DOD has not requested the information from the services because officials had not completely determined what information will be collected, how it will be analyzed, and how it will be used to develop an implementation plan. DOD has begun implementing some actions identified in the strategy as key to helping further develop and integrate the services’ space cadres; however, DOD had not completed any of these actions by the end of our review. Actions currently under way include preparing for an education and training summit; evaluating space cadre best practices; developing policy on human capital development and use; determining the scope, nature, and specialties associated with space personnel certification; and issuing a call for demonstration projects. DOD plans to complete most of the key actions by November 2004, although it has not developed specific plans and milestones for completing each action. Extent of Services’ Initiatives to Develop Space Cadres Varies The military services vary in the extent to which they have identified and implemented initiatives to develop and manage their space cadres. The Air Force and the Marine Corps have completed space human capital strategies and established organizational focal points with responsibility for managing their space cadres, but the Army and the Navy have not completed these important first steps. The services are executing some other actions to develop and manage their space cadres, and the actions have been implemented to varying extents. Some of the actions include determining what types of personnel and specialties to include in their space cadres and developing or revising their education and training. Even though the services have completed some of these initiatives, many are not complete and will require years to fully implement. DOD has established the overall direction for space human capital development and integration, but the services are responsible for defining their unique space cadre goals and objectives, determining the implementing actions required, and creating a management structure to be responsible for implementation. 
The Space Commission recommended that the Air Force centralize its space cadre management and concluded that without a centralized management authority to provide leadership, it would be almost impossible to create a space cadre. Even though this recommendation was directed to the Air Force, which has the largest number of space professionals and responsibility for the most varied range of space operations, the principle that strong leadership is needed to reach space cadre goals also applies to the other military services. Air Force Has Taken Actions in Developing Its Space Cadre The Air Force approved its space cadre strategy in July 2003, and it is implementing the initiatives it has identified to meet the strategy's goals. The strategy provided guidance on developing and sustaining the Air Force's space cadre. Further, the Air Force developed an implementation plan with time lines for completion of certain initiatives. The Air Force also designated the Air Force Space Command as the focal point to manage Air Force space cadre issues. The Air Force's strategy defined the Air Force's space cadre as the officers, enlisted personnel, reserves, National Guard, and civilians needed to research, develop, acquire, operate, employ, and sustain space systems in support of national security space objectives. The strategy included actions for identifying all space professionals who would make up its space cadre; providing focused career development; and defining career management roles, responsibilities, and tools. Currently, the Air Force has the largest of the services' space cadres with an estimated 10,000 members identified based on their education and experience. The strategy also identified planned resources to implement space cadre initiatives through fiscal year 2009. For fiscal year 2004, the Air Force Space Command received $9.1 million to develop and manage its space cadre. According to Command officials, $4.9 million went to the Space Operations School to develop new space education courses, and the remainder was designated for other space cadre activities. For fiscal year 2009, the funding level is planned to increase to about $21 million to fund the planned initiatives, especially the efforts related to education and training. After the Air Force issued its space cadre strategy, it developed a detailed plan to implement the strategy, and it is executing the initiatives in accordance with its time lines. This implementation plan focuses on six key initiatives, as shown in table 3. According to the Air Force Space Command, the Air Force plans to implement most of these initiatives by 2006. Initiatives related to the development of a National Security Space Institute will likely not be completed by 2006 because, in addition to developing the curriculum and resolving organizational structure issues, the Institute will require funding and facilities. Appointed by the Secretary of the Air Force in July 2003, the Commander, Air Force Space Command, is the focal point for managing career development, education, and training for the Air Force space cadre. To assist in executing this responsibility, the Commander established a Space Professional Task Force within the Command to develop and implement initiatives and coordinate them with the national security space community. 
According to the Commander, the centralized management function with the authority to develop and implement Air Force policy governing career development of Air Force space personnel has enabled the Command to move forward with implementation activities and fully integrate the Air Force's strategy with the Air Force's overall force development program. Marine Corps Developed Space Cadre Strategy and Is Implementing It The Marine Corps has initiated actions to develop its space cadre, and many of the tasks to implement its initiatives are either completed or under way. Although the Marine Corps' space cadre is the smallest of the services', with 61 active and reserve officers who were identified based on their education and experience, the Marine Corps has a space cadre strategy to develop and manage its space cadre and has an implementation plan to track initiatives. The space cadre strategy was issued as a part of the DOD space human capital strategy in February 2004. To implement its strategy, the Marine Corps has identified key tasks and established milestones for completion, and it is implementing them. In addition, the Marine Corps has identified a focal point in Headquarters, U.S. Marine Corps, to manage its space cadre. There is no Marine Corps funding specifically for actions to develop its space cadre. Furthermore, the Marine Corps does not anticipate a need for any such funding, according to a Marine Corps official. The Marine Corps' strategy specifies 10 objectives for developing and maintaining space professionals: establish an identifiable cadre of space-qualified enlisted and civilian personnel; create and staff additional space personnel positions in the operating forces; create and staff additional space positions at national security space organizations; improve space operations professional military education for all Marine Corps officers; focus the graduate education of Marine Corps space operations students to support Marine Corps needs; leverage interservice space training to ensure the development and proficiency of the space cadre; develop a management process through which interested officers can be assigned to multiple space-related positions during their careers and still compete for promotion with their peers; develop a process and structure for space professionals in the Marine Corps reserves through which they can support operations, training, and exercises through augmentation and mobilization; fully participate in the DOD Executive Agent for Space's efforts to create a space cadre; and incorporate appropriate space professional certification processes into the management of the Marine Corps' space cadre. The Marine Corps has identified actions to reach these objectives and developed an implementation plan with milestones to monitor the completion of these actions. For example, the Marine Corps established a space cadre working group to address issues associated with the identification, training, and assignment of space cadre officers. The Marine Corps also contracted a study to obtain data to help manage Marine Corps space personnel positions, determine space cadre requirements, and assess other services' training and education opportunities. According to the Marine Corps' strategy, the Marine Corps has started integrating joint doctrine for space operations into its professional military education programs and has coordinated with the Naval Postgraduate School to create Marine Corps-specific space systems courses. 
The Marine Corps has designated the Deputy Commandant for Plans, Policies, and Operations within the Headquarters, U.S. Marine Corps, as the management focal point for space cadre activities. A general officer within this office has overall responsibility for space matters. The focal point for the space cadre is responsible for coordinating and tracking actions to implement the strategy. Army Has Taken Some Actions to Develop Its Space Cadre, but It Does Not Have a Strategy or Focal Point The Army has taken some actions to develop its space cadre, but it does not have clear goals and objectives for the future because it has not developed a space cadre strategy or identified a focal point to manage its space cadre. Until it adopts a strategy that encompasses a total force of officers, enlisted personnel, and civilians, the Army may not be able to develop sufficient numbers of qualified space personnel to satisfy requirements within the Army and in joint organizations. However, according to Army officials, the Army does not intend to issue a strategy until it decides whether its space cadre should include space officers, enlisted personnel, and civilians because the strategy would be different if the cadre is expanded beyond space operations officers. In 1999, the Army created a career path for its space operations officers and issued career development guidance for them. The Army considers these officers, currently numbering about 148 on active duty, to be its space cadre. The Army’s intent in creating the career path was to provide space expertise and capabilities to develop space doctrine, training, personnel, and facilities where they are needed throughout DOD in support of military operations. Since 1999, the Army has developed a specialized training course to provide space operations officers with the essential skills needed to plan and conduct space operations. However, it has not determined the critical positions for space officers or the number of officers needed to enable it to effectively accomplish its goals of supporting Army and DOD-wide operations. Thus, the Army may be training too many or too few space operations officers, and space operations officers may not be placed in the most critical positions to support Army interests in space. The Army is considering whether to expand its definition of its space cadre to include other personnel beyond the space operations officers. The Army is conducting two studies that Army officials said would provide a basis for this decision. In 2001, the Army began a 5-year study to help it determine whether enlisted personnel should be added to its space cadre and, if so, how this would be accomplished. The study is intended to determine how to recruit, train, and develop enlisted space personnel and to assess the possibility of creating a space career management field for them. In June 2004, the Army began a separate 15-month study to provide additional information that would help it decide whether to expand its space cadre definition. A decision on whether to expand the cadre to include additional personnel is not expected until 2005. The Army has not designated a permanent organizational focal point to develop and manage its space cadre. According to Army officials, the Army has to decide whether to expand its space cadre before it can designate a permanent management focal point because these decisions have implications as to which organization should have overall responsibility. 
Currently, three different organizations have various responsibilities for Army space cadre issues. Operations and Plans within Army headquarters has broad responsibility for policy, strategy, force management, and planning. Two other organizations have management responsibilities for the space operations officers that comprise the current Army space cadre: Army Space and Missile Defense Command provides personnel oversight for the space operations officers and Army Human Resources Command manages space operations officer assignments. According to Army officials, management of space personnel has not been centralized because the Army is a user of space and has integrated its space capabilities into various Army branches. As a result, no single office is charged with providing leadership on space issues and ensuring that the Army’s space initiatives are having the desired results. Navy Has Initiated Steps in Developing Its Space Cadre, but It Has No Strategy or Focal Point The Navy has initiated steps in identifying and developing its space cadre and has designated an advisor for space cadre issues. However, actions have been limited because it has not developed a space human capital strategy to provide direction and guidance for Navy actions. In addition, the Navy has not provided centralized leadership to develop the strategy and oversee implementation because it does not have a permanent management focal point. The Navy has taken some actions to strengthen space cadre management, including providing funding for the space cadre advisor, an assistant advisor, and contract support in the fiscal year 2005 budget. In addition, the Navy has issued guidance requiring personnel placement officials to coordinate with the space cadre advisor before assigning space cadre personnel to increase the likelihood that they can be placed in appropriate positions to effectively use and develop their space expertise. The Navy has also developed guidance that directs promotion boards to consider space experience when assessing candidates for promotion. Also, senior Navy leaders are engaged in space cadre activities, according to DOD officials. Currently, the Navy has designated 711 active duty officers and about 300 officer and enlisted reserve members as its space cadre, based on their previous education and experience in space activities. Space cadre members serve in positions throughout the different functional areas in the Navy, such as surface warfare and naval aviation. The Navy has not identified active duty enlisted and civilians with space education and experience, although it is in the process of identifying such personnel. The Navy has not completed a strategy for developing and managing its space cadre, even though the requirement for a strategy has been recognized in official guidance. In March 2002, the Navy issued a memorandum requiring the development of a space cadre strategy to guide the Navy in identifying its space requirements. A Navy official said that it was not possible to complete a space cadre strategy without an overall Navy space policy that revised roles and responsibilities for space in the Navy. The Navy published its space policy in April 2004, which reiterated the need for a strategy for developing and managing Navy space personnel. With the policy in place, the Navy plans to complete its strategy by October 2004, according to Navy officials. 
Lacking a strategy, the Navy has not identified what key actions are needed to build its space cadre, how it intends to implement these actions, and when it expects the key actions to be completed. For example, the Navy has not determined the critical positions it needs to fill with space-qualified personnel, the number of personnel that should be in its space cadre to meet future needs for Navy and joint operations, or the funding required to implement any planned actions. Further, without an implementation plan that specifies actions, assigns responsibility, provides performance measures, and identifies resources needed, the Navy may not be able to develop and manage its space cadre so that it can effectively participate in Navy and joint space programs. The Navy also lacks a permanent organizational focal point to develop and manage its space cadre, provide centralized leadership on space issues, and ensure that the Navy's space initiatives are implemented and have the desired results. Further, the Navy views space as integrated throughout Navy operations and has not created a separate career field for space personnel. In 2002, the Navy appointed a space cadre advisor to enhance career planning and management of space cadre members; however, the position is advisory to members of the space cadre or others interested in working in space issues. Although the space cadre advisor plans to draft the Navy's space cadre strategy, the advisor has had no official responsibility for identifying or implementing actions needed to ensure the development and management of space professionals to meet DOD's future space requirements because the position has not been funded. In addition, the space cadre advisor reports to two different offices in the Office of the Chief of Naval Operations on various space cadre issues. Conclusions The United States' increasing reliance on space-based technologies for the success of military operations highlights DOD's need to develop and maintain a cadre of space professionals who are well educated, motivated, and skilled in the demands of space activities. Although DOD has issued a space human capital strategy, the department does not have a plan that explains how it intends to achieve the goals in its strategy. Without such an implementation plan, developed jointly by the DOD Executive Agent for Space and the military services, DOD will not be in a sound position to effectively monitor and evaluate implementation of the strategy. Further, without clear performance measures, DOD and the services would be unable to assess whether actions intended to meet departmentwide goals and objectives are effective. Therefore, it is not clear that DOD can achieve the strategy's purpose of integrating the services' space personnel, to the extent practicable, into a cohesive total force of well-qualified military and civilian personnel. Failure to achieve this could jeopardize U.S. primacy in this critical and evolving national security area. The military services' efforts to implement initiatives to develop their space cadres vary, and not all initiatives are linked to service strategies and integrated with DOD's overall strategy. Further, some of the initiatives are not fully developed and will require several years to complete. 
Because the Army and the Navy lack a strategy to provide direction and focus for their efforts to develop their space cadres and provide a basis to assess the progress of their initiatives, it is unclear whether they will have sufficient numbers of space-qualified professionals to meet future requirements in joint and service space planning, programming, acquisition, and operations. Furthermore, without an organizational focal point with responsibilities for managing and coordinating space cadre efforts, the Army and the Navy may not have the ability to develop and retain the appropriate number of personnel with the right skills to meet both their needs and the joint requirements of the national security space community. Until the Army and the Navy develop strategies synchronized with the department's overall strategy and establish a management approach to implementing their strategies, they may not be able to support the department's strategic goals and objectives and thus may undermine efforts to strengthen this important mission area. Recommendations for Executive Action We recommend that the Secretary of Defense take the following five actions:
Direct the DOD Executive Agent for Space, in conjunction with the military services, to develop an implementation plan for the DOD space human capital strategy. The plan should include performance goals, milestones, resources needed, performance indicators, and an evaluation process.
Direct the Secretary of the Army to develop a strategy for the Army's space cadre that incorporates long-term goals and approaches and is consistent with the DOD space human capital resources strategy.
Direct the Secretary of the Army to establish a permanent organizational focal point for developing and managing the Army's space cadre.
Direct the Secretary of the Navy to develop a strategy for the U.S. Navy's space cadre that incorporates the Navy's long-term goals and approaches and is consistent with the DOD space human capital resources strategy.
Direct the Secretary of the Navy to establish a permanent organizational focal point in the U.S. Navy for developing and managing the service's space cadre.
Agency Comments and Our Evaluation In commenting on a draft of this report, DOD generally agreed with our report and our recommendations. DOD's comments are reprinted in their entirety in appendix II. DOD also provided technical comments that we have incorporated as appropriate. DOD partially concurred with our recommendation for the Army to establish a permanent organizational focal point for developing and managing the Army's space cadre. DOD stated that two different entities are involved with managing the Army's space cadre and the Army is in the process of determining whether a single organization will manage its space cadre. During our review, Army officials had differing views on the need to establish a single organizational focal point. They told us that the Army wants to decide whether to expand its space cadre beyond military officers before it designates management responsibilities for the space cadre. We believe that the Army should establish a single organizational focal point to develop its space cadre in a timely manner. This would help the Army to develop and retain the appropriate number of personnel with the right skills to meet Army and joint needs. 
We are sending copies of this report to interested congressional committees; the Secretary of Defense; the DOD Executive Agent for Space; the Secretaries of the Army, the Navy, and the Air Force; and the Commandant of the Marine Corps. We will also make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions, please contact me at (202) 512-4300. Key contributors to this report are listed in appendix III. Appendix I: Scope and Methodology To determine whether the Department of Defense’s (DOD) space human capital strategy and management approach to implementing the strategy promote the development and integration of the military services’ space cadres, we reviewed and analyzed the strategy and compared it to other human capital strategies, the human capital models in our prior reports, and the management principles contained in the Government Performance and Results Act of 1993. We discussed the strategy and its implementation with officials in the Office of the Under Secretary of Defense for Personnel and Readiness and the Assistant Secretary of Defense for Networks and Information Integration. We also discussed the strategy and its implementation with DOD’s Executive Agent for Space and the officials from his office who led the development of the strategy. We assessed the actions taken to date to implement the strategy. We also discussed whether the strategy would effectively integrate the services’ efforts with officials in each of the military services and at the National Reconnaissance Office. Specifically, for the military services, we interviewed officials and gathered information at the Air Force Space Command, Peterson Air Force Base, Colorado; the Army Office of the Deputy Chief of Staff for Operations and Plans, Arlington, Virginia; the Army Space and Missile Defense Command, Arlington, Virginia; the Navy Space Cadre Advisor, Arlington, Virginia; and the Office of Plans, Policies, and Operations, Headquarters, U.S. Marine Corps, Arlington, Virginia. To assess the extent to which the military services have planned and implemented actions to develop and manage their space cadres, we analyzed documentation on strategies, initiatives, and other implementing actions at each service and discussed them with service officials. Locations visited to accomplish this objective were the Air Force Space Command, Peterson Air Force Base, Colorado; the Air Force Space Operations School, Colorado Springs, Colorado; the Army Office of the Deputy Chief of Staff for Operations and Plans, Arlington, Virginia; the Army Space and Missile Defense Command, Arlington, Virginia; the Army Force Development and Integration Center, Colorado Springs, Colorado; the Navy Space Cadre Advisor, Arlington, Virginia; and the Office of Plans, Policies, and Operations, Headquarters, U.S. Marine Corps, Arlington, Virginia. We also met with officials from the National Reconnaissance Office, but we did not assess its workforce plan because military personnel assigned to the office are drawn from the space cadres of the military services. We conducted our review from October 2003 through June 2004 in accordance with generally accepted government auditing standards. We did not test for data reliability because we did not use DOD generated data in our analysis of DOD’s management approach. 
Appendix II: Comments from the Department of Defense Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Acknowledgments In addition to the individual named above, Alan M. Byroade, John E. Clary, Raymond J. Decker, Linda S. Keefer, Renee S. McElveen, and Kimberly C. Seay also made key contributions to this report.
The Department of Defense (DOD) relies on space for many critical capabilities, and its continued success in space operations depends on having sufficient space-qualified personnel. Space-qualified personnel are needed to develop technology, doctrine, and concepts and operate complex systems. In the National Defense Authorization Act for Fiscal Year 2004, Congress required DOD to develop a strategy for developing and integrating national security space personnel. DOD completed it in February 2004. Congress also required GAO to assess DOD's space human capital strategy and the military services' efforts to develop their space personnel. In the first of two required reports, GAO assessed (1) whether DOD's space human capital strategy and management approach promote development and integration of the services' space personnel and (2) the extent of the services' initiatives to develop and manage their space personnel. DOD's space human capital strategy is a significant first step that promotes the development and integration of DOD's space personnel by providing strategic goals and objectives; however, DOD does not have a complete results-oriented management approach to implement the strategy because it does not include an implementation plan that details specific actions, time frames, and evaluation measures. The space human capital strategy provides general direction for developing and integrating DOD space personnel, and it identified key actions needed for implementation. DOD has not completed any of these actions. Without an implementation plan, DOD will not be in a sound position to effectively monitor and evaluate implementation of the strategy and achieve the strategy's purpose of integrating the services' space personnel into a cohesive DOD total force. The military services vary in the extent to which they have identified and implemented initiatives to develop and manage their space personnel. The Air Force and the Marine Corps have taken significant actions in developing and managing their space personnel, including developing space human capital strategies and designating organizational focal points. The Air Force, which has the largest number of space personnel, approved its space human capital strategy in July 2003, and it is implementing its initiatives. The other services are working on similar initiatives and have completed some, but many will take years to fully implement. The Army's and the Navy's actions in developing their space personnel have been limited because they do not have clear goals and objectives for developing their space personnel or organizational focal points to manage them. Without these tools, the Army and the Navy may not be able to determine their requirements for space personnel and develop sufficient numbers of space personnel with the necessary training, education, and experience to meet service and joint needs.
GAO_GAO-17-348
Background The Washington Metropolitan Area Transit Authority's (WMATA) Metrorail system has experienced a variety of serious safety incidents in recent years (see fig. 1 below). On June 22, 2009, one Metrorail train struck the rear of a second train stopped near the Fort Totten station on the Red Line, resulting in nine deaths and over 50 injuries. The National Transportation Safety Board (NTSB) report on the incident found that WMATA failed to institutionalize, and employ system-wide, an enhanced track-circuit verification test procedure that was developed following near-collisions in 2005. NTSB also found evidence of an ineffective safety culture within WMATA. More recently, WMATA has experienced smoke and fire incidents involving the electrical cables and other components supporting its third-rail system. On January 12, 2015, a Metrorail train stopped after encountering heavy smoke in the tunnel between the L'Enfant Plaza station and the Potomac River Bridge on the Yellow Line; the smoke was caused by electrical arcing resulting from a short circuit on the third rail power system, and the incident caused one passenger's death and numerous injuries. In a report on this incident, the NTSB again found a lack of a safety culture within WMATA. NTSB specifically noted deficiencies in WMATA's response to smoke reports, tunnel ventilation, railcar ventilation, and emergency response, as well as in the oversight and management of WMATA. In November 2015, WMATA's new General Manager began his tenure and initiated a variety of efforts to address WMATA's Metrorail safety issues. On March 14, 2016, an electrical fire occurred near the McPherson Square station involving the same kind of power cable that caused the L'Enfant Plaza smoke incident. Following this fire, WMATA closed the entire Metrorail system for a day for emergency inspections of the system's third-rail power cables. On May 19, 2016, WMATA announced SafeTrack, "a massive, comprehensive, holistic effort to address safety recommendations and rehabilitate the Metrorail system on an accelerated basis by expanding all available maintenance windows." The primary focus of SafeTrack is rehabilitating Metrorail's track infrastructure by replacing over 45,000 crossties, which are the wooden beams that lie across the railroad bed on aboveground sections of the track, and 35,000 fasteners, which secure rails directly to concrete on underground or aerial sections of the track where wooden crossties are not used. SafeTrack is being carried out through a series of "surges" that involve intensive work on specific areas of track that are either shut down to normal traffic or have only one of the two tracks open, a type of operation known as "single tracking" (see fig. 2 below). SafeTrack also involves the reduction of operating hours to allow additional work to be carried out overnight and on weekends in non-surge areas. Although the primary focus of SafeTrack is track assets, WMATA is also using the extended outages to address other safety concerns, such as concerns regarding power cables and other electrical components raised by NTSB and the Federal Transit Administration (FTA). According to WMATA's initial announcement, the project was designed to bring Metrorail's track infrastructure to a "state of good repair," which WMATA defines as the condition at which individual railroad assets can be sustained at ongoing, annual replacement rates under normal maintenance cycles. WMATA estimates that SafeTrack will cost approximately $120 million. 
According to WMATA and FTA officials, these costs will be covered by about $48 million in federal funding, which includes two FTA formula grants as well as funding authorized by the Passenger Rail Investment and Improvement Act of 2008 (PRIIA). The PRIIA funding is also matched by over $30 million in local funds from the three jurisdictions that help fund WMATA. Beyond the almost $80 million in federal and local matching funds, SafeTrack will require an additional $40 million in fiscal year 2017 funding; according to WMATA, the sources of this funding are yet to be finalized. Although SafeTrack was not specifically included in WMATA's approved fiscal year 2016 or 2017 budgets, WMATA amended its fiscal year 2017 budget in November 2016, with board approval, to include additional funding for the project. WMATA's track rehabilitation projects and other capital investments are made through a 6-year Capital Improvement Program, with the current version covering fiscal years 2017 through 2022. Other transit agencies with aging infrastructure like WMATA have also undertaken, or plan to carry out, large-scale rehabilitation projects that involve extended disruptions to normal revenue service. For example, in 2013 the Chicago Transit Authority (CTA) shut down the southern half of one of its lines for 5 months to completely rebuild the railroad and renovate rail stations on the branch. Additionally, New York City Transit (NYCT) is planning to shut down the Canarsie subway tunnel connecting Manhattan and Brooklyn to facilitate extensive repairs from damage caused by Hurricane Sandy in 2012. Similarly, the Port Authority of New York and New Jersey is rehabilitating tunnels used by its Port Authority Trans-Hudson (PATH) service between New Jersey and Manhattan and installing positive train control technology, which can reduce the risk of accidental collision between trains on the same track. In recent years, FTA has been provided with an expanded role in overseeing public transportation safety within WMATA and in other transit agencies. The Moving Ahead for Progress in the 21st Century Act (MAP-21) expanded FTA's safety oversight role over public transportation systems and established the public transportation safety program, providing FTA with new authority to inspect and audit a public transportation system. MAP-21 also required FTA to promulgate regulations requiring states to establish state safety oversight programs and agencies for states' public transportation systems. Additionally, MAP-21 provided FTA with more safety oversight authority and more options for enforcement when transit agencies were found to be out of compliance with federal safety laws. For example, in response to concerns regarding WMATA's safety performance over the last decade, FTA conducted a Safety Management Inspection of the WMATA rail and bus systems. The Safety Management Inspection evaluated WMATA's operations and maintenance programs, safety management capabilities, and organizational structures. FTA found that, in recent years, WMATA has implemented new management initiatives and programs to address safety concerns, but organizational deficiencies and operational concerns continue to limit WMATA's effectiveness in recognizing and resolving safety issues. For example, FTA found that WMATA work crews do not have sufficient access to the rail right-of-way to perform critical inspection, testing, and maintenance activities. FTA also found serious safety lapses in the rail operations control center. 
More broadly, FTA also reported that in key areas, WMATA's organization is not effectively balancing safety-critical operations and maintenance activities with the demand for passenger service. In response to WMATA safety incidents, FTA assumed temporary and direct safety oversight of WMATA in October 2015. Specifically, as part of its investigation of the January 2015 smoke and fire incident near the L'Enfant station, NTSB found that the Tri-State Oversight Committee's safety oversight of WMATA was deficient and recommended that the Department of Transportation (DOT) seek an amendment to federal law so that the Federal Railroad Administration (FRA) within DOT could exercise regulatory oversight over the WMATA rail system. DOT agreed that the Tri-State Oversight Committee was deficient and ineffective, but disagreed with NTSB that the most urgent and effective solution was to transfer safety oversight of WMATA's rail transit system to the Federal Railroad Administration. Instead, in October 2015, DOT directed FTA to take direct and temporary control of safety oversight at WMATA from the Tri-State Oversight Committee. To perform direct safety oversight of WMATA, FTA established the FTA WMATA Safety Oversight (FWSO) office, which is currently composed of FTA personnel, inspectors on detail from FRA, and contractor support staff, according to FTA officials. In February 2016, FTA found the Tri-State Oversight Committee was incapable of enforcing its safety findings and thus, using new authority provided by the Fixing America's Surface Transportation (FAST) Act, FTA determined that it would continue with its direct safety oversight of WMATA. FTA's FWSO and region three office, which includes the Washington metropolitan area, have jointly managed oversight of SafeTrack. When WMATA announced SafeTrack in May 2016, the FWSO was in place and performed initial safety oversight of the project. However, FTA's various regional offices exercise project management oversight over "major capital projects," which include, among other things, projects that involve the rehabilitation or modernization of an existing fixed guideway with a total project cost in excess of $100 million. Using this project management oversight authority for major capital projects, FTA can monitor the project's progress to determine whether a project is on time, within budget, in conformance with design criteria, constructed to approved plans and specifications, and is efficiently and effectively implemented. According to FTA officials, FTA designated SafeTrack as a major capital project based upon WMATA's decision to group together funding from multiple FTA formula grants, as well as funding authorized by PRIIA, and to manage those activities as a discrete project estimated to cost more than $100 million. FTA's region three office provides project management oversight of SafeTrack. FTA has other efforts to improve the safety and performance of public transportation systems. For example, in July 2016, FTA issued its final rule establishing a National Transit Asset Management System in accordance with section 20019 of MAP-21. Transit agencies are required to have an initial transit asset management plan completed by October 2, 2018. Transit agencies' plans must include an inventory of the number and type of capital assets and a condition assessment of those inventoried assets for which a provider has direct capital responsibility, among other elements. 
In August 2016, FTA also issued a Public Transportation Safety Program final rule establishing rules to support its administration of the public transportation safety program. The rule provides the framework for FTA to monitor and enforce transit safety. WMATA Did Not Fully Follow Leading Practices When Planning SafeTrack Because It Wanted to Address Safety Issues Immediately, but Future Projects Could Benefit from Additional Planning WMATA's planning of SafeTrack did not fully align with leading project management practices, including some that are focused on projects for rehabilitating transit assets. Specifically, while WMATA's efforts to coordinate with local stakeholders after SafeTrack began have generally been in line with such practices, WMATA did not (1) comprehensively collect and assess data on its assets, (2) analyze alternatives, or (3) develop a project management plan before starting work. WMATA did not follow these practices because it believed it needed to start work immediately to address critical safety issues. However, by not following these leading practices, WMATA lacks assurance that the accelerated approach taken with SafeTrack is the most effective way to identify and address safety issues. WMATA also lacks a policy requiring it to follow these leading planning practices for large-scale rehabilitation projects, as well as procedures specifying how to do so. Without such a policy and procedures in place, WMATA lacks a framework to comprehensively plan future large-scale rehabilitation projects to meet their objectives. WMATA's Collection and Use of Data in Planning SafeTrack Did Not Align with Leading Practices, Though New Asset Inventory Is Being Developed Leading management practices for transit rehabilitation projects state that transit agencies should collect and use data on assets when planning projects. Public transit agencies have a wide variety of assets to maintain, including track and third rail infrastructure. The Transit Cooperative Research Program (TCRP) report on prioritizing the rehabilitation of capital assets states that transit agencies should collect detailed information on assets, including data on the age and condition of infrastructure. The TCRP report also states that agencies should use data to assess the conditions of assets. This assessment should then form the basis of prioritizing rehabilitation work. Indeed, according to TCRP, "the process of evaluating and prioritizing rehabilitation and replacement work starts with collecting data on existing transit capital assets." Though WMATA collected data on its track assets through inspections when planning SafeTrack, those inspections were not comprehensive because they focused on specific items like rail crossties and did not cover all track-related infrastructure. Specifically, in 2015, WMATA conducted inspections of its Metrorail track to collect data on the condition of its track infrastructure and identify the work necessary to bring the track to a state of good repair. The inspections were carried out by a contractor for WMATA's Track and Structures department as part of WMATA's Track Quality Improvement Program (TQIP). According to WMATA officials we spoke with, these inspections were necessary under TQIP because they could not rely solely on track condition data in WMATA's existing asset database. Indeed, WMATA's Office of Inspector General (OIG) recently found that WMATA's asset database does not have adequate controls and oversight in place to properly manage assets, among other concerns. 
WMATA used the data collected in its 2015 inspections as the primary source for identification of the most degraded areas of track, which would be subject to SafeTrack surges. However, the data collected during the inspections focused on the rail crossties and did not cover all infrastructure in the Metrorail track area. For example, according to WMATA officials, the inspections did not include an examination of all interlockings or of all track power systems, including the electrical cables that power the third rail system. According to WMATA officials we spoke with, these systems were not included in the inspections because the Track and Structures department leading the TQIP effort is not responsible for the maintenance of other systems. Electrical cables, for example, are managed by WMATA's Power Engineering department. Data on the condition of assets in non-track systems have generally been collected by the responsible department, but according to WMATA officials, these data were not used to identify areas for SafeTrack work. Officials with other transit agencies we spoke with said that accurate and comprehensive data on assets are crucial to identifying and prioritizing rehabilitation efforts. For example, NYCT officials told us that they rely on data from their transit asset management database to identify track sections with the greatest number of defects, or areas in need of repair, to prioritize sections of tracks for rehabilitation activities. Massachusetts Bay Transportation Authority (MBTA) officials we spoke with said that their agency has developed a state-of-good-repair database that includes an inventory of the age of assets that managers can use to prioritize rehabilitation and replacement projects. Officials from CTA said they use a new asset management system, which has detailed information on the condition of CTA's assets, to better identify and prioritize capital projects. WMATA's planning of SafeTrack relied on limited data regarding the condition of Metrorail assets, in part because the agency lacks internal requirements governing the collection and use of asset information in planning projects. More specifically, WMATA does not have a policy or procedures requiring it to collect and use asset data, and coordinate with other departments on the collection of such data when planning large-scale rehabilitation projects. To ensure that such proper management practices are consistently carried out, the Committee of Sponsoring Organizations of the Treadway Commission (COSO) internal control framework used by WMATA states that management should set policies establishing what is expected and relevant procedures specifying the necessary actions to carry out the policy. As we reported recently, asset management can help transit agencies optimize limited funding so that they receive the "biggest bang for their buck" when rehabilitating and replacing assets. By not gathering and using detailed data on all aspects of the track infrastructure when planning SafeTrack, WMATA decision-makers may not have had sufficient information to develop project objectives and properly prioritize SafeTrack work. Indeed, serious safety incidents have continued to occur on the Metrorail system during SafeTrack on assets that were not being addressed in the project. On July 29, 2016, a train derailed near the East Falls Church station. This derailment occurred on an interlocking, a part of track not scheduled at that time for rehabilitation under SafeTrack. As a result of this incident, WMATA modified the scope of future SafeTrack surges to include the rehabilitation of interlockings. 
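The defect-driven prioritization that NYCT, MBTA, and CTA officials described can be illustrated with a brief sketch. The sketch below is purely illustrative: the segment names, defect counts, and asset categories are hypothetical and are not drawn from WMATA's or any other agency's asset database; it simply shows how combining track and non-track defect data supports ranking segments for rehabilitation.

```python
# Minimal sketch of defect-driven prioritization of track segments.
# All segment names, defect counts, and the category breakdown below are
# hypothetical; they are not drawn from WMATA's or any other agency's data.

from dataclasses import dataclass

@dataclass
class TrackSegment:
    name: str                  # e.g., a stretch of track between two stations
    crosstie_defects: int
    interlocking_defects: int
    power_cable_defects: int

    def total_defects(self) -> int:
        # Combining track and non-track defects gives a fuller picture of
        # condition than crosstie data alone.
        return (self.crosstie_defects
                + self.interlocking_defects
                + self.power_cable_defects)

def prioritize(segments):
    """Order segments from worst to best overall condition."""
    return sorted(segments, key=lambda s: s.total_defects(), reverse=True)

if __name__ == "__main__":
    inventory = [
        TrackSegment("Segment A", crosstie_defects=120, interlocking_defects=4, power_cable_defects=9),
        TrackSegment("Segment B", crosstie_defects=85, interlocking_defects=11, power_cable_defects=2),
        TrackSegment("Segment C", crosstie_defects=140, interlocking_defects=0, power_cable_defects=0),
    ]
    for seg in prioritize(inventory):
        print(f"{seg.name}: {seg.total_defects()} total defects")
```

A roll-up of this kind depends entirely on the completeness of the underlying inventory, which is why the leading practices discussed above emphasize collecting condition data across all departments rather than for a single asset class.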
Additionally, FTA has directed WMATA to complete safety critical work both prior to starting and during SafeTrack, resulting in changes to the scope and schedule of SafeTrack, as discussed later in this report. Though WMATA did not utilize comprehensive asset information in planning SafeTrack, it is developing a new inventory, as required by FTA’s 2016 Transit Asset Management final rule. More specifically, WMATA is currently conducting a Transit Asset Inventory and Condition Assessment, and is working with FTA to develop its new transit asset inventory. According to WMATA, this effort will help ensure that it has a complete, consistent, accurate, and centralized repository of relevant asset-related data. A reliable repository of asset data can then facilitate data-driven maintenance and capital investment decision making. WMATA has completed the first of two phases for this assessment. In the first phase, WMATA sought to conduct an initial asset inventory and condition assessment. In the second phase, WMATA plans to further develop how it will manage its assets and collect additional data, among other things. WMATA’s Analysis of Alternatives for Improving the State of Repair of the Track Did Not Align with Leading Practices Leading management practices for transit rehabilitation projects state that transit agencies should have a policy in place for evaluating project alternatives. The TCRP report on prioritizing the rehabilitation of capital assets states that agencies should generate alternative plans for achieving a state of good repair and quantify the costs and impacts of those alternatives. As noted above, the COSO internal controls framework used by WMATA also states that management should establish policies and procedures to help ensure that proper practices are carried out. Though WMATA considered different plans for improving the state of repair of its track infrastructure, it did not quantify the costs and impacts of each alternative. WMATA currently lacks a policy requiring alternatives analysis for large-scale rehabilitation projects. After collecting data from track inspections in 2015, WMATA developed three alternatives for improving the state of repair of its track infrastructure. These alternatives included 8, 10, and 22-month work schedules. According to WMATA officials, these alternatives included different levels of service disruptions, including extensive single-tracking and track section closures, but generally included the same work tasks. According to WMATA officials, they ultimately settled on the initially announced 10-month plan, dubbed SafeTrack, because it best balanced rider disruption with addressing the urgent safety needs of the system. Additionally, they said that WMATA’s ability to make effective and efficient use of time on the track was also a primary consideration. However, WMATA did not fully assess the alternatives to improving the state of its track infrastructure. In particular, WMATA did not quantify the effects of the various alternatives on extending the life of the track assets, on reducing maintenance costs, and on Metrorail ridership. WMATA also did not quantify the costs or establish a detailed budget for its alternatives and still has not determined the final funding sources for its selected alternative. 
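As a rough illustration of what quantifying alternatives could look like, the sketch below compares 8-, 10-, and 22-month options on cost, rider disruption, and duration. Every figure and weight in it is an assumption for illustration only; none reflects WMATA's actual estimates for its alternatives.

```python
# Minimal sketch of quantifying and comparing rehabilitation alternatives.
# The cost, disruption, and schedule figures below are hypothetical placeholders,
# not WMATA's actual estimates for its 8-, 10-, and 22-month plans.

alternatives = [
    {"name": "8-month plan",  "est_cost_millions": 130, "rider_disruption_score": 9, "months": 8},
    {"name": "10-month plan", "est_cost_millions": 120, "rider_disruption_score": 7, "months": 10},
    {"name": "22-month plan", "est_cost_millions": 110, "rider_disruption_score": 4, "months": 22},
]

def score(alt, weights):
    """Lower is better: a weighted sum of cost, disruption, and duration."""
    return (weights["cost"] * alt["est_cost_millions"]
            + weights["disruption"] * alt["rider_disruption_score"]
            + weights["schedule"] * alt["months"])

weights = {"cost": 1.0, "disruption": 10.0, "schedule": 2.0}  # illustrative weights only
for alt in sorted(alternatives, key=lambda a: score(a, weights)):
    print(f"{alt['name']}: weighted score {score(alt, weights):.1f}")
```

The point of such a comparison is not the particular weights chosen but that costs and impacts are made explicit, so decision-makers can see what is being traded away when one alternative is selected over another.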
Although WMATA estimates that the SafeTrack project will cost approximately $120 million to complete, it has identified funding sources for about $80 million of these costs and has yet to determine how it will fund the remaining $40 million. Other transit agencies we spoke with described detailed considerations of alternatives to carrying out large-scale rehabilitation projects. For example, CTA officials we spoke with said they developed and assessed different plans to rehabilitate CTA's Red Line South track, including estimates of the costs and impacts of each alternative. Further, officials at PATH told us that they selected a particular approach to upgrading a tunnel they use for trains that travel from New Jersey to Manhattan, New York, because it balanced rider disruption and work efficiency. The PATH officials told us that they conducted approximately one year of planning in advance of this project and developed seven different scenarios before finally settling on the current approach. WMATA did not fully assess alternatives to rehabilitate its track assets because it believed it needed to start work immediately to address critical safety issues. At the time SafeTrack was planned, according to officials we spoke with, WMATA leadership was making critical decisions on how to address systemic deferred maintenance. Indeed, according to WMATA's Chief Safety Officer, in a call on May 10, 2016, a senior official within FTA's FWSO office notified WMATA that FTA was considering taking action to "shut down" the entire Metrorail system due to safety concerns. According to WMATA officials, SafeTrack was conceived as WMATA's unique and necessary response to the state of its track infrastructure. Further, WMATA officials noted that the agency is committed to devoting the resources necessary to bring the track to a state of good repair, and to developing preventive maintenance programs that would prevent similar safety-critical situations in the future. Nevertheless, by not having a policy and procedures in place requiring analysis of alternatives for future large-scale rehabilitation projects, WMATA lacks a framework to comprehensively plan such projects to meet their objectives. WMATA plans to spend over $56 million a year on track rehabilitation projects alone starting in fiscal year 2018. If WMATA were to make decisions about the scope and prioritization of these projects without full information about the various alternatives, it may not select an approach that best balances costs and impacts. SafeTrack Work Began Before Key Elements of a Project Management Plan Were in Place, Inconsistent with Leading Practices Leading project management practices emphasize the importance of developing project management plans. The Project Management Institute's (PMI) PMBOK® Guide states that a comprehensive project management plan should be developed before a project begins so that it is clear how the project will be executed, monitored, and controlled. More specifically, the plan should include the critical information for managing a project's scope, schedule, and cost, according to established baselines and in consideration of project risks, quality standards, and other items. As discussed below, federal law also requires that recipients of federal financial assistance for major capital projects prepare a project management plan. According to WMATA officials, WMATA did not develop a comprehensive project management plan before beginning SafeTrack because they believed a project management plan was not appropriate for such a project. 
WMATA considers SafeTrack to consist of accelerated but normal maintenance activities. According to WMATA officials, a project management plan is best suited for new construction projects. WMATA therefore chose to manage SafeTrack using tools that they considered more appropriately suited for managing coordinated maintenance tasks. For example, WMATA uses detailed “march charts” to plan and coordinate its various maintenance tasks within surge work areas. However, according to WMATA officials, they did not develop a plan that clearly defined the budget, execution, monitoring, and control of the project before beginning SafeTrack. According to FTA officials we spoke with, FTA has discretion regarding when it determines a project is major and when a project management plan must be submitted. As discussed later in this report, WMATA developed a project management plan during the initial months of SafeTrack implementation, though FTA has not yet approved WMATA’s plan. Other transit agencies we spoke with said that they generally developed extensive plans for their large-scale rehabilitation projects. For example, CTA officials we spoke with said that they conducted extensive planning, and developed a project management plan, for their Red Line South reconstruction project, even though they did not use federal funds and therefore were not required by FTA to develop such a plan. Additionally, in planning for the shutdown and rehabilitation of the Canarsie tunnel, NYCT is developing a detailed plan that reflects its risk assessments and analysis of lessons learned from previous work, according to the officials we spoke with. Though WMATA developed SafeTrack as a unique response to the state of its track infrastructure, future large-scale rehabilitation projects undertaken by the agency would benefit from the development of a comprehensive project management plan prior to the start of the project. As discussed above, WMATA officials told us that they implemented SafeTrack to respond to a critical safety situation and that they could not postpone this track work to develop a project management plan. SafeTrack, though, involves an unprecedented amount of track work performed over an extended period, significantly disrupts ridership, and is estimated to cost well over $100 million. WMATA currently lacks a policy and procedures requiring the development of a project management plan for large-scale rehabilitation projects like SafeTrack, according to WMATA officials, regardless of whether the work is to be completed in response to an emergency situation or within WMATA’s normal state of good repair efforts. The COSO internal controls framework used by WMATA states that management should have policies establishing what is expected of management and employees, to help mitigate risks to achieving goals. Although WMATA told us that it has a manual on project implementation that is focused on the implementation and close-out phases of a project, it does not yet cover the planning phase. Further, although a project management plan is required for public-transportation-related major capital projects receiving federal financial assistance, WMATA may undertake future large-scale rehabilitation projects that do not meet the major capital project definition or that do not use federal funds at all. Such projects could still benefit from having a project management plan in place before beginning the project—consistent with leading practices—to manage the project’s scope, schedule, costs, and other factors. 
Without a policy and procedures that require the development of a plan for future large-scale rehabilitation projects, WMATA lacks a key tool to ensure its projects are completed on time, on budget, and according to quality standards. WMATA Provided Little Notice of SafeTrack to Local Stakeholders but Communication and Coordination during Surges Generally Align with Leading Practices Leading management practices state, and other transit agencies we spoke with agreed, that identifying and coordinating with external stakeholders is part of proper project planning. The PMI PMBOK® Guide states that agencies should identify stakeholders for their projects, communicate and work with stakeholders to meet their needs, address issues as they occur, and foster stakeholder engagement in project activities. FTA project guidelines also note that communication with the public can be crucial for receiving the necessary buy-in to move a project forward. Other transit agencies we spoke with said that they generally began stakeholder engagement weeks, if not months, prior to the beginning of projects. For example, the PATH officials we spoke with said that they began communicating with government officials and the public about the proposed tunnel weekend shutdowns 2 months before the project started. Similarly, according to NYCT officials, they presented various schedule alternatives for rehabilitating the Canarsie tunnel to the local communities directly affected by the tunnel's closing to explain NYCT's rationale for completing the work, as well as to discuss the benefits and challenges of different plans. According to WMATA officials, urgent safety concerns necessitated an accelerated planning process, which precluded advance notice of SafeTrack to local jurisdictions, other regional transit agencies, and the public. Officials from one local county we spoke with said that they had about a month between when they first heard about SafeTrack and when the first surge began in June 2016. According to local officials we spoke with, little advance notice of SafeTrack caused some miscommunication between local jurisdictions as well as difficulty identifying funding for mitigation efforts. Specifically, one local county official told us that the county had to quickly develop a plan to bring 25 recently retired buses back into service to provide options to Metrorail riders affected by SafeTrack surges. The county estimated that it incurred approximately $1 million in bus driver labor and other costs as a result of SafeTrack. The county official told us that the county expects to be compensated by the state for these expenses. Nonetheless, as the SafeTrack project has progressed, WMATA's efforts to coordinate with local stakeholders have generally been in line with leading practices. WMATA officials identified stakeholders for the SafeTrack project, including local transit agencies and elected officials. WMATA utilized a variety of methods to communicate and coordinate with local transit agencies and jurisdictions during SafeTrack. For instance, one local official we spoke with said that WMATA's Joint Coordinating Committee, which brings local officials together to plan for major events affecting regional transportation, is an effective mechanism for sharing information, such as local plans for the use of shuttle buses in areas affected by surges. Local officials also said that communication and coordination between WMATA and jurisdictions has been effective, especially after the first few months of SafeTrack. 
For example, one local official told us that WMATA has provided the jurisdiction with prompt information about the upcoming surges through weekly planning meetings at WMATA headquarters, as well as through informal coordination with WMATA staff on specific surges. WMATA has also effectively communicated with the public, according to the local officials we spoke with. WMATA officials told us that they have used a variety of measures to communicate SafeTrack plans to the public, including press releases issued to local news media outlets; postings on social media, such as Facebook, YouTube, and Twitter; and a SafeTrack web page that includes details about the overall project and each surge. Officials from one jurisdiction said that WMATA has provided good information on its website and that having additional WMATA staff at SafeTrack-affected stations and bus areas has also been useful. According to one local official, Metrorail riders have demonstrated a high level of awareness about SafeTrack as a result of such efforts. WMATA Is Using Several Leading Practices to Implement SafeTrack and Improve the Quality of Completed Work WMATA's implementation of SafeTrack generally aligns with leading project management practices. Specifically, during the course of each SafeTrack surge, WMATA officials collect and document information about the work performed and the condition of assets. WMATA officials also develop lessons learned during and after each surge period, and use those lessons during subsequent maintenance and planning efforts. Last, WMATA developed a new organization-wide quality control and assurance framework that it is implementing for the first time through SafeTrack. WMATA Has Consistently Collected and Monitored Work Performance Data and Information Leading project management practices emphasize the importance of collecting and monitoring work performance data and information. The PMI PMBOK® Guide states that throughout the lifecycle of a project, organizations will generate a significant amount of work performance data and work performance information that is collected, analyzed, documented, and shared with stakeholders. These data and information are typically created and documented after a project begins and are a key element in controlling a project's scope, schedule, cost, and risk. Organizations can collect work performance data and information to identify trends and process improvements. Work performance data are also a key factor in an organization's overall quality management for projects, as they provide a foundation for implementing quality control and quality assurance practices, as well as stakeholder engagement, since they inform discussions on project performance. The TCRP report on prioritizing the rehabilitation of capital assets also states that transit agencies should define data collection and inspection protocols, and ensure the data are detailed and current enough to support decisions on asset rehabilitation or replacement. Based on procedures WMATA has established, officials have collected and documented information about the work performed and the condition of WMATA's assets in SafeTrack surge areas, consistent with leading project management practices. Prior to each surge, WMATA officials from relevant departments have conducted inspections on the conditions of both the track infrastructure and other non-track assets. 
WMATA officials have used these pre-surge inspection data to develop the overall scope of work for each surge, as well as to identify each component planned for maintenance or replacement. According to WMATA officials, although prior to SafeTrack, inspections focused solely on track assets such as the condition of crossties, pre-surge inspections have since included assessments on the condition of both track and non-track assets, such as power cables. However, the number of assets planned for maintenance or replacement in each surge varies depending on the conditions of the assets in question. WMATA officials told us that during each surge, they regularly discuss progress with departments that are responsible for ensuring completion of scheduled work, as well as monitoring teams' work quality and site safety. At the end of each surge, WMATA officials have compiled totals for all work completed, after verification and completion of the various departments' quality control processes. WMATA has then compared the completed work against the pre-surge work plan. WMATA has used the completed work data to develop its surge progress reports, which it issues to stakeholders and makes available to the public at the end of each surge. However, although WMATA is collecting information on the condition of assets repaired through SafeTrack, WMATA does not have a policy or procedures requiring it to use asset data when planning future large-scale rehabilitation projects, as previously discussed. WMATA has also used work performance data and information to identify the amount of rehabilitation work that can be performed during a given maintenance window. For example, WMATA is not replacing all crossties within a given SafeTrack segment; rather, its goal is to ensure that 75 percent of the ties in a surge area are in good condition so that it will not need to replace all of them at the same time in the future. WMATA officials stated they believe that this approach will allow them to move to a more sustainable crosstie replacement model, eliminate maintenance backlog, and achieve a state of good repair for those assets. WMATA has also incorporated other types of data, such as logistical constraints for available work crews and equipment, to inform its assessment of how work will be accomplished during each surge. See figure 3 for select track assets that have undergone repair or replacement during SafeTrack. The work performance data collected by WMATA demonstrate that WMATA has renewed or replaced a substantial amount of track infrastructure, as well as other non-track assets, during the course of SafeTrack. According to WMATA officials, SafeTrack work crews have been able to complete work more efficiently than is possible during normal, shorter maintenance windows. For example, WMATA reported that by limiting service for 13 days on the Red Line from Shady Grove to Twinbrook, it was able to replace over 3,500 crossties; this work would have taken more than 2 years to complete if performed only after the end of the rail system's service each day. As shown in table 1, through the first 10 surges, WMATA has replaced more than 26,000 crossties, with its goal being to replace over 45,000 crossties when the project is complete. Through surge 10, WMATA has also replaced more than 4,300 insulators, which support the third rail. WMATA plans to replace more than 11,800 insulators through SafeTrack, and has replaced over 700 power cables as well. 
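The reported totals above can be rolled up into simple progress measures. The sketch below uses the lower-bound figures cited in this report (more than 26,000 of a planned 45,000 crossties and more than 4,300 of a planned 11,800 insulators through surge 10, and roughly 3,500 crossties replaced during one 13-day surge); the roll-up itself is illustrative and is not a representation of WMATA's actual reporting tools.

```python
# Minimal sketch of rolling up reported SafeTrack work totals into progress
# measures. The figures come from the totals cited in this report and are
# treated as lower bounds ("more than" values); the roll-up is illustrative only.

safetrack_goals = {
    "crossties": 45_000,    # planned replacements over the full project
    "insulators": 11_800,
}

completed_through_surge_10 = {
    "crossties": 26_000,    # "more than 26,000" reported through the first 10 surges
    "insulators": 4_300,    # "more than 4,300" reported through surge 10
}

for asset, goal in safetrack_goals.items():
    done = completed_through_surge_10[asset]
    print(f"{asset}: at least {done:,} of {goal:,} replaced "
          f"(~{done / goal:.0%} of the project goal)")

# The report also cites roughly 3,500 crossties replaced during a 13-day surge,
# an average pace of about 270 crossties per day of continuous track access.
surge_pace = 3_500 / 13
print(f"Example surge pace: ~{surge_pace:.0f} crossties per day")
```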
WMATA Has Collected and Implemented Lessons Learned throughout SafeTrack The collection of lessons learned is a key project management step that helps inform an organization's planning and evaluation of its projects, programs, and portfolios and supports process improvements. The PMI PMBOK® Guide states that organizations should identify and collect lessons learned during the course of executing a project to complement their overall knowledge base, particularly with respect to project selection, performance, and risk management. FTA Quality Management Systems Guidelines also state that corrective actions for nonconforming work common to most projects should be recorded as lessons learned and disseminated throughout an organization. In accordance with leading project management practices, WMATA officials have developed lessons learned during and after each surge period, and have used those lessons during subsequent maintenance and planning efforts. For example, WMATA officials said that over the course of the initial SafeTrack surges, they evaluated their work procedures and refined their approach to replacing rail crossties. In particular, they acquired new machines to remove ties and install rail spikes, and implemented better scheduling of the machines and work crews to facilitate more efficient crosstie replacement. As another example, during the course of the initial SafeTrack surges, WMATA officials learned to define clear work limits prior to each surge to improve work efficiency. More specifically, during the initial planning of SafeTrack, officials did not clearly define surge work areas by specific chain markers and instead labeled the ends of the surge areas by Metrorail station. Furthermore, WMATA officials recognized the need to make detailed scope and work plan documents available before the start of each surge in order to prevent confusion regarding expectations, work inefficiencies, and unachieved objectives. WMATA has also used project meetings to capture and disseminate lessons learned among its work teams. Before every surge period, WMATA stakeholders have met to discuss the intended scope of work for the surge, prioritize work tasks, and agree upon a work plan. WMATA officials have then incorporated this information into a 90-day "look ahead" schedule that is used to plan material purchases and verify track rights for work crews. According to WMATA officials, once a surge has ended, WMATA holds "closeout" meetings with its internal stakeholders (including quality assurance officials) to discuss the work performed and lessons learned, which are then included in an official closeout report. The use of closeout meetings after a project work period ends is also consistent with leading project management practices. WMATA Is Establishing and Implementing Policies to Improve the Quality of Work Performed through SafeTrack Leading project management practices emphasize the importance of the management, assurance, and control of quality. The PMI PMBOK® Guide states that organizations should establish policies and procedures that govern quality management for their projects and deliverables. Quality management refers to key processes that comprise a quality framework, including identifying quality requirements and standards, performing quality audits, and monitoring and recording results of executing quality activities in order to assess performance and recommend necessary changes. 
PMI notes that having a quality management framework in place can ensure that project requirements are met and process improvement initiatives are supported. WMATA has developed an agency-wide quality control and assurance framework that is in line with best practices. According to WMATA officials, the agency is implementing a new quality assurance framework for the first time during SafeTrack. In March 2016, WMATA officials established a new, independent quality team called Quality Assurance, Internal Compliance, and Oversight (QICO) that reports directly to the WMATA General Manager. In addition to serving as an independent reviewer of the SafeTrack project, the QICO team is responsible for developing and implementing a new quality framework for the entire organization. According to WMATA officials, this framework has three levels of review for work performed by maintenance groups. Maintenance groups are to provide the first level of review, with managers assessing the quality of the work completed by crews, such as installation and maintenance of assets, and documenting their findings on quality control checklists. Second, QICO is responsible for assessing the overall quality of completed work by reviewing a sample of work tasks completed during the surge, and providing feedback for work teams on quality and safety concerns. This feedback includes preparing surge closeout reports that document any quality, safety, or other concerns and reporting them to WMATA leadership and the relevant work teams involved. The work teams must then address and close out any quality deficiencies through ongoing maintenance activities. Last, the WMATA OIG and Board of Directors are responsible for monitoring internal performance at the agency and approving manager-level decisions regarding quality control and assurance. In addition to this review structure, the QICO team is also developing an enterprise-wide Quality Management System, in accordance with the FTA's Quality Management Systems Guidelines, that is intended to clearly define WMATA's organizational objectives with respect to quality assurance. The QICO team has also developed training programs for maintenance supervisors as well as certification requirements for quality assurance staff. In implementing these procedures, WMATA's QICO team has identified a number of work-related issues (referred to as "discrepancies") during its quality control and quality assurance inspections of SafeTrack work, discrepancies that WMATA is working to address. Specifically, according to SafeTrack surge closeout reports for the first eight surges, QICO inspectors identified a total of 413 discrepancies for WMATA teams to address. Officials are to document these discrepancies in "punch lists" of work tasks that WMATA workers must complete during the course of upcoming routine maintenance. FTA officials told us that the QICO closeout reports are useful for seeing the work completed during each surge, as well as for informing post-surge inspections. Through surge 8, WMATA has closed 231 of the 413 discrepancies identified by QICO, including 93 percent of the safety concerns, 57 percent of the quality concerns, and 53 percent of site condition concerns (see table 2). 
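The QICO closeout process can be thought of as a categorized punch list from which closure rates are computed. The sketch below illustrates that idea with hypothetical discrepancy records; only the categories (safety, quality, site condition) and the overall figure of 231 of 413 discrepancies closed through surge 8 come from the report, while the individual records are invented for illustration.

```python
# Minimal sketch of tracking QICO-style discrepancies on a punch list and
# computing closure rates by category. The individual records are hypothetical;
# the report cites 231 of 413 discrepancies closed through the first eight surges.

from collections import defaultdict

# Each record: (surge number, category, closed?)
punch_list = [
    (1, "safety", True),
    (1, "quality", False),
    (2, "site condition", True),
    (2, "quality", True),
    (3, "safety", True),
    (3, "site condition", False),
]

totals = defaultdict(int)
closed = defaultdict(int)
for _surge, category, is_closed in punch_list:
    totals[category] += 1
    if is_closed:
        closed[category] += 1

for category in sorted(totals):
    rate = closed[category] / totals[category]
    print(f"{category}: {closed[category]} of {totals[category]} closed ({rate:.0%})")

overall = sum(closed.values()) / sum(totals.values())
print(f"overall: {sum(closed.values())} of {sum(totals.values())} closed ({overall:.0%})")
```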
FTA Has Used Inspections and Other Tools to Direct WMATA to Make Safety Repairs and Oversee SafeTrack FTA Has Conducted Inspections and Directed WMATA to Make Safety Critical Repairs before and during SafeTrack Prior to WMATA's announcement of SafeTrack in May 2016, FTA conducted many inspections of WMATA's track infrastructure and internal inspection program. These inspections have informed its oversight of the project. As previously discussed, FTA established the FWSO office in October 2015 to provide temporary and direct safety oversight of WMATA in the absence of an effective state oversight authority, according to FTA. When WMATA first notified FTA of its plans to implement the SafeTrack project on May 6, 2016, the FWSO had been conducting inspections of the integrity of WMATA's track since March 2016. The FWSO was in the process of developing a directive requiring WMATA to take corrective actions to address concerns with its track construction, maintenance, and inspection resources, among other issues. According to FTA, FWSO inspectors conducted 76 inspections of WMATA's Metrorail system from October 2015 through May 2016. For example, in March and April of 2016, FTA inspected over 60 miles of track on all six Metrorail lines, with additional follow-up inspections between late April and June. FTA found that WMATA's track inspection program did not fully account for differences in track types, locations, and train traffic volume when WMATA prioritized its inspections. In addition, FTA found that WMATA maintenance departments did not jointly review inspection results to develop coordinated mitigations and assign limited resources to highest priority issues. Since the start of SafeTrack work in June 2016, FTA has conducted additional inspections and observations of SafeTrack work for each surge. According to FTA inspection data from June 2016 through September 2016, FTA inspectors conducted a total of 102 individual inspections of WMATA rail assets, including 49 inspections that covered SafeTrack-related work. For example, FTA officials told us that, beginning with the first SafeTrack surge, they accompanied WMATA staff on pre-surge inspections to identify repair items and observed work tasks during each surge to assess the quality of the repairs. FTA officials said they also conferred with WMATA staff after each surge to identify work not completed, which WMATA compiles into a prioritized "punch list" of critical repairs to be completed during the course of upcoming routine maintenance. FTA officials told us they have been monitoring WMATA's completion of punch list items and were working with WMATA to ensure progress in completing these tasks. As a result of its inspections, FTA directed WMATA to complete safety critical work both prior to starting and during SafeTrack, specifically: In response to WMATA's initial SafeTrack plan provided to FTA on May 6, 2016, FTA sent a letter on May 11, 2016, directing WMATA to make urgent repairs to reduce the risk of smoke and fire events and the occurrence of arcing insulators on certain sections of the rail system. FTA's letter directed WMATA to repair power cables, insulators, and the electrified third-rail system on certain portions of the Red, Blue, Orange, and Silver lines before beginning SafeTrack. 
In response to WMATA’s proposed schedule changes after the July 29, 2016 derailment of a Silver Line train near the East Falls Church station, FTA sent WMATA another letter on September 1, 2016, encouraging WMATA to also include additional safety-related work in SafeTrack, including: (1) prioritizing additional repairs to arcing insulators on the Red Line; (2) completing unfinished track work from the third surge on the Blue and Yellow lines; and (3) addressing poor tie and fastener conditions on certain sections of the Orange and Blue lines, including a section of the Orange Line that was not originally part of WMATA’s SafeTrack surge plan. WMATA took several actions to address FTA’s concerns. First, WMATA adjusted the order of early surges in its initial SafeTrack plan and has replaced insulators, repaired power cables and third-rail components, and assigned a dedicated work crew to improve drainage on the sections of the Red Line between Medical Center and Van Ness stations, as cited in FTA’s May 11 and September 1 letters. Second, WMATA officials told us that Metrorail completed unfinished work from the third surge on the Blue and Yellow lines during an additional single-tracking event. Finally, in January 2017, WMATA scheduled an additional surge from May to June 2017 to address FTA’s concerns regarding poor track condition on a certain section of the Orange Line. FTA reported that WMATA’s actions taken in response to FTA’s concerns have helped reduce safety incidents. According to an FTA report, WMATA has reduced the prevalence of electrical arcing incidents on the Red Line between Medical Center and Van Ness station as a result of WMATA’s additional maintenance activity in that section of track. Specifically, FTA reported that between March 1, 2016, and June 14, 2016, WMATA had experienced 18 electrical arcing incidents between Medical Center and Van Ness, including 4 major events at the end of April and early May. Since taking additional maintenance actions, WMATA experienced 8 arcing events over the 4 month period from mid- June 2016 through mid-October 2016, and FTA has characterized these events as relatively minor. FTA Has Required WMATA to Prepare and Refine Its SafeTrack Project Management Plan In addition to FWSO inspections of WMATA infrastructure and safety procedures, FTA has also exercised its project management oversight authority over SafeTrack since July 2016. FTA’s project management oversight includes monitoring a major capital project’s progress to determine whether a project is on time, within budget, and in conformance with design criteria, and whether it is constructed to approved plans and specifications, and is efficiently and effectively implemented. As noted previously, major capital projects include, among other things, projects involving the rehabilitation or modernization of an existing fixed guideway with a total project cost in excess of $100 million. FTA found that SafeTrack met the $100-million criteria for a major capital project when it approved an additional $20 million in safety- related federal funding for the project in mid-June 2016, during the first surge. As a result, FTA announced that it would exercise its project management oversight authority over SafeTrack in a July 1, 2016, letter to WMATA. After FTA designated SafeTrack as a major capital project based on criteria established in law, WMATA became subject to the statutory requirement to complete a project management plan. 
Federal law requires that recipients of federal financial assistance for a major capital project related to public transportation prepare a project management plan approved by the Secretary of Transportation, and carry out the project in accordance with the project management plan. FTA guidelines state that a project management plan provides a functional, financial, and procedural road map for the project sponsor to effectively and efficiently manage a project on-time, within-budget, and at the highest quality and safety. According to federal regulations, as a general rule, a major capital project’s project management plan must be submitted during the grant review process and is part of FTA’s grant application review. These regulations also state that if FTA determines that a project is major under its discretionary authority after the grant has been approved, FTA will inform the recipient of its determination as soon as possible. In the case of SafeTrack, due to WMATA’s desire to begin SafeTrack work immediately, and FTA’s determination of SafeTrack as a major capital project after work had already commenced, WMATA did not submit its project management plan to FTA until 4 months into the project. On July 1, 2016, FTA requested that WMATA submit its project management plan to FTA by July 29, 2016. WMATA requested and was granted an extension, and submitted its project management plan to FTA on September 30, 2016. As of January 2017, FTA has yet to approve WMATA’s project management plan because key elements lacked sufficient detail. FTA officials told us that WMATA’s plan did not provide adequate information on the SafeTrack budget and costs of the work being conducted, as well as information to identify and manage project risks, or assess the performance of the project against defined metrics. FTA provided WMATA with detailed comments on WMATA’s plan covering these and other issues. As previously noted, WMATA officials told us that they do not consider the project management plan to be the most appropriate tool to manage SafeTrack tasks, which are primarily maintenance activities. However, WMATA officials also told us that they were working closely with FTA to improve the quality and level of detail in the plan. Conclusions WMATA’s recent record of significant safety incidents demonstrates that its Metrorail system faces serious safety and infrastructure challenges. Through SafeTrack, WMATA has accomplished a substantial amount of repair work to bring its track infrastructure closer to a state of good repair. WMATA is also learning some important lessons in implementing SafeTrack that could better equip it to identify and address issues in future large-scale rehabilitation projects. Perhaps more importantly, SafeTrack indicates that WMATA is now committed to preventative maintenance, including the repairing of track assets before they break and cause more cost and safety impacts on Metrorail riders. Though SafeTrack consists largely of routine maintenance work, the intensity, length, cost, and disruption of the effort distinguishes it from normal maintenance work. As a result of the urgent need for work on the track infrastructure and the unique nature of SafeTrack, WMATA’s planning of SafeTrack did not fully align with leading practices, and WMATA likely experienced some early challenges as a result. 
These challenges highlight the importance of comprehensive planning and project management for large-scale rehabilitation projects to minimize the impacts on riders and ensure work is completed efficiently and according to quality standards. Indeed, SafeTrack is not a comprehensive approach to addressing WMATA's safety needs, and additional efforts will be needed to bring the entire Metrorail system to a state of good repair. Without a policy requiring planning processes that are more consistent with leading project management practices, which call for thorough analysis, planning, and informed decision-making, WMATA's ability to effectively address future infrastructure challenges may be limited. This is particularly true for future large-scale rehabilitation projects that may not be designated as major capital projects and subject to FTA's project management oversight authority, but which could still benefit from having a project management plan in place before beginning the project, consistent with leading practices. Furthermore, documenting these planning requirements, and the relevant procedures for carrying them out, would help ensure that they are applied consistently, so that staff and management can be held accountable for them. Recommendations To ensure future large-scale rehabilitation projects are in line with leading project management practices, WMATA should develop a policy that requires and includes relevant procedures specifying that the following three actions be taken prior to starting large-scale projects: use detailed data on the conditions of assets to develop project objectives; evaluate and compare alternative ways of accomplishing the project objectives, including estimates for the alternatives' costs and impacts; and develop a comprehensive project management plan for the selected alternative—to include key elements such as detailed plans for managing the project's scope, schedule, and cost—for those projects that may not be designated major capital projects. Agency Comments We provided a draft copy of this report to DOT, NTSB, and WMATA for review and comment. In written comments, reproduced in appendix I, DOT said that, since exercising oversight authority, FTA has guided and examined WMATA's work toward improving its safety culture, infrastructure, and operations. DOT also said that FTA will continue to provide safety oversight of WMATA and help it build upon improvements made in the last year. In comments provided in an e-mail, NTSB noted that it shares our concern that WMATA's interlockings, and other track work, were not fully considered in planning SafeTrack. NTSB also said that FTA's public transportation safety oversight approach lacks the necessary standards, expertise, and resources. This report focused on FTA's oversight of the SafeTrack project specifically, so we did not evaluate FTA's overall public transportation safety model. We do, however, have planned work to examine FRA and FTA safety oversight programs. In written comments, reproduced in appendix II, WMATA agreed with our findings and conclusions, and said that it is working to address the recommendations. WMATA also said that the draft report did not reflect the urgent safety state of the Metrorail system prior to beginning SafeTrack, which precluded comprehensive project planning. 
We acknowledge throughout the report that, at the time SafeTrack was being developed, WMATA faced significant safety issues and leadership was making critical decisions on how to address systemic deferred maintenance. Nevertheless, by not fully carrying out leading project management practices, WMATA lacked assurance that SafeTrack was the most efficient and least disruptive approach to accomplishing the track repair objectives. Having a policy and procedures in place requiring these project management practices for future large-scale rehabilitation projects will ensure that WMATA plans such projects so they best meet their objectives. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to the Secretary of Transportation, Chairman of NTSB, General Manager of WMATA, WMATA Board of Directors, and the appropriate congressional committees. In addition, the report will be available at no charge on GAO's website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Individuals who made key contributions to this report are listed in appendix III. Appendix I: Comments from the Department of Transportation Appendix II: Comments from the Washington Metropolitan Area Transit Authority Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Matt Barranca (Assistant Director), Kyle Browning (Analyst in Charge), Jason Blake, Lacey Coppage, Hannah Laufe, Sara Ann Moessbauer, Malika Rice, and Michelle Weathers made key contributions to this report.
Recent inquiries into WMATA's Metrorail system have revealed a range of serious safety issues. In response to some of these issues, as well as a backlog of track maintenance, WMATA announced in May 2016 that it was undertaking SafeTrack, a large-scale rehabilitation project. The SafeTrack project is overseen by FTA. GAO was asked to review a range of safety and oversight issues regarding WMATA. This report examines the extent to which WMATA's (1) planning and (2) implementation of SafeTrack was consistent with leading project management practices as well as (3) the steps taken by FTA to oversee SafeTrack. GAO reviewed documentation on WMATA's planning and project implementation, and FTA's oversight of SafeTrack. GAO also interviewed officials from WMATA, FTA, and local jurisdictions, and compared WMATA's planning and implementation of SafeTrack to leading project management practices developed by professional organizations. The Washington Metropolitan Area Transit Authority's (WMATA) planning of SafeTrack did not fully align with leading project management practices. While WMATA generally followed leading practices to coordinate with stakeholders, it did not comprehensively collect and use data on the condition of its assets, analyze project alternatives, and develop a project management plan before starting work. WMATA did not follow these practices because it believed it needed to start work immediately to address critical safety issues. Although WMATA inspected its track assets when planning SafeTrack, those inspections were not comprehensive and did not collect detailed data on the condition of all track infrastructure, such as all “interlockings,” where trains cross from one track to another. As a result, WMATA's decision makers may not have used sufficient information to develop project objectives and to properly prioritize SafeTrack work. Though WMATA developed three alternatives for SafeTrack, it did not determine the costs and impacts of each alternative, or assess them to determine which approach may have resulted in greater efficiencies, lower costs, or less disruption for riders and local jurisdictions. Before WMATA began SafeTrack, it lacked a comprehensive project management plan, which is a key tool to ensure a project is completed on-time, within-budget, and according to quality standards. WMATA does not have a policy that requires, and includes relevant procedures for how to carry out, these planning activities for large-scale rehabilitation projects. Without such a policy and procedures, WMATA lacks a framework to plan future rehabilitation projects so that they achieve their objectives. WMATA's implementation of SafeTrack generally aligned with leading project management practices. Specifically, WMATA officials collected information on the work performed and the condition of assets repaired during SafeTrack. WMATA officials also collect lessons learned during and after each surge, and use those lessons during subsequent maintenance and planning efforts. Additionally, WMATA developed a new organization-wide quality control and assurance framework and is implementing it for the first time through SafeTrack. The Federal Transit Administration (FTA) has used safety inspections and other tools to oversee SafeTrack and direct WMATA to undertake safety-critical work. FTA has relied on two different authorities to oversee SafeTrack: (1) FTA's public transportation safety oversight authority, and (2) its project management oversight authority. 
Prior to the start of SafeTrack and during the project, FTA conducted safety inspections and directed WMATA to make repairs to reduce the risk of smoke and fires on the rail system. After SafeTrack work began and estimated project costs exceeded $100 million, FTA determined SafeTrack to be a major capital project, triggering the statutory requirement that WMATA prepare a project management plan. WMATA did not submit its project management plan until 4 months into SafeTrack. FTA found the plan lacked sufficient detail, and WMATA told GAO it is working to improve the plan.
GAO_GAO-03-512
Background Elderly households occupied about 25 percent (26 million) of the approximately 106 million housing units in the U.S. in 2001, according to the Housing Survey. A large majority of these elderly households were homeowners. The homeownership rate was considerably higher for elderly households than for nonelderly households (fig.1). A smaller share of elderly households (19 percent) rented their homes. These elderly renter households comprised about 15 percent of all renter households nationwide. The Housing Act of 1959 (P.L. 86-372) established the Section 202 program, which began as a direct loan program that provided below-market interest rate loans to private nonprofit developers, among others, to build rental housing for the elderly and people with disabilities. In 1990, the Cranston- Gonzalez National Affordable Housing Act (P.L. 101-625) modified Section 202 by converting it from a direct loan program into a capital advance program. In addition, the 1990 act created Section 811, another capital advance program, to produce housing specifically for people with disabilities and limited Section 202 to housing for the elderly. In its current form, Section 202 provides capital advances—effectively grants—to private nonprofit organizations (usually referred to as sponsors or owners) to pay for the costs of developing elderly rental housing. As long as rents on the units remain within the program’s guidelines for at least 40 years, the sponsor does not have to pay back the capital advance. HUD calculates capital advances in accordance with development cost limits that it determines annually. These limits must account for several factors, including the costs of construction, reconstruction, or rehabilitation of supportive housing for the elderly that meets applicable state and local housing and building codes. HUD must, by statute, use current data that reflect these costs for each market area. HUD’s policy is that these limits should cover the reasonable and necessary costs of developing a project of modest design that complies with HUD’s minimum property standards, accessibility requirements, and project design and cost standards. Once HUD calculates a capital advance, the amount is placed on reserve, and the funds are made available to the sponsor. To be eligible to receive Section 202 housing assistance, tenants must have (1) one household member who is at least 62 years old and (2) household income that does not exceed the program’s income limits. HUD has established general income categories that it and other federal agencies use to determine eligibility for many federal rental housing assistance programs (table 1). These amounts are subject to adjustments in areas with unusually high or low incomes or housing costs and are published. Only very low income households—those with incomes below 50 percent of the area’s median income—are eligible for the Section 202 program. Very low income households in Section 202 projects generally pay 30 percent of their income for rent. Because tenants’ rent payments are not sufficient to cover the property’s operating costs, the project sponsor receives an operating subsidy from HUD, called a project rental assistance contract. Under the project rental assistance contract, HUD pays the difference between the property’s operating expenses (as approved by HUD) and total tenant rental receipts. Section 202 rental assistance is a project-based subsidy and, as such, is tied to rental units. 
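As a rough illustration of the subsidy arithmetic described above, the following sketch computes tenant rent at 30 percent of household income and the project rental assistance payment as HUD-approved operating expenses minus total tenant receipts. All dollar figures are hypothetical, and the sketch omits program details such as income adjustments.

```python
# Minimal sketch of Section 202 subsidy arithmetic using hypothetical figures.
# Tenants generally pay 30 percent of household income toward rent; the project
# rental assistance contract (PRAC) covers the difference between HUD-approved
# operating expenses and total tenant rental receipts.

TENANT_SHARE = 0.30

def monthly_tenant_rent(annual_household_income: float) -> float:
    """Tenant's monthly rent contribution at 30 percent of income."""
    return TENANT_SHARE * annual_household_income / 12

def monthly_prac_payment(approved_operating_expenses: float,
                         tenant_rents: list[float]) -> float:
    """HUD's operating subsidy: approved expenses minus total tenant receipts."""
    return max(0.0, approved_operating_expenses - sum(tenant_rents))

# Hypothetical 10-unit project occupied by very low income elderly households.
incomes = [9_000, 9_500, 10_000, 10_500, 11_000,
           11_000, 11_500, 12_000, 12_000, 12_000]
rents = [monthly_tenant_rent(income) for income in incomes]
subsidy = monthly_prac_payment(approved_operating_expenses=6_000.0,
                               tenant_rents=rents)
print(f"Tenant receipts: ${sum(rents):,.0f}/month; PRAC payment: ${subsidy:,.0f}/month")
```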
The households receiving assistance can benefit from a project-based subsidy only while living in Section 202 units. For fiscal year 2002, Congress appropriated about $783 million for the Section 202 program to fund the construction of over 6,000 new units as well as new multiyear rental assistance contracts, service coordinators, renewals of expiring rental assistance contracts, and other activities as authorized by Section 202. From year to year, the Section 202 program has carried balances of unexpended appropriated dollars. According to HUD, in fiscal year 2002, the unexpended balance for Section 202 was approximately $5.2 billion. About 41 percent of this balance was for capital advance funds and 59 percent for rental assistance funds. Generally, some of the program’s unexpended funds have not yet been awarded to projects, and others are attributable to projects that have not begun construction. Once construction begins, funds are expended over several years during the construction phase and during the term of the project rental assistance contract. See appendix II for additional budgetary data for the Section 202 program. Section 202 Is an Important Source of Housing for Elderly Households with Very Low Incomes Section 202 is the only federal housing program that targets all of its rental units to very low income elderly households. Because these households often have difficulty affording market rents, program funding is directed to localities based in part on their proportions of elderly renter households that have a housing affordability problem—that is, that pay over 30 percent of their income for rent and do not receive housing assistance. Nationwide, about 1.7 of the 3.3 million elderly renter households with very low incomes have a housing affordability problem. Section 202 insulates tenants in housing units subsidized by the program from increases in housing costs by limiting rents to 30 percent of household income. The program is a significant source of new and affordable housing for very low income elderly households: in 2001, 1.3 million such households received government housing assistance (about 40 percent of the total), and Section 202 provided housing for roughly one-fifth of them. Even with the program’s exclusive focus on the very low income elderly, Section 202 has reached only a small share of eligible households. Though some other federal programs provide more housing for the elderly, they do not focus exclusively on these renter households. Section 202 Targets Very Low Income Elderly Households and Makes Supportive Services Available Congress specifically intended the Section 202 program to serve very low income elderly households and to expand the supply of affordable housing that can accommodate the special needs of this group. HUD takes into account the level of need for the kind of housing Section 202 provides when allocating program funds to the field offices. Thus, the criteria for allocating funds to the offices include, among other things, the total number of very low income elderly renters in the area and the number in this group that pay more than 30 percent of their incomes for rent. HUD’s allocation formula takes into account the amount of rent households pay in relation to their income. According to the American Housing Survey, in 2001 about 1.7 million households paid over 30 percent of their income for rent. 
HUD classified the "rent burden" these households face as either "moderate"—between 31 and 50 percent of household income—or "severe"—more than 50 percent of household income. As figure 2 illustrates, about 35 percent (over 1 million) of all elderly renter households with very low incomes had severe rent burdens, and about 15 percent (about 500,000) had moderate rent burdens. For detailed data on housing needs of these households, including data for metropolitan and nonmetropolitan areas, see appendix III. Since Section 202 provides projects with rental assistance payments that cover a portion of the rent for each unit, the tenants themselves pay rents that equal a percentage of their household incomes—generally 30 percent. This percentage remains constant, so the amount of rent tenants pay increases only when household income rises, protecting them from rent increases that might be imposed in the private housing market when, for example, market conditions change. In contrast, low income elderly renter households that do not receive this type of assistance—especially those with very low incomes—are vulnerable to high rent burdens and increases in housing costs. Most of these households have few or no financial resources, such as cash savings and other investments, and rely primarily on fixed incomes that may not increase at the same rate as housing costs. Section 202 serves another important function, potentially allowing households to live independently longer by offering tenants a range of services that support independent living—for example, meal services, housekeeping, personal assistance, and transportation. HUD ensures that sponsors have the managerial capacity to assess residents' needs, coordinate the provision of supportive services, and seek new sources of assistance to ensure long-term support. HUD pays a small portion of the costs of providing these services through its rental assistance payments. Section 202 Provides an Estimated One-fifth of All Government-subsidized Housing for Very Low Income Elderly Renters Section 202 is an important source of housing for elderly households with very low incomes. Between 1998 and 2001, Section 202 approved the construction of between 3,890 and 7,350 assisted units annually, for an average of about 5,690 units. According to the American Housing Survey, in 2001 about 1.3 million, or 40 percent, of elderly renter households with very low incomes received some form of rental assistance from a government housing program, including Section 202, public housing, or housing vouchers (fig. 2). According to our analysis of HUD program data, about 260,000 Section 202 units with rental assistance contracts (assisted units) generally served very low income elderly households through 2001. Taken together, these two sources of data suggest that around one-fifth of the 1.3 million assisted households identified in the American Housing Survey received assistance from Section 202. Although Section 202 is an important source of affordable elderly housing, the program reached a relatively small fraction of very low income elderly renter households. Between 1985 and 2001, the number of units assisted under the Section 202 program grew by about 4 percent annually, while the number of very low income elderly renter households declined by almost 1 percent annually. Yet at any given point in this period, Section 202 had reached no more than about 8 percent of these households that were eligible for assistance under the program (fig. 3). 
Also, during this period, many of these elderly renter households with very low incomes—ranging from about 45 to 50 percent—had housing affordability problems. Other federal programs that develop rental housing generally target different income levels, serve other populations in addition to the elderly (including families with children and people with disabilities) and do not require housing providers to offer supportive services for the elderly. For example, the Low-Income Housing Tax Credit Program, the largest of all current production programs, subsidizes the construction of about 86,000 units annually. However, according to one source, only around 13,200 of these units are intended for the elderly—and, unlike Section 202, not all of these units serve very low income elderly renter households. In addition, these programs also do not have specific requirements ensuring that supportive services be available to elderly tenants. Appendix IV provides additional information on other federal housing programs. Section 202 Projects Reviewed Generally Did Not Meet Guidelines for Timeliness According to HUD policy, Section 202 projects should complete project processing and be approved to start construction within 18 months after they are funded. Overall, 73 percent of Section 202 projects funded between fiscal years 1998 and 2000 did not meet this processing time guideline. However, about 55 percent of the projects were approved within 24 months. Projects located in metropolitan areas were about twice as likely as projects in nonmetropolitan areas to take more than 18 months to be approved. The percentage of projects approved within the specified time frame differed widely across HUD’s field offices, with field offices located in the northeast and west approving the lowest percentages. As well as taking longer to complete than other projects—thus delaying benefits to very low income elderly tenants—projects that were not approved for construction after the 18-month time frame accounted for 14 percent of the Section 202 program’s balance of unexpended appropriations. HUD Expects Projects to Be Approved to Start Construction within 18 Months Once HUD has made a funding award for a Section 202 project, HUD field office staff and project sponsors must complete various tasks, meetings, and paperwork before construction can commence (fig. 4). In this report, we refer to the tasks that take place between (1) the date when HUD sends a funding award letter to the sponsor and (2) the date that HUD authorizes the sponsor both to begin construction and to start drawing down the capital advance amount (initial closing) as project processing. The duration of the project processing period depends, in part, on project sponsors’ timeliness in submitting the required documentation to HUD’s field office reviewers. For example, sponsors must create owner corporations, hire consultants, obtain local permits and zoning approval, and design architectural and cost plans, among other things. HUD field offices must review all documentation before projects can be approved for construction. As figure 4 illustrates, HUD’s current time guideline for project processing is 18 months. Individual field offices have the discretion to extend processing for up to 6 more months without approval from HUD headquarters, but all extensions beyond those additional 6 months (that is, 24 months after the funding award) require approval from headquarters. 
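The time guideline and extension rules just described can be read as a simple decision rule, sketched below for illustration; the month values in the example are hypothetical, and the classification labels are ours, not HUD's.

```python
# Illustrative classification of Section 202 processing time against HUD's
# guideline: 18 months to initial closing, up to 6 more months at the field
# office's discretion, and headquarters approval required beyond 24 months.

def classify_processing_time(months_from_award_to_initial_closing: float) -> str:
    if months_from_award_to_initial_closing <= 18:
        return "within 18-month guideline"
    if months_from_award_to_initial_closing <= 24:
        return "field office discretionary extension (19-24 months)"
    return "requires HUD headquarters approval (beyond 24 months)"

for months in (16, 22, 30):  # hypothetical projects
    print(months, "->", classify_processing_time(months))
# 16 -> within 18-month guideline
# 22 -> field office discretionary extension (19-24 months)
# 30 -> requires HUD headquarters approval (beyond 24 months)
```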
After construction is authorized to begin, HUD gradually expends capital advance funds to cover development costs incurred by the sponsor. When construction is completed, HUD approves the final costs, and sponsors can begin leasing to eligible tenants. Over time, sponsors draw down funds from the reserved rental assistance amounts to support operating costs. To help assure that field office staff and project sponsors could complete project processing requirements within the 18-month time guideline, HUD adopted changes in 1996 that were intended to streamline procedures. One of the key changes included requiring field office staff to accept sponsor-provided certifications of architectural plans, cost estimates, and land appraisals. Previously, field office staff performed detailed technical reviews of these items. According to HUD policy, these streamlined procedures should have been used to process all projects in our analysis, which were funded between fiscal years 1998 and 2000. HUD Took Longer Than 18 Months to Approve Most Projects for Construction Most Section 202 projects that received funding awards did not receive approval to begin construction within the 18-month guideline set out by HUD. Altogether, 73 percent of projects funded from fiscal years 1998 through 2000 did not meet the 18-month guideline. These projects accounted for 79 percent of the nearly $1.9 billion in funding awarded to projects during this period. The percentage of projects exceeding the guideline remained relatively stable over the years at around 72 percent (fiscal year 1998) to 75 percent (fiscal year 2000). During this period, the projects located in metropolitan areas (72 percent of all projects) were about twice as likely as projects in nonmetropolitan areas to exceed the 18-month guideline (see app. V for more detail). HUD field offices may grant up to 6-month extensions after the 18-month guideline for projects needing more time to gain approval to start construction, and many projects were approved within that 6-month time frame. HUD approved 55 percent of the projects funded from fiscal years 1998 through 2000 for construction within 24 months of the funding award—27 percent within 18 months and 28 percent within 19 to 24 months. The remaining 45 percent of projects took more than 24 months to be approved. In addition, metropolitan projects were about twice as likely as nonmetropolitan projects to take more than 24 months to gain approval to start construction. Field Offices' Performance in Meeting the Time Guideline Varied We looked at the performance of the 45 individual HUD field offices that process Section 202 projects and found that they had varying degrees of success in meeting the 18-month guideline. We evaluated their performance by estimating the percentage of projects approved for construction (project approval rate) within 18 months for each field office. Among these offices, the median project approval rate for construction within 18 months was 22 percent (table 2), but field offices' performance varied widely. Eight field offices had no projects that met the 18-month guideline, while more than 90 percent of projects at one office did (see app. V for a breakdown of approval rates by field office). Field offices' performance varied by region, with those located in the northeast and west being least likely to approve projects within 18 months of the funding award. Table 2 also shows the rate of projects approved within 24 months. 
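The office-level measure used above, the share of an office's funded projects approved for construction within 18 months, and the median of that rate across offices can be computed as in the following sketch; the office names and project counts are hypothetical.

```python
# Hypothetical sketch of the office-level "project approval rate": the share of
# an office's funded projects approved for construction within 18 months,
# summarized by the median rate across field offices.

from statistics import median

# (projects approved within 18 months, total funded projects) -- assumed counts
field_offices = {
    "Office A": (2, 10),
    "Office B": (0, 8),   # some offices had no projects meet the guideline
    "Office C": (9, 10),
    "Office D": (3, 12),
}

approval_rates = {office: approved / total
                  for office, (approved, total) in field_offices.items()}
print({office: f"{rate:.0%}" for office, rate in approval_rates.items()})
print(f"Median approval rate across offices: {median(approval_rates.values()):.0%}")
```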
Delayed Projects Affect the Program's Production Times and Expenditures Meeting processing time guidelines is important because most of the delays in total production time—that is, the time between funding award and construction completion—stem from the project processing phase. When we compared the average total production times for completed projects that did not meet HUD's 18-month processing guideline and those that did, the delayed projects took 11 months longer than other projects to proceed from funding award to construction completion (fig. 5). Since the average time taken for the construction phase was very similar for all projects, most of the 11-month difference in total production time was attributable to the extra 10 months that delayed projects took to complete the processing phase. Delayed processing of Section 202 projects also affected the Section 202 program's overall balances of unexpended appropriations. At the end of fiscal year 2002, for example, HUD had a total of $5.2 billion in unexpended Section 202 funds (fig. 6). A relatively small part of these unexpended funds—about 14 percent—was attributable to projects that had not yet been approved to start construction, even though they had exceeded HUD's 18-month processing time guideline. Consequently, none of the funds reserved for these projects had been expended. By contrast, the remaining 86 percent of unexpended funds were associated with projects for which HUD was in the process of expending funds for construction or rental assistance. For example, almost half of the unexpended balances—about 48 percent—resulted from projects that had already been completed but were still drawing down their rental assistance funds as intended under the multiyear project rental assistance contract between HUD and the project sponsor. (For additional details on unexpended fund balances, see app. II.) Various Factors Can Impede the Timely Processing of Projects Our review of projects funded from fiscal years 1998 through 2000 shows that several factors can prevent Section 202 projects from meeting the 18-month processing time guideline, including: issues related to capital advances, field office practices and the training and guidance that HUD has provided to field office staff, and HUD's program administration and oversight. First, despite HUD's intent, capital advances were not always sufficient to meet development costs. According to some sponsors and consultants, this factor often led sponsors to seek funding from other sources, including other HUD programs, which takes time. Second, some field offices, sponsors, and consultants reported that some field office staff had not fully implemented HUD's streamlined processing procedures and that HUD had offered only limited training and guidance to field office staff on processing policies and procedures. Third, additional time was needed for cases in which HUD headquarters responded to project sponsors' requests for additional funds or processing time. Fourth, limitations in HUD's project monitoring system impeded its ability to oversee project processing. Finally, factors external to HUD, such as sponsors' level of development experience and requirements established by local governments, also hindered processing. Insufficient Capital Advances Caused Some Sponsors to Seek Other Funding Although HUD policy intends for capital advances to fund the cost of constructing a modestly designed project, capital advances have not always been sufficient to cover these expenses. 
HUD field staff, project sponsors, and consultants reported that program limits on capital advances often kept projects from meeting HUD's time guideline for approving projects for construction. Most field offices, and every sponsor and consultant that we surveyed, reported that insufficient capital advances negatively affected project processing time, and a substantial majority of respondents indicated that this problem occurred frequently (fig. 7). Many respondents also reported that securing secondary financing to supplement the capital advance amount often added to processing time. According to some sponsors and consultants, the capital advance amounts set by HUD were often inadequate to cover land, labor, and construction costs as well as fees imposed by local government. As a result, sponsors had to seek secondary financing from other federal, state, and local resources—including other HUD programs—or redesign projects to cut costs, or both. Some sponsors and consultants said that the search for secondary financing could add months to the construction approval process because funding application and award cycles for other programs varied and because sponsors had to meet HUD's documentation requirements for every additional funding source before the agency could authorize construction. HUD has recognized that the development cost limits it uses to calculate capital advances have sometimes been inadequate and that, as a result, a number of sponsors have had to seek additional funding to construct their projects. According to a HUD official, the agency is currently considering initiating a study to determine how to calculate capital advances that can cover project development costs. Our survey and program data showed that field offices that reported problems with insufficient capital advances and sponsors securing secondary financing had a lower percentage of projects that met the 18-month time guideline than other offices (table 3). The median percentage of projects meeting the 18-month guideline was much lower for field offices that reported these problems than those that did not. In addition, field offices in the northeast and west—the regions with the lowest percentage of projects meeting the processing time guideline (see table 2 above)—were more likely than those in the south and midwest to report having problems with these factors. Varying Field Office Practices and Inadequate Staff Training and Guidance Affected Timely Processing Differences in the procedures field offices use to approve projects for construction and the extent of staff training and experience affected project processing time. For example, most consultants and sponsors in our survey responded that the unwillingness of field office staff to implement policy changes that HUD had adopted to streamline processing caused delays, as did insufficient training for and inexperience of field office staff (fig. 8). About 40 percent of them also reported that these problems occurred frequently. In addition, some consultants and sponsors whom we interviewed told us that some field offices continued to conduct much more detailed and time-consuming technical reviews of project plans than HUD's current policies require. These sponsors and consultants said that field staff departing from program guidelines caused confusion for sponsors about the type of information HUD required and delayed the process of obtaining HUD's approval to begin construction. 
A majority of HUD field office representatives also reported that a lack of staff training and experience can have a negative effect on processing time. However, HUD field office staff regarded these problems, as well as staff unwillingness to implement policy changes, as infrequent problems. HUD officials at headquarters acknowledged that some field staff were performing technical reviews contrary to program guidelines, but the officials did not know how many staff were doing so. HUD has provided limited guidance for field office staff on processing policies and procedures, which would ensure that all staff are up to date on the most current guidelines and requirements. In 1999, HUD headquarters issued a memorandum that reminded field office staff to process projects in accordance with streamlined procedures that had been adopted in 1996, such as replacing detailed technical review of project plans by field office staff with sponsor-provided certifications. Yet at the time of our review, most field office staff had not received any formal training on Section 202 project processing. According to HUD, in 2002, the agency required representatives from each field office to attend the first formal training on project processing for field office staff since at least 1992. Although HUD headquarters expected those who attended to relay what they had learned to other staff members in their own offices, our survey showed that by November 2002 no on-site training had occurred at about a quarter of the field offices. Also, only two field offices (5 percent) reported that training was relayed in a formal setting. We also found that HUD’s field office staff was relying on out-of-date program handbooks that did not reflect the streamlined processing procedures. Although HUD stated that the agency intended to issue revised handbooks in order to ensure that all field offices follow current procedures, it had not yet done so at the time of our review. Based on written comments in our survey, some field office staff felt that an updated handbook would aid in the timely processing of Section 202 projects. Administrative and Oversight Weaknesses at HUD Headquarters Contributed to Delays The time that HUD headquarters took to make certain administrative decisions also added to the time taken to process Section 202 projects. HUD headquarters must approve all requests for additional time to complete processing beyond 24 months after funding award and for additional capital advance funds. A HUD official noted that projects must already have exceeded the 18-month time guideline, and the discretionary 6-month extension, before HUD headquarters would be called on to approve a request for a time extension beyond 24 months. However, most of the field office representatives and project sponsors and consultants in our survey agreed that the time HUD headquarters took to make these decisions further prolonged processing time, with many respondents reporting that this issue was a frequent problem (fig. 9). Further, HUD’s project monitoring system was not as effective as it could have been and may have impeded HUD’s oversight of project processing. HUD officials stated that, to monitor project processing, headquarters has periodically used its Development Application Processing (DAP) system to identify projects that exceeded the 18-month processing time guideline. In addition, the officials stated that headquarters contacted field offices on a quarterly basis to discuss the status of these delayed projects. 
Nevertheless, HUD headquarters officials have acknowledged that there are data inaccuracies in the DAP system, and the agency has instituted efforts to improve the system's reliability in identifying delayed projects. Furthermore, according to HUD, the DAP system does not collect data that would allow both headquarters and field office staff to follow a project through every stage of development and, as a result, many field offices maintain their own tracking systems to monitor projects through these stages. The lack of reliable, centralized data on the processing of Section 202 projects has limited HUD headquarters' ability to oversee projects' status, determine problematic processing stages, and identify field offices that might need additional assistance. HUD officials stated that enhancing the DAP system is a priority, but that a lack of funding has hindered such efforts. Issues External to HUD Caused Some Delays Finally, other factors outside of HUD's direct control kept some projects from meeting time guidelines. Ninety-five percent of field office representatives and 90 percent of sponsors and consultants surveyed reported that project processing time was negatively affected when project sponsors were inexperienced. Nearly 60 percent of field offices, and almost 40 percent of sponsors and consultants, indicated that this problem occurred frequently. Local government requirements also negatively affected project processing, according to about 60 percent of field offices and about 85 percent of sponsors and consultants. About 35 percent of field offices and about 60 percent of sponsors and consultants reported that these requirements were frequently a problem. Also, about 70 percent of field offices, sponsors, and consultants reported that, specifically, the local zoning process had a negative effect on project processing time, with about 40 percent of field offices and about 50 percent of sponsors and consultants indicating that this problem was frequent. Most field offices, sponsors, and consultants reported that other factors, such as community opposition and environmental issues, affected processing times but were not frequent problems for Section 202 projects. Although about 50 percent of field offices, and about 60 percent of sponsors and consultants, reported that community opposition had a negative effect on project processing time when it occurred, less than 10 percent of field offices, and about 30 percent of sponsors and consultants, reported such opposition to be a frequent problem. Also, about 50 percent of field offices, sponsors, and consultants indicated that environmental problems negatively affect processing when they occur, but only about 20 percent of them considered environmental problems to occur frequently. Appendixes VI and VII provide additional details on the results of our survey of HUD field office staff, sponsors, and consultants. Conclusions The housing affordability problems of very low income elderly renter households—although they represent a small share of all elderly households—are particularly acute. These households represent one of the more vulnerable populations in the nation given their small incomes and need for supportive services. Considering the urgent housing needs of the Section 202 program's target population, ensuring that its projects are completed as soon as possible is critical. Delays in timely Section 202 processing can prolong project completion, on average, by nearly a year and result in higher balances of unexpended funds. 
Awarding capital advances that are sufficient to cover project development costs can alleviate delays by averting the need for sponsors to seek secondary financing or request approval from HUD headquarters for additional funding. While sufficient capital advance funding for projects, absent additional appropriations, can result in fewer units funded annually, it can also result in the prompt delivery of housing assistance to needy households and in the reduction of unexpended balances attributable to delayed projects. In addition, issuing an updated program handbook and providing adequate formal training can help in timely project processing by ensuring that staff are accountable for applying and interpreting HUD policies and procedures in a consistent manner. Finally, HUD’s project monitoring system, in its current form, is not as effective as it can be and may hinder HUD’s oversight. Maintaining reliable, centralized data on the processing of Section 202 projects is essential to overseeing projects’ status as well as determining problematic processing stages. Recommendations To reduce the time required for projects to receive approval to start construction, we recommend that the Secretary of Housing and Urban Development direct the Assistant Secretary for Housing to (1) evaluate the effectiveness of the current methods for calculating capital advances and (2) make any necessary changes to these methods, based on this evaluation, so that capital advances adequately cover the development costs of Section 202 projects consistent with HUD’s project design and cost standards. In addition, to improve the performance of HUD field office and headquarters staff in processing projects in a timely manner, we recommend that HUD provide regular training to ensure that all field office staff are knowledgeable of and held accountable for following current processing procedures, update its handbook to reflect current processing procedures, and improve the accuracy and completeness of information entered in the DAP system by field office staff and expand the system’s capabilities to track key project processing stages. Agency Comments and Our Evaluation We provided a draft of this report to HUD for its review and comment. In a letter from the Assistant Secretary for Housing (see app. VIII), HUD agreed with the report’s conclusions, stating that the report demonstrated an excellent understanding of the importance of the Section 202 program in delivering affordable housing to very low income elderly households. HUD also concurred with the recommendations and provided information on how it intends to implement them. Regarding our recommendations concerning HUD’s capital advance formula, the agency agreed that, in some locations, capital advances may be insufficient to cover project development costs and that delays can result when sponsors must seek additional funds from other sources. However, HUD also noted that increasing the per-unit development cost limits would result in fewer units constructed. Our draft report reached the same conclusion, but also stated that sufficient capital advances yield important benefits, such as the prompt delivery of housing assistance to needy households. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to interested members of Congress and congressional committees. 
We also will send copies to the HUD Secretary and make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. Please contact me at (202) 512-8678 or Paul Schmidt at (312) 220-7681, if you or your staff have any questions concerning this report. Key contributors to this report were Susan Campbell, Emily Chalmers, Mark Egger, Daniel Garcia-Diaz, Curtis Groves, Ron La Due Lake, Marc Molino, Melissa Roye, William Sparling, and Julianne Stephens. Scope and Methodology We conducted this review to address: (1) the role of the Section 202 program in meeting the housing needs of elderly renter households with very low incomes, (2) the extent to which Section 202 projects meet the Department of Housing and Urban Development’s (HUD) time guidelines for project processing, and (3) the factors that keep Section 202 projects from meeting HUD’s time guidelines for project processing. To determine the role of the Section 202 program in meeting housing needs of elderly households, we analyzed household income and rental housing cost data from the American Housing Survey. The Bureau of the Census performs the survey for HUD every odd-numbered year. Appendix III provides a detailed discussion of the American Housing Survey. We also reviewed studies that involved the housing needs of elderly households. To determine the extent to which HUD’s Section 202 and other housing programs serve elderly households, we used data from HUD’s Real Estate Management System (REMS) as of the beginning of calendar year 2003. Specifically, we analyzed information on the overall number of properties and their associated units under Section 202 and other housing programs that serve the needs of elderly households. Although we did not independently verify the accuracy of the program data, we did perform internal checks to determine (1) the extent to which the data fields were populated, (2) the reasonableness of the values contained in the data fields, and (3) if any aberrations existed in the data we used. We concluded that the REMS data was reliable for purposes of this report. We also reviewed relevant regulations, policies, and procedures for Section 202 and other active federal programs. To explore the issue of timeliness in processing and some of the factors that may impede timely processing, we reviewed HUD program and budget data from HUD’s Development Application Processing (DAP) System as of the end of calendar year 2002. Because HUD headquarters officials told us that program data from this system was not reliable for Section 202 projects funded before fiscal year 1998, we limited our review of Section 202 projects to those funded from fiscal years 1998 to 2000. While we did not independently verify the accuracy of the program data from this system, we periodically discussed the accuracy and interpretation of the data we used with HUD officials. In addition, we compared file records for projects funded since fiscal year 1998 with the data entered in the system for those projects by three HUD field offices that process Section 202 projects and generally found the data to be accurate. Also, we performed internal checks to determine the extent to which the data fields in DAP were populated and the reasonableness of the values contained in these fields. In cases where the data were not reasonable or questions arose, we contacted a HUD official to identify and correct errors. 
To determine the reasons why HUD awarded time extensions for certain projects listed in the system, we compiled and analyzed HUD’s published notices of these extensions in the Federal Register. We also used a questionnaire to survey all HUD field offices that process Section 202 projects. About 98 percent (44 out of 45) of these field offices completed the questionnaire. We also conducted site visits at the Greensboro and Richmond field offices to obtain field office staff perceptions on factors that may impede timely processing. In addition, to gain a fuller perspective on these issues, we surveyed sponsors and consultants, identified by HUD and others, that were experienced in working with Section 202 projects. Collectively, these sponsors and consultants had worked on approximately 260 projects since fiscal year 1998, representing approximately 40 percent of Section 202 units funded. In addition, we observed a HUD training session on processing Section 202 projects in August 2002. We conducted our work primarily in Washington, D.C., between May 2002 and March 2003, in accordance with generally accepted government auditing standards. Budget Information for the Section 202 Program This appendix provides information on the Housing for Special Populations appropriations account, which provides funding for the Section 202 and Section 811 programs. In fiscal year 2002, Congress appropriated over $1 billion for the Housing for Special Populations account—of which $783 million was earmarked for the Section 202 program. From year to year, the Section 202 program carries significant balances of unexpended appropriated funds. In fiscal year 2002, the unexpended balance for the Section 202 program was $5.2 billion. Section 202 Appropriations In fiscal year 2002, Congress appropriated over $1 billion for the Housing for Special Populations appropriations account, which provides funding for both the Section 202 Supportive Housing for the Elderly and the Section 811 Supportive Housing for Persons with Disabilities Programs. Since fiscal year 1998, a total of $4.6 billion in appropriations was made available for both programs (table 4). In fiscal year 2002, the lion’s share of the appropriations for the Housing for Special Populations account, about $783 million or 76 percent, went to the Section 202 program to fund, among other things, capital advances and project rental assistance contracts (PRACs) for new projects and PRAC renewals for existing projects. Since fiscal year 1998, about $3.6 billion has been appropriated for the Section 202 program. Appropriations for the Section 202 program in nominal dollars (that is, unadjusted for inflation) have increased since fiscal year 1998 at an average annual rate of about 5 percent. However, appropriations for Section 202 in constant 1998 dollars have increased by an average rate of about 2 percent annually.
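The growth rates cited above follow from straightforward compound-growth arithmetic. The sketch below illustrates the calculation; only the fiscal year 2002 Section 202 appropriation (about $783 million) comes from this appendix, while the fiscal year 1998 amount and the cumulative price deflator are illustrative assumptions.

```python
# Minimal sketch of the average annual growth arithmetic described above. Only the
# fiscal year 2002 figure (~$783 million) comes from the report; the fiscal year 1998
# amount and the deflator are illustrative assumptions.

def average_annual_growth(start, end, years):
    """Compound average annual growth rate between two values."""
    return (end / start) ** (1 / years) - 1

approp_fy1998 = 645.0          # assumed, millions of nominal dollars
approp_fy2002 = 783.0          # from the report, millions of nominal dollars
deflator_fy1998_to_fy2002 = 1.10  # assumed cumulative inflation over the period

nominal = average_annual_growth(approp_fy1998, approp_fy2002, years=4)
real = average_annual_growth(approp_fy1998, approp_fy2002 / deflator_fy1998_to_fy2002, years=4)

print(f"nominal: {nominal:.1%}, constant 1998 dollars: {real:.1%}")
# With these assumptions the result is roughly 5 percent nominal and 2 to 3 percent
# real, consistent with the rates cited in the appendix.
```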
However, some unexpended funds can also result from problems in the timeliness of project processing. Between fiscal years 1998 and 2002, the program’s unexpended balance increased from about $4.8 billion to $5.2 billion. In nominal dollars, this balance has increased by an average annual rate of about 2 percent between fiscal years 1998 and 2002. In constant 1998 dollars, unexpended balances for Section 202 actually decreased by an average rate of less than 1 percent annually. Table 5 shows the annual balances of unexpended appropriations for the Section 202 program since fiscal year 1998. As table 5 shows, unexpended PRAC funds account for a large share of the total unexpended balances for the Section 202 program as well as for the overall Housing for Special Populations account. Before fiscal year 1997, HUD provided individual projects with PRAC amounts that covered rental assistance payments generally for 20 years. Since fiscal year 1997, HUD has provided PRAC amounts that cover rental assistance payments for 5 years. In both cases, PRAC funds are obligated, but remain unexpended, for multiple years after project occupancy—unlike capital advance funds, which are fully expended by project completion. With the reduction of the PRAC term from 20 to 5 years, HUD expects PRAC funds to comprise a declining share of the overall unexpended balance for the Section 202 program. Data Issues Concerning the American Housing Survey In reporting on the housing affordability problems of elderly renter households with very low incomes, this report relies on data from the 2001 American Housing Survey (AHS). We assessed the reliability of the data by reviewing AHS documentation, performing electronic testing of the data files to check them for completeness, and replicating published tables. We determined that the data are reliable enough for the purposes of this report. AHS is a probability sample of about 55,700 housing units interviewed between August and November 2001. Because this sample is based on random selections, the specific sample selected is only one of a large number of samples that might have been drawn. Since each sample could have provided different estimates, we express our confidence in the precision of this sample’s results as 95 percent confidence intervals (for example, ±7 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples that could have been drawn. As a result, we are 95 percent confident that each of the confidence intervals in this report will include the true values in the study population. In the following section, we provide 95 percent confidence intervals for the estimates used in this report. We calculated these confidence intervals by adding and subtracting the sampling error for each estimate to or from the estimate itself. Estimates from the survey are also subject to certain nonsampling errors, such as incomplete data and wrong answers. According to the survey documentation, errors due to incomplete data and wrong answers can be greater than sampling errors for some survey questions. Of the survey questions we rely upon for our analysis (age, tenure, income, housing costs, rent subsidies, and location), the survey question on income was subject to a high level of inconsistency in survey responses. Also relevant to this report, AHS is known to underreport income when compared to the Current Population Survey and other independent sources.
However, our analysis concentrates on elderly renters with very low income, for which this should be less of an issue. According to a Census study based on relatively older data (from the early 1980s), much of the underreporting of income in the survey seems to derive from interest and dividend income as well as wages and salary. Consequently, the underreporting of income may be less of a problem among very low income elderly households who do not tend to rely on these sources of income. Generally, HUD’s own internal analysis suggests that very low income renters in AHS tend to report their income more accurately than other groups. For example, in an unpublished analysis, HUD found that the income reported by very low income renters in the 1989 AHS was about 2 percent greater than the income reported in the 1990 Decennial Census. Nonetheless, current information on the extent of underreporting, especially among elderly renter households with very low incomes, is not available. The survey also collects data on the type of government housing assistance the household receives. For example, it asks if the household lives in a unit owned by a public housing authority or receives vouchers. However, households surveyed may misreport their specific programs. As a result, the survey does not provide sufficient and reliable detail on the specific housing assistance program that is serving the household. According to the survey documentation, units requiring income verification are usually subsidized. Table 6 shows the distribution of units that are occupied by homeowners and renters in 2001. A great majority of elderly households were homeowners. About 21 million (± 460,000) of 26 million (± 498,000) elderly households owned their homes. Elderly renter households consisted of about 5 million (± 242,000) households. Table 7 provides details on the estimated number of households who owned or rented their homes by income category (very low income and low income) in 2001. About 3.7 million (± 208,000) elderly renter households have very low incomes. About 4.3 million (± 223,000) elderly renter households have low incomes. These figures include households that do not pay cash rent. Based on the data from tables 6 and 7, over four-fifths (85 ± 2 percent) of elderly renter households have low incomes and approximately three-quarters (73 ± 3 percent) have very low incomes. Table 8 shows the number of units occupied by elderly renter households with very low incomes by subsidy status and rent burden. About 1.7 million (± 141,000) elderly renter households with very low incomes have moderate or severe rent burdens. The majority of these actually have severe rent burdens. About 1.3 million (± 125,000) renter households with very low incomes receive some form of government assistance. Households that do not pay cash rent appear in the tables above in this appendix for informational purposes. However, since they do not pay cash rents, we exclude these households from our estimates of rent burdens in this report. Table 9 looks at unassisted elderly renter households with rent burdens. Of the 1.7 million (± 141,000) households with rent burdens, about 60 percent are located either in the northeast or the south regions. The northeast and south contained about 542,000 (± 81,000) and 477,000 (± 76,000), respectively, of the nation’s rent burdened elderly renter households with very low incomes. 
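The interval and share figures above follow the simple arithmetic described earlier in this appendix: each 95 percent confidence interval is the estimate plus or minus its sampling error, and the income-group shares are ratios of the table 6 and 7 estimates. The following sketch reproduces that arithmetic; small differences from the published shares reflect rounding of the estimates quoted in the text.

```python
# Minimal sketch of the confidence-interval and share arithmetic used in this appendix:
# each 95 percent interval is the estimate plus or minus its sampling error, and the
# income-group shares are ratios of the estimates reported in tables 6 and 7.

def confidence_interval(estimate, sampling_error):
    """95 percent interval formed by adding and subtracting the sampling error."""
    return estimate - sampling_error, estimate + sampling_error

elderly_renters = 5_000_000          # +/- 242,000 (table 6)
very_low_income_renters = 3_700_000  # +/- 208,000 (table 7)
low_income_renters = 4_300_000       # +/- 223,000 (table 7)

print(confidence_interval(very_low_income_renters, 208_000))
# (3492000, 3908000)

print(f"low income share:      {low_income_renters / elderly_renters:.0%}")       # ~86%
print(f"very low income share: {very_low_income_renters / elderly_renters:.0%}")  # ~74%
# The published figures (85 +/- 2 percent and 73 +/- 3 percent) are based on
# unrounded estimates, so the point estimates here differ slightly.
```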
The following four tables show the number and proportion of units occupied by elderly renter households with very low incomes by subsidy status and rent burden in metropolitan areas (tables 10 and 11) and nonmetropolitan areas (tables 12 and 13). About 1.4 million (± 131,000) elderly renter households with very low incomes in metropolitan areas and 234,000 (± 53,000) in nonmetropolitan areas have moderate or severe rent burdens (tables 10 and 12). The proportion of households with rent burdens was generally higher in metropolitan areas than in nonmetropolitan areas (tables 11 and 13). In addition, households in nonmetropolitan areas were less likely than those in metropolitan areas to have severe rent burdens. Excluded from these estimates are the housing affordability needs of very low income homeowners. Although homeowners can experience housing affordability problems, homeowners and renters face different challenges in affording their homes. Unlike renters, homeowners have equity in their homes—about 68 percent (± 1 percent) of elderly homeowners own their homes free and clear. In addition, elderly homeowners face certain challenges in maintaining their housing, such as paying for property maintenance and accessibility modifications. As a result, rental programs, such as Section 202, do not directly address the problems homeowners experience. Federal Housing Programs and the Elderly The federal government has multiple housing programs that subsidize the development of rental properties. Many of these programs also subsidize the development of properties that are intended primarily to serve elderly households. Unlike Section 202, most federal housing programs do not target a single type of household. Rather, they serve many different types of households, such as families with children, people with disabilities, and the elderly, and they produce units with rents that are affordable to households at different income levels. Housing Production Programs That Develop Elderly Housing In addition to Section 202, the federal government has multiple active housing production programs that continue to expand the number of assisted households by subsidizing the development of new rental housing. These federal programs, described below, can also subsidize individual rental properties that are intended primarily to serve elderly households. Active Housing Production Programs Low-Income Housing Tax Credits and Tax-Exempt Multifamily Housing Bonds provide federal tax incentives for private investment and are often used in conjunction with other federal and state subsidies in the production of new and rehabilitated rental housing. HOME Investment Partnerships provides formula-based grants to states and localities to build, acquire, or rehabilitate affordable rental housing or provide tenant-based rental assistance. Section 515/521 Rural Rental Assistance provides below-market loans and rental assistance to support the development of rental housing in rural areas. Multifamily mortgage insurance programs provide mortgage insurance for the development of rental housing without federally funded interest rate subsidies or project-based rental assistance. The Housing Choice Voucher program (housing vouchers) is another important source of assistance for elderly households. The program supplements tenants’ rental payments in privately owned, moderately priced apartments chosen by the tenants. Currently, about 260,000 of the approximately 1.5 million voucher households are elderly.
However, unlike Section 202 and the other programs discussed, the housing voucher program is not a production program and does not directly subsidize the development of new or rehabilitated housing. In addition to the active housing production programs, the federal government also has programs that no longer subsidize the development of rental properties but, in some cases, continue to provide operating subsidies, rental assistance payments, or other subsidies for rental properties that were developed under these programs in the past. Over the years, these inactive housing production programs, described in the next section, subsidized many rental properties that were intended primarily to serve elderly households. Inactive Housing Production Programs Public Housing financed the development and operation of properties managed and owned by local housing authorities. Section 236 and Section 221(d)(3) Below Market Interest Rate provided mortgage insurance for the development of rental housing with federally funded interest rate subsidies. Section 8 project-based rental assistance programs provided project-based rental assistance to properties that were financed with Department of Housing and Urban Development (HUD) mortgage insurance, tax-exempt bonds, and below-market interest rate loans. Target Households Unlike Section 202, most active federal housing programs do not target a single type of household. Rather, they serve many different types of households, such as families with children, persons with disabilities, and the elderly. Furthermore, most federal housing programs target households at different income levels, not just households with very low incomes (50 percent or less of area median income) as does Section 202. Table 14 provides information on targeted household types and rent levels of the active housing production and insurance programs. Low-Income Housing Tax Credits (tax credits), Tax-Exempt Multifamily Housing Bonds (tax-exempt bonds), and HOME set aside some of their units for very low-income households and can provide housing for the elderly (table 14). Congress has granted considerable latitude to state and local agencies that administer these programs in deciding who will be served with federal housing resources. In addition, mortgage insurance programs for multifamily rental properties under HUD’s Federal Housing Administration (FHA) currently do not have any specific age or income requirements for tenants. However, since rents for newly developed FHA-insured properties are often set at market levels, these programs may not be able to reach very low-income households without the use of other subsidies. Annual Housing Production Levels Although Section 202’s annual production levels are small when compared with those of other housing programs, such as tax credits—the largest of all current production programs—Section 202 nonetheless is a relatively important source of subsidized rental housing units for the elderly. Table 15 presents the volume of new production by rental housing production program. The volume of housing production illustrates individual program activity, but, due to limitations in the data, it is not possible to accurately estimate what percentage of elderly units produced through federal housing programs is from Section 202 because units produced through these programs can overlap with each other.
For example, HOME funding can be used in conjunction with programs such as tax credits, tax-exempt bonds, or HUD mortgage insurance programs to finance new production. As a result, adding units together for any of the programs in table 15 will likely result in double counting. Section 202 Program Data This appendix provides additional information on the extent to which Section 202 projects meet the Department of Housing and Urban Development’s (HUD’s) 18-month processing time guideline. In particular, we present data on projects’ status in meeting the guideline, HUD field offices’ rate of success in meeting the guideline, and the factors cited by HUD in its approvals of processing time extensions. Table 16 profiles the projects funded in fiscal years 1998 through 2000 according to the projects’ status in gaining HUD’s approval to start construction. Table 17 compares the status of projects located in metropolitan and nonmetropolitan areas in gaining approval to start construction within either 18 or 24 months. In both cases, metropolitan projects were about twice as likely as projects in nonmetropolitan areas to take more than either 18 or 24 months to be approved. That is, the odds of a metropolitan project taking more than 18 or 24 months to be approved for construction were about twice the odds of a nonmetropolitan project taking more than 18 or 24 months, respectively. Tables 18, 19, and 20 present the rate of project approvals within either 18 or 24 months for all field offices that have responsibility for processing Section 202 projects. Table 18 shows the results for all projects, table 19 shows the results only for projects located in metropolitan areas, and table 20 shows the results for projects located in nonmetropolitan areas. The rate of project approvals for each field office is the percentage of projects, funded between fiscal years 1998 and 2000, that HUD approved for construction within the 18-month processing time guideline or within the 24-month period after the funding award—that is, 18 months plus the 6-month discretionary extension. Table 21 shows the average number of months that projects took to complete various stages of the development process between Congress’s appropriation of funds for the Section 202 program and completion of construction. For projects funded between fiscal years 1998 and 2000 that had been approved to start construction at the time of our analysis, the average time taken from appropriation to approval to start construction was 36 months. Projects that had also completed construction took another 11 months, on average, from beginning to end of construction. From appropriation to end of construction, the average time taken was 47 months, or almost 4 years. Table 22 summarizes the factors that HUD cited in extending the processing time for projects beyond 24 months after the funding award. This table draws on extension waivers approved between January 1998 and June 2002 for projects funded between fiscal years 1998 and 2000, showing the number and percentage of extended projects affected by each factor. Survey of HUD Field Office Representatives This survey concerns the administration of the Section 202 Supportive Housing for the Elderly program. The Senate Special Committee on Aging asked GAO to explore the issues involved in the processing of projects that have been awarded capital advances. The official or officials in your office who are responsible for the day-to-day management of Section 202 processing should complete this survey. Please complete this survey by November 18, 2002 and fax it to (202) 512-2502.
We are interested in obtaining your valuable insights into the processing of Section 202 projects from fund reservation to initial closing. We are also interested in learning more about the implementation of HUD Notice H 96-102, which was designed to facilitate project processing. If you have any questions about this survey or have problems submitting your response, please contact Daniel Garcia-Diaz by phone at (202) 512-4529 or by email at [email protected]. 1. In case we would like to clarify any of your responses, please provide the name, title, office/location, telephone number, and e-mail address of the individual primarily responsible for gathering the information requested in this survey. FUND RESERVATION AND PROJECT MONITORING 2. While HUD does not require systematic tracking of Section 202 project progress from fund reservation to initial closing, we are interested in learning about any steps you may take to monitor project progress from fiscal year 1998 through the present. a. Was every Section 202 Sponsor/Owner contacted to schedule a project planning conference within 30 to 45 days of the sponsor’s acceptance of fund reservation award letter? (N=44) 1. Yes, for all Section 202 projects. (81.8%) 2. Yes, but only for projects needing special attention (i.e., for new sponsors or projects facing major obstacles). (13.6%) 3. No, project planning conferences were not scheduled for all projects within 30 to 45 days. (4.6%) b. From fiscal year 1998 through the present, how frequently has your office monitored the progress of the project Sponsor/Owners between fund reservation and initial closing? For each category below, please indicate the frequency that best describes your contact. (Please check one box for each row) (N=44) (1) (2) (3) (4) a. For all Section 202 projects? (please specify) b. For Section 202 projects needing special attention? (6.8% did not respond) (please specify) 3. a. Does your office currently develop internal monitoring reports to track project progress of Section 202 fund reservations (other than the Aged Pipeline Report prepared at HUD Headquarters)? (N=44) 1. Yes (86.4%) 2. No Please skip to question 4. (13.6%) b. How often are these reports prepared? (check all that apply) (N=38) 1. Weekly (31.6%) 2. Biweekly (18.4%) 3. Monthly (36.8%) 4. Quarterly (0.0%) 5. Semi-annually (0.0%) 6. Annually (0.0%) 7. Other (Please specify) (18.4%) c. Who receives these internal monitoring reports in your office? (check all that apply) (N=38) 1. Hub Director (57.9%) 2. Program Center Director (73.7%) 3. Project Manager(s) (81.6%) 4. Technical staff (71.1%) 5. Program Center Assistant (39.5%) 6. Other (Please specify title) (42.1%) 7. Other (Please specify title) (18.4%) 4. HUD Notice H 96-102 revised the Section 202 Handbook to bypass the conditional commitment application stage. It also directed that HUD technical staff must (1) accept Sponsor/Owner certifications (i.e., architecture and engineering final plans) rather than conduct detailed technical reviews; and (2) conduct detailed reviews only under specified circumstances. (N=44) a. Does your office require submission of a conditional commitment application? 1. Yes (0.0%) 2. No (100.0%) b. Does your office have written standards for time spent by its technical staff on technical reviews? 1. Yes (9.1%) 2. No (86.4%) (4.6% did not respond) (If yes, please attach a copy of any written standards.) 5. HUD Notice H 96-102 stresses the importance of conducting a comprehensive project planning conference and includes a suggested agenda to be used at the conference.
The agenda includes items such as project development, legal considerations, project design/contractor/construction issues, and project development schedule. We are interested in obtaining the following information on project planning conferences held at your office for fund reservations from fiscal year 1998 through the present. (Please check one box for each row) (Unless otherwise noted, N=44) (1) (2) (3) (4) (5) a. How frequently have planning conferences been held within 30 to 45 days of the sponsor’s acceptance of fund reservation award letter? b. How frequently have all relevant agenda items identified in section 3-1 of HUD Notice H 96-102 been covered during each planning conference? c. How frequently have Sponsor/Owners, their consultant (if used), design architect, and attorney all participated in the project planning conferences? d. How frequently have all HUD technical experts (design architect, cost analyst, attorneys, etc.), responsible for reviewing project paperwork participated in each project planning conference? e. Were there instances when specific HUD technical experts who were responsible for project paperwork did not participate in project planning conferences? Yes Continue to question 5f. (50.0%) No Please read introduction below, then answer question 6 on next page. (47.7%) (2.3% did not respond) f. When they did not participate in the planning conference, how frequently did these technical experts contact Sponsor/Owners directly to offer technical assistance? (N=23) We are interested in identifying factors that may contribute to the untimely processing of Section 202 projects from fund reservation to initial closing. We understand that there are three basic factors that can add to project processing time. These factors may include (1) the actions or characteristics of Project Sponsors/Owners; (2) HUD staff, funding, and policies; and (3) State, local, and/or other requirements. Your responses to the following questions (6, 7, 8) will provide valuable insight into the significance of these factors. 6. Based on your experience with all projects receiving fund reservations in your office since fiscal year 1998: Part A: For each factor related to Sponsors or Owners, select a single box that most commonly describes the factor’s impact on the overall processing Part B: Indicate the frequency of each factor’s influence on the timely processing of Section 202 projects in your office by selecting a single box that most commonly describes the frequency of the factor’s impact on the overall processing time. (For example, the factor ‘Seldom if ever’ prevents timely processing, ‘Sometimes’ prevents timely processing, etc.) Sponsor / Owner Factors That May Negatively Influence Timely Processing of Section 202 Projects (N=44) B. Frequency Of Factor Preventing Timely Processing (check one box for each factor) (check one box for each factor) (1) (2) (3) (4) (1) (2) (3) (4) (5) a. Doesn’t attend pre-application workshop (2.3% did not respond in part A and 4.6% b. Lacks experience in Section 202 c. Does not effectively manage e. Has difficulty designing project f. Lacks sufficient funds for pre- advance (e.g., environmental reviews, site control, etc.) g. Doesn’t fulfill requirements in a timely fashion (e.g., set up complete required forms, etc.) h. Other (Please specify) (84.1% did not respond in parts A/B) Section 202 Supportive Housing for the Elderly: Development Process Survey 7. 
Based on your experience with all projects receiving fund reservations in your office since fiscal year 1998: Part A: For each factor related to HUD staff, funding, or policies, select a single box that most commonly describes the factor’s impact on the overall processing time. Part B: Indicate the frequency of each factor’s influence on the timely processing of Section 202 projects in your office by selecting a single box that most commonly describes the frequency of the factor’s impact on the overall processing time. (For example, the factor ‘Seldom if ever’ prevents timely processing, ‘Sometimes’ prevents timely processing, etc.) HUD Factors That May Negatively Influence Timely Processing of Section 202 Projects (N=44) B. Frequency Of Factor Preventing Timely Processing (check one box for each factor) (check one box for each factor) (1) (2) (3) (4) (1) (2) (3) (4) (5) c. Section 202 workload (e.g., funded projects) d. FHA loan processing can be, at certain times, higher priority than e. Some staff unwilling to fully implement HUD Notice H 96-102 (including turnover in project coordinator position) (2.3% did not respond in parts A/B) g. Capital advance insufficient to fund projects (2.3% did not respond in part B) h. Award letters not mailed during i. Availability of HUD amendment funds (after other funding sources exhausted) (2.3% did not respond in parts A/B) j. Time spent by HUD HQ (extensions, amendment funds) k. Other (Please specify) (90.9% did not respond in parts A/B) Section 202 Supportive Housing for the Elderly: Development Process Survey 8. Based on your experience with all projects receiving fund reservations in your office since fiscal year 1998: Part A: For each factor related to State, Local, and/or Other requirements, select a single box that most commonly describes the factor’s impact on the overall processing time. Part B: Indicate the frequency of each factor’s influence on the timely processing of Section 202 projects in your office by selecting a single box that most commonly describes the frequency of the factor’s impact on the overall processing time. (For example, the factor ‘Seldom if ever’ prevents timely processing, ‘Sometimes’ prevents timely processing, etc.) Factors Related to State, Local, or Other Requirements That May Negatively Influence Timely Processing of Section 202 Projects (N=44) Factors Related to State, Local, or (check one box for each factor) (check one box for each factor) (1) (2) (3) (4) (1) (2) (3) (4) (5) a. Project is new construction (2.3% did not respond in part B) b. Project involves rehabilitation (4.6% did not respond in parts c. Project site zoning approval (2.3% did not respond in part A) d. Local permits (i.e., obtaining and/or cost of permits) e. State and local historic (2.3% did not respond in part A) g. Securing secondary financing (e.g., time needed to secure additional funding and obtain approval of financing documents) i. General local opposition to project j. Other (Please specify) (86.4% did not respond in parts A/B) 9. What are the three most important factors (from those listed in the tables above) that can negatively impact timely processing of Section 202 projects? 10. a. Did any staff members from your office attend HUD’s Section 202/811 field office staff training titled “The Process Imperative: Moving Quickly from Fund Reservation to Initial Closing” held this past summer in St. Louis, Missouri or Washington, D.C.? (N=44) 1. Yes (100.0%) 2. No Please skip to question 11 (0.0%) b. 
How many staff members attended from your office? (Mean = 1.9 persons)_ c. How many staff members in your office process Section 202 projects (full time or part-time)? (Mean = d. Have those who attended shared the content of the training with staff who did not attend? 1. Yes (75.0%) 2. No Please skip to question 11. (22.7%) (2.3% did not respond) Section 202 Supportive Housing for the Elderly: Development Process Survey e. How was the content of the training shared with staff members in your office who did not attend the training?(Unless otherwise noted, N=34) Formal training 1. Yes (5.9%) a. Training session held (at least 1 full day) 2. No (79.4%) (N=3) (14.7% did not respond) Informal training b. Meeting or information session held (less than 1. Yes (64.7%) 1 full day) 2. No (20.6%) (N=19) (14.7% did not respond) c. Trained staff answer project processing 1. Yes (82.4%) questions and provide guidance to other staff 2. No (11.8%) (N=18) (5.9% did not respond) d. Trained staff provided a written summary of 1. Yes (17.7%) 2. No (55.9%) (N=6) (26.5% did not respond) 1. Yes (14.7%) e. Other (please explain) 2. No (0.0%) (N=1) (85.3% did not respond) 1. Yes (2.9%) 2. No (0.0%) (N=1) (97.1% did not respond) 11. Please identify up to three policy changes within HUD’s control that you believe would aid the timely processing of Survey of Section 202 Sponsors and Consultants The United States General Accounting Office is contacting sponsors and consultants who have significant experience with housing development under the Section 202 Supportive Housing for the Elderly program. The Senate Special Committee on Aging asked GAO to explore the issues involved in the processing of projects that have been awarded capital advances. We are interested in obtaining your valuable insights into the processing of Section 202 projects from fund reservation to initial closing. As you complete the survey, please consider your experience since 1998 with the Section 202 program only. Please complete this survey by December 13, 2002 and fax it to (202) 512-2502. If you have any questions about this survey or have problems submitting your response, please contact Melissa A. Roye by phone at (202) 512-6426 or by email at [email protected]. 1. In case we would like to clarify any of your responses, please provide your sponsor or consultant name, respondent name and title, location, telephone number, and e-mail address of the individual primarily responsible for gathering the information requested in this survey. Name of Sponsor or Consultant: E-mail address: 2. Based on your experience with all Section 202 projects (not Section 811) receiving fund reservations since 1998, please list the states in which you have sponsored or consulted on at least one project per year OR a total of at least three projects since 1998. 3. Approximately how many Section 202 projects have you sponsored or consulted on in total since 1998 _Mean=12.3_ (N=21), since 1992 _Mean=25.6_ (N=21)? We are interested in identifying factors that may contribute to the untimely processing of only Section 202 projects from fund reservation to initial closing. We understand that there are three basic factors that can add to project processing time. These factors may include (1) the actions or characteristics of Project Sponsors/Owners; (2) HUD staff, funding, and policies; and (3) State, local, and/or other requirements. Your responses to the following questions (4, 5, 6, 7) will provide valuable insight into the significance of these factors. 4. 
Based on your experience with all projects you have sponsored or consulted on that have received fund reservations since 1998: Part A: For each factor related to Sponsors or Owners, select a single box that most commonly describes the factor’s impact on the overall processing Part B: Indicate the frequency of each factor’s influence on the timely processing of Section 202 projects by selecting a single box that most commonly describes the frequency of the factor’s impact on the overall processing time. (For example, the factor ‘Seldom if ever’ prevents timely processing, ‘Sometimes’ prevents timely processing, etc.) Sponsor / Owner Factors That May Negatively Influence Timely Processing of Section 202 Projects (N=21) B. Frequency Of Factor Preventing Timely Processing (check one box for each factor) (check one box for each factor) (1) (2) (3) (4) (1) (2) (3) (4) (5) a. Doesn’t attend pre-application workshop (9.5% did not respond for part B) b. Lacks experience in Section 202 program/ multi-family project development (9.5% did not respond for part B) c. Does not effectively manage project development process (9.5% did not respond for part B) d. Lacks effective consultant (4.8% did not respond for part A and 19.1% for part B) e. Has difficulty designing project within fund reservation amount (9.5% did not respond for part B) f. Lacks sufficient funds for pre-construction costs required before receipt of capital advance (e.g., environmental reviews, site control, etc.) (9.5% did not respond for g. Doesn’t fulfill requirements in a timely fashion (e.g., set up Owner corporation, submit complete required forms, etc.) (14.3% did not respond to part B) h. Other (Please specify) (71.4% did not respond to parts A/B) 9.5% 5. Based on your experience with all projects you have sponsored or consulted on that have received fund reservations since 1998: Part A: For each factor related to HUD staff, funding, or policies, select a single box that most commonly describes the factor’s impact on the overall processing time. Part B: Indicate the frequency of each factor’s influence on the timely processing of Section 202 projects by selecting a single box that most commonly describes the frequency of the factor’s impact on the overall processing time. (For example, the factor ‘Seldom if ever’ prevents timely processing, ‘Sometimes’ prevents timely processing, etc.) HUD Factors That May Negatively Influence Timely Processing of Section 202 Projects (N=21) B. Frequency Of Factor Preventing Timely Processing (check one box for each factor) (check one box for each factor) (1) (2) (3) (4) (1) (2) (3) (4) (5) a. Staff lack Section 202 experience b. Staff lack Section 202 training c. Section 202 workload (e.g., simultaneously reviewing new applications and paperwork for funded projects) (14.3% did not respond for part B) d. FHA loan processing can be, at certain times, higher priority than Section 202 project processing (14.3% did not respond for parts A/B) e. Some staff unwilling to fully implement HUD Notice H 96-102 (4.8% did not respond for part A) Insufficient project coordination (including turnover in project coordinator position) g. Capital advance insufficient to fund projects h. Award letters not mailed during fiscal year i. Availability of HUD amendment funds (after other funding sources exhausted) (4.8% did not respond for part A and 19.1% for part B) j. Time spent by HUD HQ considering waiver requests (extensions, amendment funds) k. Other (Please specify) (71.4% did not respond for parts A/B) 14.3% 6. 
Based on your experience with all projects you have sponsored or consulted on that have received fund reservations since 1998: Part A: For each factor related to State, Local, and/or Other requirements, select a single box that most commonly describes the factor’s impact on the overall processing time. Part B: Indicate the frequency of each factor’s influence on the timely processing of Section 202 projects by selecting a single box that most commonly describes the frequency of the factor’s impact on the overall processing time. (For example, the factor ‘Seldom if ever’ prevents timely processing, ‘Sometimes’ prevents timely processing, etc.) Factors Related to State, Local, or Other Requirements That May Negatively Influence Timely Processing of Section 202 Projects (N=21) Factors Related to State, Local, or (check one box for each factor) (check one box for each factor) (1) (2) (3) (4) (1) (2) (3) (4) (5) a. Project is new construction b. Project involves rehabilitation (14.3% did not respond in part A and 19.1% in part B) c. Project site zoning approval (9.5% did not respond in part B) d. Local permits (i.e., obtaining and/or cost of permits) e. State and local historic (4.8% did not respond in part A and 14.3% in part B) g. Securing secondary financing (e.g., time needed to secure additional funding and obtain approval of financing documents) (4.8% did not respond in part A and 9.5% h. Legal challenges (4.8% did not respond in part A and 14.3% in i. General local opposition to project (9.5% did not respond in part B) j. Other (Please specify) (81.0% did not respond in parts A/B) 7. What are the three most important factors (from those listed in the tables above) that can negatively impact timely processing of Section 202 projects? a) b) c) 8. Please identify up to three policy changes within HUD’s control that you believe would aid the timely processing of Section 202 projects from fund reservation to initial closing: a) b) c) Thank you very much for your time. Comments from the Department of Housing and Urban Development GAO’s Mission The General Accounting Office, the audit, evaluation and investigative arm of Congress, exists to support Congress in meeting its constitutional responsibilities and to help improve the performance and accountability of the federal government for the American people. GAO examines the use of public funds; evaluates federal programs and policies; and provides analyses, recommendations, and other assistance to help Congress make informed oversight, policy, and funding decisions. GAO’s commitment to good government is reflected in its core values of accountability, integrity, and reliability. Obtaining Copies of GAO Reports and Testimony The fastest and easiest way to obtain copies of GAO documents at no cost is through the Internet. GAO’s Web site (www.gao.gov) contains abstracts and full- text files of current reports and testimony and an expanding archive of older products. The Web site features a search engine to help you locate documents using key words and phrases. You can print these documents in their entirety, including charts and other graphics. Each day, GAO issues a list of newly released reports, testimony, and correspondence. GAO posts this list, known as “Today’s Reports,” on its Web site daily. The list contains links to the full-text document files. To have GAO e-mail this list to you every afternoon, go to www.gao.gov and select “Subscribe to GAO Mailing Lists” under “Order GAO Products” heading. 
According to the Department of Housing and Urban Development (HUD), the most widespread and urgent housing problem facing elderly households is affordability. About 3.3 million elderly renter households in the United States have very low incomes (50 percent or less of median area income). The Section 202 Supportive Housing for the Elderly Program provides capital advances (grants) to nonprofit organizations to develop affordable rental housing exclusively for these households. GAO was asked to determine the role of the Section 202 program in addressing the need for affordable elderly housing and the factors affecting the timeliness of approving and constructing new projects. HUD's Section 202 program provides a valuable housing resource for very low income elderly households. Although they represent a small share of all elderly households, very low income elderly renters have acute housing affordability problems because of their limited income and the need for supportive services. The Section 202 program, which offers about 260,000 rental units nationwide and ensures that residents receive rental assistance and access to services that promote independent living, is the only federal program devoted exclusively to providing this type of housing. However, even with the program's exclusive focus, Section 202 has reached only about an estimated 8 percent of very low income elderly households. About three-quarters of Section 202 projects in GAO's analysis did not meet HUD's time guideline for gaining approval to start construction. These delays held up the delivery of housing assistance to needy elderly households by nearly a year compared with projects that met HUD's guideline. Several factors contributed to these delays, in particular capital advances that were not sufficient to cover development costs. Project sponsors reported that insufficient capital advances often forced them to spend time seeking additional funds from HUD and other sources. Although HUD's policy is to provide sufficient funding to cover the cost of constructing a modestly designed project, HUD has acknowledged that its capital advances for the Section 202 program sometimes fall short. Other factors affecting the timeliness of the approval process include inadequate training and guidance for field staff responsible for the approval process, inexperienced project sponsors, and local zoning and permit requirements.
GAO_GAO-11-920
Background The AEA, as amended, sets forth the procedures and requirements for the U.S. government’s negotiating, proposing, and entering into nuclear cooperation agreements with foreign partners. The AEA, as amended, requires that U.S. peaceful nuclear cooperation agreements contain the following nine provisions: 1. Safeguards: Safeguards, as agreed to by the parties, are to be maintained over all nuclear material and equipment transferred, and all special nuclear material used in or produced through the use of such nuclear material and equipment, as long as the material or equipment remains under the jurisdiction or control of the cooperating party, irrespective of the duration of other provisions in the agreement or whether the agreement is terminated or suspended for any reason. Such safeguards are known as “safeguards in perpetuity.” 2. Full-scope IAEA safeguards as a condition of supply: In the case of non-nuclear weapons states, continued U.S. nuclear supply is to be conditioned on the maintenance of IAEA “full-scope” safeguards over all nuclear materials in all peaceful nuclear activities within the territory, under the jurisdiction, or subject to the control of the cooperating party. 3. Peaceful use guaranty: The cooperating party must guarantee that it will not use the transferred nuclear materials, equipment, or sensitive nuclear technology, or any special nuclear material produced through the use of such, for any nuclear explosive device, for research on or development of any nuclear explosive device, or for any other military purpose. 4. Right to require return: An agreement with a non-nuclear weapon state must stipulate that the United States has the right to require the return of any transferred nuclear materials and equipment, and any special nuclear material produced through the use thereof, if the cooperating party detonates a nuclear device, or terminates or abrogates an agreement providing for IAEA safeguards. 5. Physical security: The cooperating party must guarantee that it will maintain adequate physical security for transferred nuclear material and any special nuclear material used in or produced through the use of any material, or production or utilization facilities transferred pursuant to the agreement. 6. Retransfer rights: The cooperating party must guarantee that it will not transfer any material, Restricted Data, or any production or utilization facility transferred pursuant to the agreement, or any special nuclear material subsequently produced through the use of any such transferred material, or facilities, to unauthorized persons or beyond its jurisdiction or control, without the consent of the United States. 7. Restrictions on enrichment or reprocessing of U.S.-obligated material: The cooperating party must guarantee that no material transferred, or used in, or produced through the use of transferred material or production or utilization facilities, will be reprocessed or enriched, or with respect to plutonium, uranium-233, HEU, or irradiated nuclear materials, otherwise altered in form or content without the prior approval of the United States. 8. 
Storage facility approval: The cooperating party must guarantee not to store any plutonium, uranium-233, or HEU that was transferred pursuant to a cooperation agreement, or recovered from any source or special nuclear material transferred, or from any source or special nuclear material used in a production facility or utilization facility transferred pursuant to the cooperation agreement, in a facility that has not been approved in advance by the United States. 9. Additional restrictions: The cooperating party must guarantee that any special nuclear material, production facility, or utilization facility produced or constructed under the jurisdiction of the cooperating party by or through the use of transferred sensitive nuclear technology, will be subject to all the requirements listed above. In addition, the United States is a party to the Treaty on the Non- Proliferation of Nuclear Weapons (NPT). The NPT binds each of the treaty’s signatory states that had not manufactured and exploded a nuclear weapon or other nuclear explosive device prior to January 1, 1967 (referred to as non-nuclear weapon states) to accept safeguards as set forth in an agreement to be concluded with IAEA. Under the safeguards system, IAEA, among other things, inspects facilities and locations containing nuclear material, as declared by each country, to verify its peaceful use. IAEA standards for safeguards agreements provide that the agreements should commit parties to establish and maintain a system of accounting for nuclear material, with a view to preventing diversion of nuclear energy from peaceful uses, and reporting certain data to IAEA. IAEA’s security guidelines provide the basis by which the United States and other countries generally classify the categories of protection that should be afforded nuclear material, based on the type, quantity, and enrichment of the nuclear material. For example, Category I material is defined as 2 kilograms or more of unirradiated or “separated” plutonium or 5 kilograms of uranium-235 contained in unirradiated or “fresh” HEU and has the most stringent set of recommended physical protection measures. The recommended physical protection measures for Category II and Category III nuclear materials are less stringent. Appendix III contains further details on the categorization of nuclear material. DOE, NRC, and State Are Not Able to Fully Account for U.S. Nuclear Material Located at Foreign Facilities DOE, NRC, and State are not able to fully account for U.S. nuclear material overseas that is subject to nuclear cooperation agreement terms because the agreements do not stipulate systematic reporting of such information, and there is no U.S. policy to pursue or obtain such information. Section 123 of the AEA, as amended, does not require nuclear cooperation agreements to contain provisions stipulating that partners report information on the amount, status, or location (facility) of special nuclear material subject to the agreement terms. However, U.S. nuclear cooperation agreements generally require that partners report inventory information upon request, although DOE and NRC have not systematically sought such data. We requested from multiple offices at DOE and NRC a current and comprehensive inventory of U.S. nuclear material overseas, to include country, site, or facility, and whether the quantity of material was rated as Category I or Category II material. However, neither agency has provided such an inventory. 
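Because the inventory information requested above is framed in terms of the IAEA protection categories summarized in the background, a minimal sketch of that classification rule may be useful. Only the Category I thresholds quoted above (2 kilograms of separated, unirradiated plutonium; 5 kilograms of uranium-235 in fresh HEU) are encoded; material below those thresholds is simply referred to the full IAEA table discussed in appendix III rather than classified here.

```python
# Minimal sketch of the Category I rule quoted above. Only the Category I thresholds
# appear in this report; smaller quantities are deferred to the full IAEA guidance
# (see appendix III) rather than classified by this sketch.

PU = "separated plutonium (unirradiated)"
HEU = "uranium-235 in fresh HEU"

CATEGORY_I_THRESHOLDS_KG = {PU: 2.0, HEU: 5.0}

def protection_category(material: str, kilograms: float) -> str:
    """Return 'I' when the quantity meets the Category I threshold for the material."""
    threshold = CATEGORY_I_THRESHOLDS_KG.get(material)
    if threshold is None:
        return "not covered by this sketch"
    if kilograms >= threshold:
        return "I"
    return "II or III (see the IAEA table summarized in appendix III)"

print(protection_category(PU, 2.5))   # I
print(protection_category(HEU, 1.0))  # II or III (see the IAEA table summarized in appendix III)
```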
NMMSS does not contain the data necessary to maintain an inventory of U.S. special nuclear material overseas. DOE, NRC, and State have not pursued annual inventory reconciliations of nuclear material subject to U.S. cooperation agreement terms with all foreign partners that would provide the U.S. government with better information about where such material is held. Furthermore, according to DOE, NRC, and State officials, no U.S. law or policy directs U.S. agencies to obtain information regarding the location and disposition of U.S. nuclear material at foreign facilities. U.S. Nuclear Cooperation Agreements Generally Require That Partners Report Inventory Information upon Request, but DOE and NRC Have Not Systematically Sought Such Data Section 123 of the AEA, as amended, does not require nuclear cooperation agreements to contain provisions stipulating that partners report information on the amount, status, or location (facility) of special nuclear material subject to the agreement terms. However, the texts of most U.S. nuclear cooperation agreements contain a provision calling for each partner to maintain a system of material accounting and control and to do so consistent with IAEA safeguards standards or agreements. In addition, we found that all agreements, except three negotiated prior to 1978 and the U.S.-China agreement, contain a provision that the other party shall report, or shall authorize the IAEA to report, inventory information upon request. However, according to DOE and NRC officials, with the exception of the administrative arrangements with five partners, the United States has not requested such information from all partners on an annual or systematic basis. Nonetheless, the AEA requires U.S. nuclear cooperation agreements to include terms that, among other things, obligate partners to obtain U.S. approval for the transfer, retransfer, enrichment and reprocessing, and the storage of U.S.-obligated uranium-233, HEU, or other nuclear materials that have been irradiated. In addition, according to DOE and NRC officials, the United States obtains written assurances from partners in advance of each transfer of U.S. nuclear material that commits them to maintain the transferred nuclear material according to the terms of its nuclear cooperation agreement with the United States. DOE and NRC officials told us these assurances help the United States ensure that partner countries comply with the terms of the nuclear cooperation agreement. In addition, IAEA, DOE, NRC, and State officials told us that IAEA’s safeguards activities provide a level of assurance that nuclear material is accounted for at partner facilities. The safeguards system, which has been a cornerstone of U.S. efforts to prevent nuclear proliferation, allows IAEA to independently verify that non-nuclear weapons states that signed the NPT are complying with its requirements. Under the safeguards system, IAEA, among other things, inspects facilities and locations containing nuclear material declared by countries to verify its peaceful use. Inspectors from IAEA’s Department of Safeguards verify that the quantities of nuclear material that these non-nuclear weapons states declared to IAEA are not diverted for other uses. IAEA considers such information confidential and does not share it with its member states, including the United States, unless the parties have agreed that IAEA can share the information. IAEA’s inspectors do not verify nuclear material by country of origin or associated obligation. 
DOE, State, and IAEA officials told us that, because IAEA does not track the obligation of the material under safeguards, IAEA may notice discrepancies in nuclear material balances through periodic reviews of countries’ shipping records. However, these officials said that IAEA does not have the ability to identify whether and what volume of nuclear material at partner country facilities is U.S.-obligated and therefore subject to the terms of U.S. nuclear cooperation agreements. DOE and NRC Do Not Have a Current Comprehensive Inventory of U.S. Material Overseas DOE and NRC do not have a comprehensive, detailed, current inventory of U.S. nuclear material overseas that would enable the United States to identify material subject to U.S. nuclear cooperation agreement terms. We requested from multiple offices at DOE and NRC a current and comprehensive inventory of U.S. nuclear material overseas, to include country, site, or facility, and whether the quantity of material was Category I or Category II. However, the agencies have not provided such a list. DOE officials from the Office of Nonproliferation and International Security told us that they have multiple mechanisms to account for the amount of U.S.-obligated nuclear material at foreign facilities. They stated that they use NMMSS records to obtain information regarding U.S. nuclear material inventories held in other countries. However, NMMSS officials told us that NMMSS was an accurate record of material exports from the United States, but that it should not be used to estimate current inventories. In addition, NMMSS officials stated that DOE’s GTRI program has good data regarding the location of U.S. nuclear material overseas and that this information should be reconciled with NMMSS data. However, when we requested information regarding the amount of U.S. material at partner facilities, GTRI officials stated that they could not report on the amount of U.S. nuclear material remaining at facilities unless the material was scheduled for return under GTRI. In addition, in written comments to us in February 2011, GTRI stated that it was not responsible for acquiring or maintaining inventory information regarding U.S. nuclear material overseas. A long-time contract employee for DOE’s Office of Nonproliferation and International Security stated that he has tried to collect information regarding U.S. nuclear material overseas from various sources, including a list of countries eligible for GTRI’s fuel return program and NMMSS, but that it is not possible to reconcile information from the various lists and sources; consequently, there is no list of U.S. inventories overseas. According to public information, the United States has additional measures known as administrative arrangements with five of its trading partners to conduct annual reconciliations of nuclear material amounts. In addition, DOE and NRC officials told us that, for all partners, diplomatic notes are exchanged prior to any transfer to ensure that U.S. nuclear material is not diverted for non-peaceful purposes and to bind the partner to comply with the terms of the nuclear cooperation agreement. However, the measures cited by DOE are not comprehensive or sufficiently detailed to provide the specific location of U.S. nuclear material overseas. NRC and DOE could not fully account for U.S. exports of HEU in response to a congressional mandate that the agencies report on the current location and disposition of U.S. HEU overseas.
In 1992, Congress mandated that NRC, in consultation with other relevant agencies, submit to Congress a report detailing the current status of previous U.S. exports of HEU, including its location, disposition (status), and how it had been used. The January 1993 report that NRC produced in response to the mandate stated that it was not possible to reconcile this information from available U.S. sources of data with all foreign holders of U.S. HEU within the 90-day period specified in the act. The report further stated that a thorough reconciliation of U.S. and foreign records with respect to end use could require several months of additional effort, assuming that EURATOM would agree to participate. According to DOE and NRC officials, no further update to the report was issued, and the U.S. government has not subsequently attempted to develop such a comprehensive estimate of the location and status of U.S. HEU overseas. The 1993 report provided estimated material balances based on the transfer, receipt, or other adjustments reported to the NMMSS and other U.S. agencies. The report stated that the estimated material balances should match partners' reported inventories. However, the report did not compare the balances or explain the differences. Our analysis of other documentation associated with the report shows that NRC, in consultation with U.S. agencies, was able to verify the location of 1,160 kilograms out of an estimated 17,500 kilograms of U.S. HEU remaining overseas as of January 1993. NRC's estimates matched partner estimates in 22 cases; did not match partner estimates in 6 cases; and, in 8 cases, partners did not respond in time to NRC's request. The 1993 report noted that, in cases where U.S. estimates did not match partners' inventory reports, "reconciliation efforts are underway." However, DOE, NRC, and NMMSS officials told us that no further report was issued. In addition, NMMSS officials told us that they were unaware of any subsequent efforts to reconcile U.S. estimates with partners' reports or to update the January 1993 report. Moreover, we found no indication that DOE, NMMSS, or NRC officials have updated the January 1993 report or undertaken a comprehensive accounting of U.S. nuclear material overseas.

NMMSS Does Not Contain Data Necessary to Identify Where U.S. Material Is Located Overseas

We found that NMMSS does not contain the data necessary to maintain an inventory of U.S. nuclear material overseas subject to U.S. nuclear cooperation agreements. According to NRC documents, NMMSS is part of an overall program to help satisfy the United States' accounting, controlling, and reporting obligations to IAEA and its nuclear trading partners. NMMSS, the official central repository of information on domestic inventories and exports of U.S. nuclear material, contains current and historic data on the possession, use, and shipment of nuclear material. It includes data on U.S.-supplied nuclear material transactions with other countries and international organizations, foreign contracts, import/export licenses, government-to-government approvals, and other DOE authorizations such as authorizations to retransfer U.S. nuclear material between foreign countries. DOE and NRC officials told us that NMMSS contains the best available information regarding U.S. exports and retransfers of special nuclear material. DOE and NRC do not collect data necessary for NMMSS to keep an accurate inventory of U.S. nuclear material overseas. According to NRC officials, NMMSS cannot track U.S.
nuclear material overseas because data regarding the current location and status of U.S. nuclear material, such as irradiation, decay, burn-up, or production, are not collected. NMMSS only contains data on domestic inventories and transaction receipts from imports and exports reported by domestic nuclear facilities and some retransfers reported by partners to the United States and added to the system by DOE. Therefore, while the 1995 Nuclear Proliferation Assessment Statement accompanying the U.S.-EURATOM agreement estimated that 250 tons of U.S.-obligated plutonium would be separated from spent power reactor fuel in Europe and Japan for use in civilian energy programs over the following 10 to 20 years, our review indicates that the United States would not be able to identify the European countries or facilities where such U.S.-obligated material is located.

DOE, NRC, and State Have Not Pursued Annual Reconciliations of Inventories of Nuclear Material Subject to U.S. Nuclear Cooperation Agreement Terms with All Partners

DOE, NRC, and State have not pursued annual inventory reconciliations of nuclear material subject to U.S. nuclear cooperation agreement terms with all partners that would provide the U.S. government with better information about where such material is held overseas. Specifically, once a nuclear cooperation agreement is concluded, U.S. government officials—generally led by DOE—and partner country officials may negotiate an administrative arrangement for an annual inventory reconciliation to exchange information regarding each country's nuclear material accounting balances. Inventory reconciliations typically compare the countries' data and material transfer and retransfer records, and can help account for material consumed or irradiated by reactors. Government officials from several leading nuclear material exporting and importing countries told us that they have negotiated with all their other partners to exchange annual inventory reconciliations to provide a common understanding of the amount of their special material held by another country or within their country. For example, Australia, which exports about 13 percent of the world's uranium each year, conducts annual reconciliations with each of its partners, and reports annually to the Australian Parliament regarding the location and disposition of all Australian nuclear material. NRC officials told us that Australia has some of the strictest reporting requirements for its nuclear material. The United States conducts annual inventory reconciliations with five partners but does not conduct inventory reconciliations with the other partners to which it has transferred material or with which it trades. According to DOE officials, for the five reconciliations currently conducted, NMMSS data are compared with each partner's records and, if warranted, each country's records are adjusted, where necessary, to reflect the current status of U.S. special nuclear material. As of February 2011, the United States conducted bilateral annual exchanges of total material balances for special nuclear materials with five partners. Of these partners, the United States exchanges detailed information regarding inventories at each specific facility only with one partner. DOE officials noted that they exchange information with particular trading partners on a transactional basis during the reporting year and work with the partners at that time to resolve any potential discrepancies that may arise.
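To make the reconciliation concept concrete, the sketch below shows one way a facility-by-facility comparison could be organized. The partner declarations, facility names, balances, and tolerance are invented for illustration, and the comparison logic is an assumption rather than a description of DOE's actual reconciliation procedures.

# Hypothetical sketch of an annual facility-by-facility inventory reconciliation.
# All names, quantities, and the tolerance are illustrative assumptions, not DOE's
# actual procedure.

us_records = {            # U.S. book balances derived from shipment and retransfer records (kg)
    "Facility X": 45.2,
    "Facility Y": 12.0,
    "Facility Z": 8.7,
}
partner_declaration = {   # partner-reported holdings of U.S.-obligated material (kg)
    "Facility X": 45.2,
    "Facility Y": 10.5,   # difference could reflect burn-up not yet reported to the United States
}

TOLERANCE_KG = 0.1

for facility in sorted(set(us_records) | set(partner_declaration)):
    us_kg = us_records.get(facility)
    partner_kg = partner_declaration.get(facility)
    if us_kg is None or partner_kg is None:
        print(f"{facility}: reported by only one side -- follow-up required")
    elif abs(us_kg - partner_kg) > TOLERANCE_KG:
        print(f"{facility}: discrepancy of {us_kg - partner_kg:+.1f} kg -- reconcile records")
    else:
        print(f"{facility}: balances agree")

In practice, each flagged discrepancy would be resolved through the kind of record adjustments described above rather than automatically.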
In the case of EURATOM, material information is reported as the cumulative total of all 27 EURATOM members. For the purposes of nuclear cooperation with the United States, EURATOM is treated as one entity rather than its 27 constituent parts. None of the 27 EURATOM member states have bilateral nuclear cooperation agreements in force with the United States. According to a 2010 DOE presentation for NMMSS users, the difference in reporting requirements results in a 69-page report for Japan and a 1-page report for EURATOM. In addition, information exchanged with other trading partners also is not reported by facility. DOE and NRC officials told us that the United States may not have accurate information regarding the inventories of U.S. nuclear material held by its 21 other partners. DOE officials told us that, in addition to benefits, there were costs to pursuing facility-by-facility reconciliations and reporting. In particular, DOE officials told us they have not pursued facility-by-facility accounting in annual reconciliations with other partners because it would be difficult for the United States to supply such detailed information regarding partner material held in U.S. facilities. DOE and NRC officials told us this would also create an administrative burden for the United States. According to DOE officials, the relative burden of performing facility-by-facility accounting varies greatly by foreign trading partner, based on the amount of material in the United States that is obligated to such partners. For example, the United States can perform facility-by-facility accounting with one country because, according to U.S. officials, relatively little of that country's nuclear material is held in the United States. However, if the United States were to conduct facility-by-facility accounting with Australia, it would create burdensome reporting requirements. Specifically, according to DOE officials, Australia would have to report to the United States on the status of a few facilities holding U.S. nuclear material, but the United States would be required to report on hundreds of U.S. facilities holding Australian nuclear material. Without information on foreign facilities, however, it may be difficult to track U.S. nuclear materials for accounting and control purposes.

No U.S. Law or Policy Directs U.S. Agencies to Obtain Information Regarding the Location and Disposition of U.S. Nuclear Material at Foreign Facilities

DOE, NRC, and State officials told us that neither U.S. law nor U.S. policy explicitly requires the United States to track U.S. special nuclear material overseas. Moreover, U.S. law does not require that peaceful nuclear cooperation agreements obligate cooperating parties to report to the United States on nuclear material on a facility-by-facility basis. A March 2002 DOE Inspector General's audit raised concerns about the U.S. government's ability to track sealed sources, which could contain nuclear or radioactive material. In response to the audit's findings, NNSA's Associate Administrator for Management and Administration wrote that "While it is a good idea to be aware of the locations and conditions of any material, it is not the current policy of the U.S. government." Furthermore, the Associate Administrator asserted that various U.S. government agencies, including State, DOE, and NRC, would need to be involved should DOE change its policy and undertake an initiative to track the location and condition of U.S. sealed sources in foreign countries.
Similarly, DOE, NRC, and State officials told us that if it became the policy of the U.S. government to track nuclear material overseas—and in particular, by facility—then requirements would have to be negotiated into the nuclear cooperation agreements or the associated administrative arrangements. NMMSS officials told us that NMMSS is currently capable of maintaining information regarding inventories of U.S. nuclear material overseas. However, as we reported in 1982, NMMSS is not designed to track the location (facility) or the status of material—such as whether the material is irradiated or unirradiated, fabricated into fuel, burned up, or reprocessed. As a result, NMMSS neither identifies where U.S. material is located overseas nor maintains a comprehensive inventory of U.S.-obligated material. In addition, NMMSS officials emphasized that this information would need to be systematically reported. According to these officials, such reporting is not done on a regular basis by other DOE offices and State. In some instances, State receives a written notice of a material transfer at its embassies and then transmits this notice to DOE. Officials from DOE's Office of Nonproliferation and International Security told us that, while they could attempt to account for U.S. material overseas on a case-by-case basis, obtaining the information to systematically track this material would require renegotiating the terms of nuclear cooperation agreements. DOE has recently issued proposed guidance clarifying the role of DOE offices for maintaining and controlling U.S. nuclear material. An October 2010 draft DOE order states that DOE "Manages the development and maintenance of NMMSS by: (a) collecting data relative to nuclear materials including those for which the United States has a safeguards interest both domestically and abroad; (b) processing the data; and (c) issuing reports to support the safeguards and management needs of DOE and NRC, and other government organizations, including those associated with international treaties and organizations." However, we did not find any evidence that DOE will be able to meet those responsibilities in the current configuration of NMMSS without obtaining additional information from partners and without more systematic data sharing among DOE offices.

DOE, NRC, and State Do Not Have Access Rights to Monitor and Evaluate That U.S. Nuclear Material Located at Foreign Facilities Is Adequately Protected

Nuclear cooperation agreements do not contain specific access rights that enable DOE, NRC, or State to monitor and evaluate the physical security of U.S. nuclear material overseas, and the United States relies on partners to maintain adequate security. In the absence of specific access rights, DOE, NRC, and State have jointly conducted interagency physical protection visits to monitor and evaluate the physical security of nuclear material when given permission by the partner country. However, the interagency physical protection teams have neither systematically visited countries believed to be holding Category I quantities of U.S. nuclear material, nor have they systematically revisited, in a timely manner, facilities determined not to meet IAEA security guidelines.

U.S. Agencies' Ability to Evaluate the Security of U.S. Nuclear Material Overseas Is Limited by Lack of Access Rights, and the United States Relies on Partners to Maintain Adequate Security

DOE's, NRC's, and State's ability to monitor and evaluate whether material subject to U.S.
nuclear cooperation agreement terms is physically secure is contingent on partners granting access to facilities where such material is stored. Countries, including the United States, believe that the physical protection of nuclear materials is a national responsibility. This principle is reflected both in IAEA's guidelines on the "Physical Protection of Nuclear Material and Nuclear Facilities" and in pending amendments to the Convention on the Physical Protection of Nuclear Material. Our review of section 123 of the AEA and all U.S. nuclear cooperation agreements currently in force found that they do not explicitly include a provision granting the United States access to verify the physical protection of facilities or sites holding material subject to U.S. nuclear cooperation agreement terms. However, in accordance with the AEA, as amended, all nuclear cooperation agreements, except for three negotiated prior to 1978, contain provisions requiring both partners to maintain adequate physical security over transferred material. The AEA, as amended, requires that the cooperating party guarantee that it will maintain adequate physical security for transferred nuclear material and any special nuclear material used in or produced through the use of any material or any production or utilization facility transferred pursuant to the agreement. However, it does not require that State, in cooperation with other U.S. agencies, negotiate agreement terms that include rights of access or other measures allowing the United States to verify whether a partner is maintaining adequate physical security over U.S. material. Our review of the texts of all 27 U.S. nuclear cooperation agreements in force found that most of them contain a provision stating that the adequacy of physical protection measures shall be subject to review and consultations by the parties. However, none of the agreements include specific provisions stipulating that the United States has the right to verify whether a partner is adequately securing U.S. nuclear material. As a result, several DOE and State officials told us the United States' ability to monitor and evaluate the physical security of U.S. nuclear material overseas is contingent on partners' cooperation in granting access to facilities where U.S. material is stored. State, DOE, and NRC officials told us that they rely on partners to comply with IAEA's security guidelines for physical protection. However, the guidelines, which are voluntary, do not provide for access rights for other states to verify whether physical protection measures for nuclear material are adequate. IAEA's security guideline document states that the "responsibility for establishing and operating a comprehensive physical protection system for nuclear materials and facilities within a State rests entirely with the Government of that State." In addition, according to the guidelines, member states should ensure that their national laws provide for the proper implementation of physical protection and verify continued compliance with physical protection regulations. For example, according to IAEA's security guidelines, a comprehensive physical protection system to secure nuclear material should include, among other things, the following elements (a simplified checklist sketch of these elements follows the list):

• technical measures such as vaults, perimeter barriers, intrusion sensors, and alarms;
• material control procedures; and
• adequately equipped and appropriately trained guard and emergency response forces.
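The sketch below encodes the three categories above as a simple checklist applied to a notional facility. The facility data and the pass/fail logic are hypothetical and are not drawn from IAEA's guidelines or from the interagency teams' actual assessment methodology, which is qualitative and far more detailed.

# Hypothetical checklist sketch based on the guideline elements listed above.
# The facility data and the evaluation logic are invented for illustration.

GUIDELINE_ELEMENTS = {
    "technical measures": {"vault", "perimeter barrier", "intrusion sensors", "alarms"},
    "material control": {"material control procedures"},
    "response forces": {"trained guard force", "emergency response force"},
}

facility_measures = {   # measures reported for a notional facility
    "vault", "perimeter barrier", "alarms",
    "material control procedures", "trained guard force",
}

for category, required in GUIDELINE_ELEMENTS.items():
    missing = required - facility_measures
    status = "elements present" if not missing else "missing: " + ", ".join(sorted(missing))
    print(f"{category}: {status}")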
In addition, according to DOE and State officials, key international treaties, including the Convention on the Physical Protection of Nuclear Material—which calls for signatory states to provide adequate physical protection of nuclear material while in international transit—do not provide states the right to verify the adequacy of physical protection measures. A senior official from IAEA's Office of Nuclear Security told us that physical security is a national responsibility and that governments may choose to organize their various physical security components differently, as long as the components add up to an effective regime. Despite these constraints on access, the U.S. government can take certain actions to protect U.S. nuclear material located at foreign facilities. For example, NRC licensing for the export of nuclear equipment and material is conditioned on partner maintenance of adequate physical security. NRC officials stated that, when an export license application for nuclear materials or equipment is submitted, the U.S. government seeks confirmation, in the form of peaceful use assurances, from the foreign government that the material and equipment, if exported, will be subject to the terms and conditions of that government's nuclear cooperation agreement with the United States. In addition, NRC officials stated that this government-to-government reconfirmation of the terms and conditions of the agreement meets the "letter and spirit" of the AEA and Nuclear Non-Proliferation Act of 1978 (NNPA) and underscores that the partner is aware of and accepts the terms and conditions of the agreement. NRC officials also noted that the NNPA amendments to the AEA were designed and intended to encourage foreign governments to agree to U.S. nonproliferation criteria in exchange for nuclear commodities. However, the AEA does not empower the U.S. government, through inspections or other means, to enforce foreign government compliance with nuclear cooperation agreements once U.S. nuclear commodities are in a foreign country. Importantly, according to NRC, the onus is on the receiving country, as a matter of sovereign right and responsibility and consistent with its national laws and international commitments, to adequately secure the nuclear material. According to DOE and State, as well as foreign government officials, the United States and the partner share a strong common interest in deterring and preventing the misuse of nuclear material, as well as an interest in maintaining the rights afforded to sovereign countries. The partner's interest in applying adequate security measures, for instance, is particularly strong because the nuclear material is located within its territory. Moreover, specific physical security needs may often depend on unique circumstances and sensitive intelligence information known only to the partner. In addition, the AEA requires that U.S. nuclear cooperation agreements with non-nuclear weapon states contain a stipulation that the United States shall have the right to require the return of certain nuclear material, as well as equipment, should the partner detonate a nuclear device or terminate or abrogate its safeguards agreements with IAEA. However, DOE, NRC, and State officials told us that the U.S. government has never exercised the "right to require return" provisions in its nuclear cooperation agreements.
In addition, the United States typically includes "fall-back safeguards"—contingency plans for the application of alternative safeguards should IAEA safeguards become inapplicable for any other reason. DOE and State officials told us, however, that the United States has not exercised its fall-back safeguards provisions, because the United States has not identified a situation where IAEA was unable to perform its safeguards duties.

U.S. Agencies Have Visited Foreign Sites to Monitor and Evaluate the Physical Security of U.S. Nuclear Material

U.S. agencies have, over time, made arrangements with partners to visit certain facilities where U.S. nuclear material is stored. As we reported in August 1982 and in December 1994, U.S. interagency physical protection teams visit partner country facilities to monitor and evaluate whether the physical protection provided to U.S. nuclear material meets IAEA physical security guidelines. In 1974, DOE's predecessor, the Energy Research and Development Administration, began leading teams composed of State, NRC, and DOE national laboratory officials to review the partner's legal and regulatory basis for physical protection and to ensure that U.S. nuclear material was adequately protected. In 1988, the Department of Defense's Defense Threat Reduction Agency began to participate in these visits, as have officials from other agencies and offices, such as GTRI. The visits have generally focused on research reactors containing HEU but have also included assessments, when partners voluntarily grant access, of other facilities' physical security, including nuclear power plants, reprocessing facilities, and research and development facilities containing U.S. nuclear material. According to DOE documents and DOE, NRC, and State officials, the primary factors for selecting countries for visits are the type, quantity, and form of nuclear material, with priority given to countries with U.S. HEU or plutonium in Category I amounts. In addition, in 1987, NRC recommended that countries possessing U.S. Category I nuclear material be revisited at least every 5 years. DOE and NRC officials told us this has become an official goal for prioritizing visits. According to DOE, interagency physical protection visits are also made whenever a country has had or expects to have a significant change in its U.S. nuclear material inventory; other factors, such as previous findings that physical protection was not adequate, are also considered. These criteria and other factors are used to help U.S. agencies prioritize visits on a countrywide basis and also supplement other information that is known about a partner's physical protection system and the current threat environment. Moreover, while the U.S. physical protection program assesses physical security conditions on a site-specific basis, NRC's regulations permit the determination of adequacy of foreign physical protection systems on a countrywide basis. Therefore, DOE, NRC, and State officials told us that the results of the interagency physical protection visits, combined with other sources of information such as country threat assessments, are used as a measure of the physical security system countrywide. The U.S. teams visit certain facilities where U.S. nuclear material is used or stored to observe physical protection measures after discussing the relevant nuclear security regulatory framework with the partner government. DOE and State officials told us these physical protection visits help U.S.
officials develop relationships with partner officials, share best practices, and, in some cases, recommend physical security improvements. We visited four facilities that hold U.S.-obligated nuclear material. The partner officials and facility operators we met shared their observations regarding the U.S. physical protection visits. Representatives from one site characterized a recent interagency physical protection visit as a "tour." These officials told us the U.S. government officials had shared some high-level observations regarding their visit with government officials and nuclear reactor site operators but did not provide the government or site operators with written observations or recommendations. On the other hand, government officials from another country we visited told us that a recent interagency physical protection visit had resulted in a useful and detailed exchange of information about physical security procedures. These government officials told us they had learned "quite a lot" from the interagency physical protection visit and that they hoped the dialogue would continue, since security could always be improved. In February 2011, DOE officials told us they had begun to distribute to foreign officials the briefing slides they use at the conclusion of a physical protection visit. State officials told us that the briefings are considered government-to-government activities, and it is the partner government's choice whether to include facility operators in the briefings. In addition, we reviewed U.S. agencies' records of these and other physical protection visits and found that, over the 17-year period from 1994 through 2010, U.S. interagency physical protection teams made 55 visits. Of the 55 visits, interagency physical protection teams found that the sites met IAEA security guidelines on 27 visits and did not meet IAEA security guidelines on 21 visits; the results of 7 visits are unknown because the physical protection team was unable to assess the sites or agency documentation was missing. According to DOE, State, and NRC officials, the visits are used to encourage security improvements by the partner. For example, based on the circumstances of one particular facility visited in the last 5 years, the physical protection team made several recommendations to improve security, including installing (1) fences around the site's perimeter, (2) sensors between fences, (3) video assessment systems for those sensors, and (4) vehicle barriers. According to DOE officials, these observations were taken seriously by the country, which subsequently made the improvements. When we visited the site as part of our review, government officials from that country told us the U.S. interagency team had provided useful advice and, as a result, the government had approved a new physical protection plan. These government officials characterized their interactions with DOE and other U.S. agency officials as positive and told us that the government's new physical protection plan had been partly implemented. Moreover, although we were not granted access to the building, we observed several physical protection upgrades already implemented or in progress, including (1) the stationing of an armed guard outside the facility holding U.S. Category I material; (2) ongoing construction of a 12-foot perimeter fence around the facility; and (3) construction of a fence equipped with barbed wire and motion detectors around the entire research complex.
We were also told that, among other things, remote monitoring equipment had been installed in key areas in response to the interagency visit. The Central Alarm Station was hardened, and the entrance to the complex was controlled by turnstiles and a specially issued badge, which entrants received after supplying a passport or other government-issued identification. Private automobiles were not allowed in the facility. Not all U.S. physical protection visits proceed smoothly. In some cases, U.S. agencies have attempted repeatedly to convince partner officials of the seriousness of meeting IAEA security guidelines and of the need to fund improvements. For example, a U.S. interagency physical protection team in the early 2000s found numerous security problems at a certain country's research reactor. The site supervisor objected to the interagency team's assessment, arguing that physical security was a matter of national sovereignty and that IAEA security guidelines were subject to interpretation. The site supervisor also objected to some of the U.S. team's recommendations. In some instances, under U.S. pressure, countries have agreed to make necessary improvements with DOE technical and material assistance. Our review of agency records indicates that, in recent years, as the number of countries relying on U.S. HEU to fuel research reactors has continued to decline, U.S. agencies have succeeded in using a partner's pending export license for U.S. HEU or expected change in inventory of U.S. special nuclear material as leverage for a U.S. interagency physical protection visit. For example, we identified two cases since 2000 where a partner country applied for a license to transfer U.S. HEU, and a U.S. interagency team subsequently visited those two sites. In addition, we identified a recent situation where a partner country's inventory of U.S. plutonium at a certain site was expected to significantly increase, and a U.S. interagency team visited the site to determine whether the site could adequately protect these additional inventories. According to DOE officials, requests for U.S. low enriched uranium (LEU) export licenses have increased in recent years. In response, DOE officials told us that U.S. agencies have begun to prioritize visits to countries making such requests, and our review of agency documentation corroborates this. For example, physical protection visit records we reviewed state that recent interagency physical protection visits were made to two sites to evaluate the facilities' physical security in advance of pending U.S. LEU license applications. In addition, a DOE contractor and State official told us that a U.S. team planned to visit another partner country site in late 2011 in order to verify the adequacy of physical protection for U.S.-obligated LEU.

U.S. Agencies Do Not Have a Formal Process for Coordinating and Prioritizing U.S. Physical Protection Visits

DOE, NRC, and State do not have a formal process for coordinating and prioritizing U.S. interagency physical protection visits. In particular, DOE, which has the technical lead and is the agency lead on most visits, has neither (1) worked with NRC and State to establish a plan and prioritize interagency physical protection visits, nor (2) measured performance in a systematic way. Specifically:

• Establishing a plan and prioritizing and coordinating efforts. A formal U.S. agency plan for determining which countries or facilities to visit has not been established, nor have goals for the monitoring and evaluation activities been formalized.
In October 2009, DOE reported to us that it had formulated a list of countries that contained U.S. nuclear material and were priorities for U.S. teams to visit. However, in a subsequent written communication to us, a senior DOE official stated that DOE had not yet discussed this list with State, NRC, or other agency officials. As a result, the list of countries had not been properly vetted at that time and did not represent an interagency agreed-upon list. In February 2011, DOE officials told us that U.S. agencies would be considering a revised methodology for prioritizing physical protection visits. NRC officials told us they thought the interagency coordination and prioritization of the visit process could be improved. A State official, who regularly participates in the U.S. physical protection visits, told us that interagency coordination had improved in the past 6 months, in response to a recognized need by U.S. agencies to be prepared for an expected increase in requests for exports of U.S. LEU.

• Measuring performance. The agencies have not developed performance metrics to gauge progress in achieving stated goals related to physical protection visits. Specifically, DOE, NRC, and State have not performed an analysis to determine whether the stated interagency goal of visiting countries containing U.S. Category I nuclear material at least once every 5 years has been met. In addition, although DOE has stated that U.S. physical protection teams revisit sites whenever there is an indication that security does not meet IAEA security guidelines, DOE has not quantified its efforts in a meaningful way. In response to our questions about metrics, DOE officials stated that there is no U.S. law regarding the frequency of visits or revisits and that the agency's internal goals are not requirements. These officials told us that DOE, NRC, and State recognize that the "number one goal" is to ensure the physical security of U.S. nuclear material abroad. DOE officials stated that the best measure of the U.S. physical protection visits' effectiveness is that there has not been a theft of U.S. nuclear material from a foreign facility since the 1970s, when two LEU fuel rods were stolen from a certain country. However, officials reported to us that, in 1990, the facility was determined to be well below IAEA security guidelines. Our review of DOE documentation shows that other U.S. LEU transferred to the facility remains at the site. In July 2011, in conjunction with the classification review for this report, DOE officials stated that while DOE, NRC, and State work together on coordinating U.S. government positions regarding priorities and procedures for the interagency physical protection program, no updated document exists that formalizes the process for planning, coordinating, and prioritizing U.S. interagency physical protection visits. We note that the documents that DOE refers to are internal DOE documents presented to us in 2008 and 2009 in response to questions regarding nuclear cooperation agreements. These documents are not interagency agreed-upon documents but reflect DOE's views on determining which countries and facilities interagency physical protection teams should visit. Further, DOE officials in July 2011 stated that DOE, NRC, and State do not have an agreed-upon way to measure performance systematically and that, while the goals for the monitoring and evaluation activities have not yet been formalized through necessary updated documents, a prioritized list of countries to visit does exist.
These officials noted that the U.S. government is working to update its planning documents and is examining its methodology for prioritizing physical protection visits. Any changes will be included in these updated documents. DOE and U.S. agencies' activities for prioritizing and coordinating U.S. interagency physical protection visits and measuring performance do not meet our best practices for agency performance or DOE's standards for internal control. We have reported that defining the mission and desired outcomes, measuring performance, and using performance information to identify performance gaps are critical if agencies are to be accountable for achieving intended results. In addition, DOE's own standards for internal control call for "processes for planning, organizing, directing, and controlling operations designed to reasonably assure that programs achieve intended results… and decisions are based on reliable data." However, DOE, NRC, and State have neither established a plan nor measured performance to determine whether they are meeting internal goals and whether U.S. agencies' activities are systematic.

DOE and U.S. Agencies Do Not Systematically Visit Countries with Category I U.S. Nuclear Material or Revisit Foreign Facilities Not Meeting Security Guidelines

U.S. agencies have not systematically evaluated the security of foreign facilities holding U.S. nuclear material in two key ways. First, U.S. interagency physical protection teams have not systematically visited countries holding Category I quantities of U.S. nuclear material. Second, interagency teams have not revisited sites that did not meet IAEA security guidelines in a timely manner. U.S. interagency physical protection teams have not systematically visited countries believed to be holding Category I quantities of U.S. special nuclear material at least once every 5 years—a key programmatic goal. In a December 2008 document, DOE officials noted that, in 1987, NRC recommended that countries possessing Category I nuclear material be revisited at least once every 5 years. This recommendation was adopted as a goal for determining the frequency of follow-on visits. In addition, DOE, NRC, and State officials told us that they aim to conduct physical protection visits at each country holding Category I quantities of U.S. nuclear material at least once every 5 years. We evaluated U.S. agencies' performance at meeting this goal by reviewing records of U.S. physical protection visits and other information. We found that the United States had met this goal with respect to two countries by conducting physical protection visits at least once every 5 years since 1987 while they held Category I quantities of U.S. nuclear material. However, we estimated that 21 countries held Category I amounts of U.S. nuclear material during the period from 1987 through 2010 but were not visited at least once every 5 years while they held such quantities of U.S. nuclear material. In addition, U.S. interagency physical protection teams have not visited all partner facilities believed to contain Category I quantities of U.S. special nuclear material to determine whether the security measures in place meet IAEA security guidelines. Specifically, we reviewed physical protection visit records and NMMSS data and identified 12 facilities that NMMSS records indicate received Category I quantities of U.S. HEU that interagency physical protection teams have never visited.
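The following sketch illustrates, with invented countries and visit years, the kind of check we applied when evaluating the 5-year goal described above: for each country, confirm that no gap between checkpoints (the start of the period, successive visits, and the end of the period) exceeds 5 years. The actual analysis also accounted for the specific years in which each country held Category I material, which this simplified version omits.

# Hypothetical illustration of checking the 5-year visit goal.
# Country names and visit years are invented.

GOAL_YEARS = 5
PERIOD_START, PERIOD_END = 1987, 2010

visit_years = {
    "Country A": [1989, 1993, 1998, 2002, 2007],
    "Country B": [1991, 2005],
    "Country C": [],
}

def meets_goal(years):
    """True if no gap between checkpoints (period start, visits, period end) exceeds the goal."""
    checkpoints = [PERIOD_START] + sorted(years) + [PERIOD_END]
    return all(later - earlier <= GOAL_YEARS for earlier, later in zip(checkpoints, checkpoints[1:]))

for country, years in visit_years.items():
    print(f"{country}: {'met' if meets_goal(years) else 'did not meet'} the 5-year visit goal")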
We identified four additional facilities that GTRI officials told us currently hold, and will continue to hold, Category I quantities of U.S. special nuclear material for which there is no acceptable disposition path in the United States. In addition, these facilities have not been visited by a U.S. interagency physical protection team, according to our review of available documentation. Moreover, U.S. interagency physical protection teams have not systematically visited partner storage facilities for U.S. nuclear material. The AEA, as amended, requires that U.S. nuclear cooperation agreements contain a stipulation giving the United States approval rights over any storage facility containing U.S. unirradiated or "separated" plutonium or HEU. DOE and NRC officials told us there is no list of such storage facilities besides those listed in a U.S. nuclear cooperation agreement with a certain partner. They stated—and our review of available documents corroborated—that a number of the U.S. physical protection visits have included assessments of overseas storage sites for U.S. nuclear material, since such sites are often collocated with research reactors. However, our review also found two instances where partner storage areas containing U.S. HEU or separated plutonium did not meet IAEA guidelines or were identified as potentially vulnerable. DOE and U.S. agencies do not have a systematic process to revisit or monitor security improvements at facilities that do not meet IAEA security guidelines. Based on our analysis of available documentation, we found that, since 1994, U.S. interagency physical protection teams determined that partner country sites did not meet IAEA security guidelines on 21 visits. We then examined how long it took for a U.S. team to revisit the sites that did not meet IAEA security guidelines and found that, in 13 of 21 cases, U.S. interagency teams took 5 years or longer to revisit the facilities. According to DOE, NRC, and State officials, the interagency physical protection visits are not the only way to determine whether partner facilities are meeting IAEA security guidelines. For example, the United States is able to rely on information provided by other visits and U.S. embassy staff to monitor physical security practices. These visits include DOE-only trips and trips by DOE national laboratory staff and NRC physical protection experts who worked with the host country to improve physical security at the sites. NRC officials also stated that, in some cases, the partner's corrective actions at the site are verified by U.S. officials stationed in the country, and a repeat physical protection visit is not always required. IAEA officials told us that U.S. technical experts often participate in voluntary IAEA physical security assessments at IAEA member states' facilities. Specifically, IAEA created the International Physical Protection Advisory Service (IPPAS) to assist IAEA member states in strengthening their national security regimes. At the request of a member state, IAEA assembles a team of international experts who assess the member state's system of physical protection in accordance with IAEA security guidelines. As of December 2010, 49 IPPAS missions spanning about 30 countries had been completed.

DOE Seeks to Increase Security or Remove Vulnerable U.S. Nuclear Material at Partner Facilities but Faces Challenges

DOE has taken steps to improve security at a number of facilities overseas that hold U.S. nuclear material.
DOE’s GTRI program removes nuclear material from vulnerable facilities overseas and has achieved a number of successes. However, DOE faces a number of constraints. Specifically, GTRI can only bring certain types of nuclear material back to the United States that have an approved disposition pathway and meet the program’s eligibility criteria. In addition, obtaining access to the partner facilities to make physical security improvements may be difficult. There are a few countries that are special cases where the likelihood of returning the U.S. nuclear material to the United States is considered doubtful. DOE’s Office of Nonproliferation and International Security and GTRI officials told us that when a foreign facility with U.S.-obligated nuclear material does not meet IAEA security guidelines, the U.S. government’s first response is to work with the partner country to encourage physical security improvements. In addition, the GTRI program was established in 2004 to identify, secure, and remove vulnerable nuclear material at civilian sites around the world and to provide physical protection upgrades at nuclear facilities that are (1) outside the former Soviet Union, (2) in non-weapon states, and (3) not in high-income countries. According to GTRI officials, the U.S. government’s strategy for working with partner countries to improve physical security includes: (1) encouraging high-income countries to fund their own physical protection upgrades with recommendations by the U.S. government and (2) working with other- than-high-income countries to provide technical expertise and funding to implement physical protection upgrades. If the material is excess to the country’s needs and can be returned to the United States under an approved disposition pathway, GTRI will work with the country to repatriate the material. According to GTRI officials, GTRI was originally authorized to remove to the United States, under its U.S. fuel return program, only U.S.-obligated fresh and spent HEU in Material Test Reactor fuel, and Training Research Isotope General Atomics (TRIGA) fuel rod form. According to GTRI officials, GTRI has also obtained the authorization to return additional forms of U.S. fresh and spent HEU, as well as U.S. plutonium from foreign countries, so long as there is no alternative disposition path. The material must (1) pose a threat to national security, (2) be usable for an improvised nuclear device, (3) present a high-risk of terrorist theft, and (4) meet U.S. acceptance criteria. To date, GTRI has removed more than 1,240 kilograms of U.S. HEU from Australia, Argentina, Austria, Belgium, Brazil, Canada, Chile, Colombia, Denmark, Germany, Greece, Japan, the Netherlands, Philippines, Portugal, Romania, Slovenia, South Korea, Spain, Sweden, Switzerland, Taiwan, Thailand, and Turkey. It has also performed security upgrades at reactors containing U.S. nuclear material that were not meeting IAEA security guidelines in 10 partner countries. As we reported in September 2009, GTRI has improved the security of research reactors, and GTRI officials told us in April 2011 that they plan to continue to engage other countries to upgrade security. In a separate report published in December 2010, we noted that GTRI has assisted in the conversion from the use of HEU to LEU or verified the shutdown of 72 HEU research reactors around the world, 52 of which previously used U.S. HEU. 
GTRI prioritizes its schedule for upgrading the security of research reactors and removing nuclear material based on the amount and type of nuclear material at the reactor and other threat factors, such as the vulnerability of facilities, country-level threat, and proximity to strategic assets. Our review identified several situations where GTRI or its predecessor program removed vulnerable U.S. nuclear material. Notwithstanding these successes, the GTRI program has some limitations. GTRI cannot remove all potentially vulnerable nuclear material worldwide because the program's scope is limited to only certain types of material that meet the eligibility criteria. GTRI officials told us that, of the approximately 17,500 kilograms of HEU it estimates was exported from the United States, the majority—12,400 kilograms—is currently not eligible for return to the United States. According to GTRI officials, over 10,000 kilograms is contained in fuels from "special purpose" reactors that are not included in GTRI's nuclear material return program because they were not traditional aluminum-based fuels, TRIGA fuels, or target material. As a result, this material does not have an acceptable disposition pathway in the United States, according to GTRI officials. GTRI officials stated that these reactors are in Germany, France, and Japan, and that the material has been deemed to be adequately protected. GTRI reported that the other approximately 2,000 kilograms of transferred U.S. nuclear material is located primarily in EURATOM member countries and is either currently in use or adequately protected. In addition, the potential vulnerability of nuclear material at certain high-income facilities was raised to us by officials at the National Security Council (NSC)—the President's principal forum for considering national security and foreign policy matters—and included in a prior report. Specifically, we reported that there may be security vulnerabilities in certain high-income countries, including three specific high-income countries named by the NSC officials. For sites in these countries, GTRI officials told us the U.S. government's strategy is to work bilaterally with the countries, provide recommendations to improve physical protection, and follow up as needed. Our analysis of available agency physical protection visit documents also raises concerns regarding the physical security conditions in these countries, including facilities that did not meet IAEA security guidelines and issues with interagency physical protection teams' lack of access. DOE also works with countries to remove material if it is in excess of the country's needs and meets DOE acceptance criteria. The ability of DOE to return U.S. nuclear material depends, however, on the willingness of the foreign country to cooperate. As we reported in September 2009, because GTRI's program for physical security upgrades and nuclear material returns is voluntary, DOE faces some challenges in obtaining consistent and timely cooperation from other countries to address security weaknesses. Our report further noted that DOE has experienced situations where a foreign government has refused its assistance to make security upgrades. For example, we reported that one country had refused offers of DOE physical security upgrades at a research reactor for 9 years. However, this situation was subsequently resolved when all HEU was removed from this country, according to GTRI officials.
In addition, we reported that DOE had experienced two other situations where the partner country would not accept security assistance until agreements with the United States were reached on other issues related to nuclear energy and security. Several countries that hold U.S. nuclear material are particularly problematic and represent special cases. Specifically, U.S. nuclear material has remained at sites in three countries where physical protection measures are unknown or that have not been visited by an interagency physical protection team in decades. GTRI recently removed a large quantity of U.S. spent HEU from one of these countries. According to NRC and State officials, U.S. transfers to these three countries were made prior to 1978, when the physical protection requirements were added to the AEA. Therefore, these countries have not made the same commitments regarding physical security of U.S.-transferred material. Finally, we identified another country that poses special challenges. All U.S.-obligated HEU has been removed from this country, which was one of the GTRI program's highest priorities. Previous U.S. interagency physical protection visits found that a site in this country did not meet IAEA security guidelines.

Conclusions

The world today is dramatically different from the world in which most U.S. nuclear cooperation agreements were negotiated. Many new threats have emerged, and nuclear proliferation risks have increased significantly. We recognize that the United States and its partners share a strong common interest in deterring and preventing the misuse of U.S. nuclear material—or any nuclear material—and that flexibility in the agreements is necessary to forge strong and cooperative working relationships with our partners. The fundamental question, in our view, is whether nuclear cooperation agreements and their legislative underpinnings need to be reassessed given the weaknesses in inventory management and physical security that we identified. Specifically, we found these agreements may not be sufficiently robust in two areas—inventories and physical security. Without an accurate inventory of U.S. nuclear materials—in particular, weapon-usable HEU and separated plutonium—the United States does not have sufficient assurances regarding the location of materials. As a result, the United States may not be able to monitor whether the partner country is appropriately notifying the United States and whether the United States is appropriately and fully exercising its rights of approval regarding the transfer, retransfer, enrichment and reprocessing and, in some cases, storage of nuclear materials subject to the agreement terms. NRC and multiple offices within DOE could not provide us with an authoritative list of the amount, location, and disposition of U.S. HEU or separated plutonium overseas. We are particularly concerned that NRC and DOE could not account, in response to a 1992 mandate by Congress, for the location and disposition of U.S. nuclear material overseas—and that they have not developed such an inventory in the almost two decades since that mandate. We recognize that physical security is a national responsibility. We also recognize that neither the AEA, as amended, nor the U.S. nuclear cooperation agreements in force require that State negotiate new or renewed nuclear cooperation agreement terms that include specific access rights for the United States to verify whether a partner is maintaining adequate physical security of U.S. nuclear material.
Without such rights, it may be difficult for the United States to have access to critical facilities overseas—especially those believed to be holding weapon-usable materials—to better ensure that U.S. material is in fact adequately protected while the material remains in the partner's custody. We note the agreements are reciprocal, with both parties generally agreeing to all conditions specified in them. We acknowledge that any change to the nuclear cooperation framework or authorizing legislation will be very sensitive. Careful consideration should be given to the impact of any reciprocity clauses on U.S. national security when negotiating or reviewing these agreements. However, it may be possible to make such changes in a way that includes greater access to critical facilities where weapon-usable U.S. nuclear material is stored, without infringing on the sovereign rights of our partners or hampering the ability of the U.S. nuclear industry to remain competitive. In the course of our work, we identified several weaknesses in DOE, NRC, and State's efforts to develop and manage activities that ensure that U.S. nuclear cooperation agreements are properly implemented. Specifically, the lack of a baseline inventory of U.S. nuclear materials—in particular, weapon-usable materials—and of annual inventory reconciliations with all partners limits the ability of the U.S. government to identify where the material is located. Currently, annual reconciliations with five partners are undertaken. However, the information, with the exception of one country, is aggregated and not provided on a facility-by-facility basis. Without such information on facilities, it may be difficult to track U.S. material for accounting and control purposes. No annual reconciliations are currently conducted with the other partners to which the United States has transferred material or with which it trades. The NMMSS database could be the official central repository of data regarding U.S. inventories of nuclear material overseas if DOE and NRC are able to collect better data. We are concerned that DOE has not worked with NRC and State to develop a systematic process for monitoring and evaluating the physical security of U.S. nuclear material overseas, including which foreign facilities to visit for future physical protection visits. In particular, U.S. interagency physical protection teams have neither met a key programmatic goal of visiting countries containing Category I quantities of U.S. special nuclear material at least once every 5 years, nor have they visited all partner facilities believed to be holding Category I quantities of U.S. nuclear material or revisited, in a timely manner, facilities that were found not to meet IAEA security guidelines. Moreover, relying on reported thefts of U.S. nuclear material as a gauge of security is not the best measure of program effectiveness when accounting processes for inventory of U.S. material at foreign facilities are limited. Improving the U.S. government's management of nuclear cooperation agreements could contribute to achieving the administration's goal of securing all vulnerable nuclear material worldwide in 4 years.

Matters for Congressional Consideration

• Congress may wish to consider directing DOE and NRC to complete a full accounting of U.S. weapon-usable nuclear materials—in particular, HEU and separated plutonium—with its nuclear cooperation agreement partners and other countries that may possess such U.S. nuclear material.
• In addition, Congress may wish to consider amending the AEA if State, working with other U.S. agencies, does not include enhanced measures regarding physical protection access rights in future and renewed agreements, so that U.S. interagency physical protection teams may obtain access when necessary to verify that U.S. nuclear materials have adequate physical protection. The amendment could provide that the U.S. government may not enter into nuclear cooperation agreements unless such agreements contain provisions allowing the United States to verify that adequate physical security is exercised over nuclear material subject to the terms of these agreements.

Recommendations for Executive Action

We are making seven recommendations to enable agencies to better account for, and ensure the physical protection of, U.S. nuclear material overseas. To help federal agencies better understand where U.S. nuclear material is currently located overseas, we recommend that the Secretary of State, working with the Secretary of Energy and the Chairman of the Nuclear Regulatory Commission, take the following four actions to strengthen controls over U.S. nuclear material subject to these agreements:

• determine, for those partners to which the United States has transferred material but with which it does not have an annual inventory reconciliation, a baseline inventory of weapon-usable U.S. nuclear material, and establish a process for conducting annual reconciliations of inventories of nuclear material on a facility-by-facility basis;

• establish, for those partners with which the United States has an annual inventory reconciliation, reporting on a facility-by-facility basis for weapon-usable material where possible;

• facilitate visits to sites that U.S. physical protection teams have not visited that are believed to be holding U.S. Category I nuclear material; and

• seek to include measures that provide for physical protection access rights in new or renewed nuclear cooperation agreements so that U.S. interagency physical protection teams may in the future obtain access when necessary to verify that U.S. nuclear materials are adequately protected. Careful consideration should be given to the impact of any reciprocity clauses on U.S. national security when negotiating or reviewing these agreements.

In addition, we recommend that the Secretary of Energy, working with the Secretary of State and the Chairman of the Nuclear Regulatory Commission, take the following three actions:

• develop an official central repository to maintain data regarding U.S. inventories of nuclear material overseas. This repository could be the NMMSS database or, if the U.S. agencies so determine, some other official database;

• develop formal goals for and a systematic process to determine which foreign facilities to visit for future interagency physical protection visits. The goals and process should be formalized and agreed to by all relevant agencies; and

• periodically review performance in meeting key programmatic goals for the physical protection program, including determining which countries containing Category I U.S. nuclear material have been visited within the last 5 years, as well as determining whether partner facilities previously found not to meet IAEA security guidelines were revisited in a timely manner.

Agency Comments and Our Evaluation

We provided a draft of this report to the Secretaries of Energy and State, and the Chairman of the NRC for their review and comment.
Each agency provided written comments on the draft report, which are presented in appendixes IV, VI, and V, respectively. All three agencies generally disagreed with our conclusions and recommendations, and their disagreement with GAO fell into three general areas. Specifically, the agencies (1) disagreed with our recommendations to establish annual inventory reconciliations with all trading partners and to establish a system to comprehensively track and account for U.S. nuclear material overseas, because they believe this is impractical and unwarranted; (2) maintained that IAEA safeguards are sufficient, or at least an important tool, to account for U.S. nuclear material overseas; and (3) asserted that any requirement in future nuclear cooperation agreements calling for enhanced physical protection access rights is unnecessary and could hamper sensitive relationships. With regard to the three general areas of disagreement, our response is as follows:
DOE, NRC, and State assert that it is not necessary to implement GAO's recommendation that agencies undertake an annual inventory reconciliation and report on a facility-by-facility basis for weapon-usable material where possible for all countries that hold U.S.-obligated nuclear material. We stand by this recommendation for several reasons. First, as stated in the report, we found—and none of the agencies refuted—that the U.S. government does not have an inventory of U.S. nuclear material overseas and, in particular, is not able to identify where weapon-usable materials such as HEU and separated plutonium may reside. In fact, NRC commented that "inventory knowledge is very important for high-consequence materials, e.g., high enriched uranium and separated plutonium." Because DOE, NRC, and State do not have comprehensive knowledge of where U.S.-obligated material is located at foreign facilities, it is unknown whether the United States is appropriately and fully exercising its rights of approval regarding the transfer, retransfer, enrichment, reprocessing and, in some cases, storage of nuclear materials subject to the agreements' terms. In addition, the lack of inventory information hampers U.S. agencies in identifying priorities for interagency physical protection visits. We are particularly concerned that NRC and DOE, in response to a 1992 mandate by Congress, could account for the location and disposition of only about 1,160 kilograms out of an estimated 17,500 kilograms of U.S.-exported HEU. Furthermore, the agencies have not developed such an inventory or performed an additional comprehensive review in the almost two decades since that mandate. We believe it is important that DOE, NRC, and State pursue all means possible to better identify where U.S.-obligated material is located overseas—and, for weapon-usable HEU and separated plutonium, seek to do so on a facility-by-facility basis. Annual inventory reconciliations with all partners provide one way to do that. The United States has demonstrated that it has the ability to conduct such exchanges, which none of the agencies disputed. Our report notes that the United States conducts annual inventory reconciliations with five partners, including one with which facility-level information is exchanged annually.
We believe that the recent signing of nuclear cooperation agreements with India and Russia, as well as the need to renegotiate agreements with current partners—including Peru and South Korea—that are set to expire in coming years, provides a convenient and timely opportunity for DOE, NRC, and State to pursue such enhanced material accountancy measures.
DOE, NRC, and State commented that IAEA's comprehensive safeguards program is another tool to maintain knowledge of the locations of nuclear material in a country, including U.S.-obligated material, and that IAEA inspection, surveillance, and reporting processes are effective tools for material tracking and accounting. We agree that IAEA safeguards are an important nuclear nonproliferation mechanism. However, our report found that IAEA's safeguards have a limited ability to identify, track, and account for U.S.-obligated material. Specifically, as our report notes, and as confirmed to us by senior IAEA officials, IAEA does not track the obligation of the nuclear material under safeguards and, therefore, may not have the ability to identify whether, and in what volume, nuclear material at partner country facilities is U.S.-obligated and subject to the terms of U.S. nuclear cooperation agreements. In addition, our report notes that IAEA considers member country nuclear material inventory information confidential and does not share it with its member countries, including the United States. Therefore, IAEA has a limited ability to account for nuclear material subject to the terms of U.S. nuclear cooperation agreements. Importantly, safeguards are not a substitute for physical security and serve a different function. As our report notes, safeguards are primarily a way to detect diversion of nuclear material from peaceful to military purposes but do not ensure that facilities are physically secure to prevent theft or sabotage of such material.
DOE, NRC, and State disagreed with our recommendation that State, working with DOE and NRC, should seek to negotiate terms that include enhanced measures regarding physical protection access rights in future and renewed agreements. They also raised concerns with our Matter for Congressional Consideration to amend the AEA should State not implement our recommendation. We do not agree with the agencies' comments that our recommendation that agencies "seek to include" such measures is impractical. As we note in our report, an enhanced measure for access rights is in place in the recently negotiated U.S.-India arrangements and procedures document. Further, while partner countries pledge at the outset of an agreement that they will physically protect U.S.-obligated material, the results of our work show that they have not always adequately done so. Specifically, our report noted that, of the 55 interagency physical protection visits made from 1994 through 2010, interagency teams found that countries met IAEA security guidelines on only 27 visits and did not meet the guidelines on 21 visits; the results of the remaining 7 visits are unknown because the U.S. team was unable to assess the sites or agency documentation of the physical protection visits was missing. In addition, we identified 12 facilities believed to have, or to have previously had, Category I U.S. nuclear material that have not been visited by an interagency physical protection team. We agree with the agencies' comments that the licensing process for U.S.
nuclear material offers some assurances that physical security will be maintained and that an exchange of diplomatic notes at the time of a transfer is designed to ensure that the partners maintain the material according to the terms of the agreements. However, these measures are implemented at the time of licensing or material transfer, and insight into the physical security arrangements for the nuclear material over the longer-term, often 30-year, duration of these agreements is by no means guaranteed. Ensuring that the United States has the tools it needs to visit facilities in the future—even after an initial transfer of material is made per a conditional export license—is important to supporting U.S. nuclear nonproliferation objectives. We continue to believe that our recommendation and Matter for Congressional Consideration are consistent with the report's findings and would enhance the security of U.S.-obligated nuclear material in other countries. In addition, DOE and NRC commented that (1) our report contained errors in fact and judgment; (2) our report's recommendations could result in foreign partners requiring reciprocal access rights to U.S. facilities that contain nuclear material that they transferred to the United States, which could have national security implications; and (3) our recommendation that agencies establish a process for conducting annual reconciliations of inventories of nuclear material and develop a repository to maintain data regarding U.S. inventories of nuclear material overseas would be costly to implement. Our response to these comments is as follows:
None of the agencies' comments caused us to change any factual statement we made in the report. DOE provided a limited number of technical comments, which we incorporated as appropriate. Importantly, some of the facts that the agencies did not dispute included (1) our finding that U.S. agencies made only a single attempt to comprehensively account for transferred U.S. HEU, almost 20 years ago, and at that time were able to verify the amount and location of less than one-tenth of transferred U.S. HEU; and (2) our finding that partner countries did not meet IAEA physical security guidelines for protecting U.S. nuclear material in about half of the cases we reviewed from 1994 through 2010. In our view, these security weaknesses place U.S.-obligated nuclear material at risk and raise potential proliferation concerns. These agreements for nuclear cooperation are long-term in scope and are often in force for 30 years or more. As we noted in our report, the world today is dramatically different from what it was when most of the agreements were negotiated. New threats have emerged, and nuclear proliferation risks have increased significantly. NRC commented that countries may not want to change the "status quo" as it pertains to nuclear cooperation agreement terms, including those regarding the physical protection of U.S.-obligated nuclear material. In our view, a status quo, or business-as-usual, approach should not apply to matters related to the security of U.S.-obligated nuclear material located at partner facilities throughout the world. Moreover, implementing a more robust security regime is consistent with and complements the administration's goal of securing all vulnerable nuclear material worldwide within a 4-year period.
DOE's and NRC's comment that nuclear cooperation agreement partners may ask the United States to demonstrate reciprocity by allowing them to verify that adequate physical protection is being provided to their nuclear material while it is in U.S. custody has merit and needs to be taken into consideration when developing or reviewing nuclear cooperation agreements. As a result, we added language to the conclusions and recommendation sections to additionally state that "careful consideration should be given to the impact of any reciprocity clauses on U.S. national security when negotiating or reviewing these agreements." In addition, DOE and NRC commented that we are suggesting a costly new effort in recommending that agencies account for and track U.S.-obligated nuclear material overseas. However, we noted in our report that NMMSS officials told us that NMMSS is currently capable of maintaining information regarding inventories of U.S. nuclear material overseas. Moreover, DOE and NRC did not conduct an analysis to support their assertion that such a system would be costly. Although we did not perform a cost-benefit analysis, based on our conversations with NMMSS staff and the lack of any DOE cost-benefit analysis to the contrary, there is no evidence to suggest that adding information to the NMMSS database would necessarily entail significant incremental costs or administrative overhead. We are sensitive to suggesting or recommending new requirements on federal agencies that may impose additional costs. However, it is important to note that the U.S. government has already spent billions of dollars to secure nuclear materials overseas, as well as on radiation detection equipment to detect possible smuggled nuclear material at our borders and the border crossings of other countries. The administration intends to spend hundreds of millions more to support the president's 4-year goal of securing all vulnerable nuclear material worldwide. If necessary, an expenditure of some resources to account for U.S. nuclear material overseas is, in our view, worthy of consideration. We stand by our recommendations that State work with the nuclear cooperation agreement partners to which the United States has transferred material in order to develop a baseline inventory of U.S. nuclear material overseas, and that DOE work with other federal agencies to develop a central repository to maintain data regarding U.S. inventories of nuclear material overseas.
In addition to the three areas of general disagreement, DOE disagreed with our findings that (1) the U.S. interagency physical protection visit program lacked formal goals and (2) U.S. agencies have not established a formal process for coordinating and prioritizing interagency physical protection visits. During the course of our work, we found no evidence of an interagency agreed-upon list of program goals. In its comments, DOE stated that the formal goal of the program is to determine whether U.S.-obligated nuclear material at the partner country facility is being protected according to the intent of IAEA security guidelines. This is the first time the goal has been articulated to us as such. Moreover, we disagree with DOE's second assertion that it has established a formal process for coordinating and prioritizing visits. Our report notes that we found DOE has not (1) worked with NRC and State to establish a plan and prioritize U.S. physical protection visits or (2) measured performance in a systematic way.
In particular, our report notes that, in October 2009, a DOE Office of Nonproliferation and International Security official reported to us that the office had formulated a list of 10 countries that contained U.S. nuclear material and were priorities for physical protection teams to visit. However, a senior-level DOE nonproliferation official told us that DOE had not discussed this list with State, NRC, or other agency officials, and that it could not be considered an interagency agreed-upon list. In addition, NRC Office of International Programs officials told us they thought interagency coordination could be improved, and a State Bureau of International Security and Nonproliferation official told us that agency coordination had improved over the past 6 months. Moreover, as we further state in the report, in February 2011, DOE officials told us that the department is conducting a study of its methodology for prioritizing physical protection visits. In addition, in July 2011, in conjunction with the classification review for this report, DOE officials stated that while DOE, NRC, and State work together on coordinating U.S. government positions regarding priorities and procedures for the interagency physical protection program, no updated document exists that formalizes the process for planning, coordinating, and prioritizing U.S. interagency physical protection visits. We note that the documents DOE refers to are internal DOE documents presented to GAO in 2008 and 2009 in response to questions regarding nuclear cooperation agreements. These documents do not constitute an interagency agreed-upon document but reflect DOE's views on determining which countries and facilities interagency physical protection teams should visit. Further, DOE officials in July 2011 stated that DOE, NRC, and State do not have an agreed-upon way to measure performance systematically and that, while the goals for the monitoring and evaluation activities have not yet been formalized through the necessary updated documents, a prioritized list of countries to visit does exist. These officials noted that the U.S. government is working to update its planning documents and examining its methodology for prioritizing physical protection visits, and that any changes will be included in these updated documents. Therefore, we continue to believe that DOE should work with the other agencies to develop formal goals for and a systematic process for determining which foreign facilities to visit for future physical protection visits, and that the process should be formalized and agreed to by all agencies. NRC commented that, in order to demonstrate that U.S. nuclear material located abroad is potentially insecure, GAO made an assessment based on U.S. agencies not conducting activities that are, according to NRC, neither authorized nor required by U.S. law or by agreements negotiated under Section 123 of the AEA. In fact, we acknowledge that U.S. agencies are not required to conduct certain activities or collect certain information. Moreover, we do not suggest that agencies undertake activities that are not authorized by law. We recommend that the agencies either expand upon and refine the outreach they are already conducting, contingent on the willingness of our cooperation agreement partners, or negotiate new terms in nuclear cooperation agreements as necessary. If the agencies find that they are unable to negotiate new terms, we recommend that Congress consider amending the AEA to require such terms.
State commented that determining annual inventories and reconciliations of nuclear material, as well as establishing enhanced facility-by-facility reporting for those partners with which the United States already has an annual inventory reconciliation, is a DOE function, not a State function. We agree that DOE plays a vital role in carrying out these activities—once such bilaterally agreed-upon measures are in place. However, we believe it is appropriate to recommend that the Department of State—as the agency with the lead role in any negotiation regarding the terms and conditions of U.S. nuclear cooperation agreements—work with DOE and NRC to secure these measures with all U.S. partners. State also commented that there is a cost to the U.S. nuclear industry in terms of lost competitiveness should the requirements in U.S. nuclear cooperation agreements be strengthened to include better access to critical facilities for U.S. interagency physical protection teams. State provided no further information to support this point. Our report acknowledges that any change to the nuclear cooperation framework or authorizing legislation will be very sensitive and that flexibility in the agreements is necessary. We also stated that it may be possible to change the framework of agreements in a way that does not hamper the ability of the U.S. nuclear industry to remain competitive. While we would not want to alter these agreements in such a way that our nuclear industry is put at a competitive disadvantage, in our view, the security of U.S. nuclear material overseas should never be compromised to achieve a commercial goal. Finally, State asserted that interagency physical protection teams have been granted access to every site they have requested to visit under the consultation terms of U.S. nuclear cooperation agreements. As a result, State believes the provisions of the current agreements are adequate. As we note in our report, access to partner facilities is not explicitly spelled out in the agreements, and, in our view, this is a limitation for the U.S. agencies in obtaining timely and systematic access to partner nuclear facilities. While State may be technically correct that access has been granted, our report clearly shows that many sites believed to contain Category I quantities of U.S. nuclear material have been visited only after lengthy periods of time or have not been visited at all. We continue to believe that enhanced physical protection access measures could help interagency teams ensure that they are able to visit sites containing U.S. nuclear material in a timely, systematic, and comprehensive fashion. We are sending copies of this report to the appropriate congressional committees, the Secretaries of Energy and State, the Chairman of the Nuclear Regulatory Commission, and other interested parties. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff members have any questions about this report, please contact me at (202) 512-3841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VII.
Appendix I: Objectives, Scope, and Methodology
Our review addressed the following objectives: (1) assess U.S. agency efforts to account for U.S. nuclear material overseas, (2) assess the Department of Energy's (DOE) and other U.S.
agencies' efforts to monitor and evaluate the physical security conditions of U.S. nuclear material subject to the terms of nuclear cooperation agreements, and (3) describe DOE's activities to secure or remove potentially vulnerable U.S. nuclear material at partner facilities. To assess U.S. agency efforts to account for U.S. nuclear material overseas, we reviewed relevant statutes, including the Atomic Energy Act of 1954 (AEA), as amended, as well as the texts of all current nuclear cooperation agreements. We obtained data from the Nuclear Materials Management and Safeguards System (NMMSS), a database jointly run by DOE and the Nuclear Regulatory Commission (NRC) that, among other things, maintains data on U.S. peaceful-use exports and retransfers of enriched uranium and plutonium that have occurred since 1950, and we reviewed DOE and GAO reviews of the NMMSS database. To assess the reliability of data in the NMMSS database, we interviewed officials from DOE and NRC and a former DOE contractor to identify any limitations in NMMSS's data on the location and status of U.S. material overseas and found these data to be sufficiently reliable for the purposes of accounting for U.S. exports of nuclear material. We compared NMMSS data with other official and unofficial DOE sources of information regarding U.S. nuclear material transfers, including DOE data on nuclear material returns, to determine the reliability of DOE's inventory data for U.S. nuclear material transferred overseas. We reviewed DOE, NRC, and other U.S. agency records and interviewed officials at those agencies to determine the extent to which DOE, NRC, and State are able to identify where U.S. nuclear material was exported, retransferred, and is currently held. We selected a nonprobability sample of partners based on, among other considerations, the quantities of U.S. special nuclear material transferred to them. Results of interviews of nonprobability samples are not generalizable to all partners but provide an understanding of those partners' views of the U.S. government's efforts to account for its nuclear material inventories overseas subject to nuclear cooperation agreement terms. We conducted site visits in four countries holding U.S.-obligated material and interviewed government officials and nuclear facility operators in these countries to discuss material accounting procedures. Further, we interviewed officials from five partners regarding their observations about working with the U.S. government to account for material subject to the terms of nuclear cooperation agreements. We analyzed the texts of administrative arrangements with key countries to determine the extent to which DOE conducts reconciliations of inventories of material transferred between the United States and a partner country. To assess DOE's and other U.S. agencies' efforts to monitor and evaluate the physical security conditions of U.S. nuclear material overseas subject to nuclear cooperation agreement terms and describe DOE's activities to secure or remove potentially vulnerable U.S. nuclear material at partner facilities, we reviewed all U.S. nuclear cooperation agreements in force, as well as other U.S. statutes, IAEA's security guidelines, "The Physical Protection of Nuclear Material and Nuclear Facilities," INFCIRC/225/Rev.4, and other relevant international conventions to determine the extent to which such laws and international conventions provide for DOE and other U.S. agencies to monitor and evaluate the physical security of transferred U.S.
nuclear material subject to U.S. nuclear cooperation agreement terms. We interviewed officials from DOE, NRC, and the Department of State (State) to gain insights into how effective their efforts are and how they might be improved. We selected a nonprobability sample of partners based on, among other considerations, the quantities of U.S. special nuclear material transferred to them and interviewed officials to determine how DOE and other U.S. agencies work with partner countries to exchange views on physical security and the process by which U.S. nuclear material is returned to the United States. Results of interviews of nonprobability samples are not generalizable to all partners but provide an understanding of those partners' views of the U.S. government's efforts to monitor and evaluate the physical security conditions of U.S. nuclear material overseas subject to nuclear cooperation agreement terms. We also obtained and analyzed the records of all available U.S. physical protection visits to partner facilities from 1974 through 2010. We reviewed agency documents and interviewed officials from DOE, NRC, and State regarding the policies and procedures for determining which partners to visit, how they conducted physical protection visits at partner facilities, and mechanisms for following up on the results of these visits. In particular, we compared the sites visited with NMMSS records of U.S. material exported and retransferred, and with other information, to evaluate the extent to which U.S. physical protection visits were made to all sites overseas containing U.S. special nuclear material. We obtained written responses from the Global Threat Reduction Initiative (GTRI) and reviewed other information regarding its program activities. To better understand IAEA's role in maintaining safeguards and evaluating physical security measures, we interviewed IAEA officials and reviewed relevant documents. We conducted this performance audit from September 2010 to June 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.
Appendix II: Current and Previous U.S. Nuclear Cooperation Agreement Partners
The United States currently has 27 agreements in force for peaceful nuclear cooperation with foreign countries, the European Atomic Energy Community (EURATOM), the International Atomic Energy Agency (IAEA), and Taiwan. Figure 1 shows the partner countries with which the United States currently has or previously had a nuclear cooperation agreement. As indicated in figure 1, the United States has nuclear cooperation agreements in force with Argentina, Australia, Bangladesh, Brazil, Canada, China, Colombia, EURATOM, Egypt, India, Indonesia, IAEA, Japan, Kazakhstan, Morocco, Norway, Peru, Russia, South Africa, South Korea, Switzerland, Taiwan, Thailand, Turkey, Ukraine, and the United Arab Emirates. In addition, the United States previously had nuclear cooperation agreements with Chile, the Dominican Republic, Iran, Israel, Lebanon, New Zealand, Pakistan, the Philippines, Uruguay, Venezuela, and Vietnam.
Appendix III: International Guidelines for the Categorization of Nuclear Material
Appendix IV: Comments from the Department of Energy
Appendix V: Comments from the Nuclear Regulatory Commission
Appendix VI: Comments from the Department of State
Appendix VII: GAO Contact and Staff Acknowledgments
GAO Contact
Staff Acknowledgments
In addition to the individual named above, Glen Levis, Assistant Director; Antoinette Capaccio; Julia Coulter; Michelle Munn; and Alison O'Neill made key contributions to this report.
The United States has exported special nuclear material, including enriched uranium, and source material such as natural uranium under nuclear cooperation agreements. The United States has 27 nuclear cooperation agreements for peaceful civilian cooperation. Under the U.S. Atomic Energy Act of 1954 (AEA), as amended, partners are required to guarantee the physical protection of U.S. nuclear material. GAO was asked to (1) assess U.S. agency efforts to account for U.S. nuclear material overseas, (2) assess the Department of Energy's (DOE) and U.S. agencies' efforts to evaluate the security of U.S. material overseas, and (3) describe DOE's activities to secure or remove potentially vulnerable U.S. nuclear material at partner facilities. GAO analyzed agency records and interviewed DOE, Nuclear Regulatory Commission (NRC), Department of State (State), and partner country officials. This report summarizes GAO's classified report issued in June 2011. DOE, NRC, and State are not able to fully account for U.S. nuclear material overseas that is subject to nuclear cooperation agreement terms because the agreements do not stipulate systematic reporting of such information, and there is no U.S. policy to pursue or obtain such information. U.S. nuclear cooperation agreements generally require that partners report inventory information upon request; however, DOE and NRC have not systematically sought such data. DOE and NRC do not have a comprehensive, detailed, current inventory of U.S. nuclear material--including weapon-usable material such as highly enriched uranium (HEU) and separated plutonium--overseas that includes the country, facility, and quantity of material. In addition, NRC and DOE could not fully account for the current location and disposition of U.S. HEU overseas in response to a 1992 congressional mandate. U.S. agencies, in a 1993 report produced in response to the mandate, were able to verify the location of 1,160 kilograms out of 17,500 kilograms of U.S. HEU estimated to have been exported. DOE, NRC, and State have established annual inventory reconciliations with five U.S. partners, but not with the other partners to which the United States has transferred material or with which it trades. Nuclear cooperation agreements do not contain specific access rights that enable DOE, NRC, or State to monitor and evaluate the physical security of U.S. nuclear material overseas, and the United States relies on its partners to maintain adequate security. In the absence of access rights, DOE's Office of Nonproliferation and International Security, NRC, and State have conducted physical protection visits to monitor and evaluate the physical security of U.S. nuclear material at facilities overseas when permitted. However, the agencies have not systematically visited countries believed to be holding the highest proliferation risk quantities of U.S. nuclear material, or systematically revisited facilities not meeting international physical security guidelines in a timely manner. Of the 55 visits made from 1994 through 2010, U.S. teams found that countries met international security guidelines approximately 50 percent of the time. DOE has taken steps to improve security at a number of facilities overseas that hold U.S. nuclear material but faces constraints. DOE's Global Threat Reduction Initiative (GTRI) removes U.S. nuclear material from vulnerable facilities overseas but can only bring back materials that have an approved disposition pathway and meet the program's eligibility criteria.
GTRI officials told GAO that, of the approximately 17,500 kilograms of HEU exported from the United States, 12,400 kilograms are currently not eligible for return to the United States. Specifically, GTRI reported that over 10,000 kilograms of U.S. HEU are believed to be in fuels from reactors in Germany, France, and Japan that have no disposition pathways in the United States and are adequately protected. In addition, according to GTRI, 2,000 kilograms of transferred U.S. HEU are located primarily in European Atomic Energy Community countries and are currently in use or adequately protected. GAO suggests, among other things, that Congress consider directing DOE and NRC to compile an inventory of U.S. nuclear material overseas. DOE, NRC, and State generally disagreed with GAO's recommendations, including the recommendation that they conduct annual inventory reconciliations with all partners, stating that such measures were unnecessary. GAO continues to believe that its recommendations could help improve the accountability of U.S. nuclear material in foreign countries.
GAO_GAO-04-302
Background
DOD increasingly relies on advanced technology in its weapons for effectiveness on the battlefield and actively seeks to include foreign partners in weapon system development and acquisition. DOD's policy also encourages the sale of certain weapons to foreign governments through the Foreign Military Sales Program and direct commercial sales made by companies. While these efforts have the potential to enhance coalition operations and reduce weapons' unit costs, DOD has acknowledged that they also risk making U.S. technologies vulnerable to exploitation. DOD reported that an increasing number of countries have reverse engineering capability and actively seek to obtain U.S. technology through various means. As a method to protect critical technologies, the Under Secretary of Defense for Acquisition, Technology, and Logistics directed the military services in 1999 to implement anti-tamper techniques. While the techniques will not prevent exploitation, they are intended to delay or discourage attempts to reverse engineer critical technologies in a weapon system or to develop countermeasures to a system or subsystem. In 2001, the Under Secretary of Defense for Acquisition, Technology, and Logistics designated the Air Force as the Executive Agent responsible for implementing DOD's anti-tamper policy. The Executive Agent oversees a budget of about $8 million per year to implement the policy and manage anti-tamper technology projects through the Air Force Research Laboratory. DOD, in conjunction with the Air Force Research Laboratory and the Department of Energy's Sandia National Laboratories, also holds periodic information sessions to educate the acquisition community about anti-tamper policy, guidance, and technology developments. In addition, the military services and defense agencies, such as the Missile Defense Agency, have an anti-tamper focal point to coordinate activities. Program managers are responsible for considering anti-tamper measures on any weapon system with critical technologies. Since it is not feasible to protect every technology, program managers are to conduct an assessment to determine if anti-tamper protection is needed. The first step of the decision process is to determine if the system has critical technologies. If program managers determine that the system has no critical technologies, they are to document that decision according to draft guidance. Program managers of systems that contain critical technologies complete the remaining steps of the process. Based on draft guidance, program managers are to conceptually address how they will implement anti-tamper measures at system development, otherwise known as milestone B. DOD's anti-tamper decision process is illustrated in figure 1. Program managers can obtain assistance on their assessments from government laboratories, contractors, and the intelligence community. They are required to document the decision to use or not to use anti-tamper techniques in a classified annex of the program protection plan, which is subject to approval from the program's milestone decision authority. Anti-tamper techniques vary depending on the type of protection the system requires. An example of an anti-tamper technique is software encryption, which scrambles software instructions so that they are unintelligible unless they are first deciphered.
Another example is a thin opaque coating placed on microelectronic components, which makes it difficult to extract or dissect the components without great damage. Programs can apply multiple anti-tamper techniques to a critical technology. For example, a program could encrypt critical data on a microelectronic chip that is also covered with a protective coating. Each layer of protection could act as an obstacle to reverse engineering.
Anti-Tamper Implementation Has Been Hampered by Several Factors, and Support to Address Them Has Been Limited
Implementation of the anti-tamper policy has been hampered by several factors. First, identification of critical technology is subject to interpretation, and program managers and DOD officials can arrive, and have arrived, at different conclusions about what needs to be protected. Second, applying anti-tamper protection can take time and money, which may compete with a program manager's cost and schedule objectives. Finally, some programs found it difficult to apply anti-tamper techniques when the techniques were not fully developed, and others were unsure which techniques were available to them. In general, the later anti-tamper techniques are applied, the more difficult and costly they can be to implement. Thus far, support to help program managers address some of these factors has been limited.
Different Interpretations of Critical Technologies May Increase the Risk of Some Going Unprotected
DOD officials acknowledged that the identification of critical technologies—a basis for determining if anti-tamper protection is needed—is subjective, which can result in different conclusions regarding what needs protection. DOD's Program Managers Anti-Tamper Handbook defines technology as critical if compromise would result in degrading combat effectiveness, shortening the expected combat life of the system, or significantly altering program direction. While a broad definition allows for flexibility to determine what is critical on individual systems, it may increase the risk that the same technology is protected on some systems but not on others or that different conclusions can be reached on whether programs have critical technologies. For example:
An official from an intelligence agency described a case where two services used the same critical technology, but only one identified the technology as critical and provided protection. The intelligence agency official speculated that, if exploited, knowledge gained from the unprotected system could have exposed the technology on both systems to compromise. While both systems were ultimately protected, the intelligence agency official stated that the situation could occur again.
Officials from the Executive Committee told us that two program managers stated that their systems had no critical technologies and therefore were not subject to the anti-tamper policy. Both managers were directed by the Executive Committee to reconsider their determination and apply anti-tamper protection. As a result, one program is in the process of determining which technologies are critical, and the other program is applying anti-tamper protection as a condition of exporting the system.
While different conclusions can be reached regarding what is critical, various organizations can serve as a check on a program manager's assessment. However, no organization has complete information or visibility of all programs across the services and agencies.
For example, the anti-tamper Executive Agent and the military service focal points do not have full knowledge about which program offices have or have not identified critical technologies or applied anti-tamper protection. In 2001, DOD attempted to collect such information, but not all programs provided data, and DOD did not corroborate what was provided to ensure that program officials were consistently assessing critical technologies. The Executive Agent stated that there are no plans to update these data. Overseeing program managers' assessments may be difficult because of limited resources. Specifically, the Executive Agent has two full-time staff, and the military service focal points perform duties other than anti-tamper management. Furthermore, according to a military official, program offices that determine they have no critical technologies are not required to obtain the focal points' concurrence. While other organizations can review a program manager's critical technology assessment as part of various acquisition and export processes, they may not have a full perspective on the assessments made by all programs across the services and the agencies. For example, different milestone decision authorities review only an individual program manager's critical technology decisions for programs coming under their responsibility. Also, the Executive Committee may weigh in on the determinations, but it only reviews exports involving stealth technology. While it was apparent that some systems had critical technologies, their program managers needed assistance to determine which specific technologies were critical. For example, one program office tasked its contractor with identifying critical technologies and has worked for months with the contractor to agree upon and finalize a list of critical technologies on the system. Also, an intelligence official, who is available to assist program managers in assessing their systems' criticality, found that some program managers identified too many technologies as critical and that others did not identify all of the systems' critical elements. In one instance, a program manager indicated that a system had 400 critical technologies, but an intelligence agency narrowed down the list to about 50 that it considered critical. In another case, a program manager concluded that an entire system was one critical technology, but the intelligence agency recommended that the system's technologies be broken down and identified approximately 15 as critical. Although there are various resources to help program managers identify critical technologies, they may have limited utility or may not be known and therefore not requested. For example, the Militarily Critical Technologies List—cited in guidance as a primary reference for program managers—may not be up to date and may not include all technologies, according to some DOD officials. Another resource—the Program Managers Anti-Tamper Handbook—contains information regarding critical technology determinations, but program managers are not always aware that the handbook exists, in part because it is not widely distributed. In addition, the Defense Intelligence Agency can conduct an independent assessment of a system's critical elements and technologies, if requested by the program manager. However, many officials we interviewed were unaware that the agency provides this assistance. According to a military official, the focal points are available to review a program manager's assessment if requested.
In some instances, program managers may have differing perceptions of what constitutes a critical technology. According to DOD's guidance, critical technologies can be either classified or unclassified. However, an anti-tamper focal point stated that there is a perception that the anti-tamper policy applies only to classified programs. We found in one instance that the manager for a weapon program stated that the program did not require anti-tamper protection because it had no critical technologies that were classified.
Applying Anti-Tamper Protection Can Affect Cost and Schedule Objectives
Applying anti-tamper protection takes money and time, which can affect a program manager's cost and schedule objectives. For most programs, anti-tamper implementation is treated as an added requirement that is not separately funded. Program officials acknowledged that anti-tamper costs can be difficult to estimate and isolate because they are intertwined with other costs, such as research and development or production costs. As we have found in prior work, the later a requirement is identified, the more costly it is to achieve. Most programs we visited experienced or estimated cost increases, and some encountered schedule delays as they attempted to apply anti-tamper techniques. For example:
A program official told us the anti-tamper protection for a program upgrade increased both design and production costs for the receiver unit. The program official stated that the anti-tamper protection increased total unit cost by an estimated $31 million, or 10 percent. Program officials expressed concern that unit cost increases may affect procurement decisions, particularly for one service, which is the largest acquirer of units and may be unable to purchase the proposed number.
A program office estimated that it needs a budget increase of $56 million, or 10 percent, to fund the desired anti-tamper protections. Officials from that program told us that the existing program budget was inadequate to fund the added anti-tamper requirements. As a result, the program manager requested, and is waiting for, separate funding before attempting to apply anti-tamper protection to the system.
One program office awarded a contract modification, valued at $12.5 million, for the design, implementation, and testing of anti-tamper techniques. Initially, the contractor had estimated the anti-tamper costs to be $35 million, but the program office did not approve all techniques suggested by the contractor. In addition, the contractor estimated that the recurring unit price for anti-tamper protection on future production lots may be $3,372 per unit. The U.S. government and the contractor have not completed unit price negotiations. Program officials told us that anti-tamper implementation contributed to a 6-month schedule delay.
Another program office estimated that $87 million is needed to protect two critical technologies with multiple anti-tamper techniques. The program office expects that half of the anti-tamper budget will be used to test the techniques. The anti-tamper protection will be applied only if the system is approved for export. At that time, program officials will reexamine the anti-tamper cost estimates. In addition, it may take 5 years to adequately apply the techniques.
Officials from an international program stated that, thus far, they have experienced a 60-day schedule delay while they wait for the contractor to estimate the system's anti-tamper cost.
Program officials stated that the potential for increased costs and additional schedule delays is high. Program officials and representatives from the Executive Committee stated that the cost of anti-tamper protection can be significantly higher for an international program for various reasons, including that the U.S. version and the international version of the system may require different anti-tamper techniques. Cost and schedule impacts may also be more significant if programs are further along in the acquisition process when program offices first attempt to apply anti-tamper protection. Several programs that have experienced significant cost increases or delays were in or beyond the program development phase when they attempted to apply anti-tamper techniques. For example, when the anti-tamper policy was issued, one program had just obtained approval to begin system development, and program officials believed it was too late to implement anti-tamper protection. As a result, the program received an interim waiver of the anti-tamper policy, and it plans to apply anti-tamper techniques only if the system is approved for export. While DOD has not systematically collected cost data for anti-tamper application across programs, DOD officials have stated that it is more cost-effective for programs to consider anti-tamper requirements at program inception rather than later in the acquisition process. An official from a program that applied anti-tamper techniques in the production phase stated that ideally a program should identify its anti-tamper needs, including cost and technology, as early as possible. Recent Army anti-tamper guidance indicates that programs should receive approval for their preliminary anti-tamper plans at the concept stage.
Needs Outpace Availability of Techniques and Tools
Anti-tamper techniques can be technically difficult to incorporate on a weapon system, such as when the technology is immature. DOD is working to oversee the development of generic anti-tamper techniques and tools to help program managers identify potential techniques, but many of these efforts are still in progress, and it is uncertain how they will help program managers. While program managers want knowledge about generic techniques, they ultimately have to design and incorporate the techniques needed for their unique systems to ensure protection of critical technologies and to meet performance objectives. Problems in applying anti-tamper techniques typically arose when the programs were already in design or production or when the techniques were not fully developed or specifically designed for the system. For example:
Officials from a program told us that they experienced problems when applying an anti-tamper protective coating. Because the team applying the coating did not coordinate with teams working on other aspects of the system, the problems with the coating were not discovered until just before production. Prior to an initial development test, the program office received a temporary waiver to test the system without the anti-tamper technique because the coating caused the system to malfunction. The program office and its contractor are working to resolve issues with the anti-tamper technique.
A program office was not able to copy anti-tamper techniques used by a similar program and, therefore, attempted to apply a generically developed anti-tamper coating, which resulted in problems.
Specifically, the coating caused the system to malfunction, so the program office requested assistance from a national laboratory, but the laboratory's solution melted key components of the system. Therefore, the program office requested that the contractor develop a new coating and other methods of protection for the system. The contractor's anti-tamper techniques were successfully applied to the system.
One program required advanced anti-tamper techniques to protect miniaturized internal components, but the technology was still in development and not available for immediate application. According to program officials, research and development of the anti-tamper technique was originally expected to be completed in 2002 and is now estimated to be available in 2006. Currently, officials are uncertain that the technique will meet their needs because the technique is being generically developed. Because it has not been able to apply the anti-tamper technique, the program received approval from DOD to use procedural protections, whereby U.S. military personnel provide physical security for the system when it is used in foreign countries, which includes locking the unit in a protected room to restrict access by foreign nationals. DOD officials stated that physical security can be less reliable than actual anti-tamper protection.
Some program managers told us that they need more help in deciding which anti-tamper techniques they should apply to their individual systems. To provide information, DOD has a classified database that describes current anti-tamper techniques. An Air Force Research Laboratory official stated that the laboratory is in the process of updating this database, developing a rating system for the value of various techniques to be included in the database, and creating a classified technology road map that will prioritize the needs for various anti-tamper techniques. These tools are currently unavailable. DOD and Sandia National Laboratories also have provided information on anti-tamper techniques and tools to program managers at periodic workshops where attendance is voluntary. To further assist program managers, DOD is in the process of overseeing the development of generic anti-tamper techniques, but it is uncertain to what extent such techniques address a program's specific needs. In 2001, DOD issued several contracts to encourage anti-tamper technology development. To date, several defense contractors have provided anti-tamper technology concepts, but according to the Executive Agent, programs need to further develop the technology before it can be applied to and function on a particular system. According to Air Force Research Laboratory and Sandia National Laboratories officials, generic anti-tamper techniques can be considered, but program managers have to design and incorporate the techniques needed for their unique systems. Program managers ultimately have to ensure that the techniques protect critical technologies and do not adversely affect performance objectives for the system.
Conclusions
Anti-tamper protection is one of the key ways DOD can preserve U.S. investment in critical technologies while operating in an environment of coalition warfare and a globalized defense industry. However, implementation of the anti-tamper policy, thus far, has been difficult—in part because DOD has not developed an implementation strategy to ensure success.
For program managers expected to implement anti-tamper protection, the policy can compete with their goals of meeting cost and schedule objectives, particularly when the anti-tamper requirement is identified late in the system development process. Unless DOD provides more oversight and guidance about what needs to be protected and how to do so, it risks program managers making decisions on individual programs that can result in unprotected technologies and have negative consequences for maintaining the military's overall technological advantage.
Recommendations for Executive Action
We are recommending that the Secretary of Defense direct the Under Secretary for Acquisition, Technology, and Logistics and the anti-tamper Executive Agent to take the following five actions to improve oversight and assist program offices in implementing anti-tamper protection on weapon systems. To better oversee identification of critical technologies for all programs subject to the anti-tamper policy, we recommend that the Secretary of Defense direct the Under Secretary for Acquisition, Technology, and Logistics, in coordination with the Executive Agent and the focal points, to (1) collect from program managers the information they are to develop on critical technology identification and (2) appoint appropriate technical experts to centrally review the technologies identified for consistency across programs and services. To better support program managers in the identification of critical technologies, the Secretary of Defense should direct the Under Secretary for Acquisition, Technology, and Logistics, in coordination with the Executive Agent and the focal points, to (1) continue to identify available anti-tamper technical resources, (2) issue updated policy identifying the roles and responsibilities of the technical support organizations, and (3) work with training organizations to ensure that training includes practical information on how to identify critical technologies. To help minimize the impact on program cost and schedule objectives, the Secretary of Defense should direct the Under Secretary for Acquisition, Technology, and Logistics to work with program managers to ensure that the cost and techniques needed to implement anti-tamper protection are identified early in a system's life cycle and to reflect that practice in guidance and decisions. To maximize the return on investment of DOD's anti-tamper technology efforts, the Secretary of Defense should direct the Executive Agent to monitor the value of developing generic anti-tamper techniques and evaluate the effectiveness of the tools, once deployed, in assisting program managers to identify and apply techniques on individual programs. To ensure successful implementation of the anti-tamper policy, the Secretary of Defense should direct the Under Secretary for Acquisition, Technology, and Logistics to develop a business case that determines whether the current organizational structure and resources are adequate to implement anti-tamper protection and, if not, what other actions are needed to mitigate the risk of compromise of critical technologies.
Agency Comments and Our Evaluation
In written comments on a draft of this report, DOD partially concurred with one recommendation and offered an alternative solution, which we did not incorporate. DOD concurred with our remaining four recommendations and provided alternative language for two, which we incorporated as appropriate. DOD's letter is reprinted in the appendix.
DOD partially concurred with our recommendation to collect and centrally review programs' critical technology identifications and proposed, instead, that it develop a standardized process to minimize subjectivity, incorporate that process into anti-tamper policy, and monitor subsequent implementation. As part of its rationale, DOD stated that technical representatives in the services currently work with program managers to implement the anti-tamper policy and that quarterly conferences and seminars are ways to disseminate important information to program managers. We believe DOD's proposal is an improvement over the current process, given that program managers need more technical support and guidance to identify critical technologies. However, we do not believe DOD's proposal is sufficient because a central review mechanism is needed to ensure consistent critical technology identification across the services and the agencies. Without central visibility over program managers' critical technology identifications, the risk exists that the same technology is protected on some systems but not on others. Knowledge gained from unprotected systems can expose critical technology to compromise, which minimizes the impact of anti-tamper protection. In addition, DOD's dissemination of information at conferences may be limited because conference attendance is voluntary and not all program managers may attend and receive the information. Given the need for consistency and a central review, we did not revise our recommendation. DOD concurred with our remaining recommendations but offered alternative language for two, which we incorporated. Specifically, for our recommendation aimed at better supporting program managers in identifying critical technologies, DOD proposed adding language that underscored the need for identifying technical resources and maintaining up-to-date policies on technical support organizations' roles and responsibilities. While DOD has identified some resources and listed them in several documents, it has not developed a comprehensive list of resources to assist program managers. Therefore, we added to our recommendation that DOD continue to identify available anti-tamper technical resources. For our recommendation that DOD evaluate generic anti-tamper techniques, DOD proposed language that offered greater flexibility, which seemed reasonable, and we incorporated it.
Scope and Methodology
To determine how DOD implemented the anti-tamper policy, we collected data and interviewed officials from 17 programs, which were identified either by DOD as having experience with implementing the policy or by us through our review. Twelve of the 17 programs reported that their systems had critical technologies, and most were in various stages of implementing the anti-tamper policy. From those programs we selected six for an in-depth review. We conducted structured interviews with the six programs that had identified critical technologies on their systems to understand their experiences with applying anti-tamper techniques. We selected systems that represented a cross-section of acquisition programs and various types of systems in different phases of development. To the extent possible, when selecting the programs for an in-depth review, we considered factors that may increase a system's vulnerability and exposure to exploitation. We also considered whether the system was approved for export by examining the Defense Security Cooperation Agency's data on foreign military sales.
In addition, we analyzed available program information from the anti-tamper Executive Agent and the military focal points to determine which programs reported critical technologies and anti-tamper plans. DOD acknowledged that the information was incomplete, and we did not independently verify the reliability of the data. We supplemented the program information by interviewing the Executive Agent, the military focal points, representatives from the intelligence community, DOD's Executive Committee, the Department of Energy's Sandia National Laboratories, the Air Force Research Laboratory, defense contractors, and an electronic security specialist. We also discussed DOD's anti-tamper policy with current and former officials from the Office of the Secretary of Defense. To observe DOD's training of program managers, we attended a DOD anti-tamper information workshop and a quarterly review. We analyzed pertinent DOD policies, directives, instructions, and guidance governing anti-tamper protection on systems. We also conducted a literature search to obtain information on program protection and industry practices related to anti-tamper measures. We are sending copies of this report to interested congressional committees; the Secretary of Defense; and the Director, Office of Management and Budget. We will make copies available to others upon request. In addition, this report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have any questions concerning this report, please call me at (202) 512-4841. Others making key contributions to this report were Anne-Marie Lasowski, Yelena T. Harden, Gregory K. Harmon, and Holly Ciampi.

Appendix: Comments from the Department of Defense
The U.S. government has invested hundreds of billions of dollars in developing the most sophisticated weapon systems and technologies in the world. Yet U.S. weapons and technologies are vulnerable to exploitation, which can weaken U.S. military advantage, shorten the expected combat life of a system, and erode the U.S. industrial base's technological competitiveness. In an effort to protect U.S. technologies from exploitation, the Department of Defense (DOD) established in 1999 a policy directing each military service to implement anti-tamper techniques, which include software and hardware protective devices. This report reviews DOD's implementation of the anti-tamper policy as required by the Senate report accompanying the National Defense Authorization Act for Fiscal Year 2004. Program managers have encountered difficulties in implementing DOD's anti-tamper policy on individual weapon systems. First, defining a critical technology—a basis for determining the need for anti-tamper—is subjective, which can result in different conclusions regarding what needs anti-tamper protection. While different organizations can check on program managers' assessments, no organization has complete information or visibility across all programs. Some program managers said they needed assistance in determining which technologies were critical, but resources to help them were limited or unknown and therefore not requested. Second, anti-tamper protection is treated as an added requirement and can affect a program's cost and schedule objectives, particularly if the program is further along in the acquisition process. Programs GAO contacted experienced or estimated cost increases, and some encountered schedule delays when applying anti-tamper protection. Officials from one program stated that their existing budget was insufficient to cover the added cost of applying anti-tamper protection and that they were waiting for separate funding before attempting to apply such protection. Finally, anti-tamper techniques can be technically difficult to incorporate in some weapon systems—particularly when the techniques are not fully developed or when the systems are already in design or production. One program that had difficulty incorporating the techniques resorted to alternatives that provided less security. While DOD is overseeing the development of generic anti-tamper techniques and tools to help program managers, many of these efforts are still in progress, and program managers ultimately have to design and incorporate techniques needed for their unique systems.
Background

Each year, OMB and federal agencies work together to determine how much the government plans to spend on IT and how these funds are to be allocated. Over the past decade, federal IT spending has risen to an estimated $64 billion in fiscal year 2007. OMB plays a key role in overseeing these IT investments and how they are managed, stemming from its predominant mission: to assist the President in overseeing the preparation of the federal budget and to supervise budget administration in Executive Branch agencies. In helping to formulate the President's spending plans, OMB is responsible for evaluating the effectiveness of agency programs, policies, and procedures; assessing competing funding demands among agencies; and setting funding priorities. OMB also ensures that agency reports, rules, testimony, and proposed legislation are consistent with the President's budget and with administration policies. In carrying out these responsibilities, OMB depends on agencies to collect and report accurate and complete information; these activities depend, in turn, on agencies having effective IT management practices. To drive improvement in the implementation and management of IT projects, Congress enacted the Clinger-Cohen Act in 1996 to further expand the responsibilities of OMB and the agencies under the Paperwork Reduction Act. In particular, the act requires agency heads, acting through agency chief information officers (CIOs), to, among other things, better link their IT planning and investment decisions to program missions and goals and to implement and enforce IT management policies, procedures, standards, and guidelines. OMB is required by the Clinger-Cohen Act to establish processes to analyze, track, and evaluate the risks and results of major capital investments in information systems made by executive agencies. OMB is also required to report to Congress on the net program performance benefits achieved as a result of major capital investments in information systems that are made by executive agencies. OMB is aided in its responsibilities by the Chief Information Officers Council as described in the E-Government Act of 2002. The council is designated the principal interagency forum for improving agency practices related to the design, acquisition, development, modernization, use, operation, sharing, and performance of federal government information resources. Among the specific functions of the CIO Council are the development of recommendations for the Director of OMB on government information resources management policies and requirements and the sharing of experiences, ideas, best practices, and innovative approaches related to information resources management.

Prior Review on Governmentwide IT Investment Management Has Identified Weaknesses

Only by effectively and efficiently managing their IT resources through a robust investment management process can agencies gain opportunities to make better allocation decisions among many investment alternatives and further leverage their investments. However, the federal government faces enduring IT challenges in this area. For example, in January 2004 we reported on mixed results of federal agencies' use of IT investment management practices. Specifically, we reported that although most of the agencies had IT investment boards responsible for defining and implementing the agencies' IT investment management processes, no agency had fully implemented practices for monitoring the progress of its investments.
Executive-level oversight of project-level management activities provides organizations with increased assurance that each investment will achieve the desired cost, benefit, and schedule results. Accordingly, we made several recommendations to agencies to improve their practices.

OMB's Management Watch List Intended to Correct Project Weaknesses and Business Case Deficiencies

In carrying out its responsibilities to assist the President in overseeing the preparation of the federal budget, OMB reported in the President's fiscal year 2004 budget that there were 771 IT investment projects on what was called the At-Risk List (later referred to as the Management Watch List). This list included mission-critical projects that did not successfully demonstrate sufficient potential for success based on the agency Capital Asset Plan and Business Case, also known as the exhibit 300, or did not adequately address IT security. To identify projects for inclusion on the Management Watch List, OMB used scoring criteria contained in OMB Circular A-11 that the agency established for evaluating the justifications for funding that federal agencies submitted for major investments and for ensuring that agency planning and management of capital assets is consistent with OMB policy and guidance. This evaluation is carried out as part of OMB's responsibility to help ensure that investments of public resources are justified and that public resources are wisely invested. In presenting the fiscal year 2005 budget, OMB reported that there were 621 major projects on the Management Watch List, consisting of mission-critical projects that needed to improve performance measures, project management, and IT security. OMB staff described this assessment as again being based on evaluations of the exhibit 300s that agencies submitted to justify project funding. Agencies were required to successfully correct identified project weaknesses and business case deficiencies; otherwise, they risked OMB's placing limits on their spending. In April 2005, we reported on OMB's development of its Management Watch List. We concluded that OMB's scoring of the exhibit 300s addressed many critical IT management areas and promoted the improvement of investments. However, because OMB did not compile a single aggregate list and had not developed a structured, consistent process for deciding how to follow up on corrective actions being taken by the agencies, the agency missed the opportunity to use its scoring process more effectively to identify management issues that transcended individual agencies, to prioritize follow-up actions, and to ensure that high-priority deficiencies were addressed. To take advantage of this potential benefit, we recommended that OMB compile a single aggregate list and use the list as the basis for selecting projects for follow-up and for tracking follow-up activities by developing specific criteria for prioritizing the IT projects included on the list. OMB has continued to report on its Management Watch List in the most recent President's budget request. Table 1 shows the budget information for projects on the Management Watch List for fiscal years 2004, 2005, 2006, and 2007. Table 2 shows the number of projects on the Management Watch List for fiscal years 2004, 2005, 2006, and 2007.
OMB’s August 2005 Memorandum on Improving Performance of High Risk IT Projects To continue improving IT project planning and execution, OMB issued a memorandum in August 2005 to all federal chief information officers, directing them to begin taking steps to identify IT projects that are high risk and to report quarterly on their performance. As originally defined in OMB Circular A-11 and subsequently reiterated in the August 2005 memorandum, high risk projects are those that require special attention from oversight authorities and the highest levels of agency management because of one or more of the following four reasons: The agency has not consistently demonstrated the ability to manage complex projects. The project has exceptionally high development, operating, or maintenance costs, either in absolute terms or as a percentage of the agency’s total IT portfolio. The project is being undertaken to correct recognized deficiencies in the adequate performance of an essential mission program or function of the agency, a component of the agency, or another organization. Delay or failure of the project would introduce for the first time unacceptable or inadequate performance or failure of an essential mission function of the agency, a component of the agency, or another organization. As directed in the memorandum, by August 15, 2005, agencies in collaboration with OMB were required to initially identify their high risk IT projects using these criteria. In addition, OMB subsequently provided additional instructions through e-mails to agencies. Through these instructions, OMB directed agencies to declare all e-government and line of business (LOB) initiatives managed by their agency as high risk. In addition, the instructions specified that partner agencies consider investments associated with migrations to an e-government or LOB initiative as high risk until they have completed migration or OMB determines they should no longer be designated as high risk. For the identified high risk projects, beginning September 15, 2005, and quarterly thereafter, CIOs were to assess, confirm, and document projects’ performance. Specifically, agencies were required to determine, for each of their high risk projects, whether the project was meeting one or more of four performance evaluation criteria: (1) establishing baselines with clear cost, schedule, and performance goals; (2) maintaining the project’s cost and schedule variances within 10 percent; (3) assigning a qualified project manager; and (4) avoiding duplication by leveraging inter-agency and governmentwide investments. If a high risk project meets these four performance evaluation criteria, agencies are instructed to document this using a standard template provided by OMB and provide this template to oversight authorities (e.g., OMB, agency inspectors general, agency management, and GAO) on request. If any of the identified high risk projects have performance shortfalls, meaning that the project did not meet one or more of the four performance evaluation criteria, agencies are required to document the information on these projects on the standard template and provide it to OMB along with copies to the agency inspector general. For each of these projects, agencies must specify, using the template, (1) the specific performance shortfalls, (2) the specific cause of the shortfall, (3) a plan of action and milestones actions needed to correct each shortfall, and (4) the amount and source of additional funding needed to improve performance. 
Federal Agencies Identified 226 Projects as High Risk

In response to OMB's August 2005 memorandum, as of March 2006, the 24 CFO agencies identified 226 IT projects as high risk, totaling about $6.4 billion and representing about 10 percent of the President's total IT budget request for fiscal year 2007. According to the agencies, these projects were identified as such mainly because of one or more of the four reasons provided in OMB's memorandum. About 70 percent of the projects identified were reported as high risk because their delay or failure would impact the agency's essential business functions. Moreover, about 35 percent of the high risk projects—79 investments totaling about $2.2 billion in fiscal year 2007 planned funding—were reported as having performance shortfalls, primarily because of cost and schedule variances exceeding 10 percent.

High Risk Projects Identified Total About $6.4 Billion for Fiscal Year 2007

As of March 2006, the 24 CFO agencies identified 226 IT investments as high risk. Collectively, five agencies—the Small Business Administration, National Aeronautics and Space Administration, Office of Personnel Management, and the Departments of Veterans Affairs and Homeland Security—identified about 100 of these projects. According to the President's most recent budget, about $6.4 billion has been requested for fiscal year 2007 by the 24 CFO agencies for the 226 high risk projects. Five of these agencies—the Departments of Defense, Homeland Security, Transportation, Veterans Affairs, and Justice—account for about 70 percent of the total high risk budget, totaling about $4.5 billion. Table 3 shows the number of high risk projects and associated funding reported by each of the 24 CFO agencies.

Most Projects Reported as High Risk Because Their Delay or Failure Could Impact Mission Performance

Agencies reported 195 of the 226 projects as meeting one or more of the reasons defined by OMB. Specifically, more than half of the agencies reported that their IT projects were identified as high risk because delay or failure of the project would result in inadequate performance or failure of an essential mission function. About one fourth of the projects were determined to be high risk because of high development, operating, or maintenance costs. In addition, three agencies identified 11 projects as high risk because of the inability to manage complex projects. Table 4 summarizes the OMB reasons for high risk designations. A total of 31 projects were identified as high risk using rationale other than OMB's four criteria. In these cases, agencies' reasons included weaknesses in business cases or the lack of approved baselines.

Agencies Identified 79 Projects with Performance Shortfalls

Agencies identified about 35 percent of the high risk projects as having performance shortfalls. Specifically, for the last reporting quarter—March 2006—agencies identified 79 investments, totaling about $2.2 billion in fiscal year 2007 planned funding, as having performance shortfalls. The most frequent reason provided for the shortfalls was cost and schedule variances exceeding 10 percent. By contrast, only two projects were reported by agencies as having an overlapping or duplicative IT investment. Since September 2005, the number of projects with performance shortfalls has increased—from 58 projects in September 2005 to 67 projects in December 2005 to 79 in March 2006.
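As a quick check of the shares cited above (illustrative arithmetic only, using the rounded figures reported in the text and the roughly $64 billion fiscal year 2007 IT estimate noted earlier), the 79 projects with shortfalls are about 35 percent of the 226 high risk projects, and the $6.4 billion requested for high risk projects is about 10 percent of total IT spending:

```python
# Illustrative arithmetic using the rounded figures cited in the text.
high_risk_projects = 226
projects_with_shortfalls = 79
high_risk_request_billions = 6.4
total_it_spending_billions = 64.0  # estimated federal IT spending for fiscal year 2007

print(f"{projects_with_shortfalls / high_risk_projects:.0%} of high risk projects have shortfalls")  # ~35%
print(f"{high_risk_request_billions / total_it_spending_billions:.0%} of the IT total is high risk")  # 10%
```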
As figure 1 illustrates, for the September 2005, December 2005, and March 2006 reporting periods, agencies reported that most of the weaknesses were cost and schedule variances not within 10 percent and that the number of projects without clear baseline information on cost, schedule, and performance goals increased. Figure 2 illustrates the number of agency high risk projects with and without shortfalls as of March 2006. The majority of the agencies reported that their high risk projects did not have performance shortfalls in any of the four areas identified by OMB. In addition, six agencies—the Departments of Commerce, Energy, Housing and Urban Development, and Labor; the National Aeronautics and Space Administration; and the National Science Foundation—reported that none of their high risk projects experienced any performance shortfalls. For the identification of all high risk projects by agency, including funding, reasons for the high risk designation, specific performance shortfalls, and planned improvement efforts, see appendix III.

Processes Exist to Identify and Oversee High Risk Projects, but Opportunities Exist to Improve These Processes

Although agencies, with OMB's assistance, generally identified their high risk projects by evaluating their IT portfolio against the four criteria specified by OMB, the criteria were not always consistently applied. In addition, OMB did not define a process for updating the list. To oversee high risk projects, agencies reported having investment management practices in place; however, we have previously reported on agencies' maturing investment management processes and have made several recommendations to improve them. OMB staff perform their oversight of high risk projects by reviewing the quarterly performance reports, but they do not have a single aggregate list for analyzing projects and tracking progress on a governmentwide basis. Unless they address the issues regarding the identification, update, and oversight of high risk projects, OMB and agencies could be missing opportunities to perform these activities more effectively.

High Risk Projects Identified Primarily Using OMB's Criteria, but Criteria Not Always Consistently Applied

Agencies primarily used the criteria defined in OMB's August 2005 memorandum in determining the initial list of high risk projects; however, the criteria were not always consistently applied. Specifically, most agencies reported that officials from the Office of the CIO compared the criteria against their current portfolio to determine which projects met OMB's definition. They then submitted the list to OMB for review. According to OMB and agency officials, after the submission of the initial list, examiners at OMB worked with individual agencies to identify or remove projects as appropriate. According to most agencies, the final list was then approved by their CIO. However, OMB's criteria for identifying high risk projects were not always consistently applied. In several cases, agencies did not use OMB's criteria to identify high risk projects. As previously discussed, some agencies reported using other reasons to identify a total of 31 high risk projects. For example, the Department of Homeland Security reported investments that were high risk because they had weaknesses associated with their business cases based on the evaluation by OMB.
The Department of Transportation reported projects as high risk because two did not have approved baselines, and four had incomplete or poor earned value management (EVM) assessments. Regarding the first criterion for high risk designation—the agency has not consistently demonstrated the ability to manage complex projects—only three agencies reported having projects meeting this criterion. This appears to be somewhat low, considering that we and others have previously reported on weaknesses in numerous agencies' ability to manage complex projects. For example, we have reported in our high risk series on major programs and operations that need urgent attention and transformation in order to ensure that our federal government functions in the most economical, efficient, and effective manner possible. Specifically, the Department of Defense's efforts to modernize its business systems have been hampered because of weaknesses in practices for (1) developing and using an enterprise architecture, (2) instituting effective investment management processes, and (3) establishing and implementing effective systems acquisition processes. We concluded that the Department of Defense, as a whole, remains far from where it needs to be to effectively and efficiently manage an undertaking with the size, complexity, and significance of its departmentwide business systems modernization. We also reported that, after almost 25 years and $41 billion, efforts to modernize the air traffic control program of the Federal Aviation Administration, the Department of Transportation's largest component, are far from complete and that projects continue to face challenges in meeting cost, schedule, and performance expectations. However, neither the Department of Defense nor the Department of Transportation identified any projects as being high risk because of their inability to manage complex projects. While agencies have reported a significant number of IT projects as high risk, we identified other projects on which we have reported and testified that appear to meet one or more of OMB's criteria for high risk designation—including high development or operating costs and recognized deficiencies in performance—but that were not identified as high risk. Examples we have recently reported include the following projects: The Decennial Response Integration System of the Census Bureau is intended to integrate paper, Internet, and telephone responses. Its high development and operating costs are expected to make up a large portion of the $1.8 billion program to develop, test, and implement decennial census systems. In March 2006, we testified that the component agency had established baseline requirements for the acquisition but that the bureau had not yet validated the requirements or implemented a process for managing them. We concluded that, until these and other basic contract management activities were fully implemented, the project faced increased risks that the system would experience cost overruns, schedule delays, and performance shortfalls. The National Polar-orbiting Operational Environmental Satellite System—an initiative managed by the Departments of Commerce and Defense and the National Aeronautics and Space Administration—is to converge two satellite programs into a single satellite program capable of satisfying both civilian and military requirements. In November 2005, we reported that the system was a troubled program because of technical problems on critical sensors, escalating costs, poor management at multiple levels, and the lack of a decision on how to proceed with the program.
Over the last several years, this system has experienced continual cost increases to about $10 billion and schedule delays, requiring difficult decisions about the program's direction and capabilities. More recently, we testified that the program is still in trouble and that its future direction is not yet known. While the program office has corrective actions under way, we concluded that, as the project continues, it will be critical to ensure that the management issues of the past are not repeated. The Rescue 21 project is a planned coastal communications system of the Department of Homeland Security. We recently reported that inadequacies in several areas contributed to Rescue 21 cost overruns and schedule delays. These inadequacies occurred in requirements management, project monitoring, risk management, contractor cost and schedule estimation and delivery, and executive-level oversight. Accordingly, the estimated total acquisition cost has increased from $250 million in 1999 to $710.5 million in 2005, and the timeline for achieving full operating capability has been extended from 2006 to 2011. For the projects we identified as appearing to meet OMB's criteria for high risk, the responsible agencies reported that they did not consider these investments to be high risk projects for reasons such as (1) the project was not a major investment; (2) agency management is experienced in overseeing projects; or (3) the project did not have weaknesses in its business case. In particular, one agency stated that its list does not include all high risk projects but only those that are the highest priority among its high risk investments. However, none of the reasons provided are associated with OMB's high risk definition. While OMB staff acknowledged that the process for identifying high risk projects might not catch all projects meeting the criteria, they stated that they have other mechanisms for determining the performance of all IT projects, including high risk projects, such as the review of earned value management data. Nevertheless, without consistent application of the high risk criteria, OMB and agency executives cannot have the assurance that all projects that require special attention have been identified.

Process for Updating High Risk Projects Is Not Defined

OMB's guidance does not define a process for updating high risk projects that have been identified, including identifying new projects and removing current ones. In the absence of such guidance, agencies use different procedures, for example, for removing projects from the list. Specifically, some agencies reported removing projects from the list if they no longer meet OMB's criteria, and other agencies reported removing a project if it (1) is completed or moves into operations; (2) has become compliant with its cost and schedule baseline goals; (3) is no longer considered a major IT investment; (4) is on track and maintains this status within specific cost, schedule, and performance goals for a minimum of two quarters; or (5) addresses major weaknesses such as earned value management requirements. While OMB staff acknowledged that there is no defined process for updating the set of projects, they stated that agencies are in constant communication with individual analysts at OMB through e-mails, phone calls, or meetings to identify new high risk projects if they meet the definition or remove old ones if they no longer meet the criteria.
Nevertheless, without guidance for updating high risk projects on a continuing basis, OMB and agency executives cannot be assured they have identified the appropriate projects that should be designated as high risk.

OMB and Agencies Can Further Improve Oversight of High Risk Projects

All 24 CFO agencies reported having procedures for overseeing high risk projects. While some agencies reported using their current investment management processes for specific oversight, other agencies established additional oversight procedures. For example, one agency developed and documented specific procedures for sending a quarterly data call to the program offices that have high risk investments. The program office then completes a template capturing current performance information and sends it to the Office of the CIO for review and feedback. The CIO office forwards it to OMB, as required. In contrast, some other agencies reported that these projects are managed as part of their current investment review process—requiring the investment review board to perform control reviews of them along with other investments. While procedures for overseeing high risk projects are positive steps, we have previously reported that agencies generally have weaknesses in project oversight. In particular, we reported that agencies did not always have important mechanisms in place for agencywide investment management boards to effectively control investments, including decision-making rules for project oversight, early warning mechanisms, and/or requirements that corrective actions for underperforming projects be agreed upon and tracked. To remedy these weaknesses, we have made several recommendations to improve processes for effective oversight, many of which remain open. Until agencies establish the practices needed to effectively manage IT investments, including those that are high risk, OMB, agency executives, and Congress cannot be assured that investments are being properly managed. OMB's oversight of high risk projects, in turn, entails reviewing the performance reports on a quarterly basis. Specifically, according to OMB staff, individual analysts review the quarterly performance reports of projects with shortfalls to determine how well the projects are progressing and whether the actions described in the planned improvement efforts are adequate. These officials also stated that the OMB analysts review the quarterly reports for completeness and consistency with other performance data already received on IT projects. This includes quarterly e-Gov Scorecards, earned value management data, and the exhibit 300. For projects without shortfalls, officials stated that while the memorandum does not direct agencies to submit these reports, agencies communicate the status of these projects to the appropriate officials. According to OMB, the reporting requirement for high risk projects enhances oversight by capturing all key elements in a single report and providing oversight authorities and agency management with early indicators of any problems or shortfalls, since the reporting is conducted on a quarterly basis. However, OMB does not maintain a single aggregate list of high risk projects. OMB staff told us they do not construct a single list because they do not see such an activity as necessary in achieving the intent of the guidance—to improve project planning and execution.
Consistent with our Management Watch List observations and recommendations, we believe that by not having a single list, OMB is not fully exploiting the opportunity to use the quarterly reports as a tool for analyzing high risk projects on a governmentwide basis and for tracking governmentwide progress. It is limiting its ability to identify and report on the full set of IT investments across the federal government that require special oversight and greater agency management attention.

High Risk and Management Watch List Projects Identified Using Different Criteria

The high risk projects and Management Watch List projects are identified using different sets of criteria. In addition, while the identification of high risk projects centers on an agency's oversight of the project's performance, the Management Watch List focuses more on a project's planning. As discussed previously, the high risk list consists of projects identified by the agencies with the assistance of OMB, using specific criteria established by OMB, including those in memorandum M-05-23. These projects are reported quarterly by the agencies to OMB on a template focusing on each project's performance in four specified areas and any noted shortfalls. The agencies are also to report planned corrective actions addressing the shortfalls. On the other hand, OMB determines projects to be included on its Management Watch List based on an evaluation of exhibit 300 business cases that agencies submit for major projects as part of the budget development process. This evaluation is part of OMB's responsibility for helping to ensure that investments of public resources are justified and that public resources are wisely invested. Each exhibit 300 is assigned a score in 10 different categories, the results of which determine whether an individual project (or investment) warrants being included on the Management Watch List. This may result in OMB's asking the agency to submit a remediation plan to address the weaknesses identified in the agency's business case. While the criteria for identifying the Management Watch List projects and high risk projects differ, Management Watch List projects can also be high risk. For example, of the 226 high risk projects, agencies identified 37 as also being on OMB's Management Watch List, 19 of which had performance shortfalls. According to OMB staff, identifying and addressing poorly planned projects as part of the Management Watch List process could result in fewer projects with performance shortfalls over time. Nevertheless, both types of projects require close attention because of their importance in supporting critical functions and the likelihood that performance problems associated with them could potentially result in billions of taxpayers' dollars being wasted if they are not detected early.

Conclusions

OMB's and agencies' efforts to identify 226 high risk projects are important steps in helping focus management attention on critically important IT projects. Although many projects were appropriately identified as high risk initiatives consistent with OMB's guidance, OMB's criteria were not always consistently applied. As a result, projects that appear to be high risk were not always identified as such. Further, because OMB has not provided guidance on how the initial list of high risk projects should be updated, agencies do not have a consistent process for doing so.
Agencies and OMB have both taken actions to ensure oversight of the high risk projects. Specifically, agencies are using existing oversight procedures or ones they have specifically established for the high risk projects, and OMB is reviewing quarterly reports. However, weaknesses remain: agencies need to implement specific recommendations we have previously made to improve their practices for overseeing projects. Finally, OMB has not developed a single aggregate list of high risk projects to track progress, perform governmentwide analysis, and report the results to Congress. While the criteria for high risk projects and those on the Management Watch List differ, both types of projects support critical business functions and could experience performance problems that could become costly to address if they are not detected early. Given this, the Management Watch List projects and the high risk projects both require continued attention.

Recommendations for Executive Action

In order for OMB to take advantage of the potential benefits of using the quarterly performance reports as a tool for identifying and overseeing high risk projects on a governmentwide basis, we are recommending that the Director of OMB take the following three actions:

Direct federal agency CIOs to ensure that they are consistently applying the criteria defined by OMB.

Establish a structured, consistent process to update the initial list of high risk projects on a regular basis, including identifying new projects and removing previous ones to ensure the list is current and complete.

Develop a single aggregate list of high risk projects and their deficiencies and use that list to report to Congress progress made in correcting high risk problems, actions under way, and further actions that may be needed. OMB could consider using the information we have developed in appendix III as a starting point for developing this single list.

In implementing these recommendations, OMB should consider working with the CIO Council to help ensure governmentwide acceptance of these actions. Because we have outstanding recommendations aimed at (1) improving agencies' investment management practices and (2) using the Management Watch List as a tool for analyzing, setting priorities, and following up on IT projects, we are not making any new recommendations in this report regarding these issues.

Agency Comments and Our Evaluation

OMB's Administrator for E-Government and Information Technology provided written comments on a draft of this report (reprinted in app. II). In these comments, OMB stated that it appreciated our careful review of OMB's process for identifying and overseeing high risk projects. However, the agency disagreed with our recommendations and made other observations. In its comments, OMB stated that it is concerned about our interpretation of the goals and intent of the high risk process in comparison to GAO's high risk list. Our intent is not to confuse the goals and intent of the two efforts. Nevertheless, as noted in our report, some major programs and operations have been placed on our high risk list because of weaknesses in key agency management practices, and this is consistent with OMB's first criterion for high risk designation—the agency has not consistently demonstrated the ability to manage complex projects.
In its comments, OMB also observed that the policy for identifying and overseeing high risk projects is separate and apart from OMB's Management Watch List and presents oversight authorities with information that differs in focus, timing, and expected results. While we agree with OMB that the two policies are different and acknowledge this in our report, we noted in the report that Management Watch List projects can also be high risk. We believe projects from both lists warrant close attention because of their importance in supporting critical functions and the likelihood that performance problems associated with them could potentially result in billions of taxpayers' dollars being wasted if they are not detected early. Regarding our recommendations to direct agencies to consistently apply the criteria for designating projects as high risk and to establish a structured, consistent process to update the initial list of high risk projects, OMB stated that the process and criteria for designating projects as high risk are clear and that some flexibility in the application of the criteria is essential. While some flexibility in the application of the criteria may be appropriate, we believe these criteria should be applied more consistently so that projects that clearly appear to meet them, such as those we mention in the report, are identified. OMB also disagreed with our recommendation to develop a single aggregate list of projects and their deficiencies to perform adequate oversight and management. As noted in the report, we believe that, by not having this list, OMB is not fully exploiting the opportunity to use the agencies' quarterly reports as a tool for analyzing high risk projects on a governmentwide basis and for tracking governmentwide progress. In addition, OMB is limiting its ability to identify and report on the full set of IT investments across the federal government that require special oversight and greater agency management attention. As agreed with your offices, unless you publicly announce its contents earlier, we plan no further distribution of this report until 30 days from the date of this letter. At that time, we will send copies to other interested congressional committees, the Director of the Office of Management and Budget, and other interested parties. Copies will also be made available at no charge on our Web site at www.gao.gov. If you have any questions on matters discussed in this report, please contact me at (202) 512-9286 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

Our objectives were to (1) provide a summary of high risk projects that identifies by agency the number of high risk projects, their proposed budget for fiscal year 2007, agency reasons for the high risk designation, and reported performance shortfalls; (2) determine how high risk projects were identified and updated and what processes and procedures have been established to effectively oversee them; and (3) determine the relationship between the high risk list and OMB's Management Watch List. We conducted our work at OMB and the 24 chief financial officer (CFO) agencies in Washington, D.C.
The 24 agencies are the departments of Agriculture, Commerce, Defense, Education, Energy, Health and Human Services, Homeland Security, Housing and Urban Development, the Interior, Justice, Labor, State, Transportation, the Treasury, and Veterans Affairs; and the Environmental Protection Agency, General Services Administration, National Aeronautics and Space Administration, National Science Foundation, Nuclear Regulatory Commission, Office of Personnel Management, Small Business Administration, Social Security Administration, and U.S. Agency for International Development. To address the first objective, we requested and reviewed documentation that identifies, for each agency, the number of high risk projects, their proposed budget for fiscal year 2007, agency reasons for the high risk designation, and reported performance shortfalls. In particular, we reviewed agency performance reports on high risk projects for September and December 2005 and March 2006 that identified high risk projects and planned improvement efforts, if any. We did not independently verify the information contained in these performance reports. However, we asked all 24 CFO agencies to confirm the data in appendix III regarding their high risk projects. Furthermore, we obtained the funding information for all high risk projects for fiscal years 2005, 2006, and 2007 from the Report on IT Spending for the Federal Government, Exhibit 53. We did not verify these data. To address the second objective, we used a structured data collection instrument to better understand the 24 CFO agencies’ processes and procedures for identifying and overseeing high risk projects. All 24 agencies responded to our structured questionnaire. We did not verify the accuracy of the agencies’ responses; however, we reviewed supporting documentation that selected agencies provided to validate their responses. We contacted agency officials when necessary for follow-up information. We then analyzed the agencies’ responses. Moreover, we identified and reviewed prior GAO reports on projects with weaknesses that met OMB’s high risk definition. Finally, to gain insight into OMB’s processes and procedures to oversee the high risk list, we reviewed related policy guidance, including its Memorandum on Improving IT Project Planning and Execution (M-05-23, dated August 4, 2005), and the Clinger-Cohen Act. We also interviewed OMB staff including the chief of the Information Technology and Policy Branch. To address the third objective, we interviewed OMB staff who are responsible for developing and monitoring the high risk list and Management Watch List, including the chief of the Information Technology and Policy Branch. In addition, we reviewed our prior work on OMB’s Management Watch List, (GAO-05-276), to better understand the processes for placing projects on the Management Watch List and following up on their corrective actions. Finally, we requested information from the 24 CFO agencies on which of their high risk projects were also on the Management Watch List. Two of the 24 agencies did not identify how many of their high risk projects were also on the Management Watch List. We conducted our work in Washington, D.C., from October 2005 through May 2006 in accordance with generally accepted government auditing standards. 
Appendix II: Comments from the Office of Management and Budget

Appendix III: Summary of High Risk IT Projects by Department or Agency

Appendix III presents, for each department or agency, each high risk IT project's fiscal year 2005 actual, fiscal year 2006 enacted, and fiscal year 2007 requested funding (in millions of dollars); the reasons for the high risk designation; any reported performance shortfalls (such as unclear baselines, cost or schedule variances not within 10 percent, a project manager who is not qualified, or duplication with other investments); and the agency's planned improvement efforts (such as rebaselining cost and schedule, completing project manager certification, obtaining investment review board approval of new baselines, or, in some cases, terminating the investment). The reasons for the high risk designation are keyed as follows: A = the agency has not consistently demonstrated the ability to manage complex projects; B = the project has exceptionally high development, operating, or maintenance costs, either in absolute terms or as a percentage of the agency's total IT portfolio; C = the project is being undertaken to correct recognized deficiencies in the adequate performance of an essential mission program or function of the agency, a component of the agency, or another organization; D = delay or failure of the project would introduce for the first time unacceptable or inadequate performance or failure of an essential mission function of the agency, a component of the agency, or another organization; E = other.
millions) millions) millions) N/A millions) millions) millions) Reasons for high risk designation A=The agency has not consistently demonstrated the ability to manage complex projects. B=The project has exceptionally high development, operating, or maintenance costs, either in absolute terms or as a percentage of the agency’s total IT portfolio. C=The project is being undertaken to correct recognized deficiencies in the adequate performance of an essential mission program or function of the agency, a component of the agency, or another organization. E=Other. FY2006 enacted (in millions) FY2007 request (in millions) Reasons for high risk designation A=The agency has not consistently demonstrated the ability to manage complex projects. B=The project has exceptionally high development, operating, or maintenance costs, either in absolute terms or as a percentage of the agency’s total IT portfolio. C=The project is being undertaken to correct recognized deficiencies in the adequate performance of an essential mission program or function of the agency, a component of the agency, or another organization. E=Other. millions) millions) A=The agency has not consistently demonstrated the ability to manage complex projects. B=The project exceptionally high development, operating, or maintenance costs, either in absolute terms or as a percentage of the agency’s total IT portfolio. C=The project is being undertaken to correct recognized deficiencies in the adequate performance of an essential mission program or function of the agency, a component of the agency, or another organization. E=Other. millions) millions) millions) Unclear baselines, cost and schedule variances not within 10 percent, and project manager is not qualified. The Office of Personnel Management’s project coordinator will work with OMB staff and interagency Information Systems Security Line of Business participants to clarify governmentwide and agency goals. Once the goals are clarified, the baseline cost and schedule will be developed. Agency will assess the project manager against the agency’s qualification guidelines. This project is still in the planning phase and a baseline is being developed. Corrective actions not reported. N/A millions) millions) millions) The Human Resources Management Line of Business/Human Resource Development Project Management Office will closely monitor the delivery of activities on the enterprise architecture, Workforce Development Roadmap, and performance management sub- projects. OPM requested the completion of remaining baseline corrections to resolve located schedule errors. Cost and schedule variances not within 10 percent. For both the cost/ and schedule variances, the agency is updating out estimate to complete to reflect a realistic timeline given the current circumstances with external stakeholders. N/A millions) millions) millions) Reasons for high risk designation A=The agency has not consistently demonstrated the ability to manage complex projects. B=The project exceptionally high development, operating, or maintenance costs, either in absolute terms or as a percentage of the agency’s total IT portfolio. C=The projects is being undertaken to correct recognized deficiencies in the adequate performance of an essential mission program or function of the agency, a component of the agency, or another organization. E=Other. millions) millions) FY2005 actuals (in millions) millions) millions) Project manager is not yet qualified. 
Original project deliverable for fiscal year 2006 was deferred, with no project manager required. New project manager is receiving training as part of Office of CIO directed formal training activity. A=The agency has not consistently demonstrated the ability to manage complex projects. B=The project has exceptionally high development, operating, or maintenance costs, either in absolute terms or as a percentage of the agency’s total IT portfolio. C=The project is being undertaken to correct recognized deficiencies in the adequate performance of an essential mission program or function of the agency, a component of the agency, or another organization. E=Other. According to agency officials, the fiscal year 2006 request was enacted for these investments. FY2006 enacted (in millions) millions) A=The agency has not consistently demonstrated the ability to manage complex projects. B=The project has exceptionally high development, operating, or maintenance costs, either in absolute terms or as a percentage of the agency’s total IT portfolio. C=The project is being undertaken to correct recognized deficiencies in the adequate performance of an essential mission program or function of the agency, a component of the agency, or another organization. E=Other. FY2005 actuals (in millions) millions) millions) Baselines not yet established and cost and schedule variances not within 10 percent. To collect information from various sources at the agency and the Department of State in order to validate milestones. A=The agency has not consistently demonstrated the ability to manage complex projects. B=The project has exceptionally high development, operating, or maintenance costs, either in absolute terms or as a percentage of the agency’s total IT portfolio. C=The project is being undertaken to correct recognized deficiencies in the adequate performance of an essential mission program or function of the agency, a component of the agency, or another organization. E=Other. Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Acknowledgments In addition to the contact named above, the following people made key contributions to this report: William G. Barrick, Nancy Glover, Nnaemeka Okonkwo, Sabine Paul, and Niti Tandon.
In August 2005, the Office of Management and Budget (OMB) issued a memorandum directing agencies to identify high risk information technology (IT) projects and provide quarterly reports on those with performance shortfalls--projects that did not meet criteria established by OMB. GAO was asked to (1) provide a summary identifying by agency the number of high risk projects, their proposed budget for fiscal year 2007, agency reasons for the high risk designation, and reported performance shortfalls; (2) determine how high risk projects were identified and updated and what processes and procedures have been established to effectively oversee them; and (3) determine the relationship between the high risk list and OMB's Management Watch List--those projects that OMB determines need improvements associated with key aspects of their budget justifications. In response to OMB's August 2005 memorandum, the 24 major agencies identified 226 IT projects as high risk, totaling about $6.4 billion in funding requested for fiscal year 2007. Agencies identified most projects as high risk because their delay or failure would impact the essential business functions of the agency. In addition, agencies reported that about 35 percent of the high risk projects--or 79 investments--had a performance shortfall, meaning the project did not meet one or more of these four criteria: establishing clear baselines, maintaining cost and schedule variances within 10 percent, assigning a qualified project manager, and avoiding duplication with other investments. Although agencies, with OMB's assistance, generally evaluated their IT portfolio against the criteria specified by OMB to identify their high risk projects, the criteria were not always consistently applied. Accordingly, GAO identified several projects that appeared to meet OMB's definition for high risk but were not determined by agencies to be high risk. In addition, OMB does not define a process for updating high risk projects. As a result, agencies had inconsistent updating procedures. Regarding oversight of these projects, agencies either established special procedures or used their existing investment management processes. OMB staff stated that they review the projects' performance and corrective actions planned. However, OMB has not compiled the projects into a single aggregate list, which would serve as a tool to analyze and track the projects on a governmentwide basis. High risk projects and Management Watch List projects are identified using different criteria. The former is meant to track the management and performance of projects, while the latter focuses on an agency's project planning. Both sets of projects require attention because of their importance in supporting critical functions and the likelihood that their performance problems could potentially result in billions of taxpayers' dollars being wasted if the problems are not detected early.
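One of the four OMB criteria cited above is keeping cost and schedule variances within 10 percent. As a minimal, hypothetical sketch of what such a check involves (the report does not define the formulas, so standard earned value management definitions of planned value, earned value, and actual cost are assumed, and the dollar figures below are invented), the calculation can be expressed as follows:

```python
# Illustrative only: a minimal earned value management (EVM) check of the
# "cost and schedule variances within 10 percent" criterion described above.
# The project figures are hypothetical, not taken from the report.

def evm_variances(planned_value, earned_value, actual_cost):
    """Return cost and schedule variance percentages under standard EVM definitions."""
    cost_variance_pct = (earned_value - actual_cost) / earned_value * 100
    schedule_variance_pct = (earned_value - planned_value) / planned_value * 100
    return cost_variance_pct, schedule_variance_pct

def within_threshold(planned_value, earned_value, actual_cost, threshold_pct=10.0):
    """True if both variances are within the +/- threshold."""
    cv_pct, sv_pct = evm_variances(planned_value, earned_value, actual_cost)
    return abs(cv_pct) <= threshold_pct and abs(sv_pct) <= threshold_pct

# Hypothetical project: $12.0M of work planned to date, $10.5M earned, $11.8M spent.
cv, sv = evm_variances(planned_value=12.0, earned_value=10.5, actual_cost=11.8)
print(f"Cost variance: {cv:.1f}%  Schedule variance: {sv:.1f}%")
print("Within 10 percent threshold:", within_threshold(12.0, 10.5, 11.8))
```

In this invented case both variances exceed the 10 percent threshold, which under the criteria above would count as a performance shortfall.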
Background FDA uses advisory committees to provide expert advice and make recommendations to help the agency reach regulatory decisions, particularly concerning controversial issues or new products. FDA advisory committees are subject to the Federal Advisory Committee Act (FACA), which requires that committee memberships be fairly balanced in terms of views presented and the functions to be performed by the advisory committee. FDA advisory committees have charters that explain the purpose of the committee and specify the number of standing committee members and the expertise needed by the members. FDA advisory committee members can be medical professionals, scientists, researchers, industry leaders, consumers, and patients. At an advisory committee meeting, committee members generally meet publicly to discuss and evaluate information about a specific issue. Depending on the issues or products to be discussed at a committee meeting, a committee member may have a potential financial conflict of interest. In that event, FDA decides whether the member’s expertise is needed for discussing those issues or products, and if so, whether the member should be granted a conflict of interest determination—a waiver or an appearance authorization—to participate in the meeting. The members who do participate in the committee meeting may make recommendations to FDA—by voting or by consensus through discussions—that are nonbinding on the agency. (See app. III.) FDA Advisory Committees FDA has 31 advisory committees that are administratively attached to FDA centers or to the Office of the Commissioner. Most of the advisory committees—25—are attached to three FDA centers: CDER has 16 committees, CBER has 5, and CDRH has 4. (See app. IV.) Advisory committees usually meet as individual committees but may meet jointly to consider issues involving shared interests. Joint committee meetings may involve two advisory committees from the same center or from two different centers depending on the issue to be discussed. Advisory committees may also have subcommittees that meet to review specific information that may be presented later to the full advisory committee. Federal Conflict of Interest Provisions May Permit Member Participation FDA may permit an advisory committee member—standing or temporary—who has a conflict of interest and whose expertise is needed, to participate in a meeting under certain circumstances. There are four conflict of interest determinations—three statutory waivers and an appearance authorization as provided for in OGE regulations—that FDA can use to permit members with a conflict of interest or the appearance of a conflict of interest to participate. Federal law prohibits federal employees, including SGEs, from personally and substantially participating in an advisory committee meeting involving a particular matter that would have a direct and predictable effect on the employee’s financial interest or the interests of others specified by law. In determining whether an FDA advisory committee meeting involves a particular matter, FDA officials told us that they first consider each topic to be discussed at the meeting and determine whether it involves specific parties, a class of persons, or the interests of a large and diverse group of people. 
If one of the meeting topics involves specific parties or a class of persons, FDA officials then determine whether the advisory committee members who will attend the meeting have any conflicts of interest or the appearance of conflicts of interest involving that meeting topic. Officials told us that if they are uncertain whether a meeting topic is a particular matter, the issue is referred to FDA’s ACOMS and EIS. EIS may refer the issue to HHS’s general counsel, which may also seek advice from the OGE. The law has two waiver provisions that allow standing and temporary members to participate in an advisory committee meeting if certain criteria are met. One waiver—known as a § 208(b)(3) waiver—applies only to SGEs serving on an advisory committee subject to FACA. When granting this waiver, FDA certifies in writing in advance that the need for the SGE’s services outweighs the potential for a conflict of interest at a specific upcoming meeting. Another type of waiver—known as a § 208(b)(1) waiver—applies to federal employees generally, including SGEs and those not employed by FDA but who are members of FDA committees. When granting these waivers, FDA must determine that the interest involved is not so substantial as to be deemed likely to affect the integrity of the services which the government may expect from that individual. FDA may grant a member a full or a limited waiver—a written certification—to allow participation in the meeting. A full waiver may allow a member to participate in the discussions and to vote on recommendations. FDA may also grant a limited waiver to allow a member to discuss but not to vote on the recommendations. In addition, there are certain situations in which the member’s financial interest qualifies for an exemption from the application of the conflict of interest statutes and regulations applicable to federal employees, as provided by OGE regulations, and participation will be permitted despite the outside interest. In addition to 18 U.S.C. § 208, there was a provision in the Food and Drug Administration Modernization Act, in effect prior to October 2007, which effectively prohibited CBER and CDER advisory committee members from voting on committee meeting topics involving clinical investigations or approvals of drugs or biologics in which the member or his or her immediate family could gain financially from the committee’s advice. However, FDA could grant a waiver of this voting restriction—known as the § 355(n)(4) waiver—to a member if FDA determined that his or her participation was necessary to provide the committee with essential expertise. No waiver could be granted if the meeting involved the member’s own scientific work, such as work done by the member to develop a new drug being considered for approval by CDER. Finally, federal regulations require the consideration of the appearance of a conflict of interest for advisory committee members who will be participating in a specific-parties meeting when there are circumstances in which the member’s impartiality could be questioned. The appearance of a conflict may be created when someone in the advisory committee member’s household has a financial interest that will likely be affected by the committee’s actions or when one of the parties involved in the meeting has a close personal or professional relationship to the committee member.
To grant an appearance authorization, FDA determines that the interest of the agency in the member’s participation in an advisory committee meeting’s topic outweighs the concern that a reasonable person with knowledge of the relevant facts would question the member’s impartiality in the matter before the advisory committee, which may call into question the integrity of FDA’s programs and operations. (See table 1 for a summary of the four conflict of interest determinations.) FDA Conflict of Interest Determination Process The appropriate FDA center review division and committee management staff for the advisory committee meeting decide whether a member meets the requirements for an applicable conflict of interest determination to allow him or her to participate. To assist in making conflict of interest determinations, FDA uses its Waiver Criteria 2000 guidance, which provides policies and procedures for handling conflicts of interest. On the basis of the advisory committee meeting’s topic and its designation, the center review division involved in the advisory committee meeting typically compiles a list of companies and products affected by the meeting’s topic. The advisory committee management staff then sends a memorandum with the final list of companies and products and the FDA Form 3410—the FDA financial disclosure form—to the advisory committee members. Members review the memorandum, complete the Form 3410, and report back to FDA on whether they believe they have any personal or imputed financial interests and past involvements with the affected companies and products listed for the upcoming advisory committee meeting’s topic. The FDA center advisory committee management staff for the particular advisory committee review members’ FDA financial disclosure forms and determine whether a member has a potential conflict of interest for the meeting or a part of the meeting. If a member has a conflict, FDA can accept the member’s own decision to recuse himself or herself because of the conflict; exclude or disqualify the member from participating; seek another individual with the needed expertise who has a less significant conflict of interest or none; or decide that the member’s expertise is needed and that the member meets the criteria for a conflict of interest determination that allows him or her to participate in the meeting discussion and vote. If there is a question about whether a member should be granted a determination, the center’s advisory committee management entity may seek advice from the review division. If there are further questions about whether the determination should be granted, advice may be sought from FDA’s ACOMS and EIS. ACOMS and EIS review all conflict of interest determinations before their final approval. The final decision to grant or deny a determination is made by the FDA Associate Commissioner for Policy and Planning. (See fig. 2.) Since November 2005, FDA has been subject to requirements related to public disclosure of its conflict of interest waivers on its Web site. From November 2005 until October 2007, FDA was required by law to publicly post the nature and basis of conflict of interest waivers on its Web site. As of October 2007, the FDA Amendments Act of 2007 requires FDA to publicly disclose on the agency’s Web site, prior to every advisory committee meeting, the reasons for all waivers granted as well as the type, nature, and magnitude of the financial interests being waived.
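Taken together, the screening steps described above form a short decision sequence for each prospective participant. The sketch below is illustrative only and is not FDA's actual system; the member attributes and the simplified set of outcomes are assumptions made for the example.

```python
# A schematic sketch (not FDA's actual system) of the conflict of interest
# screening flow described above. Member data and attributes are hypothetical.

from dataclasses import dataclass

@dataclass
class Member:
    name: str
    has_conflict: bool           # reported on FDA Form 3410 against the meeting's company/product list
    expertise_needed: bool       # review division judgment
    meets_waiver_criteria: bool  # e.g., need for services outweighs the potential conflict

def screening_outcome(member: Member) -> str:
    """Return one of the outcomes FDA could reach for a member, per the options in the text."""
    if not member.has_conflict:
        return "participate (no determination needed)"
    if not member.expertise_needed:
        return "exclude, accept recusal, or seek a replacement with less or no conflict"
    if member.meets_waiver_criteria:
        # Subject to ACOMS/EIS review and final approval by the
        # Associate Commissioner for Policy and Planning.
        return "grant waiver or appearance authorization (full or limited)"
    return "deny determination; member does not participate"

for m in [Member("A", False, True, False),
          Member("B", True, True, True),
          Member("C", True, False, False)]:
    print(m.name, "->", screening_outcome(m))
```

In practice the outcome also depends on which of the four determination types applies and whether a full or limited waiver is appropriate, details the sketch omits.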
In October 2007, FDA announced draft guidance to implement agencywide procedures for the public disclosure of (1) the type, nature, and magnitude of any financial conflict of interest for which an advisory committee member has been granted a waiver for a committee meeting on its Web site, and (2) conflict of interest waivers that would be written so that information protected from public disclosure would not appear in the waivers and thus would not need to be redacted. Public disclosure at an FDA advisory committee meeting can also, for example, include an announcement naming the attending members who have conflict of interest determinations. FDA Used Several Methods to Recruit Candidates for Advisory Committee Membership and Prescreened Candidates for Potential Conflicts of Interest Prior to the FDA Amendments Act of 2007, FDA employed several methods to recruit candidates for advisory committees and to evaluate candidates by prescreening them for advisory committee membership. Common recruitment methods used by FDA include announcing vacancies in the Federal Register, distributing recruitment brochures at advisory committee meetings and national meetings, receiving nominations by word-of-mouth or asking current advisory committee members for nominations, and posting information about recruitment on FDA’s Web site. Candidates who are selected to serve on an FDA advisory committee either as a consumer representative, industry representative, or patient representative are recruited and nominated using a different process than candidates identified for standing advisory committee membership. To prescreen candidates, FDA reviewed candidates’ curricula vitae and usually conducted prescreening interviews. FDA officials within the three FDA centers we studied, CBER, CDER, and CDRH, prescreened each candidate to determine whether there was any financial interest or activity that might present a potential conflict of interest if the individual were to become an advisory committee member. FDA employed many of the same recruiting and prescreening methods as those employed by EPA and the National Academies, organizations we previously identified as employing certain recruitment and prescreening methods that could ensure independent and balanced advisory committees. FDA Employed Several Methods to Recruit Candidates for Advisory Committee Membership FDA employed several recruitment methods to identify candidates for standing advisory committee membership, prior to the FDA Amendments Act of 2007. FDA officials in CBER, CDER, and CDRH told us that the methods commonly used to recruit candidates include announcing advisory committee vacancies in the Federal Register, distributing recruitment brochures at advisory committee meetings and national meetings, and receiving nominations by word-of-mouth or asking current advisory committee members for nominations. The FDA officials we interviewed stated that asking current advisory committee members for nominations was the most effective recruitment method because the members understand the advisory committee process and the commitment level required to serve as an FDA advisory committee member, and can communicate this information to the potential candidate. FDA staff in CBER and CDRH told us that posting vacancy announcements in the Federal Register was the least effective method of identifying qualified candidates because the centers received unsolicited curricula vitae from individuals seeking full-time jobs with FDA. 
Other recruitment methods reported include identifying possible candidates from the center’s consultant pool, which is a list of individuals who FDA has determined have expertise that may be needed for future advisory committee meetings, and posting recruitment information on FDA’s Web site. CDRH staff reported that searching the consultant pool for a potential candidate is preferred because the executive secretary and the review division are usually familiar with the individual’s performance on an advisory committee and the individual is familiar with the advisory committee process. In February 2007, FDA posted on its Web site a link to information about advisory committees and available vacancies for individuals interested in advisory committee membership. From the Web site, the public can access information about current advisory committee vacancies, required qualifications to become an advisory committee member, and instructions on how to apply for advisory committee membership. Candidates who are selected to serve on an FDA advisory committee either as a consumer representative, industry representative, or patient representative are recruited and nominated using a different process than candidates identified for standing advisory committee membership. FDA officials work with consumer and industry organizations to identify qualified candidates to serve as representatives. Consumer and industry groups nominate the candidates, and FDA indicated that it generally accepts the organizations’ recommendations for nomination. For patient representatives, FDA’s Office of Special Health Issues’ Patient Representative Program is responsible for recruiting and nominating candidates. When an advisory committee meeting topic is of particular importance to the patient population (e.g., cancer or HIV/AIDS-related topics), the advisory committee’s executive secretary will ask Patient Representative Program staff to recommend a patient representative to attend the advisory committee meeting. FDA Prescreened Advisory Committee Member Candidates for Potential Conflicts of Interest FDA officials in the three centers told us that they prescreened advisory committee member candidates to determine whether they had any financial interests or were involved in any activity that might pose a potential conflict of interest, even though prior to October 1, 2007, HHS did not require its agencies to prescreen candidates at the time of their nomination to an advisory committee. To prescreen candidates, FDA reviewed the candidates’ curricula vitae and usually conducted a prescreening interview. The FDA officials told us that the interview is usually conducted by telephone using a prescreening form. The prescreening form asks candidates to provide information about their current investments, employment and consulting relationships held in the past 12 months, and current and past contracts and grants. FDA Used Many of the Same Recruiting and Prescreening Methods as Those Employed by Organizations Identified as Having Some Promising Recruitment and Prescreening Methods FDA employed many of the same recruiting and prescreening methods as EPA and the National Academies, organizations found to have some promising methods that could ensure that advisory committee members are independent and advisory committees are balanced. Prior to October 1, 2007, FDA generally used the same recruitment methods as EPA and the National Academies (see table 2).
One exception was FDA’s method for obtaining nominations for potential members from the public. FDA provides an e-mail address on its Web site for nominations, a method that relies on individuals submitting to the agency, via e-mail, a curriculum vitae and contact information. In contrast, EPA’s Science Advisory Board’s Web site allows the public to self-nominate or nominate an individual to be an advisory committee member by submitting information via a form on its Web site. Prior to October 1, 2007, FDA also employed many but not all of the same prescreening methods as EPA and the National Academies (see table 3). EPA and the National Academies asked candidates to complete an official financial disclosure and background form prior to being selected as a committee member. An EPA official we interviewed stated that asking candidates for detailed financial information prior to selection to an advisory committee enables EPA to identify individuals without conflicts of interest early in the advisory committee recruitment process. An FDA official told us that FDA did not ask candidates to complete a financial disclosure and background form because the form would require responses about specific products or companies or both, which may not be known at the time of the prescreening interview. EPA’s and the National Academies’ prescreening methods included obtaining input from the general public whereas FDA’s methods generally did not. For example, EPA’s Science Advisory Board used a public notice process to obtain public comments on proposed candidates. The names and biographical sketches of candidates are posted on its Web site, and EPA requests the public to provide information, analysis, or documentation that the agency should consider in evaluating the candidates. Similarly, the National Academies publicly announces the slate of provisional study committee members by posting their biographies on its Web site, and requests public comment. FDA did not post a list of potential nominees on its Web site and did not seek public comment about potential candidates. Barriers Existed to Recruiting Qualified FDA Advisory Committee Candidates, Particularly Those without Potential Conflicts of Interest, but FDA May Have Been Able to Mitigate Barriers by Expanding Outreach Efforts According to FDA officials, former FDA advisory committee members, and a PhRMA representative, FDA faced barriers to recruiting qualified individuals to serve on its advisory committees, particularly candidates without potential conflicts of interest, although FDA may have been able to mitigate these barriers by expanding its outreach efforts. FDA officials, former FDA advisory committee members, and a PhRMA representative identified the following barriers: FDA sought the same leading experts as industry; FDA’s most effective recruitment method—word-of-mouth—was limited in the number of potential candidates it could generate; and aspects of FDA advisory committee service deterred some potential advisory committee members. FDA already employed several recruitment methods to identify qualified FDA advisory committee candidates. However, FDA may have been able to mitigate barriers by focusing additional outreach efforts on recruiting retired experts, experts from colleges and universities, and individuals with epidemiological and statistical expertise. Under the FDA Amendments Act of 2007, FDA’s process for prescreening candidates for advisory committee membership has been modified. (See app. I.) 
Barriers Existed to Recruiting Qualified Individuals to Serve on FDA Advisory Committees FDA officials, former FDA advisory committee members, and a PhRMA representative identified barriers that existed to recruiting qualified FDA advisory committee candidates, particularly those without potential conflicts of interest. These barriers were that FDA sought the same experts as industry, FDA’s most effective advisory committee recruitment method was limited in the number of potential candidates it could generate, and aspects of FDA advisory committee service may have deterred some potential advisory committee members. The Experts FDA Sought to Serve on Its Advisory Committees Frequently Had Industry Ties FDA contended that it sought the same leading experts to serve on its advisory committees as industry sought to conduct its research and product trials. As a result, the experts FDA deemed most qualified to serve on its advisory committees often had industry ties, according to the agency. FDA officials, former FDA advisory committee members, and a PhRMA representative generally agreed that many individuals who have the experience necessary to participate on an advisory committee have industry ties. FDA officials told us that private industry sponsors most medical development in the United States. As a result, people in fields relevant to FDA advisory committees gain experience from working with industry. A representative from PhRMA told us that if an individual has no or minimal potential conflicts of interest, he would question whether the person has the expertise needed to serve on an FDA advisory committee. FDA’s Most Effective Recruitment Method Had Limitations Although FDA employed several methods to recruit advisory committee candidates, FDA staff generally agreed that word-of-mouth, such as informal discussions among FDA advisory committee members, agency staff, and interested parties, was most effective in generating nominations for qualified advisory committee candidates. FDA officials and former FDA advisory committee members told us that this recruitment method was effective because people familiar with the advisory committee process—FDA review division staff and FDA advisory committee members—can identify individuals who would be qualified to serve on advisory committees because they understand what advisory committee membership entails. Former members also noted that advisory committee members, who are experts in their field, know other qualified experts who could serve as advisory committee members. Similarly, former advisory committee members explained that asking FDA review division staff for recommendations was effective because these individuals are active in the scientific community and can also identify individuals qualified to serve on FDA’s advisory committees. Despite being effective in generating nominations, word-of-mouth recruitment is limited because only the colleagues of FDA advisory committee members or FDA staff learn about the opportunity to serve on committees rather than a broader pool of candidates. Two former FDA advisory committee members cautioned that, while they believe word-of-mouth is an effective recruitment method, it may lead to self-perpetuating committee membership, in which a limited group of peers continually comprise an advisory committee.
An official from EPA echoed these concerns, stating that, although this is an effective method to recruit candidates for some EPA advisory committees, it also is problematic because he believes advisory committee members only nominate their colleagues. Similarly, former advisory committee members noted that FDA staff nominations may also be problematic. For example, one former member explained that it gives the appearance that FDA may pad its advisory committees, which could compromise the committees’ perceived independence. Some Aspects of FDA Advisory Committee Service May Have Deterred Potential Members Some aspects of FDA advisory committee service may have also deterred qualified advisory committee candidates. More than half of the 12 former FDA advisory committee members we spoke with agreed that the time commitment involved in preparing for and attending FDA advisory committee meetings acted as a deterrent for some potential advisory committee members. Standing members of an FDA advisory committee are expected to participate in all meetings held by that advisory committee unless they are excluded from a meeting due to a conflict of interest. For example, CDER’s Anti-Infective Drugs Advisory Committee held three meetings in 2006. Unless excluded, a standing member of this committee would have been expected to attend all three advisory committee meetings. In addition, more than half of the 12 former advisory committee members we interviewed also agreed that FDA’s work-related activities and financial information disclosure reporting requirements dissuaded some people from becoming an advisory committee member, although some said that the public disclosure of an individual’s conflict of interest waivers was not a deterrent. As mentioned earlier, advisory committee members complete financial disclosure forms before each advisory committee meeting, and since November 2005 FDA has posted information disclosing the nature and basis of advisory committee member conflict of interest waivers on its Web site. The negative publicity surrounding certain advisory committee meetings, especially media attention to some members’ ties to industry, may have also deterred some people from serving on FDA advisory committees. An FDA advisory committee management official in CDER, the center with the most advisory committee meetings held in years 2004 and 2006 combined, explained that public scrutiny concerning advisory committee members’ conflicts of interest is the most difficult challenge FDA staff face in generating member nominations. The FDA official said people serving on FDA advisory committees “feel like they are in fishbowls” and are concerned that they are considered tainted if they receive a conflict of interest waiver. A representative from PhRMA echoed these concerns, stating that many FDA advisory committees receive public scrutiny, which may act as a disincentive for individuals to serve on committees. Some former advisory committee members we spoke with also agreed that the media attention surrounding certain advisory committee meetings can deter people from serving on FDA advisory committees, although some former members either disagreed or said that qualified candidates should be prepared to withstand media pressure. 
FDA May Have Been Able to Mitigate Barriers by Expanding Outreach Efforts FDA may have mitigated barriers to recruiting qualified advisory committee candidates, particularly those without potential conflicts of interest, if it had expanded outreach efforts to retired experts, experts from universities and colleges, and individuals with statistical and epidemiological expertise. Former advisory committee members and representatives from entities knowledgeable about FDA advisory committee recruitment agreed that expanding outreach efforts to retired experts, experts from universities and colleges, and individuals with statistical and epidemiological expertise would be effective in recruiting qualified FDA advisory committee members, particularly those without conflicts of interest. In addition, although FDA stated that it employed several methods to recruit advisory committee members, representatives from consumer groups said that FDA should make a greater effort to recruit qualified advisory committee candidates, particularly those without conflicts of interest. Most former advisory committee members we spoke with generally agreed that FDA could have expanded outreach efforts to retired experts in fields relevant to its advisory committees in order to mitigate barriers to recruiting qualified advisory committee candidates, particularly those without potential conflicts of interest. Retired experts are no longer employed and, therefore, may be less likely to have current ties to industry. For example, a National Academies official we spoke with explained that when the type of expertise needed for a committee lends itself to inherently conflicted professionals—for example, if a committee focuses on the operations of drug manufacturers—the organization could seek an individual who is retired. However, some FDA officials noted that retired experts may not be familiar with new science and technologies or interested in committing the time necessary to serve on an advisory committee, or they may have conflicts of interest because they consult privately. One FDA official said that the center in which she is employed may recruit individuals who retired in the past 2 years to participate on an advisory committee or panel, but individuals retired longer than that are usually not familiar with current technologies and are, therefore, not qualified for the center’s advisory committee or panel participation. Although the majority of former advisory committee members we spoke with agreed that expanding outreach efforts to retired experts would improve FDA’s advisory committee process, many former members noted that FDA advisory committees require members who are active in their field. Most former FDA advisory committee members and the consumer groups we spoke with agreed that expanding outreach efforts to experts from universities and colleges would be effective in recruiting qualified advisory committee candidates. FDA noted that most of its advisory committee members are already academicians. An AAMC official suggested that FDA ask medical colleges to solicit their own staff to serve on FDA advisory committees. He also told us that AAMC does not currently assist FDA with advisory committee recruitment, but it would if asked. For example, he said AAMC would be willing to post FDA advisory committee member vacancies on its Web site at no cost. However, two former members noted that academicians may receive industry funding for research or consulting and, therefore, may have conflicts of interest. 
The FDA Amendments Act of 2007 modifies FDA’s process for prescreening candidates for advisory committee membership. For example, the act directs FDA to develop outreach strategies for potential members of advisory committees at universities, colleges, and other academic research centers. Most former FDA advisory committee members and consumer groups we interviewed said that expanding outreach efforts to epidemiologists and statisticians would be effective in recruiting qualified advisory committee candidates, particularly those without potential conflicts of interest. According to some former advisory committee members, epidemiologists and statisticians add expertise in data analysis to FDA advisory committees. For example, biostatisticians could provide expertise in interpreting clinical trial data. Representatives from two consumer advocacy groups told us these individuals may be less likely than clinicians to have conflicts of interest and may bring a different focus to committee deliberations. According to these consumer interest group representatives, the agency’s advisory committees are overly weighted towards clinicians and clinical trialists. One representative told us that clinicians are more likely to have potential conflicts of interest because they are more likely to have received industry funding, and another representative said that they generally have a bias towards product approval because they seek more options—that is, drugs and medical devices—to help with diagnosis and treatment of their patients. The majority of the former FDA advisory committee members we interviewed agreed that focusing outreach efforts on recruiting statisticians and epidemiologists would be an effective way for FDA to recruit qualified advisory committee candidates, particularly those without potential conflicts of interest. In The Future of Drug Safety – Promoting and Protecting the Health of the Public: FDA’s Response to the Institute of Medicine’s 2006 Report, FDA stated in 2007 that it will increase the epidemiology expertise on its drug-related advisory committees. The FDA Amendments Act of 2007 modifies FDA’s process for prescreening candidates for advisory committee membership. (See app. I.) Most Advisory Committee Meeting Participants Were Standing Members, and Many Members Had Conflict of Interest Determinations Our analysis of the composition of FDA advisory committee meeting participants from 2 recent years indicates that most participants were standing members, but a large minority of participants were temporary members. In the 83 advisory committee meetings held by CBER, CDER, and CDRH in 2004 and 2006, standing and temporary members were 58 and 42 percent, respectively, of the 1,218 total meeting participants. An advisory committee member who has a conflict of interest and whose expertise is needed may be permitted by FDA to participate in an advisory committee meeting under certain circumstances by granting a conflict of interest determination. About 16 percent of the participants received a conflict of interest determination that allowed them to participate. In 49 of the 83 meetings, at least one participating standing or temporary member had at least one conflict of interest determination that allowed the member to participate. The 200 participants with conflict of interest determinations in those 49 meetings had a total of 234 determinations. 
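As a quick arithmetic check, the proportions reported in this section follow directly from the counts given in the text; the short calculation below reproduces them (all inputs are numbers stated above, and the script itself is merely illustrative):

```python
# Reproduce the participation figures reported above from the counts in the text.
total_participants = 1218            # participants in the 83 meetings
participants_with_determinations = 200
total_meetings = 83
meetings_with_determinations = 49
total_determinations = 234

print(f"Share of participants with determinations: "
      f"{participants_with_determinations / total_participants:.1%}")        # ~16.4%
print(f"Meetings with at least one determination: "
      f"{meetings_with_determinations / total_meetings:.1%}")                # ~59.0%
print(f"Average determinations per affected member: "
      f"{total_determinations / participants_with_determinations:.2f}")      # ~1.17
```

The result confirms the roughly 16 percent figure and the finding that over half of the meetings included at least one member with a determination.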
The FDA Amendments Act of 2007 limits the number of conflict of interest determinations—statutory waivers—that FDA can grant, and FDA’s conflict of interest policy revisions change the threshold amount for disqualifying financial interests. Most Advisory Committee Meeting Participants Were Standing Members Standing members were the predominant participants in the 83 advisory committee meetings held by CBER, CDER, and CDRH in 2004 and 2006 that we analyzed. These 83 meetings were held before the 2007 FDA advisory committee process and statutory changes. Temporary members participated in 79 of the 83 meetings. Of the 1,218 participants in the 83 meetings, 58 percent were standing members and 42 percent were temporary. (See table 4.) The participants in CDER’s 17 meetings held in 2006 were nearly evenly split between standing and temporary members at 52 percent and 48 percent, respectively. At Least One Standing or Temporary Member Had a Conflict of Interest Determination in over Half of the Advisory Committee Meetings Forty-nine of the 83 advisory committee meetings we analyzed—over half of all the meetings—had at least 1 standing or temporary member with a conflict of interest determination. FDA may permit an advisory committee member who has a conflict of interest and whose expertise is needed to participate in an advisory committee meeting under certain circumstances by granting a conflict of interest determination. Two hundred standing and temporary members—about 16 percent of the 83 meetings’ 1,218 participants—had at least one conflict of interest determination. Forty-two of the 49 meetings—86 percent—had 2 or more members who received at least one conflict of interest determination. Ninety-five percent of CDER’s 2004 and 2006 meetings had 2 or more members with determinations, followed by CBER (85 percent) and CDRH (73 percent). The 200 members had 234 conflict of interest determinations. (See table 5.) Most members—167—had only 1 conflict of interest determination; 33 members each had 2 or more determinations. Standing members had 62 percent (nearly two-thirds) of the 234 determinations, and temporary members had 38 percent (over one-third). Among the 234 conflict of interest determinations, the most often granted determination—155—was the § 208(b)(3) financial interest waiver. Standing members had 104 and temporary members had 51 of these waivers. This waiver can be granted for either specific-parties or non-specific-party advisory committee meeting topics and to standing and temporary SGE members, so it is not surprising that it was the conflict of interest determination most often granted to members. Nearly one-half of the 155 § 208(b)(3) waivers—72—were granted to CDER meeting members, 50 to standing, and 22 to temporary members. The remaining 79 of the 234 determinations were 36 statutory waivers—§ 355(n)(4) waivers (27) and § 208(b)(1) financial interest waivers (9)—and 43 regulatory § 2635.502 appearance authorizations. The FDA Amendments Act of 2007 limits the number of certain conflict of interest determinations—the statutory waivers—that FDA can grant, and FDA’s conflict of interest policy revisions change the threshold amount for disqualifying financial interests. Agency Comments and Our Evaluation HHS reviewed a draft of this report and provided comments, which are reprinted in appendix V. HHS also provided technical comments, which we incorporated as appropriate.
In its comments, HHS noted that on August 4, 2008, after we had provided the draft report for its review on July 29, 2008, FDA issued four final guidance documents concerning management of its advisory committees. The guidances include stricter limits on financial conflicts of interest for committee members, improved committee meeting voting procedures, and process improvements for disclosing information about advisory committee members’ financial interests and waivers, and for preparing and making publicly available information given to advisory committee members for specific matters considered at advisory committee meetings. These final guidance documents were available to us in draft form during the course of our work and the portions of the draft guidances that we discussed in the report did not change in the final 2008 guidances. HHS commented on several other aspects of the draft report. First, HHS asked us to note that our findings are applicable only to CBER, CDER, and CDRH advisory committee meetings, and we revised our report to clarify that we did not include all of the FDA centers. Our work focused on those three FDA centers because most of FDA’s advisory committees were affiliated with them; these centers’ advisory committee meetings represented more than 80 percent of the total FDA advisory committee meetings held in 2004 and 2006. Second, HHS commented that three groups of experts we included in the report as possible sources for expanding the agency’s recruitment outreach for advisory committee members—academic experts, epidemiologists and statisticians, and retired experts—may not be more likely to be free of conflicts of interest. These expert groups were identified by individuals we interviewed as sources they believed could be less likely to have conflicts of interest, and we attributed the statements to those individuals in the report. In addition, the FDA Amendments Act of 2007 discusses FDA’s advisory committee recruitment methods and directs FDA to develop and implement strategies on effective outreach to the academic community. Third, HHS commented that the comparison of the recruitment methods used by EPA and the National Academies to FDA’s recruitment methods did not consider additional restraints FDA may have in selecting qualified, minimally conflicted individuals to serve on an advisory committee. However, the report focuses on EPA’s and the National Academies’ methods to identify potential advisory committee members and uncover conflicts of interest that are not employed by FDA. The approaches employed by these other organizations may provide additional options that FDA could use to expand the pool of potential advisory committee members. Finally, HHS commented on our use of the term conflict of interest determinations. Throughout our report, we used the term to include both conflict of interest waivers and appearance authorizations granted to advisory committee members to allow them to participate in advisory committee meetings. Although the standards for these determinations are different, they are all made to allow members to participate in advisory committee meetings notwithstanding ethical concerns over their participation. We revised the report to clarify that the FDA Amendments Act of 2007 provisions involving the agency’s advisory committees only apply to conflict of interest waivers. As we agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of it until 30 days from its date. 
We will then send copies to others who are interested and make copies available to others who request them. In addition, the report will also be available at no charge on our Web site at http://www.gao.gov. If you or your staffs have any questions about this report, please contact me at (202) 512-7114 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. Appendix I: Major 2007 Actions Affecting FDA Recruiting and Conflict of Interest Determination Processes In 2007, two major actions occurred that affect the Food and Drug Administration’s (FDA) processes for recruiting and prescreening individuals for advisory committee membership and for granting financial conflict of interest waivers to allow members to participate in advisory committee meetings. Those two actions were the passage of the FDA Amendments Act of 2007—an amendment of the Federal Food, Drug, and Cosmetic Act—and FDA’s draft March 2007 conflict of interest guidance. The FDA Amendments Act of 2007 modifies the agency process for prescreening candidates for advisory committee membership. The act requires FDA to develop and implement strategies to conduct outreach to potential advisory committee candidates at universities and colleges, other academic research centers, professional and medical societies, and patient and consumer groups. FDA may also develop a new committee member recruitment method, which would allow entities, such as universities and other academic research centers, receiving funding from the National Institutes of Health, the Agency for Healthcare Research and Quality, the Centers for Disease Control and Prevention, or the Veterans Health Administration, to identify a person whom FDA could contact about the nomination of individuals to serve on advisory committees. Under the prescreening modification, a candidate for FDA advisory committee membership, as of October 1, 2007, completes the Office of Government Ethics Form 450 or FDA Form 3410—financial disclosure reports that provide information about the individual’s financial interests—prior to being appointed as an FDA advisory committee member. According to the FDA Amendments Act of 2007, this pre-appointment financial review is intended to reduce the likelihood that a candidate, if appointed as a member, would later require a statutory conflict of interest determination to participate in advisory committee meetings. Conflict of interest determinations to allow a member with a conflict to participate in an advisory committee meeting are affected by both FDA’s draft March 2007 guidance and the FDA Amendments Act of 2007. The draft guidance provides that an advisory committee member with personal financial conflicts of interest—referred to as disqualifying financial interests in the guidance—generally would not be allowed to participate in an advisory committee meeting if the combined value of those interests exceeds $50,000. FDA would not grant a waiver in those circumstances unless the FDA Commissioner determined a waiver was appropriate. Two provisions of the FDA Amendments Act of 2007 affect conflict of interest determinations. First, the law repealed 21 U.S.C. 
§ 355(n)(4)—the § 355(n)(4) waiver—that applied only to members voting on FDA advisory committee meeting matters related to the clinical investigations and approvals of drugs and biologics—usually Center for Biologics Evaluation and Research (CBER) and Center for Drug Evaluation and Research (CDER) advisory committee meetings. The law also created a new waiver provision—the § 712(c)(2)(B) waiver—that applies to all FDA advisory committee members. Under the new provision, an individual who has a financial conflict of interest, or whose immediate family member has one, cannot participate unless FDA determines that a waiver is necessary to afford the advisory committee essential expertise. The law also limits the number of waivers that FDA can grant advisory committee members, reducing the number of waivers (per total meetings) granted annually by 5 percent for a total reduction of 25 percent over 5 years. Appendix II: Scope and Methodology In this report, we examined FDA’s advisory committee member recruitment, selection, and conflict of interest prescreening and screening processes, as well as the agency’s use of temporary and standing advisory committee members. We chose to analyze three FDA centers—CBER, CDER, and CDRH—because most of FDA’s advisory committees were affiliated with them and because these three centers’ advisory committee meetings represented more than 80 percent of the total FDA advisory committee meetings held in the two years we included. We did not examine FDA’s other centers’ advisory committee meetings. Specifically, we describe (1) how FDA recruited individuals for advisory committee membership and evaluated candidates by prescreening them for potential conflicts of interest, (2) barriers that were reported to recruiting qualified individuals to serve on FDA advisory committees, particularly candidates without potential conflicts of interest, and (3) the proportion of standing and temporary members who participated in advisory committee meetings, and the frequency with which members with one or more conflict of interest determinations participated in advisory committee meetings. During the course of our work, two major actions occurred that changed FDA’s recruitment and conflict of interest policies. (See app. I.) In March 2007, FDA issued a draft advisory committee guidance that revises how FDA screens individuals to determine if they have conflicts of interest for a specific advisory committee meeting. In addition, Congress amended the Federal Food, Drug, and Cosmetic Act to include, among other provisions, a section addressing recruitment, prescreening, and conflicts of interest, which took effect on October 1, 2007. At the time of our review, it was too soon to assess the effect of the changes on FDA’s processes; consequently, this report focuses on FDA’s organization, processes, and conflict of interest determinations as documented prior to the 2007 actions. To address our objectives, we performed a literature review of studies related to FDA advisory committee member recruitment, selection, and conflict of interest prescreening and screening processes. We reviewed Office of Government Ethics and federal conflict of interest laws, and Department of Health and Human Services’ (HHS) and FDA’s written policies, guidance, reports, and forms related to advisory committee management. 
We interviewed individuals and groups familiar with FDA’s advisory committee member recruitment, selection, and conflict of interest screening processes including FDA staff, selected former advisory committee members, and representatives from the Association of American Medical Colleges (AAMC), Center for Science in the Public Interest, Pharmaceutical Research and Manufacturers of America (PhRMA), and Public Citizen’s Health Research Group. In addition, we reviewed FDA’s advisory committee meeting records and conflict of interest determination records for advisory committee meetings held by three FDA centers—CBER, CDER, and CDRH—in 2004 and 2006. We chose to analyze these three centers because most of FDA’s advisory committees were affiliated with them—and these centers’ advisory committee meetings represented more than 80 percent of the total FDA advisory committee meetings held in 2004 and 2006. Details on the scope of our work and methods to address each objective follow. To examine how FDA recruited individuals for advisory committee membership and prescreened candidates for potential conflicts of interest, we reviewed HHS and FDA written policies, guidances, reports, and forms related to advisory committee management. These documents include HHS’s Federal Advisory Committee Management Handbook, FDA’s Policy and Guidance Handbook for FDA Advisory Committees, and FDA’s quarterly reports to Congress on its efforts to identify and screen qualified people for appointment to FDA advisory committees. We also reviewed advisory committee information on FDA’s Web site and examined FDA forms used to prescreen candidates for advisory committee membership. In addition, we interviewed staff from FDA’s Advisory Committee Oversight and Management Staff; FDA’s Ethics and Integrity Staff; staff from CBER, CDER, and CDRH; and advocacy organizations that nominate individuals to serve on FDA’s advisory committees, including PhRMA and Public Citizen’s Health Research Group. We also interviewed officials from organizations we previously identified as employing specific recruitment and prescreening methods that could ensure independent and balanced advisory committees. These organizations are the U.S. Environmental Protection Agency (EPA) and the National Academies. To examine barriers that were reported to recruiting qualified individuals to serve on FDA advisory committees, particularly candidates with no potential conflicts of interest, we interviewed individuals and groups familiar with FDA’s advisory committee recruitment process and officials from organizations we identified in 2004 as employing specific recruitment methods that could ensure independent and balanced advisory committees. Individuals interviewed include staff from CBER, CDER, and CDRH office, review division, and advisory committee management; 12 former CBER, CDER, and CDRH advisory committee members; staff from EPA, the National Institutes of Health, and the National Academies who were involved with the advisory committee process at their organizations; and staff from AAMC, PhRMA, and consumer advocacy groups that have taken a position on FDA’s nomination and selection processes for advisory committee members. To determine the proportion of participants in FDA’s CBER, CDER, and CDRH advisory committee meetings who were standing members or temporary members, we reviewed FDA’s advisory committee meeting records for 83 meetings held by the 3 centers in 2004 and 2006. 
The 83 meetings did not include (1) the 10 joint advisory committee meetings— meetings involving 2 advisory committees—held in 2004 and 2006, which were analyzed separately, or (2) advisory committee subcommittee meetings, which are not covered by the Federal Advisory Committee Act. Beginning in November 2005, FDA was required to post information on its Web site about the conflict of interest waivers it granted that allowed certain members to participate in meetings. We chose to review the committee meetings held in 2004 and 2006—2 years with the most recent data when we began our work—because (1) 2004 was the last full year before FDA began to post waiver information in 2005, and (2) 2006 was the first full year in which the waiver information had to be posted. We excluded 2005 from the analysis because it was the year the Web site posting requirement began. To verify the number of standing and temporary members who attended the 83 meetings, we reviewed the 2004 and 2006 FDA advisory committee meeting records, which included meeting minutes, meeting summaries, meeting transcripts, lists of meeting attendees, and annual committee member rosters—the list of standing members—for the years 2004 and 2006. If an advisory committee meeting was conducted for more than 1 day, a standing or temporary member was included in the analysis, if the member attended at least 1 day of the meeting. To analyze the number and type of conflict of interest determinations received by standing and temporary members, we analyzed 49 of the 83 CBER, CDER, and CDRH advisory committee meetings held in 2004 and 2006. The following criteria were used to select the 49 meetings: (1) the advisory committee meetings with the designation most often used by the centers—for CDER and CDRH, specific-parties meetings and, for CBER, non-specific party meetings, and (2) advisory committee meetings that had at least one standing or temporary member who received at least one conflict of interest determination. If an advisory committee meeting involved both a specific-parties and a non-specific party meeting topic, the meeting was included if any standing or temporary member attending the meeting received a conflict of interest determination. To determine the number and type of conflict of interest determinations among the 49 advisory committee meetings’ standing and temporary members, we created a participant-level data collection instrument to retrieve information from FDA’s advisory committee meeting records and conflict of interest waiver records for each advisory committee meeting included in the project analysis. We reviewed the following records to collect the needed data: conflict of interest waivers and their conflict of interest checklists, acknowledgement and consent for disclosure of potential conflicts of interest forms, and appearance authorization memorandums. Information we collected included the advisory committee meeting participant’s status (for example, standing or temporary member) and the conflict of interest determination (for example, § 208(b)(3) waiver). When FDA issued its March 2007 Draft Guidance for the Public, FDA Advisory Committee Members, and FDA Staff on Procedures for Determining Conflict of Interest and Eligibility for Participation in FDA Advisory Committees, we narrowed the scope of our work and excluded an assessment of whether FDA adhered to its FDA Waiver Criteria Document (2000) when it made its conflict of interest determinations for the meetings we analyzed. 
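The participant-level data collection instrument described above is, in effect, a small record kept for each meeting participant. The sketch below is not GAO's actual instrument; the field names, categories, and example values are hypothetical and are shown only to illustrate the kind of record that supports the tallies of standing members, temporary members, and determinations reported above.

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical record layout; field names and categories are illustrative only.
@dataclass
class MeetingParticipant:
    center: str                      # e.g., "CBER", "CDER", or "CDRH"
    meeting_id: str                  # identifier for a 2004 or 2006 meeting
    status: str                      # "standing" or "temporary"
    determinations: List[str] = field(default_factory=list)
    # e.g., ["208(b)(3) waiver"] or ["2635.502 appearance authorization"]

def members_with_determinations(records: List[MeetingParticipant]) -> int:
    """Count participants who received at least one conflict of interest determination."""
    return sum(1 for r in records if r.determinations)

# Example usage with made-up records:
records = [
    MeetingParticipant("CDER", "2006-017", "standing", ["208(b)(3) waiver"]),
    MeetingParticipant("CDER", "2006-017", "temporary", []),
]
print(members_with_determinations(records))  # -> 1
```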
To assess the reliability of the conflict of interest determination information we summarized, we reviewed questions from 5 percent of the data collection instruments completed for the 49 advisory committee meetings for accuracy in transferring conflict of interest determination information from the FDA records, and determined the information collected was sufficiently reliable for our report. We conducted our work from October 2006 through September 2008 in accordance with generally accepted government auditing standards. Appendix III: Factors That May Affect FDA Advisory Committee Meeting Recommendations FDA may, like other federal agencies, determine its advisory committees’ meeting topics to suit its own purposes. There are many factors involved in conducting an FDA advisory committee meeting that may affect a committee’s recommendations to the agency, in addition to any possible effects from a committee member’s conflicts of interest. Also, like other federal agencies, FDA generally has the freedom to accept, reject, or modify its advisory committees’ recommendations. The following discussion of various meeting factors is limited to FDA’s CBER, CDER, and CDRH advisory committees. For each advisory committee meeting, the FDA staff involved may include individuals from the review division with subject matter expertise on the advisory committee’s meeting topics and the division director; the review team—the FDA staff working on a particular product being considered by the advisory committee; the advisory committee’s executive secretary; and the center’s advisory committee meeting management entity. Pre-Advisory Committee Meeting Decisions Who should be selected as standing advisory committee members? The FDA advisory committee charters—the committee’s organizational document—list the expertise a committee’s standing members should have. The review division is involved in the selection of nominees for a committee’s standing members and the expertise they represent. It has been suggested that a member’s type of expertise may affect how the member analyzes the information provided at an advisory committee meeting and what recommendation decision the member makes. Who should be selected as the advisory committee chair? Review divisions determine who is selected to serve as an advisory committee’s chair rather than committee members choosing a chair from among themselves. In consultation with the review division, the chair’s responsibilities may include helping develop the meeting’s agenda and topic questions, deciding the meeting’s voting procedure, monitoring the length of meeting presentations, and approving meeting minutes. Advisory Committee Meeting Decisions Why is an advisory committee meeting needed? Although an advisory committee may have a regular meeting schedule, the advisory committee’s review division decides when an advisory committee meeting is needed. Meetings may be held when there are controversial issues that committee advice could help the agency resolve. For example, in July 2007, two of CDER’s advisory committees met jointly to consider whether Avandia, a diabetes drug, should remain on the market given concerns that its use increased heart risks for those with diabetes. What is the advisory committee meeting’s topic and what questions are to be answered? 
The review division selects the topic, develops the issues FDA seeks advice on into topic questions for the advisory committee to address at the meeting, and compiles the background information for the committee to review. Other options for developing possible meeting topics: Subcommittee meetings: The review division may select a limited number of advisory committee members—including at least two standing members—and other consultants to serve as a subcommittee to discuss and develop an issue of FDA’s choosing. The subcommittee then provides this information to an advisory committee for its consideration. Homework assignments: FDA may also select advisory committee members and other experts to conduct homework assignments, again on issues of FDA’s choosing. A homework assignment may involve, for example, an in-depth review of an issue that may be considered as a potential topic at an upcoming advisory committee meeting or review of a product early in its development. Are temporary members needed, and if yes, who should be selected? The review division will determine whether the standing committee members able to attend the meeting have the needed expertise to address the topics to be discussed at the advisory committee meeting. If additional expertise is determined to be necessary, temporary members can be selected to serve on the committee for the meeting. The review division decides which individuals—usually from the center’s consultant pool—are selected to serve as temporary members. Each center maintains a consultant pool and selects the pool’s individual experts. Are guest speaker presentations needed, and if yes, who should be selected? The review division may determine that additional information needs to be presented at an advisory committee meeting. The division can select and invite guest speakers to make presentations and answer questions before the committee. Guest speakers may, for example, be members of other FDA advisory committees, individuals from a center’s consultant pool, federal employees from other agencies, or national or international experts from outside FDA. Guest speakers do not vote, and they do not participate in the committee’s discussions. Are patient representatives needed, and if yes, who should be selected? CBER, CDER, and CDRH cancer-related advisory committees are required to have patient representatives participate in all advisory committee meetings. For other advisory committees, the review division considers the topic to be discussed at a particular meeting when determining whether it is necessary for a patient representative to serve at an advisory committee meeting. Patient representatives usually serve on advisory committees that focus on disease-specific topics such as reviews of products and therapies for HIV/AIDS and cancer diagnosis and treatment. When participating in CBER and CDER advisory committees’ meetings, patient representatives usually vote, but when participating in CDRH’s committee meetings, they do not vote. Who should be selected to make FDA’s presentations at meetings? A review division’s role at an advisory committee meeting is to present the issues and data concerns the advisory committee will consider, and to pose questions to the committee throughout the meeting. 
For example, a review division director may introduce the committee meeting topic—for example, a new drug approval application, provide the regulatory history concerning how similar drugs were developed, describe any issues that have arisen with similar drugs, and discuss the types of clinical trials used to evaluate the previously approved drugs. The review division determines which FDA staff attend the meeting and whether they make presentations. Advisory Committee Meeting Conflict of Interest Determinations What companies and products are determined to be affected by the meeting topic? After an advisory committee meeting’s topic is selected, the review division compiles a list of the companies and products it determines are affected by the topic. The list is then reviewed by the advisory committee’s management entity, for example, CDER’s Advisors and Consultants Staff. The more affected companies or products involved, the greater the possibility that committee members may have financial interests in an affected company or product, and the greater the possibility that members may have conflicts of interests. To which advisory committee members with conflicts of interest does FDA decide to grant conflict of interest determinations? For each advisory committee meeting, the center’s advisory committee meeting management entity reviews each member’s possible conflicts of interest based on the information the member self reports on his or her FDA financial disclosure form—3410—and determines whether they will affect the individual’s ability to participate in the meeting. If there are members that are determined to have conflicts of interest, the review division may seek individuals with similar expertise, who do not have conflicts of interest, to participate in the meeting as temporary members. Advisory committee members who have conflicts of interest, but who have expertise the review division determines is needed for the committee’s meeting topic, can be given a conflict of interest determination if the standards of the applicable statutes and regulations are met. Recommendations from the Advisory Committee Meeting How does the advisory committee reach its meeting’s recommendation— by voting or reaching a consensus? The review division, which determines the meeting topic and questions, can indicate whether the committee should vote or reach a consensus on the recommendations made at the committee meeting. A committee chair may also decide that an issue should be addressed by a vote of the members. Generally, committee members vote when a meeting has a specific topic, such as a new drug approval application. There may be instances when the members reach a consensus opinion without voting. What options does FDA have concerning the advisory committee meeting’s recommendation? Following an advisory committee meeting, the center’s review division evaluates the advisory committee’s recommendation to determine whether FDA should accept or reject it, have the committee discuss the meeting topic again, or hold workshops on the meeting topic subject. FDA, like other federal agencies, generally does not have to accept its advisory committees’ recommendations. Recent Studies on FDA Advisory Committee Meeting Recommendations Recent studies have focused on whether FDA advisory committee members with conflict of interest determinations that allow them to participate in the committee meetings may influence the committee’s recommendations. 
Public Citizen’s 2006 study: The Public Citizen study on FDA conflicts of interest found a “weak relationship” between an FDA advisory committee member who had a conflict of interest and who also voted in favor of the drug at issue. The study also found that excluding advisory committee members (standing members) and voting consultants (temporary members) who had conflict of interest determinations would not have altered the overall vote result—whether favorable or unfavorable toward a drug—of any advisory committee meeting studied. National Research Center for Women & Families 2006 report: The National Research Center’s report, which included information from other studies of FDA advisory committees and their members with conflicts of interest, concluded that “it is possible to understand how a few committee members with conflicts of interest can have a disproportionate impact on approval recommendations.” The report stated that because FDA has its advisory committees meet to discuss controversial or innovative products, “the public might therefore expect that many of the drugs and devices reviewed by advisory committees would not be recommended for approval.” Using 11 randomly selected CDER and CDRH advisory committees, the report found that 79 percent of the 89 products reviewed between 1998 and 2005 were recommended for approval, and that the recommendations were usually unanimous. FDA’s 2007 study: A research firm under contract with FDA assessed the relationship of FDA advisory committee members’ expertise and their financial conflicts of interest. The study concluded that (1) standing advisory committee members with higher expertise were more likely than other standing members to have been granted conflict of interest waivers, (2) alternative members—temporary members—could be found for a specific advisory committee meeting, but many of them would likely require conflict of interest waivers, and (3) the ability to create a conflict- of-interest-free advisory committee was speculative. Appendix IV: FDA Advisory Committees for the Three Centers Analyzed Center for Biologics Evaluation and Research Center for Drug Evaluation and Research Center for Devices and Radiological Health Appendix V: Comments from the Department of Health and Human Services Appendix VI: GAO Contact and Staff Acknowledgments Acknowledgments In addition to the contact above, Martin Gahart, Assistant Director; George Bogart; Helen Desaulniers; Adrienne Griffin; Cathleen Hamann; Martha Kelly; Deitra Lee; Amanda Pusey; Daniel Ries; Opal Winebrenner; and Suzanne Worth made key contributions to this report. Related GAO Products Federal Advisory Committee Act: Issues Related to the Independence and Balance of Advisory Committees. GAO-08-611T. Washington, D.C.: April 2, 2008. Drug Safety: Further Actions Needed to Improve FDA’s Postmarket Decision-making Process. GAO-07-856T. Washington, D.C.: May 9, 2007. NIH Conflict of Interest: Recusal Policies for Senior Employees Need Clarification. GAO-07-319. Washington, D.C.: April 30, 2007. Drug Safety: FDA Needs to Further Address Shortcomings in Its Postmarket Decision-making Process. GAO-07-599T. Washington, D.C.: March 22, 2007. Food and Drug Administration: Decision Process to Deny Initial Application for Over-the-Counter Marketing of the Emergency Contraceptive Drug Plan B Was Unusual. GAO-06-109. Washington, D.C.: November 14, 2005. Federal Research: NIH and EPA Need to Improve Conflict of Interest Reviews for Research Arrangements with Private Sector Entities. 
GAO-05-191. Washington, D.C.: February 25, 2005. Federal Advisory Committees: Additional Guidance Could Help Agencies Better Ensure Independence and Balance. GAO-04-328. Washington, D.C.: April 16, 2004. University Research: Most Federal Agencies Need to Better Protect against Financial Conflicts of Interest. GAO-04-31. Washington, D.C.: November 14, 2003.
The Department of Health and Human Services' (HHS) Food and Drug Administration (FDA) has been criticized about how it recruits individuals to become members of its advisory committees and how it grants some determinations that allow members with conflicts of interest to participate in committee meetings. Advisory committee meetings can include both standing and temporary members. Temporary members only serve for a particular meeting. GAO was asked to examine FDA's advisory committee processes. GAO reported on (1) how FDA recruited individuals for membership and evaluated candidates for potential conflicts of interest, (2) barriers that were reported to recruiting qualified individuals to serve on committees, and (3) the proportion of standing and temporary members, and the frequency with which members with conflict of interest determinations participated in meetings. GAO reviewed FDA advisory committee policies and analyzed meeting records for FDA's Center for Biologics Evaluation and Research (CBER), Center for Drug Evaluation and Research (CDER), and Center for Devices and Radiological Health (CDRH). GAO also interviewed individuals familiar with FDA's committee member recruiting process. GAO did not examine the effects of changes in FDA's advisory committee processes resulting from the FDA Amendments Act of 2007 and 2007 FDA policy revisions as it was too soon to assess them. Prior to the FDA Amendments Act of 2007, FDA employed several methods to recruit candidates for advisory committees and to evaluate candidates by prescreening them for potential conflicts of interest. FDA recruited candidates by announcing vacancies in the Federal Register, distributing recruitment brochures at advisory committee meetings and national meetings, word-of-mouth or asking current advisory committee members, and posting recruitment and conflict of interest information on FDA's Web site. To evaluate advisory committee candidates for conflicts of interest, FDA reviewed the candidates' curricula vitae and usually conducted a prescreening interview. FDA employed many of the same recruitment and evaluation practices used by organizations previously identified by GAO as employing methods that could ensure an independent and balanced advisory committee. FDA faced barriers to recruiting qualified advisory committee candidates, particularly those without potential conflicts of interest, according to FDA officials and former FDA advisory committee members. However, GAO found that the agency may have been able to mitigate these barriers by expanding its outreach efforts. FDA staff and former FDA advisory committee members GAO interviewed generally agreed that individuals with the expertise FDA sought for its advisory committees were the same leading experts that industry sought to conduct research. In addition, word-of-mouth--the advisory committee member recruitment method FDA officials generally agreed was most effective--was limited in the number of candidate nominations it could generate. The FDA Amendments Act of 2007 modifies FDA's process for prescreening candidates for committee membership. Standing and temporary members were 58 and 42 percent, respectively, of the 1,218 participants in the 83 advisory committee meetings held by CBER, CDER, and CDRH in 2004 and 2006 that GAO reviewed. 
FDA may permit an advisory committee member who has a conflict of interest, or an appearance of a conflict, and whose expertise is needed to participate in an advisory committee meeting under certain circumstances by granting a conflict of interest determination. More than half of the meetings had at least one standing or temporary member with at least one conflict of interest determination. The 200 members found to have at least one conflict of interest determination represented about 16 percent of all 83 meetings' participants. The FDA Amendments Act of 2007 limits the number of certain conflict of interest determinations that FDA can grant and FDA's conflict of interest policy revisions limit the amount of the disqualifying financial interests. In its comments on a draft of this report, HHS noted that on August 4, 2008, after GAO provided the draft report for its review, FDA issued four final guidance documents concerning management of its advisory committees. HHS also provided additional clarifications about aspects of FDA's advisory committees. GAO revised the report to cite the final guidances and to incorporate HHS's clarifications where appropriate.
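The percentages in this summary follow directly from the counts reported in the body of the report. The short calculation below is only a cross-check of that arithmetic, using figures taken from the report; it is not part of GAO's analysis.

```python
# Figures taken from the report; the arithmetic is only a cross-check.
participants = 1218                  # standing and temporary members in the 83 meetings
members_with_determinations = 200    # members with at least one determination
determinations = {
    "208(b)(3) financial interest waivers": 155,
    "355(n)(4) waivers": 27,
    "208(b)(1) financial interest waivers": 9,
    "2635.502 appearance authorizations": 43,
}

share = members_with_determinations / participants
print(f"{share:.1%} of participants had a determination")  # ~16.4%, i.e., "about 16 percent"
print(sum(determinations.values()))                         # 234 total determinations
print(167 + 33)                                             # 167 members with one + 33 with two or more = 200
```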
GAO_T-HEHS-98-41
Background To qualify for home health care, a beneficiary must be confined to his or her residence (that is, “homebound”); require intermittent skilled nursing, physical therapy, or speech therapy; be under the care of a physician; and have the services furnished under a plan of care prescribed and periodically reviewed by a physician. If these conditions are met, Medicare will pay for part-time or intermittent skilled nursing; physical, occupational, and speech therapy; medical social services; and home health aide visits. Beneficiaries are not liable for any coinsurance or deductibles for these home health services, and there is no limit on the number of visits for which Medicare will pay. Medicare pays for home health care on the basis of the reasonable costs actually incurred by an agency (costs that are found to be necessary and related to patient care), up to specified limits. The BBA reduced these cost limits for reporting periods beginning on or after October 1, 1997. Home Health Cost Growth The Medicare home health benefit is one of the fastest growing components of Medicare spending. From 1989 to 1996, part A expenditures for home health increased from $2.4 billion to $17.7 billion—an increase of over 600 percent. Home health payments currently represent 13.5 percent of Medicare part A expenditures. At Medicare’s inception in 1966, the home health benefit under part A provided limited posthospital care of up to 100 visits per year after a hospitalization of at least 3 days. In addition, the services could only be provided within 1 year after the patient’s discharge and had to be for the same illness. Part B coverage of home health was limited to 100 visits per year. These restrictions under part A and part B were eliminated by the Omnibus Reconciliation Act of 1980 (ORA) (P.L. 96-499), but the change had little immediate effect on Medicare costs. Medicare’s shift to prospective payment for hospital care in 1983 then created conditions for the benefit to grow as patients were discharged from the hospital earlier in their recovery periods. However, HCFA’s relatively stringent interpretation of coverage and eligibility criteria held growth in check for the next few years. Then, as a result of court decisions in the late 1980s, HCFA issued guideline changes for the home health benefit that had the effect of liberalizing coverage criteria, thereby making it easier for beneficiaries to obtain home health coverage. For example, HCFA policy had been that daily skilled nursing services provided more than four times a week were excluded from coverage because such services were not part-time and intermittent. The court held that regardless of how many days per week services were required, they would be covered so long as they were part-time or intermittent. HCFA was then required to revise its coverage policy. Daily skilled nursing care is now covered for a period of up to 3 weeks. Additionally, another court decision prevented HCFA’s claims processing contractors from denying certain physician-ordered services unless the contractors could supply specific clinical evidence that indicated which particular service should not be covered. The combination of these changes has had a dramatic effect on utilization of the home health benefit in the 1990s, both in terms of the number of beneficiaries receiving services and in the extent of these services. (The appendix contains a figure that shows growth in home health expenditures in relation to the legislative and policy changes.) 
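The growth rate cited above follows directly from the reported spending figures. The short calculation below is only an illustrative cross-check of that arithmetic and is not part of the testimony; the implied annual growth rate is a derived figure, not one GAO reported.

```python
# Part A home health expenditures cited in the testimony, in billions of dollars.
spending_1989 = 2.4
spending_1996 = 17.7

pct_increase = (spending_1996 - spending_1989) / spending_1989 * 100
print(f"Increase, 1989-1996: {pct_increase:.0f} percent")  # about 638 percent, i.e., "over 600 percent"

# Implied average annual growth over the 7-year span (derived for illustration only).
annual_rate = (spending_1996 / spending_1989) ** (1 / 7) - 1
print(f"Implied average annual growth: {annual_rate:.0%} per year")  # roughly 33 percent per year
```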
For example, ORA and HCFA’s 1989 home health guideline changes have essentially transformed the home health benefit from one focused on patients needing short-term care after hospitalization to one that serves chronic, long-term care patients as well. The number of beneficiaries receiving home health care has more than doubled in recent years, from 1.7 million in 1989 to about 3.9 million in 1996. During the same period, the average number of visits to home health beneficiaries also more than doubled, from 27 to 72. This growth reflects a shift in the population served, from beneficiaries needing short-term care following a hospital stay to those receiving care for chronic conditions. Interim Changes to Cost Reimbursement To gain some measure of control over payments immediately, the BBA made some significant changes to the cost-based reimbursement system used for home health care while HCFA is developing a PPS for the longer term. Home health agency cost limits had been set separately for agencies in rural and urban areas, at 112 percent of the mean costs of freestanding agencies. Limits will now be set at 105 percent of the median costs of freestanding agencies. In addition, the BBA added a limit on the average per-beneficiary payment received during a year. This limitation is based on a blend of 75 percent of the agency’s 1994 costs per beneficiary and 25 percent of the average regional per-beneficiary costs in that year, increased for inflation in the home health market basket index since then. Hospital-based agencies have the same limits. The per-visit cost-limit provision of Medicare’s reimbursement system for home health agencies gave some incentives for providers to control their costs, and the revised per-visit and per-beneficiary limits should increase those incentives. However, for providers with per-visit costs considerably below their limits, there is little incentive to control costs, and per-visit limits do not give any incentive to control the number of visits. On the other hand, the new per-beneficiary limit should give agencies an incentive not to increase the number of visits per beneficiary above the 1994 levels used to set this limit. However, the number of visits per beneficiary had already more than doubled by 1994 from that in 1989, so the per-beneficiary limits will be based on historically high visit levels. Moreover, per-beneficiary limits give home health agencies an incentive to increase their caseloads, particularly with lighter-care cases, perhaps in some instances cases that do not even meet Medicare coverage criteria. This creates an immediate need for more extensive and effective review by HCFA of eligibility for home health coverage. Design Issues for a Home Health PPS include selecting an appropriate unit of service, providing for adjustments to reflect case complexity, and assuring that adequate data are available to set the initial payment rates and service use parameters. The primary goal of a PPS is to give providers incentives to control costs while delivering appropriate services and at the same time pay rates that are adequate for efficient providers to at least cover their costs. If a PPS is not properly designed, Medicare will not save money, cost control incentives will at best be weak, or access to and quality of care can suffer. With the altered incentives inherent in a PPS, HCFA will also need to design and implement appropriate controls to ensure that beneficiaries receive necessary services of adequate quality. Most of the specifics about the home health PPS required by the BBA were left to HCFA’s discretion. 
This delegation was appropriate because insufficient information was available for the Congress to make the choices itself. Selecting the Unit of Service Many major decisions need to be made. First, HCFA must choose a unit of service, such as a visit or episode of care, upon which to base payment. A per-visit payment is not a likely choice because it does little to alter home health agency incentives and would encourage making more, and perhaps shorter, visits to maximize revenues. An episode-of-care system is the better choice, and HCFA is looking at options for one. Designing a PPS based on an episode of care also raises issues. The episode should generally be long enough to capture the care typically furnished to patients, because this tends to strengthen efficiency incentives. A number of ways to accomplish this goal exist. For example, HCFA could choose to set a constant length of time as the episode. In 1993, to cover 82 percent of home health patients, the episode would have to have been long enough to encompass 90 visits, which, assuming four visits a week on average, would mean an episode of about 150 days. Because of the great variability across patients in the number of visits and length of treatment, this alternative places very great importance on the method used to distinguish the differences among patients served across home health agencies in order to ensure reasonable and adequate payments. A second option would be to vary the episode length by category of patient. For example, an otherwise healthy patient recovering from an injury might need only a brief episode with mainly physical therapy, while a patient with arthritis recovering from the same injury might need a longer period with perhaps more home health aide services. This option would also require a good method for classifying patients into the various patient categories and determining resource needs. A third option is to use a fixed but relatively brief period, such as 30 or 60 days, sufficient to cover the needs of the majority of patients, with subsequent periods justified by the patient’s condition at the end of each period. The effectiveness of this option would, among other things, depend on a good process for verifying and evaluating patient condition periodically and adequate resources to operate that process. Also, HCFA will need to design a utilization and quality control system to guard against decreases in visits, which could affect quality, and against home health agencies treating patients who do not qualify for benefits. This will be necessary because an episode-of-care system gives home health agencies an incentive to maximize profits by decreasing the number of visits during the episode, potentially harming quality of care. Such a system also gives agencies an incentive to increase their caseloads, perhaps with patients who do not meet Medicare’s requirements for the benefit. The effectiveness of the PPS will ultimately depend on the sound design of these systems and on devoting adequate resources to operate them. Adjusting for Case Complexity Another major decision for HCFA, closely related to the unit-of-service decision, is the selection and design of a method to adjust payments to account for the differences in the kinds of patients treated by various home health agencies, commonly called a case-mix adjuster. Without an adequate case-mix adjuster, agencies that serve populations that on average require less care would be overcompensated. Also, agencies would have an incentive to seek out patients expected to need a low level of care and shun those needing a high level of care, thus possibly affecting access to care. 
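To make the overcompensation concern concrete, the sketch below compares a flat episode payment with a case-mix-adjusted payment for two hypothetical agencies, one serving mostly light-care patients and one serving mostly heavy-care patients. All rates, weights, caseloads, and costs are invented for illustration; they do not represent HCFA's eventual PPS design, which had not been determined at the time of this testimony.

```python
# Hypothetical episode costs and payment rates, for illustration only.
flat_rate = 2500                                   # same payment per episode regardless of patient needs
base_rate = 2500                                   # adjusted payment = base_rate * case-mix weight
case_mix_weights = {"light": 0.6, "heavy": 1.6}    # invented relative-resource weights
avg_cost = {"light": 1500, "heavy": 4000}          # invented average cost per episode

agencies = {
    "Agency A (mostly light-care)": {"light": 90, "heavy": 10},
    "Agency B (mostly heavy-care)": {"light": 10, "heavy": 90},
}

for name, caseload in agencies.items():
    episodes = sum(caseload.values())
    cost = sum(n * avg_cost[k] for k, n in caseload.items())
    flat_pay = flat_rate * episodes
    adj_pay = sum(n * base_rate * case_mix_weights[k] for k, n in caseload.items())
    print(name)
    print(f"  cost {cost:>8}, flat payment {flat_pay:>8}, adjusted payment {adj_pay:>10.0f}")
```

Under the flat rate, the light-care agency is paid well above its costs while the heavy-care agency is paid well below them; an adequate case-mix adjuster is what keeps payments in line with resource needs and removes the incentive to avoid heavy-care patients.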
Currently, there is limited understanding of the need for, and content of, home health services and, at the same time, a large variation across agencies in the extent of care given to patients with the same medical conditions. HCFA is currently testing a patient classification system for use as a case-mix adjuster, and the BBA requires home health agencies to submit to HCFA the patient-related data HCFA will need to apply this system. However, it is too early to tell whether HCFA’s efforts will result in an adequate case-mix adjuster. Ensuring an Adequate Database for Calculating PPS Rates Historical data on utilization and cost of services form the basis for calculating the “normal” episode of care and the cost of services, so it is important that those data are adequate for that purpose. Our work and that of the HHS Inspector General have found examples of questionable costs in home health agency cost reports. For example, we reported in August 1995 on a number of problems with contractor payments for medical supplies such as surgical dressings, which indicate that excessive costs are being included and not removed from home health agency cost reports. Also, the Inspector General found substantial amounts of unallowable costs in the cost reports of a large home health agency chain, which was convicted of fraud on the basis of these findings. Earlier this year, we suggested that it would be prudent for HCFA to audit thoroughly a projectable sample of home health agency cost reports. The results could then be used to adjust HCFA’s cost database to help ensure that unallowable costs are not included in the base for setting prospective rates. In response to a presidential directive, HCFA is planning to audit about 1,800 home health agency cost reports over the next year, about double the number that it otherwise would have audited. If these audits are thorough and the results are properly used, this effort could represent a significant step toward improving HCFA’s home health cost database. A good cost database could be a considerable aid to HCFA in calculating the initial payment rates under PPS. The utilization data present similar concerns. For example, one regional home health intermediary conducted a medical review of 80 high-dollar claims it had previously processed. The intermediary found that it should have denied 46 of them in whole or in part. Also, Operation Restore Trust, a joint effort by federal and several state agencies to identify fraud and abuse in Medicare and Medicaid, found very high rates of noncompliance with Medicare’s coverage conditions. For example, in a sample of 740 patients drawn from 43 home health agencies in Texas and 31 in Louisiana that were selected because of potential problems, some or all of the services received by 39 percent of the beneficiaries were denied. About 70 percent of the denials were because the beneficiary did not meet the homebound definition. Although these are results from agencies suspected of having problems, they illustrate that substantial amounts of noncovered care are likely to be reflected in HCFA’s home health care utilization data. Because of these problems, it would also be prudent for HCFA to conduct thorough on-site medical reviews of a projectable sample of agencies; such reviews increase the likelihood of identifying whether patients are eligible for services and would give HCFA a basis on which to adjust utilization rates for purposes of establishing a PPS. We are not aware that such a review is under way or planned. 
Safeguards Against Fraud and Abuse Still Needed A PPS for home health should enable Medicare to give agencies increased incentives to control costs and to slow the growth in program payments. A reduction in program safeguards contributed to the cost growth of the 1990s, and HCFA will need to develop a utilization and quality control program to protect against the likely incentives that agencies will have to increase caseloads unnecessarily and to diminish care, which could harm quality. Moreover, a PPS alone will not eliminate home health fraud and abuse. Continued vigilance will be needed, and the BBA gives HCFA additional tools that should help it protect the program. Reduced Program Safeguards Made the Program Vulnerable Although Medicare’s contractors reviewed a far larger share of home health claims in fiscal year 1987, the contractors’ review target was lowered by 1995 to 3.2 percent of all claims (or even, depending on available resources, to a required minimum of 1 percent). We found that a lack of adequate controls over the home health program, such as little contractor medical review and limited physician involvement, makes it nearly impossible to know whether the beneficiary receiving home care qualifies for the benefit, needs the care being delivered, or even receives the services being billed to Medicare. Also, because of the small percentage of claims selected for review, home health agencies that bill for noncovered services are less likely to be identified than was the case 10 years ago. In addition, because relatively few resources had been available for auditing end-of-year provider cost reports, HCFA has little ability to identify whether home health agencies were charging Medicare for costs unrelated to patient care or other unallowable costs. Because of the lack of adequate program controls, some of the increase in home health costs likely stemmed from abusive practices. The Health Insurance Portability and Accountability Act of 1996 (HIPAA) recently increased funding for program safeguards. However, per-claim expenditures will remain below the level in 1989, after adjusting for inflation. We project that in 2003, payment safeguard spending as authorized by HIPAA will be just over one-half of the 1989 per-claim level, after adjusting for inflation. Finally, as discussed earlier, a PPS will give home health agencies incentives to increase the number of patients they treat and to cut back on the amount of care furnished to patients in order to maximize profits. To safeguard against the new incentives of a PPS, HCFA needs to implement utilization and quality control systems specifically designed to address the PPS’s incentives. Without adequate monitoring, home health agencies that choose to do so could game the system to maximize profits or take actions that reduce quality. New Anti-Fraud-And-Abuse Provisions and Initiatives The Congress and the administration recently have taken actions to combat fraud and abuse in the provision of and payment for Medicare home health services. Through BBA, the Congress has given HCFA some new tools to improve the administration of this benefit. The administration also has recently announced a moratorium on home health agency certifications as HCFA revises the criteria for certification. BBA Provisions The BBA included several provisions that could be used to prevent untrustworthy providers from entering the Medicare home health market. For example, BBA authorizes HHS to refuse to allow individuals or entities convicted of felonies to participate in Medicare. 
Also, Medicare can exclude an entity whose former owner transfers ownership to a family or household member in anticipation of, or following, an exclusion or cause for exclusion. In addition, BBA requires entities and individuals to report to HCFA their taxpayer identification numbers and the Social Security numbers of owners and managing employees. This should make easier the tracking of individuals who have been sanctioned under the Social Security Act or convicted of crimes, if they move from one provider to another. Another provision of the BBA that may prove useful in fighting fraud and abuse is the requirement that any entity seeking to be certified as a home health agency must post a surety bond of at least $50,000. This should provide at least minimal assurance that the entity has some financial and business capability. Finally, BBA authorizes HCFA to establish normative guidelines for the frequency and duration of home health services and to deny payment in cases exceeding those guidelines. One area where changes could help to control abuse in home health not directly addressed by the BBA is the survey and certification of agencies for participation in Medicare. State health departments under contract with HCFA visit agencies that wish to participate in Medicare to assess whether they meet the program’s conditions of participation—a set of 12 criteria covering such things as nursing services, agency organization and governance, and medical records—thought to be indicative of an agency’s ability to provide quality care. When Medicare was set up, it was not done with abusive billers and defrauders in mind. Rather, Medicare’s claims system assumes that, for the most part, providers submit proper claims for services actually rendered that are medically necessary and meet Medicare requirements. For home health care, the home health agency usually develops the plan of care and is responsible for monitoring the care provided and ensuring that care is necessary and of adequate quality. In other words, the agency is responsible for managing the care it furnishes. While these functions are subject to review by Medicare’s regional home health intermediaries, only a small portion of claims (about 1 percent) are reviewed, and most of those are paper reviews of the agency’s records. Early this year, HCFA proposed regulations to modify the home health conditions of participation and their underlying standards. The modifications would change the emphasis of the survey and certification process from an assessment of whether an agency’s internal processes are capable of ensuring quality of care toward an assessment that includes some of the outcomes of the care actually furnished. HCFA believes this change in emphasis will provide a better basis upon which to judge quality of care. HCFA is currently considering the comments received on the proposed revisions in preparation for finalizing them, but it does not yet have a firm date for their issuance. We believe that the survey and certification process could be further modified so that it would also measure agencies’ compliance with their responsibilities to develop plans for, and deliver, only appropriate, necessary, covered care to beneficiaries. Such modifications could be tied to the new features that HCFA selects as it designs the home health PPS. 
For example, the case-mix adjuster might be designed to take into account the specific illnesses of the patients being treated along with other factors that affect the resources needed to care for patients, such as limitations in their ability to perform the activities of daily living. Agencies would have a financial incentive to exaggerate the extent of illness or limitations because doing so would increase payments. The survey teams might be able to evaluate whether the agency being surveyed had in fact correctly classified patients at the time the outcome information is reviewed. Use of state surveyors for such purposes would not be unprecedented because survey teams also assessed whether Medicare home health coverage criteria were met during Operation Restore Trust. As discussed previously, HCFA needs to design utilization review systems to ensure that, if home health agencies respond inappropriately to the incentives of PPS, such responses will be identified and corrected. As it designs such systems, HCFA should also consider using the survey and certification process to measure whether home health agencies meet their utilization management responsibilities. This would help to identify abusive billers of home health services while at the same time helping to ensure quality. Moratorium on New Certifications According to HCFA, the moratorium is designed to stop the admission of untrustworthy providers while HCFA strengthens its requirements for entering the program. In a September 19 memorandum, HCFA clarified the provisions of the moratorium. According to the memorandum, the moratorium applies to new home health agencies and new branches of existing agencies. It will last until the requirements to strengthen the home health benefit have been put in place, which HCFA officials estimate to be in 6 months. No new federal or state surveys are to be scheduled or conducted for the purpose of certifying new home health agencies; those surveys in progress but not completed when the moratorium was announced are to be terminated; and previously scheduled surveys for new certifications are to be canceled. HCFA will, however, enter into new home health agency provider agreements if the new agency has completed the initial survey successfully, meaning that the agency has complied with Medicare’s conditions of participation and has satisfied all other provider agreement requirements. HCFA said it would make rare exceptions to the certification moratorium if a home health agency provides compelling evidence demonstrating that the agency will operate in an underserved area that has no access to home care. According to a HCFA official, several actions are planned during the moratorium. HHS is expected to implement the program safeguards mandated by the BBA, such as requiring home health agencies to post at least a $50,000 surety bond before they are certified and promulgating a rule requiring new agencies to have enough funds on hand to operate for the first 3 to 6 months. HHS is also expected to develop new regulations requiring home health agencies to provide more ownership and other business-related information and requiring agencies to reenroll every 3 years. At this point, it is difficult to say what practical effect the moratorium will have on the home health industry or the Medicare program. 
However, the moratorium could be useful, first, in sending a signal that the administration is serious about weeding out untrustworthy providers and, second, in establishing a milestone for issuing regulatory reforms. Conclusion In designing a PPS, HCFA needs to select an appropriate unit of service and an adequate case-mix adjuster as well as remove the effects of cost report abuse and inappropriate utilization from its databases so that those problems do not result in overstatement of PPS rates. HCFA also needs to quickly implement the new tools in the BBA so that it can keep untrustworthy providers from gaining access to the program and remove those that already have access. Moreover, HCFA needs a new utilization and quality control system designed specifically to address the new incentives under PPS. This concludes my prepared remarks, and I will be happy to answer any questions you or Members of the Subcommittee may have. Medicare Home Health Expenditures, 1980-96 Medicare Home Health Agencies: Certification Process Is Ineffective in Excluding Problem Agencies (GAO/T-HEHS-97-180, July 28, 1997). Medicare: Need to Hold Home Health Agencies More Accountable for Inappropriate Billings (GAO/HEHS-97-108, June 13, 1997). Medicare Post-Acute Care: Cost Growth and Proposals to Manage It Through Prospective Payment and Other Controls (GAO/T-HEHS-97-106, Apr. 9, 1997). Medicare: Home Health Cost Growth and Administration’s Proposal for Prospective Payment (GAO/T-HEHS-97-92, Mar. 5, 1997). Medicare Post-Acute Care: Home Health and Skilled Nursing Facility Cost Growth and Proposals for Prospective Payment (GAO/T-HEHS-97-90, Mar. 4, 1997). Medicare: Home Health Utilization Expands While Program Controls Deteriorate (GAO/HEHS-96-16, Mar. 27, 1996). Medicare: Allegations Against ABC Home Health Care (GAO/OSI-95-17, July 19, 1995). Medicare: Increased Denials of Home Health Claims During 1986 and 1987 (GAO/HRD-90-14BR, Jan. 24, 1990). Medicare: Need to Strengthen Home Health Care Payment Controls and Address Unmet Needs (GAO/HRD-87-9, Dec. 2, 1986). The Elderly Should Benefit From Expanded Home Health Care but Increasing These Services Will Not Insure Cost Reductions (GAO/IPE-83-1, Dec. 7, 1982). Response to the Senate Permanent Subcommittee on Investigations’ Queries on Abuses in the Home Health Care Industry (GAO/HRD-81-84, Apr. 24, 1981). Medicare Home Health Services: A Difficult Program to Control (GAO/HRD-81-155, Sept. 25, 1981). Home Health Care Services—Tighter Fiscal Controls Needed (GAO/HRD-79-17, May 15, 1979).
GAO discussed how the Balanced Budget Act of 1997 (BBA) addressed the issues of rapid cost growth in Medicare's home health benefit, focusing on: (1) the reasons for the rapid growth of Medicare home health care costs in the 1990s; (2) the interim changes in the BBA to Medicare's current payment system; (3) issues related to implementing the BBA's requirement to establish a prospective payment system (PPS) for home health care; and (4) the status of efforts by Congress and the administration to strengthen program safeguards to combat fraud and abuse in home health services. GAO noted that: (1) changes in law and program guidelines have led to rapid growth in the number of beneficiaries using home health care and in the average number of visits per user; (2) in addition, more patients now receive home health services for longer periods of time; (3) these changes have not only resulted in accelerating costs but also marked a shift from an acute-care, short-term benefit toward a more chronic-care, longer-term benefit; (4) the recently enacted BBA included a number of provisions designed to slow the growth in home health expenditures; (5) these include tightening payment limits immediately, requiring a PPS beginning in fiscal year 2000, prohibiting certain abusive billing practices, strengthening participation requirements for home health agencies, and authorizing the Secretary of Health and Human Services to develop normative guidelines for the frequency and duration of home health services; (6) all of these provisions should help control Medicare costs; (7) however, the Health Care Financing Administration (HCFA), the agency responsible for administering Medicare, has considerable discretion in implementing the law, which, in turn, means the agency has much work to do within a limited time period; and (8) HCFA's actions, both in designing a PPS and in implementing enhanced program controls to assure that unscrupulous providers cannot readily game the system, will determine to a large extent how successful the legislation will be in curbing past abusive billing practices and slowing the rapid growth in spending for this benefit.
GAO_GAO-11-671T
USPS’s Financial Condition Continues to Deteriorate, and USPS Anticipates a Substantial Cash Shortfall This Fiscal Year USPS’s financial condition has deteriorated significantly since fiscal year 2006, and its financial outlook is grim in both the short and long term. In July 2009, we added USPS’s financial condition and outlook to our high-risk list because USPS was incurring billion-dollar deficits and the amount of debt it incurred was increasing as revenues declined and costs rose. USPS’s financial condition has been negatively affected by decreasing mail volumes as customers have increasingly shifted to electronic communications and payment alternatives, a trend that is expected to continue. USPS reported that total mail volume decreased 3 percent in the second quarter of fiscal year 2011, while First-Class Mail declined by 7.6 percent compared with the same period last year, negatively affecting revenue as First-Class Mail is USPS’s most profitable mail. Halfway through fiscal year 2011, USPS reported a net loss of $2.6 billion. USPS has reported achieving some cost savings in the last 5 years—for example, it eliminated about 137,000 full- and part-time positions. However, USPS has had difficulty reducing its compensation and benefits costs and has struggled to optimize its workforce and its retail, mail processing, and delivery networks to reflect declining mail volume. USPS has relied increasingly on debt to fund its operations and has increased its net borrowing by nearly $12 billion over the last 5 years. USPS recently reported that its financial performance for the first 6 months of fiscal year 2011 was worse than expected and that it will not only reach its $15 billion statutory debt limit by the end of the fiscal year but also face a substantial cash shortfall that will leave it unable to pay all of its financial obligations. Specifically, USPS said that absent legislative change it will be forced to default on payments to the federal government, including a $5.5 billion pre-funding payment for retiree health benefits due on September 30, 2011. While USPS’s financial condition continues to deteriorate, we and USPS have presented options to improve the agency’s financial condition. Specifically, we have reported that Congress and USPS need to reach agreement on a package of actions to restore USPS’s financial viability, which will enable USPS to align its costs with revenues, manage its growing debt, and generate sufficient funding for capital investment. Proposed legislation, including S. 353 and draft legislation expected to be introduced by Senator Carper, provides a starting point for considering key issues where congressional decisions are needed to help USPS undertake needed reforms. As we have previously reported, to address USPS’s viability in the short term, Congress should consider modifying the funding requirements for USPS’s retiree health benefits in a fiscally responsible manner. For long-term stability, Congress should address constraints and legal restrictions, such as those related to closing facilities, so that USPS can take more aggressive action to reduce costs. Action is urgently needed as mail delivery is a vital part of this nation’s economy. The USPS Postmaster General has also presented strategies for improving USPS’s financial viability, recently stating that the agency’s focus should be on its core function of delivery, growing the package business, and aggressively controlling costs and consolidating postal networks to increase efficiency.
Clearly, USPS’s delivery fleet is a vital component of a strategy focused on delivery. Delivery Fleet Primarily Consists of Aging Long-Life Vehicles and Alternative Fuel Vehicles Acquired to Meet Requirements, Which Have Presented Cost and Infrastructure Challenges As shown in figure 1, there are three principal components of USPS’s delivery fleet: about 141,000 “long-life vehicles” (LLV)—custom-built, right-hand-drive, light-duty trucks with aluminum bodies, 16 to 23 years old, that are approaching the end of their expected 24-year operational lives; about 21,000 flex-fuel vehicles (FFV), also custom-built with right-hand drive, 9 and 10 years old, that are approaching the mid-point of their expected 24-year operational lives; and about 22,000 commercially available, left-hand-drive minivans that range in age from 2 to 13 years and have an expected operational life of 10 years. According to USPS officials, right-hand-drive vehicles are necessary for curbline delivery. In addition, USPS officials told us that the LLVs’ and FFVs’ standardized design minimizes training requirements, increases operational flexibility, and facilitates partnerships with parts suppliers. Moreover, LLVs and FFVs were made to withstand harsh operating conditions, resulting from an average of about 500 stops and starts per delivery route per day. As a result, the LLVs and FFVs are expected to last more than twice as long as the minivans, which were not built to withstand these operating conditions. USPS is subject to certain legislative requirements governing the federal fleet. For example, under the Energy Policy Act of 1992 (EPAct 1992), 75 percent of the light-duty vehicles that USPS acquires must be capable of using an alternative fuel such as ethanol, natural gas, propane, biodiesel, electricity, or hydrogen. Since 2000, USPS has consistently purchased delivery vehicles that can operate on gasoline or a mixture of gasoline and 85 percent ethanol (E85) to satisfy this requirement. These vehicles are known as dual-fueled vehicles. USPS officials stated that E85-capable vehicles were chosen because they were the least costly option for meeting federal fleet acquisition requirements. In addition, officials expected that E85 eventually would be widely available throughout the United States. However, according to Department of Energy (DOE) data, as of December 2009, E85 was not available at 99 percent of U.S. fueling stations. Subsequent legislation required that alternative fuel be used in all dual-fueled vehicles unless they have received a waiver from DOE. Because of E85’s limited availability, USPS has sought and obtained annual waivers from DOE—for example, in fiscal year 2010, about 54 percent of its E85-capable vehicles received waivers permitting them to be operated exclusively on gasoline. The remaining 46 percent of its E85-capable vehicles were expected to operate exclusively on E85. However, USPS officials acknowledged that USPS does not always fuel these vehicles with E85 because using E85 increases operational costs. Apart from its experiences with E85-capable vehicles, USPS has a variety of limited experiences with other types of alternative fuel delivery vehicles. Collectively, these vehicles accounted for about 2 percent (3,490 vehicles) of its delivery fleet as of September 30, 2010, as shown in table 1.
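The 75 percent acquisition requirement discussed above comes down to a simple ratio over a year's covered light-duty vehicle purchases. The short sketch below is illustrative only; the function name and the acquisition counts are hypothetical and are not drawn from USPS, DOE, or the statute itself.

```python
# Illustrative only: the EPAct 1992 rule described above is a ratio test over a
# year's covered light-duty acquisitions. The counts below are hypothetical.

EPACT_MIN_AFV_SHARE = 0.75  # at least 75 percent must be alternative-fuel capable

def meets_epact_acquisition_rule(afv_acquisitions: int, total_light_duty_acquisitions: int) -> bool:
    """Return True if alternative-fuel-capable acquisitions meet the 75 percent floor."""
    if total_light_duty_acquisitions == 0:
        return True  # nothing acquired this year, so nothing to test
    return afv_acquisitions / total_light_duty_acquisitions >= EPACT_MIN_AFV_SHARE

# Hypothetical example: 1,600 E85-capable vehicles out of 2,000 light-duty purchases.
print(meets_epact_acquisition_rule(1_600, 2_000))  # True (80 percent)
```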
According to USPS officials, to date, USPS has not invested more heavily in alternative technologies in part because alternative fuel vehicles likely would result in higher estimated lifecycle costs than gasoline-fueled vehicles. This is largely because any potential fuel savings from alternative fuel vehicles would be unlikely to offset generally higher acquisition costs over the vehicles’ operating lives, given that USPS’s delivery vehicles on average travel about 17 miles per day and its LLVs use the equivalent of about 2 gallons of gasoline per day. In addition, USPS officials told us that the limited availability of alternative fuels and the high costs of installing fueling infrastructure—such as on-site charging stations—have made it difficult to elect to invest in or operate these vehicles. Finally, they noted that USPS has experienced problems obtaining technological support and parts for its alternative fuel vehicles. USPS’s Approach for Addressing Its Delivery Fleet Needs Has Financial and Environmental Trade-offs USPS’s current approach is to sustain operations of its delivery fleet—through continued maintenance—for the next several years, while planning how to address its longer-term delivery fleet needs. Under this approach, USPS anticipates purchasing limited numbers of new, commercially available minivans. According to USPS officials, this approach was adopted in December 2005 after senior management and a Board of Governors subcommittee decided not to initiate a major fleet replacement or refurbishment. At that time, USPS estimated that it would cost $5 billion to replace about 175,000 vehicles. Planning and executing a custom-built vehicle acquisition would take 5 to 6 years from initially identifying the vehicles’ specifications and negotiating with manufacturers through testing and deployment, according to USPS officials. USPS also elected not to refurbish its fleet, another option considered. According to a USPS contractor, in 2005, the agency could have delayed purchasing new vehicles for at least 15 years if it had refurbished its LLVs and FFVs (i.e., replaced nearly all parts subject to the effects of wear and aging) over a 10-year period—at a cost in 2005 of about $20,000 per vehicle—or a total of about $3.5 billion, assuming that 175,000 vehicles were refurbished. USPS officials said the agency chose to maintain its current delivery fleet rather than make a major capital investment given pending operational and financial developments and uncertainty about evolving vehicle technologies. We found that USPS’s maintenance program and well-established parts supply network have enabled it to maintain its current delivery fleet while avoiding the capital costs of a major vehicle replacement or refurbishment. The USPS Office of Inspector General recently reported that this approach is operationally viable and generally cost-effective, given USPS’s financial circumstances. Our analysis of a custom query of USPS’s vehicle database found that delivery vehicles’ direct maintenance costs averaged about $2,450 per vehicle in fiscal year 2007 and just under $2,600 per vehicle in fiscal year 2010 (in constant 2010 dollars). However, these direct maintenance costs are understated, in part because, according to USPS data, about 6 percent of total maintenance costs—all due to maintenance performed by contractors—were not entered into its database. USPS’s approach has trade-offs, including relatively high costs to maintain some delivery vehicles.
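The lifecycle-cost reasoning above turns on very low daily mileage: at roughly 2 gallons of gasoline per vehicle per day, the pool of fuel spending that any alternative technology could save is small. The back-of-the-envelope sketch below illustrates that point. Only the 2-gallon figure comes from this testimony; the delivery days, fuel price, savings fraction, and acquisition premium are assumptions chosen for illustration, not USPS estimates.

```python
# Back-of-the-envelope sketch, not USPS's lifecycle cost model. Only the roughly
# 2 gallons of gasoline per delivery vehicle per day comes from the testimony;
# every other figure below is an assumption chosen for illustration.

GALLONS_PER_DAY = 2.0            # cited approximate LLV fuel use
DELIVERY_DAYS_PER_YEAR = 300     # assumed
FUEL_PRICE_PER_GALLON = 3.50     # assumed
FUEL_COST_REDUCTION = 0.40       # assumed: alternative fuel cuts fuel spending 40 percent
ACQUISITION_PREMIUM = 15_000     # assumed extra purchase cost per alternative fuel vehicle

annual_fuel_cost = GALLONS_PER_DAY * DELIVERY_DAYS_PER_YEAR * FUEL_PRICE_PER_GALLON
annual_savings = annual_fuel_cost * FUEL_COST_REDUCTION
payback_years = ACQUISITION_PREMIUM / annual_savings

print(f"Annual fuel cost per vehicle: ${annual_fuel_cost:,.0f}")   # about $2,100
print(f"Annual fuel savings:          ${annual_savings:,.0f}")     # about $840
print(f"Simple payback period:        {payback_years:.0f} years")  # about 18 years
```

Under these assumptions the simple payback period approaches the vehicles' 24-year service life, which is consistent with the officials' view that fuel savings alone are unlikely to justify a costlier vehicle at such low mileage.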
Our analysis showed that while about 77 percent of its delivery vehicles incurred less than $3,500 in direct annual maintenance costs in fiscal year 2010, about 3 percent (5,349) of these vehicles required more than $7,000—and 662 vehicles required more than $10,500—in direct annual maintenance costs, or over one-third of the $31,000 per vehicle replacement cost USPS currently estimates. USPS officials stated that in most cases, they repair an LLV or FFV rather than replace it with a minivan because of the continuing need for right-hand-drive vehicles. One reason that some vehicles are incurring high direct maintenance costs is that USPS has replaced—at a minimum—about 4,500 LLV frames in fiscal years 2008 through 2010 because of severe corrosion, at a cost of about $5,000 each. None of the fleet managers for FedEx Express, United Parcel Service, or other companies we spoke with have replaced their vehicles’ frames, and some suggested that the need to do so is a key indication that it is time to replace—not repair—a vehicle. Another trade-off of its current strategy is that USPS is increasingly incurring costs for unscheduled maintenance because of breakdowns. USPS’s goal is to ensure that no more than 20 percent of its total annual maintenance costs are for unscheduled maintenance. However, in fiscal year 2010, at least 31 percent of its vehicle maintenance costs were for unscheduled maintenance, 11 percentage points over its 20 percent goal. Unscheduled maintenance can result in delays in mail delivery and in additional operational costs, such as overtime expenses. USPS employees at a majority of the eight vehicle maintenance facilities and some post offices we visited told us that they believe delivery vehicles can continue to deliver mail without major operational interruptions for at least several more years. At the same time, we identified some instances of maintenance problems during our site visits (our report being released today contains photographs and further discussion of these problems). For example, officials at a Minnesota vehicle maintenance facility told us that they are not following USPS’s requirements for replacing frames whose thickness in key spots indicates weakness because they do not have the resources to do so. Instead, they said, facility personnel replace frames only when the frames have one or more holes through the metal. In addition, when we visited a vehicle maintenance facility in New York state, technicians were replacing two severely corroded LLV frames with similar holes. The manager of this facility informed us that frames in this condition should have been replaced during a previous preventive maintenance inspection. As discussed, USPS’s financial condition has declined substantially, and although USPS issued a 10-year action plan in March 2010 for improving its financial viability, the plan did not address its fleet of delivery vehicles. USPS has not analyzed how operational changes proposed in its 10-year plan, including a potential shift in delivery from 6 to 5 days a week, would affect its delivery fleet needs, nor has it examined the consequences of its decision to delay the fleet’s replacement or refurbishment. In addition, it has not developed a fleet financing strategy. During our review, USPS officials told us that the agency is in the early stages of developing a proposal for addressing its delivery fleet needs.
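The repair-or-replace comparison and the unscheduled-maintenance goal above are straightforward ratios. The minimal sketch below simply reproduces that arithmetic using the figures cited in this testimony; nothing new is introduced.

```python
# Arithmetic behind the figures cited above; all inputs come from the testimony.

replacement_cost = 31_000   # USPS's current per-vehicle replacement estimate
high_maintenance = 10_500   # annual direct maintenance incurred by 662 vehicles
unscheduled_share = 0.31    # at least 31 percent of FY2010 maintenance costs
unscheduled_goal = 0.20     # USPS goal: no more than 20 percent unscheduled

print(f"High maintenance as share of replacement cost: {high_maintenance / replacement_cost:.0%}")
# -> 34 percent, i.e., over one-third of the $31,000 replacement cost

print(f"Percentage points over the unscheduled goal: {100 * (unscheduled_share - unscheduled_goal):.0f}")
# -> 11 percentage points
```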
These officials stated that the proposal will likely explore alternatives, including maintaining the current fleet, refurbishing the LLVs and FFVs, or, possibly, undertaking a major acquisition of new vehicles. Furthermore, USPS officials stated that the proposal will discuss strategies for incorporating additional alternative fuel capabilities into its fleet. USPS expects to present its proposal to its Capital Investment Committee later this fiscal year. USPS officials said that the agency intends to examine ways to comply with EPAct 1992’s acquisition requirements in its next large-scale acquisition of delivery vehicles, but noted that life-cycle costs are significantly higher for nearly all currently available alternative fuel vehicles than for gasoline-powered vehicles. Consequently, these officials told us a large-scale acquisition of alternative fuel vehicles (other than E85-capable vehicles) is not likely to be financially viable. USPS officials stated that, in their view, the best way to meet national sustainability requirements for reduced emissions without incurring significant costs may be to invest in highly fuel-efficient gasoline-powered vehicles. Such an outcome could be possible given increased legislative flexibility in the definition of what constitutes an alternative fuel vehicle. Specifically, as a result of the National Defense Authorization Act of 2008, any vehicle determined by the Environmental Protection Agency (EPA) to be a low-greenhouse-gas-emitting vehicle in locations that qualify for a DOE waiver would be considered an alternative fuel vehicle. However, because EPA evaluates only commercially available vehicles, at present, there are no low-greenhouse-gas-emitting right-hand-drive vehicles available that have been determined to meet EPAct 1992’s fleet acquisition requirements for light-duty vehicles. Consequently, if USPS decides to pursue such a vehicle in its next acquisition of custom-built delivery vehicles, it would need to work with vehicle manufacturers, EPA, and DOE. Without Significant Improvement in USPS’s Financial Condition, There Are No Clear Options to Fund a Major Vehicle Replacement USPS’s financial condition poses a significant barrier to its ability to fund a major acquisition of its delivery fleet. Recently, USPS estimated that it would cost about $5.8 billion to replace about 185,000 delivery vehicles with new gasoline-powered custom-built vehicles, at about $31,000 per vehicle (in 2011 dollars). Further, officials from USPS, DOE, and an environmental organization, as well as operators of private fleets, see little potential to finance a fleet replacement through grants or partnerships. A primary barrier to a joint procurement is USPS’s need for customized, right-hand-drive delivery vehicles (its competitors typically use larger vehicles that are not right-hand-drive). USPS and DOE officials also saw little likelihood that USPS could help finance a major delivery fleet acquisition through an energy savings performance contract, in which a federal agency enters into a long-term contract with a private energy company and shares energy-related cost savings. Given the low annual mileage of USPS’s delivery fleet, USPS and DOE officials stated that it is unlikely that the fuel savings generated from a more efficient fleet (whether consisting of gasoline-only vehicles or alternative fuel vehicles) would be sufficient, compared with the acquisition cost of the vehicles, to interest a private investor.
If Congress and USPS reach agreement on a package of actions to move USPS toward financial viability, depending on the specific actions adopted, USPS’s follow-up, and the results, such an agreement could enhance USPS’s ability to invest in new delivery vehicles. While USPS’s efforts to maintain its current delivery fleet have worked thus far, the time soon will come when the cost and operational consequences of this approach will not allow further delays. When that time comes, USPS will need to know how it can best comply with federal requirements for acquiring alternative fuel vehicles while also meeting its operational requirements. However, until USPS defines its strategy for a major capital investment for its delivery vehicles, neither USPS nor Congress has sufficient information to fully consider its options. Consequently, USPS must develop a comprehensive strategy for dealing with this inevitability. In the report that this testimony is based on, we recommend that USPS develop a strategy and timeline for addressing its delivery fleet needs. Specifically, we recommend that this strategy address such issues as the effects of USPS’s proposed change from 6- to 5-day delivery and consolidation of its facilities, as well as the effects of continuing changes in its customers’ use of the mail on future delivery fleet requirements, along with an analysis of how it can best meet federal fleet requirements, given its budget constraints. USPS agreed with our findings and recommendation. USPS stated that it is developing a strategy to address the immediate and long-term needs of its delivery fleet, and that it plans to complete the strategy and associated timeline by the end of December 2011. Chairman Carper, Ranking Member Brown, and Members of the Subcommittee, this concludes my prepared statement. I would be pleased to answer any questions that you have. Contacts and Staff Acknowledgments For further information about this statement, please contact Phillip Herr at (202) 512-2834 or [email protected]. Individuals who made key contributions to this statement include Kathleen Turner (Assistant Director), Teresa Anderson, Joshua Bartzen, Bess Eisenstadt, Laura Erion, Alexander Lawrence, Margaret McDavid, Joshua Ormond, Robert Owens, Matthew Rosenberg, Kelly Rubin, Karla Springer, Crystal Wesco, and Alwynne Wilbur.
The United States Postal Service (USPS) is in financial crisis. It also has the world's largest civilian fleet, with many of its delivery vehicles reaching the end of their expected 24-year operational lives. USPS is subject to certain legislative requirements governing the federal fleet, including a requirement that 75 percent of USPS's vehicle acquisitions be capable of operating on an alternative fuel other than gasoline. This testimony addresses (1) USPS's financial condition; (2) USPS's delivery fleet profile, including how USPS has responded to alternative fuel vehicle requirements and its experiences with these vehicles; (3) trade-offs of USPS's approach for addressing its delivery fleet needs; and (4) options to fund a major acquisition of delivery vehicles. This testimony is primarily based on GAO-11-386, which is being released today. For that report, GAO analyzed USPS data, visited USPS facilities, and interviewed USPS and other officials. GAO recommended in that report that USPS should develop a strategy for addressing its delivery fleet needs that considers the effects of likely operational changes, legislative fleet requirements, and other factors. USPS agreed with the recommendation. For this testimony, GAO also drew upon past and ongoing work on USPS's financial condition and updated USPS financial information. USPS's financial condition continues to deteriorate. For the first 6 months of fiscal year 2011, USPS reported a net loss of $2.6 billion--worse than it expected--and that, absent legislative change, it will have to default on payments to the government, including a $5.5 billion payment for its retiree health benefits. GAO has reported that Congress and USPS need to reach agreement on a package of actions to move USPS toward financial viability. USPS's delivery fleet is largely composed of custom-built, right-hand-drive vehicles designed to last for 24 years, including about 141,000 gasoline-powered vehicles (16 to 23 years old) and 21,000 flex-fuel vehicles capable of running on gasoline or 85-percent ethanol (E85) (about 10 years old). Its flex-fuel vehicles and many of its 22,000 left-hand-drive minivans, which are also capable of running on E85, were purchased to comply with the 75 percent acquisition requirement for alternative fuel vehicles. Delivery vehicles travel about 17 miles and use the equivalent of about 2 gallons of gasoline on average per day. USPS has a variety of limited experiences with other alternative fuel vehicles, such as compressed natural gas and plug-in electric vehicles, most of which have higher life-cycle costs than gasoline vehicles. USPS's approach for addressing its delivery fleet needs is to maintain its current fleet until it determines how to address its longer term needs. USPS has incurred small increases in direct maintenance costs over the last 4 years, which were about $2,600 per vehicle in fiscal year 2010. However, it is increasingly incurring costs for unscheduled maintenance because of breakdowns, which can disrupt operations and increase costs. In fiscal year 2010, at least 31 percent of USPS's vehicle maintenance costs were for unscheduled maintenance, 11 percentage points over USPS's 20 percent goal. USPS's financial challenges limit options to fund a major delivery vehicle replacement or refurbishment, estimated to cost $5.8 billion and (in 2005) $3.5 billion, respectively. USPS and other federal and nonfederal officials see little potential to finance a fleet replacement through grants or partnerships. 
If Congress and USPS reach agreement on a package of actions to move USPS toward financial viability, such an agreement could potentially enhance USPS's ability to invest in new delivery vehicles.
GAO_GAO-04-147
Background To better focus its munitions cleanup activities under the Defense Environmental Restoration Program, DOD established the Military Munitions Response program in September 2001. The objectives of the program include compiling a comprehensive inventory of military munitions sites, developing a prioritization protocol for sequencing work at these sites, and establishing program goals and performance measures to evaluate progress. In December 2001, shortly after DOD established the program, the Congress passed the National Defense Authorization Act for Fiscal Year 2002, which, among other things, required DOD to develop an initial inventory of sites that are known or suspected to contain military munitions by May 31, 2003, and to provide annual updates thereafter. DOD provides these updates as part of its Defense Environmental Restoration Program Annual Report to Congress. To clean up potentially contaminated sites, DOD generally follows the process established for cleanup actions under CERCLA, which includes the following phases and activities:
Preliminary Assessment—Determine whether a potential military munitions hazard is present and whether further action is needed.
Site Investigation—Inspect the site and search historical records to confirm the presence, extent, and source(s) of hazards.
Remedial Investigation/Feasibility Study or Engineering Evaluation/Cost Analysis—Determine the nature and extent of contamination; determine whether cleanup action is needed and, if so, select alternative cleanup approaches. These could include removing the military munitions, limiting public contact with the site through signs and fences, or determining that no further action is warranted.
Remedial Design/Remedial Action—Design the remedy and perform the cleanup or other response.
Long-Term Monitoring—Periodically review the remedy in place to ensure its continued effectiveness, including checking for unexploded ordnance and public education.
For sites thought to be formerly used defense sites, the Corps also performs an initial evaluation prior to the process above. In this initial evaluation, called a preliminary assessment of eligibility, the Corps determines if the property is a formerly used defense site. The Corps makes this determination based on whether there are records showing that DOD formerly owned, leased, possessed, operated, or otherwise controlled the property and whether hazards from DOD’s use are potentially present. If eligible, the site then follows the CERCLA assessment and cleanup process discussed earlier. When all of these steps have been completed for a given site and long-term monitoring is under way, or it has been determined that no cleanup action is needed, the services and the Corps consider the site to be “response complete.” DOD Has Made Limited Progress in Its Program to Identify, Assess, and Clean Up Potentially Contaminated Sites While DOD has identified 2,307 potentially contaminated sites as of September 2002, the department continues to identify additional sites, and it is not likely to have a firm inventory for several years (see table 1 for the distribution of these sites by service). Of the identified sites, DOD determined that 362 sites require no further study or cleanup action because it found little or no evidence of military munitions. For 1,387 sites, DOD either has not begun or not completed its initial evaluation, or has determined that further study is needed. 
DOD has completed an assessment of 558 sites, finding that 475 of these required no cleanup action. The remaining 83 sites require some cleanup action, of which DOD has completed 23. DOD had identified 2,307 sites potentially contaminated with military munitions, as of September 30, 2002, and it continues to identify additional sites. (Fig. 1 shows the distribution of these sites by state.) DOD officials acknowledge that they will not have a firm inventory for several years. For example, as of September 30, 2002, the Army had not completed a detailed inventory of closed ranges at 86 percent of active installations; the 105 sites identified by the Army represented sites on only 14 percent of the Army’s installations. The Army is working to identify sites on the remaining installations and plans to have 40 percent of its installations accounted for by the next Defense Environmental Restoration Program Annual Report to Congress in spring 2004. Similarly, the Corps recently identified 75 additional sites to be included in the inventory as a result of its effort to reevaluate sites previously determined not to need further action after the initial evaluation. Because not all of the sites have been identified, DOD has only a preliminary idea of the extent of cleanup that will be needed. To help complete the identification process, DOD has developed a Web site that stakeholders, such as states, tribes, and federal regulators, can use to suggest additions and revisions to the inventory. DOD plans to update the inventory in its future Defense Environmental Restoration Program Annual Report to Congress using, in part, the information collected from this Web site. Of the 2,307 sites identified, DOD has determined, based on an initial evaluation, that 362 do not require any further DOD action (see fig. 2). However, these 362 sites are formerly used defense sites, and the Corps’ evaluation of these sites was less comprehensive than other evaluations conducted by DOD under the CERCLA process. In making its determinations, the Corps conducted a preliminary assessment of eligibility and determined that the potential for military munitions hazard was not present. As a result of this determination, the sites were not evaluated further. The Corps is in the process of reviewing these determinations with local stakeholders to ensure that there was a sound basis for the original determination. It has recently decided that some of these sites need to be reassessed to determine if cleanup is needed. Of the 1,945 sites that required further action, DOD has either not begun or has not completed its study, or has determined that further study is needed, for 1,387 sites (see fig. 3). For example, 241 Air Force and 105 Army sites at closed ranges on active installations have not been evaluated. For other sites, primarily formerly used defense sites, DOD has completed its initial evaluation and determined that further investigation is needed. DOD has completed its assessment of 558 sites, nearly all of which are ranges on formerly used defense sites or closing installations, and determined that no cleanup action was needed for 475; the remaining 83 sites required some level of cleanup action. Of the 83 sites that required cleanup action, 60 have cleanup action planned or under way and 23 are complete. Actions taken at these 23 sites have been varied and include surface and subsurface removal of munitions, and institutional controls, such as the posting of warning signs or educational programs. 
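The site counts reported above fit together as a simple tally. The short check below reproduces that accounting; every figure comes from DOD's September 30, 2002, inventory as described in this report.

```python
# Consistency check on the site counts cited above (all figures from the report).

total_sites = 2_307
no_further_action_after_initial_eval = 362
study_not_begun_or_incomplete = 1_387
assessment_complete = 558

no_cleanup_needed = 475
cleanup_required = 83
cleanup_complete = 23
cleanup_planned_or_underway = 60

assert no_further_action_after_initial_eval + study_not_begun_or_incomplete + assessment_complete == total_sites
assert no_cleanup_needed + cleanup_required == assessment_complete
assert cleanup_complete + cleanup_planned_or_underway == cleanup_required
assert total_sites - no_further_action_after_initial_eval == 1_945  # sites requiring further action

print("Site tallies are internally consistent.")
```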
See figure 4 for examples of cleanup actions at Military Munitions Response program sites. DOD Does Not Have a Complete and Viable Plan for Assessing and Cleaning Up Potentially Contaminated Sites In DOD’s Fiscal Year 2002 Defense Environmental Restoration Program Annual Report to Congress, DOD identified several elements integral to the success of the Military Munitions Response program: compiling a comprehensive inventory of sites; developing a new procedure to assess risk and prioritize sites; ensuring proper funding for accurate planning and program execution; and establishing program goals and performance measures. While DOD has established the basic framework to address these elements, DOD’s plan is lacking in three key respects. First, essential data for DOD’s plan may take years to develop. Second, DOD’s plan is contingent upon preliminary cost estimates that may change significantly and a reallocation of funds that may not be available. Finally, DOD’s plan lacks specific goals and performance measures to track progress. Essential Data for DOD’s Plan May Take Years to Develop DOD’s inventory of potentially contaminated sites serves as the basis for other elements of its plan, yet this inventory is incomplete. DOD’s inventory of 2,307 sites includes only those identified through September 30, 2002. As previously discussed, according to DOD officials, this inventory is not final; and DOD has not set a deadline to complete it. According to DOD, most of the ranges on formerly used defense sites and on military installations that are being closed have been identified and are being assessed or cleanup action is under way. The ranges yet to be identified are primarily located on active installations. For example, the Army, as of September 30, 2002, had completed a detailed inventory of potentially contaminated sites on only 14 percent of its active installations. Because the inventory serves as the basis for other elements of the plan, such as budget development and establishing program goals, most sites must first be identified in order for DOD to have a reasonable picture of the magnitude of the challenge ahead and to plan accordingly. Furthermore, DOD intends to use a new procedure to reassess the relative risk and priority for 1,387 sites needing further study and any new sites identified as part of the continuing inventory effort, but DOD is not scheduled to complete these reassessments until 2012. DOD recently developed this procedure for assigning each site in the inventory a priority level for cleanup action, based on the potential risk of exposure resulting from past munitions-related activities. Under this procedure, DOD plans to reevaluate the 1,387 sites for three potential hazard types: (1) explosive hazards posed by unexploded ordnance and discarded military munitions, (2) hazards associated with the effects of chemical warfare material, and (3) chronic health and environmental hazards posed by munitions constituents. Once assessed, each site’s relative risk-based priority will be the primary factor determining future cleanup order. DOD plans to require assessment of each site on the inventory for at least one of these hazard types by May 31, 2007, and for all three hazard types by May 31, 2012. Until all three hazard types are fully assessed, DOD cannot be assured that it is using its limited resources to clean up those sites that pose the greatest risk to safety, human health, and the environment. 
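DOD's prioritization protocol is described above only in outline: each site is to be rated on three hazard types, and the resulting relative risk determines the order of cleanup. The sketch below is a hypothetical illustration of that idea, not DOD's actual protocol; the site names, numeric ratings, and tie-breaking rule are invented for the example.

```python
# Hypothetical illustration of risk-based sequencing; this is not DOD's actual
# prioritization protocol, and the site names and ratings are invented.
# Hazard ratings: higher number = greater potential hazard.

sites = [
    {"name": "Site A", "explosive": 3, "chemical_warfare": 1, "constituents": 3},
    {"name": "Site B", "explosive": 1, "chemical_warfare": 0, "constituents": 1},
    {"name": "Site C", "explosive": 2, "chemical_warfare": 3, "constituents": 1},
]

def priority_key(site: dict) -> tuple:
    # Sequence by the worst of the three hazard ratings, then by total hazard.
    ratings = (site["explosive"], site["chemical_warfare"], site["constituents"])
    return (max(ratings), sum(ratings))

for site in sorted(sites, key=priority_key, reverse=True):
    print(site["name"], priority_key(site))
# Sites A and C outrank Site B, so limited funds would go to the riskier sites first.
```

A real protocol would rest on far more detailed scoring criteria for each hazard type; the point here is only that no site can be sequenced confidently until all three hazard assessments are complete.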
DOD’s Plan Relies on Preliminary Cost Estimates That Can Change Significantly and a Reallocation of Funds That May Not Be Available DOD’s plan to identify and address military munitions sites relies on preliminary cost estimates that were developed using incomplete information. The majority of the site estimates were developed using a cost-estimating tool that incorporates variables, such as the affected acreage; types, quantity, and location of munitions; and future land use. These variables can have a significant impact on cost, according to DOD. However, detailed site-specific information was not available for all sites. For example, as mentioned earlier, 105 Army and 241 Air Force sites at closed ranges on active installations have not had an initial evaluation. As a result, the Air Force used estimated, not actual, acreage figures, including assumptions regarding the amount of acreage known or suspected of containing military munitions when preparing its cost estimates. Because changes in acreage can greatly impact the final cost of site assessment and cleanup action, the estimates produced for these sites are likely to change when estimates based on more complete data or the actual cost figures are known. The following examples illustrate how cost estimates can change during the life of the cleanup as better information becomes available: Camp Maxey was a 41,128-acre Army post in Texas used from 1942 to 1945 for training infantry in live fire of weapons including pistols, rifles, machine guns, mortars, bazookas, and antitank guns. The Corps confirmed the presence of unexploded ordnance, and in 2000, estimated the cleanup cost for the land at $45 million. In DOD’s Fiscal Year 2002 Defense Environmental Restoration Program Annual Report to Congress, the estimated total cost of cleanup had grown to $130 million. A June 2003 cost estimate showed a decrease in total cost to about $73 million, but still 62 percent more than the original cost estimate in 2000. The main factors behind these shifting cost estimates, according to the project manager, were changes in the acreage requiring underground removal of ordnance and changes in the amount of ordnance found. Fort McClellan, Alabama, was among the installations recommended for closure under DOD’s base realignment and closure effort in 1995. This site had been used since the Spanish-American War (1898), including as a World War I and II training range upon which grenades, mortars, and antiaircraft guns were used. An April 2002 cost estimate prepared for one site on Fort McClellan requiring cleanup showed the anticipated cost of clearing the land of munitions as $11,390,250. A subsequent cost estimate prepared in May 2003, showed the cost of clearing this site at $22,562,200. According to the Army, the increase in estimated costs reflects a change in the final acreage recommended for clearance and the extent to which buried munitions would be searched for and removed. Moreover, until DOD and stakeholders agree upon a cleanup action, it is often difficult for them to predict the extent of the cleanup action required and cost estimates can change because of the cleanup action implemented at the site. For example, at the former Indian Rocks Range in Pinellas County, Florida, the Corps identified 178 acres that were used as an air-to-ground and antiaircraft gunnery range impact area from 1943 to 1947. Munitions used on this shoreline site included bullets, aircraft rockets, and small practice bombs. 
Much of the land had been developed, limiting the Corps’ ability to pursue the alternative of searching for and removing buried munitions. In 1995, the Corps analyzed a number of alternatives to address munitions contamination at the site and developed cost estimates for these alternatives. However, because the development was largely composed of hotels, condominiums, and single-family residences, the Corps chose the alternative of conducting a community education program. The total cost of this alternative was $21,219. If the Corps had decided to search for and remove the remaining munitions at this site, the cost could have approached $3 million, according to the prepared cost analysis. Furthermore, at an annual funding level of approximately $106 million (the average amount budgeted or spent annually from fiscal year 2002 to fiscal year 2004), cleanup at the remaining munitions sites in DOD’s current inventory could take from 75 to 330 years to complete. To reduce this timeline, DOD expects to use funds currently designated for hazardous, toxic, and radioactive waste cleanup after these cleanups are complete. However, these other cleanup efforts are not on schedule in all of the services and the Corps. For example, between fiscal years 2001 and 2002, the schedule to complete hazardous substance cleanups at formerly used defense sites slipped by more than 6 years. As a result, anticipated funds from completing hazardous substance cleanups at these sites may not become available to clean up munitions sites until 2021 or later. This delay is significant because, as of September 30, 2002, formerly used defense sites account for over 85 percent of DOD’s total anticipated costs to complete munitions cleanup, yet the Corps receives about 66 percent of the total munitions cleanup funds. Delays in the availability of anticipated funding from hazardous, toxic, and radioactive waste sites could greatly impair DOD’s ability to accurately plan for and make progress in cleaning up Military Munitions Response sites. 
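The 75-to-330-year range cited above is a straight division of estimated remaining cost by the roughly $106 million in annual funding. The sketch below reproduces that arithmetic using the $8 billion to $35 billion cost range DOD has estimated for these activities; it is a simple division, not a schedule projection.

```python
# Arithmetic behind the 75-to-330-year estimate; figures are drawn from this report.

annual_funding = 106_000_000          # average annual munitions cleanup funding, FY2002-FY2004
low_cost_estimate = 8_000_000_000     # low end of DOD's estimated cleanup cost
high_cost_estimate = 35_000_000_000   # high end of DOD's estimated cleanup cost

print(f"Years at low estimate:  {low_cost_estimate / annual_funding:.0f}")   # about 75 years
print(f"Years at high estimate: {high_cost_estimate / annual_funding:.0f}")  # about 330 years
```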
Consequently, goals and measures established in 2003 will be of limited use and may not reflect DOD’s true priorities. Moreover, according to DOD, the program goals and measures to be established by the workgroup will be agencywide, and not service-specific, although it may establish interim goals for the services and Corps. However, DOD has not yet decided what these goals will be based on, such as relative risk levels or cleanup phases. In the absence of service-specific goals, each service has implemented the program with a different level of effort. For example, the Air Force has not budgeted any funds to assess and clean up munitions sites, nor does it plan to do so through fiscal year 2004. As mentioned before, the Air Force also has not conducted initial evaluations on any of its 241 sites and has little site-specific information from which to create a reliable cost estimate. In contrast, the Army has undertaken a comprehensive inventory of ranges that will result in detailed site information, such as acreage and the types, quantity, and location of munitions, that can be used to, among other things, create more robust cost estimates. The Army has completed this comprehensive inventory on 14 percent of its installations as of September 2002, and has set a goal to complete this effort by December 2003. This uneven effort in implementing the Military Munitions Response program could continue through various program phases, such as preliminary assessments and site investigations, making it difficult for DOD to assure that each of the services and the Corps are making progress in cleaning up their potentially contaminated sites and achieving the overall goals of the program. Conclusions DOD has made limited progress in identifying, assessing, and cleaning up sites known or suspected to contain military munitions. Accomplishing this long and arduous task in a timely manner that best protects public safety, human health, and the environment will require a comprehensive approach that includes effective planning and budgeting. However, DOD lacks the data needed—such as a complete inventory, up-to-date prioritization, and reliable cost estimates—to establish a comprehensive approach. Without such an approach for identifying, assessing, and cleaning up potentially contaminated sites, DOD will be hampered in its efforts to achieve the program’s objectives. Recommendations To ensure that DOD has a comprehensive approach for identifying, assessing, and cleaning up military munitions at potentially contaminated sites, we recommend that the Secretary of Defense revise DOD’s plan to establish deadlines to complete the identification process and initial evaluations so that it knows the universe of sites that needs to be assessed, prioritized, and cleaned up; reassess the timetable proposed for completing its reevaluation of sites using the new risk assessment procedures so that it can more quickly establish the order in which sites should be assessed and cleaned up, thereby focusing on the riskiest sites first; and establish interim goals for cleanup phases for the services and Corps to target. In addition, after DOD has revised its comprehensive plan, we recommend that it work with the Congress to develop realistic budget proposals that will allow DOD to complete cleanup activities on potentially contaminated sites in a timely manner. Agency Comments We provided DOD with a draft of this report for review and comment. 
In its comments, DOD concurred with our recommendation to work with the Congress to develop realistic budget proposals that will allow it to complete cleanup activities on potentially contaminated sites in a timely manner. DOD partially concurred with our recommendation to establish deadlines to complete the identification process and initial evaluations so that it knows the universe of sites. DOD stated that the military services and the Corps have been working, and will continue to work, with stakeholders to identify additional sites and add these sites to the inventory as appropriate. DOD also stated that it believes most of the remaining sites to be identified are located on active installations still under DOD control. While we have clarified this point in the report, we note that the number of formerly used defense sites identified has increased by about 75 sites since the current inventory was completed and an unknown but possibly significant number of sites may be added as the Army completes identification of sites on the remaining 86 percent of its installations. These sites and many others still need to undergo initial evaluations. Consequently, we continue to believe that it is important for DOD to establish deadlines to complete the identification and initial evaluations for all of the sites in its inventory in order to establish a reasonable approximation of the future workload it faces. DOD also partially concurred with our recommendation to reassess the timetable proposed for completing the reevaluation of sites using the new risk assessment procedure. DOD stated that the military services and the Corps would need sufficient time and resources to complete each risk assessment. However, DOD stated that it had recently established 2010 as the goal for completing the prioritization of sites, instead of 2012, which was the original goal set forth in the proposed regulation. While we agree that this is a step in the right direction, DOD should continue to look for other opportunities to accelerate these inspections and the prioritization of sites to help ensure that resources are being targeted toward the riskiest sites first. Finally, DOD partially concurred with our recommendation to establish interim goals for cleanup phases for the services and the Corps. DOD stated that it has established interim goals of completing all preliminary assessments by 2007 and all site inspections by 2010, and that these goals apply to all military components, thereby eliminating the need for separate service-specific goals. However, DOD noted that it is working with each military service to establish additional goals and measures to gauge progress. While we are encouraged by DOD’s efforts in this area, we believe that service-specific goals and measures, as they apply to the cleanup phases, will be essential for DOD to ensure that each of the services and the Corps are making progress in cleaning up potentially contaminated sites and achieving the overall goals of the program. In addition to its written comments on our draft report, DOD also provided a number of technical comments and clarifications, which we have incorporated in this report as appropriate. DOD’s written comments appear in appendix III. As agreed with your office, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. 
At that time, we will send copies of this report to the appropriate congressional committees; the Secretary of Defense; Director, Office of Management and Budget; and other interested parties. We will also make copies available to others upon request. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staffs have any questions, please call me or Edward Zadjura at (202) 512-3841. Key contributors to this report are listed in appendix IV. Safety, Environmental, and Human Health Risks Military munitions can pose risks to public safety, human health, and the environment. In terms of the explosive hazard, unexploded ordnance poses an immediate safety risk of physical injury to those who encounter it. Military munitions may also pose a health and environmental risk because their use and disposal may release constituents that may contaminate soil, groundwater, and surface water. Ranges contaminated with military munitions, especially those located in ecologically sensitive wetlands and floodplains, may have soil, groundwater, and surface water contamination from any of the over 200 chemical munitions constituents that are associated with the ordnance and their usage. When exposed to some of these constituents, humans potentially face long-term health problems, such as cancer and damage to heart, liver, and kidneys. Of these constituents, there are 20 that are of greatest concern due to their widespread use and potential environmental impact. Table 2 contains a listing of these munitions constituents, and table 3 describes some of the potential health effects of five of them.
Table 2. Munitions constituents of greatest concern: Trinitrotoluene (TNT); 1,3-Dinitrobenzene; Nitrobenzene; 2,4-Dinitrotoluene; 2-Amino-4,6-Dinitrotoluene; 2-Nitrotoluene; 2,6-Dinitrotoluene; 4-Amino-2,6-Dinitrotoluene; 3-Nitrotoluene; Octahydro-1,3,5,7-tetranitro-1,3,5,7-tetrazocine (HMX); 2,4-Diamino-6-nitrotoluene; 4-Nitrotoluene; Hexahydro-1,3,5-trinitro-1,3,5-triazine (RDX); 2,6-Diamino-4-nitrotoluene; Methylnitrite; Perchlorate; 1,2,3-Propanetriol trinitrate (Nitroglycerine); Pentaerythritol tetranitrate (PETN); 1,3,5-Trinitrobenzene; N,2,4,6-Tetranitro-N-methylaniline (Tetryl); White Phosphorus.
While many of these constituents have been an environmental concern to the Department of Defense (DOD) for more than 20 years, the current understanding of the causes, distribution, and potential impact of constituent releases into the environment remains limited. The nature of these impacts, and whether they pose an unacceptable risk to human health and the environment, depend upon the dose, duration, and pathway of exposure, as well as the sensitivity of the exposed populations. However, the link between such constituents and any potential health effects is not always clear and continues to be studied. Additional Details on Our Scope and Methodology The objectives of our review were to evaluate (1) DOD’s progress in implementing its program to identify, assess, and clean up sites containing military munitions and (2) DOD’s plans to clean up remaining sites in the future. To evaluate DOD’s progress in identifying, assessing, and cleaning up military munitions sites, we analyzed data provided to us by DOD’s Office of the Deputy Undersecretary of Defense (Installations and Environment) Cleanup Office from its database for sites identified under the Military Munitions Response program. This information includes the status of studies or cleanup actions, as well as cost estimates. 
The data are complete as of September 30, 2002, DOD’s most recent reporting cycle, and were used to develop DOD’s Fiscal Year 2002 Defense Environmental Restoration Program Annual Report to Congress. We also analyzed additional data on the status of studies or cleanup actions provided to us by the Army Corps of Engineers (the Corps) from its database of formerly used defense sites. We assessed the reliability of relevant fields in these databases by electronically testing for obvious errors in accuracy and completeness, reviewing information about the data and the system that produced them, and interviewing agency officials knowledgeable about the data. When we found inconsistencies, we worked with DOD and military service officials to correct the inconsistencies before conducting our analyses. We determined that the data needed for our review were sufficiently reliable for the purposes of our report. We also reviewed 38 of 75 project files at seven Corps districts where, according to DOD’s database, site cleanup action is either complete or under way. (See table 4 for a listing of these districts.) We selected these districts based on the number of sites where cleanup was completed or under way and the estimated cost to complete cleanup, with some consideration given for geographic distribution. These files represented 52 percent of the 23 sites with a completed cleanup action and 50 percent of the 52 sites with a cleanup action under way. We used our file reviews to develop case examples of changes in estimated costs to complete cleanup over time and cleanup actions taken. These case examples are for illustration only. To evaluate DOD’s plans for addressing the remaining sites, we analyzed the plans, as well as the assumptions upon which those plans are based, including cost and projected completion dates. In addition, we reviewed policies and program guidance, analyzed financial data, and interviewed program managers in DOD and the military services and the Corps. We conducted our work between November 2002 and October 2003 in accordance with generally accepted government auditing standards. Comments from the Department of Defense GAO Contacts and Staff Acknowledgments In addition to those named above, Jack Burriesci, Elizabeth Erdmann, Sherry McDonald, and Matthew Reinhart made key contributions to this report. Also contributing to this report were Cynthia Norris, Rebecca Shea, and Ray Wessmiller.
Over 15 million acres in the United States are suspected of being, or known to be, contaminated with military munitions. These sites include ranges on closing military installations, closed ranges on active installations, and formerly used defense sites. Under the Defense Environmental Restoration Program, established in 1986, the Department of Defense (DOD) must identify, assess, and clean up military munitions contamination at these sites. DOD estimates these activities will cost from $8 billion to $35 billion. Because of the magnitude of DOD's cleanup effort, both in terms of cost and affected acreage, as well as the significant public safety, health, and environmental risks that military munitions may pose, the Ranking Minority Member of the House Committee on Energy and Commerce asked us to evaluate (1) DOD's progress in implementing its program to identify, assess, and clean up military munitions sites and (2) DOD's plans to clean up remaining sites in the future. DOD has made limited progress in its program to identify, assess, and clean up sites that may be contaminated with military munitions. While DOD had identified 2,307 potentially contaminated sites as of September 2002, DOD officials said that they continue to identify additional sites and are not likely to have a firm inventory for several years. Of the identified sites, DOD had initially determined that 362 sites required no further study or cleanup action because it found little or no evidence of military munitions. For 1,387 sites, DOD has either not begun or not completed its initial evaluation or has determined that further study is needed. DOD has completed its assessment of 558 sites, finding that 475 of these required no cleanup action. The remaining 83 sites required some cleanup action, of which DOD has completed 23. DOD does not yet have a complete and viable plan for cleaning up military munitions at remaining potentially contaminated sites. DOD's plan is lacking in several respects. Essential data for DOD's plan may take years to develop. Not all the potential sites have been identified, and DOD has set no deadline for doing so. Also, DOD intends to use a new procedure to assign a relative priority to the remaining 1,387 sites, but it will not complete the reassessments until 2012. Until these are done, DOD cannot be assured that it is using its limited resources to clean up the riskiest sites first. DOD's plan relies on preliminary cost estimates that can change greatly and on the reallocation of funds that may not be available. For example, the Air Force used estimated, not actual, acreage to create its cost estimates, limiting the estimates' reliability and DOD's ability to plan and budget cleanup for these sites. Also, DOD expects additional funds will become available for munitions cleanup as other DOD hazardous waste cleanup efforts are completed. However, some of these efforts are behind schedule; therefore, funds may not become available as anticipated. DOD's plan does not contain goals or measures for site assessment and cleanup. DOD recently established a working group tasked with developing agencywide program goals and performance measures, but not service-specific targets, limiting DOD's ability to ensure that the services are making progress in cleaning up the potentially contaminated sites and achieving the overall goals of the program as planned.
GAO_GAO-12-873
Background

Federal Goals for Contracting with Small Businesses

Under the Small Business Act, SBA plays an important role in ensuring that small businesses gain access to federal contracting opportunities. SBA negotiates specific agency-wide goals to ensure that the federal government collectively meets the 23 percent statutory goal for contract dollars awarded to small businesses. In addition, SBA negotiates goals for the socioeconomic categories of businesses. The current goals are: 5 percent of prime contract and subcontract dollars are to be awarded to women-owned small businesses, 5 percent of prime contract and subcontract dollars are to be awarded to small disadvantaged businesses, 3 percent of prime contract and subcontract dollars are to be awarded to service-disabled veteran-owned small businesses, and 3 percent of prime and subcontract dollars are to be awarded to HUBZone small businesses. Appendix II provides more information on the extent to which federal agencies obligated federal contract dollars to minority-owned businesses by various socioeconomic categories.

Federal Agency Contracting Assistance for Small Businesses

The federal government has established a number of programs that can assist small and small disadvantaged businesses—including those that may be minority-owned—that seek to contract with federal agencies. MBDA promotes the growth and competitiveness of minority-owned businesses of any size. MBDA's network of business centers helps clients identify federal procurement opportunities, analyze solicitations, and prepare bids and proposals. It also facilitates relationships between minority-owned businesses and federal agencies, and researches contracting trends at federal agencies. MBDA's Federal Procurement Center (FPC) provides research on federal agency contracting trends, identifies large federal contracts, and helps minority-owned businesses identify possible contracting opportunities. SBA administers programs that are targeted to small businesses and that provide assistance with federal contracting opportunities. SBA's 8(a) Business Development Program is one of the federal government's primary means of developing small businesses owned by socially and economically disadvantaged individuals. Participating businesses, which are generally referred to as 8(a) firms, are eligible to participate in the program for 9 years. Businesses receive technical assistance, mentoring, counseling, and financial assistance so that they can become competitive in the federal marketplace. Additionally, participating businesses may bid on competitive federal contracts that are open only to 8(a) firms as well as on noncompetitive federal contracts. SBA's Procurement Center Representatives (PCR) and Commercial Market Representatives (CMR) play an important role in helping ensure that small businesses gain access to contracting and subcontracting opportunities. PCRs and CMRs are the primary SBA staff who implement SBA's prime contracts and subcontracting assistance programs, which are intended to increase contracting opportunities for small businesses and help ensure that small businesses receive a fair and equitable opportunity to participate in federal prime contracts and subcontracts. PCRs also can make recommendations to agency contracting officers that proposed contracts be set aside for eligible small businesses.
In particular, a PCR's key responsibilities include reviewing potentially bundled or consolidated solicitations—those in which two or more procurement requirements previously provided or performed under separate smaller contracts are grouped into a solicitation for a single contract—and making set-aside recommendations to agency contracting officers. The OSDBUs within federal agencies advocate on behalf of small businesses. Section 15(k) of the Small Business Act describes the functions of OSDBU directors—which include implementing and executing the agency's functions and duties related to the award of contracts and subcontracts to small and small disadvantaged businesses. Other responsibilities of the OSDBU include identifying bundled contracts, potentially revising them to encourage small business participation, and facilitating small business participation in the contracts. OSDBU directors also help small businesses obtain payments from agencies and subcontractors, recommend set-asides, coordinate with SBA, and oversee OSDBU personnel. Agencies also conduct outreach activities for small and small disadvantaged businesses, including minority-owned firms that are seeking federal contracts. Some agencies host monthly vendor outreach sessions, a series of appointments with either agency officials (such as small business or procurement officials) or prime contractors that have subcontracting needs. These sessions give the businesses an opportunity to discuss their capabilities and learn about potential contracting opportunities. One of MBDA's primary outreach efforts is the Minority Enterprise Development Week Conference. During this conference, participants from minority-owned businesses that have been vetted and designated by MBDA are offered appointments with federal and corporate partners to discuss contracting opportunities that will be made available within the next 6 to 18 months. Finally, a number of online resources are also available to businesses seeking to contract with the federal government. For example, federal agencies list their contract solicitations of $25,000 or more on the Federal Business Opportunities website (www.FedBizOpps.gov), which is managed by GSA. The website provides online business tools, training videos, and event announcements for small business owners. USA Spending, established by the Office of Management and Budget, also contains information on federal spending trends across the government, including grants and contracts. In addition, federal agencies such as SBA provide online contracting courses designed to help small businesses understand the basics of contracting with government agencies. Appendix III provides a summary of selected programs, resources, and outreach activities available to minority-owned businesses.

Agencies and Advocacy Groups Identified Various Contracting Challenges That Minority-Owned Businesses May Face

Agency and advocacy group officials we interviewed identified a number of challenges that small businesses—including minority-owned businesses—may face when seeking to contract with the federal government. In particular, these officials generally agreed that the lack of performance history and knowledge of the federal contracting process were significant challenges that minority-owned businesses may face in contracting with the federal government.
However, the officials offered varying opinions on the extent to which minority-owned businesses faced other challenges, such as a lack of access to contracting officials, a lack of monitoring of subcontracting plans, and difficulties accessing needed resources such as capital. Some agency officials we contacted indicated that outreach activities they conduct and practices they undertake in their contract solicitation activities address some of these challenges.

Agency and Advocacy Group Officials Differed in Their Opinions on Contracting Challenges That Minority-Owned Businesses May Face

Federal agency and advocacy group officials that we interviewed differed in their opinions on challenges that small businesses—including those that are minority-owned—may face when seeking to contract with the federal government. The challenges identified included a lack of performance history and knowledge of the federal contracting process, contract bundling, a lack of access to contracting officials, a lack of monitoring of subcontracting plans, and difficulties accessing capital. Officials from federal agencies and advocacy groups we contacted cited the lack of a performance history and a full understanding of the federal contracting process as significant challenges that minority-owned businesses may face. According to the statement of Guiding Principles of the Federal Acquisition System, when selecting contractors to provide products or perform services, the government will use contractors that have a track record of successful past performance or that have demonstrated a current superior ability to perform. SBA officials told us that historically and currently, small, minority-owned businesses that lacked a performance history have had difficulty entering the federal contracting market. MBDA officials also said that lack of a past performance record with government contracts or private contracts of similar size made obtaining federal contracts more difficult for minority-owned businesses because of the weight given to performance history. However, some agency officials, including those from two DHS contracting offices, noted that because prior commercial experience—not just government contracting experience—was considered, the lack of prior government experience would not necessarily make a minority-owned business noncompetitive. Officials from a GSA contracting office said that most small businesses seeking to contract with its office had a performance history with the private sector, not the federal government. The officials said that they considered past performance with the private sector when making contract award decisions, and thus would not consider lack of past performance history with the federal government as a challenge. Finally, officials from an HHS contracting office noted that the Federal Acquisition Regulation (FAR) requires that businesses receive a neutral rating if they do not have a performance history and that some small businesses may not be aware of this requirement. However, some advocacy group officials indicated that certain prerequisites and past performance requirements were difficult for minority-owned businesses to meet. For example, officials from one group said that these businesses might partner with other more established businesses to help meet the performance requirements. See 48 C.F.R. § 15.305(a)(2)(iv).
The FAR states that offerors without a record of relevant past performance may not be evaluated favorably or unfavorably on past performance—in other words, they must be given a neutral rating for the past performance evaluation factor. Officials also cited limited knowledge of the federal contracting process as a challenge, such as understanding how the bidding process works and learning how to secure a government contract. Further, MBDA officials noted that the federal contracting process was very different from contracting with private sector companies. They added that although federal agencies spend time and money holding sessions on doing business with the federal government, these sessions offered general information that could not be transferred to bidding on specific projects. Similarly, agency officials also cited the lack of understanding of agencies' contracting needs. For example, an OSDBU official from HHS emphasized that businesses that did not understand the mission of the agency with which they were seeking a contract or did not know what the agency bought and acquired might not know how to market their product or service appropriately to win the contract. Advocacy group officials cited contract bundling as a significant challenge, although a majority of agency officials disagreed. Advocacy group officials whom we interviewed said that contract bundling could reduce the number of contracting opportunities available for small and minority-owned businesses. MBDA officials said that they believe that many contracts are bundled unnecessarily and agreed that this practice limited minority-owned businesses' ability to compete for these contracts. However, other federal agency officials we interviewed said that they did not believe that contract bundling was a significant challenge for minority-owned businesses at their agencies. In addition, some agency officials told us that they had specific policies regarding contract bundling. For example, HHS and DOD contracting officials noted that their offices had policies that prohibited contract bundling and added that small businesses could protest a contract that they believed was unjustifiably bundled. Further, officials from one HHS contracting office indicated that they worked with small business specialists to determine if contracts should be separated. Advocacy group officials cited a lack of access to contracting officials as a significant challenge. Officials from six advocacy groups that we interviewed stated that the agency officials present at outreach events, such as matchmaking events, often did not have the authority to make decisions about awarding a contract. However, with the exception of MBDA, none of the federal agency officials we contacted said that access to contracting officers was a challenge at their agencies. The officials emphasized efforts that their agencies were making to assist businesses. For example, officials participate in industry days, where businesses can meet prime contractors as well as interact with agency procurement staff, and also conduct one-on-one appointments with businesses that seek to contract with their agencies. Some federal contracting officials did note that limited resources might pose a challenge in accessing the contracting officers. For example, contracting officials from DHS and GSA indicated that any perceived access issues would be due to limited resources in contracting offices. GSA contracting officials said that when the office had a large number of contracts to complete, they could not meet with each business owner seeking contract opportunities.
Advocacy group officials also cited a lack of monitoring of subcontracting plans by federal agencies as a significant challenge for minority-owned businesses, although SBA officials noted that this issue was a challenge for all small businesses, not just those owned by minorities. Officials from five advocacy groups described instances in which prime contractors did not use the small, minority-owned business subcontractors that they initially said they would use. Further, one advocacy group official said that because federal contracting officials generally had relationships with prime contractors and not subcontractors, small, minority-owned subcontractors often had no recourse when a problem arose. An official from another advocacy group stated that contracting officers have no accountability to federal agencies to justify any subcontractor changes. SBA officials noted that prime contractors' "dropping" of subcontractors from their plans after the contracts were obligated was not an issue exclusive to minority-owned businesses but was a challenge for small subcontractors in general. In addition, we previously reported that CMRs cited a lack of authority to influence subcontracting opportunities (GAO, Improvements Needed to Help Ensure Reliability of SBA's Performance Data on Procurement Center Representatives, GAO-11-549R (Washington, D.C.: June 15, 2011)). Enforcing prime contractors' performance under subcontracting plans was also described as difficult because determining that a contractor was not acting in good faith was difficult. Officials from one DOD contracting office said that they did not communicate with subcontractors directly and that prime contractors did have the right to pick a subcontractor of their choice throughout the duration of a contract. An OSDBU official from DOD added that the contracting officer would review and approve a replacement subcontractor under certain circumstances. If a prime contractor's subcontracting plan included a certain percentage of work that was designated for a small disadvantaged business, the contracting officer might not approve the proposed replacement subcontractor if the change did not adhere to the original percentage. Advocacy group officials also cited difficulties accessing capital as a challenge; for example, some said that minority business owners were less likely to apply for loans because they feared their applications would be denied. Further, officials from two advocacy groups noted that bonding requirements could prevent small, minority-owned businesses from competing for large contracts. Bonding is required to compete for certain contracts to ensure that businesses have the financial capacity to perform the work and pay for labor and supplies. (A surety bond is a form of insurance that guarantees contract completion.) For example, an official at one advocacy group indicated that to be considered for large contracts, businesses may be required to obtain $25 million to $50 million in bonding capacity. Since few small businesses can obtain this bonding capacity, this official said that these businesses rely on "teaming" arrangements—two or more businesses that collectively pursue larger procurement contracts—to expand their opportunities. In general, advocacy groups identified linguistic and cultural barriers as a challenge for minority-owned businesses on a limited basis. One advocacy group official said that linguistic barriers may be a challenge because business owners with strong accents could have difficulty communicating. Officials from a few Asian-American advocacy groups noted that business owners with limited English proficiency (LEP) may experience challenges.
For example, one official said that business owners in the construction industry may have difficulty obtaining a required design certification if English was not the business owner's first language. Another advocacy group official cited challenges such as discrimination against subcontractors by prime contractors because of accents or LEP. Officials from advocacy groups also cited examples of cultural barriers. For example, one noted that some first-generation Americans might have an aversion to working with the federal government and therefore would not be willing to seek government contracts. Some officials from Hispanic advocacy groups said Hispanic contracting officials were underrepresented in the federal government. Officials from another group also said that some minority groups, including those in nonmetropolitan areas, could lack the infrastructure needed (e.g., Internet service and transportation) to conduct business in these areas. Officials from all but one federal agency—SBA—that we contacted said that they did not know of any linguistic or cultural issues that posed a barrier for minority-owned businesses seeking to contract with the government. SBA officials told us that cultural barriers may be a challenge for minority-owned businesses seeking federal government contracts and emphasized that minority-owned businesses would be hesitant to reveal any linguistic barriers. The officials noted that some cultural barriers existed for Asian-Americans, Alaskan Natives, Native-Americans, and Native Hawaiians, because their traditional ways of conducting business involved intangibles that did not translate well into a "faceless" electronic contracting community. These officials also said that some minority-owned businesses may have informal business practices—for example, they may obtain financing from a friend or family member instead of through a bank—and therefore a business owner might not have the documentation required by some federal programs.

Agency Outreach Efforts Help Address Some Challenges Facing Small and Minority-Owned Businesses

As we have previously noted, federal agencies conduct outreach to help minority-owned businesses seeking federal government contracts. For example, federal contracting officials with whom we spoke cited "industry days," conferences, and meetings with businesses as efforts to help businesses address challenges they could face in seeking federal contracts. During industry days, small businesses are invited to meet prime contractors in their industries and potentially obtain subcontracts. Businesses can also interact directly with contracting office staff. For example, contracting officers said that they participated in panel discussions to provide business owners with information on the acquisition process and forecasts of contract opportunities. Contracting officers also accept requests from business owners to schedule meetings to discuss their business capabilities. Many agency officials, including an OSDBU official and contracting officials, told us they also work with and refer businesses to Procurement Technical Assistance Centers (PTAC) so that the businesses may receive one-on-one assistance. Agency outreach to businesses is generally directed by agency OSDBUs, the agencies' advocates for small businesses.
OSDBU directors use a variety of methods—including internal and external collaboration, outreach to small businesses, and oversight of agency small business contracting—to help small businesses overcome challenges they may face, such as understanding the federal contracting process. OSDBU officials from three federal agencies we contacted indicated that they collaborate with several agency offices, such as acquisition and small business specialists, and with organizations such as MBDA. We previously reported that nearly all of the OSDBU directors saw outreach activities as a function of their office. For example, 23 of the 25 OSDBU directors we surveyed between November and December 2010 viewed hosting conferences for small businesses as one of their responsibilities, and 23 had hosted such conferences. More specifically, these 23 agencies had hosted an average of 20 conferences within the previous 2 years. In addition, 20 of the 25 OSDBU directors surveyed saw sponsoring training programs for small businesses as one of their responsibilities, and 18 had hosted such events in the last 2 years.

Federal Agencies Collect Some Information on Contracting Assistance Provided to Minority-Owned Businesses

Federal agencies we contacted generally collect and report information on contracting assistance they provide to small and small disadvantaged businesses. Federal agencies are required to report annually to SBA on participation in the agency's contracting activities by small disadvantaged businesses, veteran-owned small businesses (including service-disabled veterans), qualified HUBZone small businesses, and women-owned small businesses. SBA compiles and analyzes the information and reports the results to the President and Congress. Agencies are also required to report to SBA plans to achieve their contracting goals, which can include outreach activities. In addition, Executive Order 11,625 requires the Secretary of Commerce—whose department is the umbrella agency of MBDA—and other agencies to report annually on activities related to minority business development and to provide other information as requested. Finally, federal agencies are also required to develop and implement systematic data collection processes and provide MBDA with current data that will help in evaluating and promoting minority business development efforts. A majority of the federal agencies we contacted told us that the extent to which they met SBA prime and subcontracting goals for the various socioeconomic categories of businesses (including the small disadvantaged business goal) provided a measure of their efforts to assist minority-owned businesses in contracting with the federal government. As figure 1 shows, in fiscal year 2011 the federal government met its 5 percent goal for prime contracting and subcontracting with small disadvantaged businesses. In addition, all four agencies we reviewed met their prime contracting goals of 5 percent, and three met their 5 percent subcontracting goals for this category. Contracting officials at these agencies generally attributed their success in contracting with small businesses—including small disadvantaged businesses—to a variety of factors, including support from the agency OSDBU and upper management, staff commitment, and the use of set-asides. They also noted several other factors that contributed to their contracting performance, including market research, a strategy for small businesses, and outreach efforts.
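The goal measure described above, the share of contract dollars awarded to a socioeconomic category compared with its negotiated goal, can be illustrated with a minimal Python sketch. The dollar amounts and field names below are hypothetical placeholders, not agency or SBA data.

    # Minimal sketch: compare obligations against an SBA socioeconomic goal.
    # All dollar figures are hypothetical placeholders, not agency data.
    SDB_GOAL = 0.05  # 5 percent goal for small disadvantaged businesses

    obligations = {
        "total_eligible": 98_400_000_000,      # small-business-eligible contract dollars
        "small_disadvantaged": 5_600_000_000,  # dollars obligated to small disadvantaged businesses
    }

    share = obligations["small_disadvantaged"] / obligations["total_eligible"]
    print(f"Small disadvantaged business share of eligible dollars: {share:.1%}")
    print("Goal met" if share >= SDB_GOAL else "Goal not met")

In practice the measure is computed separately for prime contracts and subcontracts, which is why an agency can meet one goal while missing the other.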
Federal agency officials also said that some outreach activities might be targeted to certain socioeconomic categories to assist in meeting agency SBA goals. For example, DHS contracting office officials said that as a result of monitoring their progress in meeting SBA goals, they conducted outreach to women-owned and HUBZone businesses with contract set-asides. SBA also issues an annual scorecard as an assessment tool to measure how well federal agencies reach their small business and socioeconomic prime contracting and subcontracting goals, to provide accurate and transparent contracting data, and to report agency-specific progress. An overall grade assesses an agency's entire small business procurement performance, and three quantitative measures show achievements in prime contracting, subcontracting, and plan progress, or an agency's efforts and practices to meet its contracting goals. A grade is given for government-wide performance, and individual agencies receive their own grades. For fiscal year 2011, SBA gave a grade of "B" for overall government-wide performance. For the federal agencies that we included in our analysis, GSA scored an overall grade of A+, DHS and HHS scored an overall grade of A, and DOD scored an overall grade of B. Two agencies we reviewed collected and reported data by minority group. For example, MBDA reports data, categorized by minority group, on contracting assistance that its business centers provide, as required by executive order. For fiscal year 2011, MBDA reported that its business centers helped minority-owned businesses obtain 1,108 transactions (the sum of contracts and financings) totaling over $3.9 billion (see table 1). SBA also collects some information for its various programs, including information by minority group for the 8(a) Business Development Program, as required by statute. For example, SBA reported that of the 7,814 8(a) program participants in fiscal year 2011—the most recent data available—more than 90 percent of the participants were minority-owned businesses (see fig. 2). SBA also reported that 8(a) program participants reported total year-end revenues exceeding $21.7 billion in fiscal year 2010, with 43.4 percent of these revenues coming from 8(a) contracts. During that same year, SBA provided technical assistance to 2,000 8(a) businesses. SBA officials we interviewed said that SBA generally did not collect information by minority group for any of its other programs. Most federal agencies that we contacted indicated that they collected some general information on outreach events and activities and some demographic data, although collecting such data was not required. For example, for outreach events such as the Minority Enterprise Development Week Conference, MBDA officials told us that they collect general demographic information from participants on their businesses and experience, but not by minority group. The officials told us that they also collect aggregated data on MBDA's outreach activities for minority-owned businesses, such as the number of meetings and participants. For example, MBDA officials told us that they conducted 119 of the 129 one-on-one meetings scheduled during this event between minority-owned and small businesses and corporations and prime contractors. Officials from DOD, DHS, GSA, and HHS said that they asked participants in their outreach events questions (sometimes by survey or evaluation) about the value or helpfulness of the events.
Officials from three agencies noted that they used the survey results to determine the effectiveness of, or how to improve, the event. In addition, agencies may ask questions to obtain general information about a business and potentially its socioeconomic status. Officials also said that they collected some information by socioeconomic group, but none by minority group. Finally, the OSDBU Council—which comprises OSDBU officials from various federal agencies—hosts an annual procurement conference that provides assistance to businesses seeking federal government contracts, and some information is collected for this event. According to the council's website, more than 3,500 people registered for the 2012 conference, and more than 130 matchmaking sessions were conducted. According to the council's president, 2012 was the first year that such information was collected.

Agency Comments and Our Evaluation

We provided a draft of this report to Commerce, DHS, DOD, GSA, HHS, and SBA for review and comment and received comments only from Commerce. Commerce provided written comments, which are reprinted in appendix V. Commerce made two observations on our draft report. First, the department stated that the report was a good start at capturing the federal government's effort to support small, minority-owned businesses, but did not include all federal programs that supported federal contracting with minority-owned businesses. The department added that GAO had missed an opportunity to provide a more comprehensive picture of the federal government's efforts in this area, noting, for example, that the Departments of Agriculture, Housing and Urban Development, and Transportation had programs (other than OSDBUs) geared toward increasing federal contracts with minority-owned firms. In addition, the department stated that an Office of Minority and Women Inclusion was recently established at each of the financial regulatory agencies. Although these agencies and offices provide support to minority-owned businesses, they were outside the scope of our review, which, as we stated in our report, focused on the four agencies—DHS, DOD, GSA, and HHS—that accounted for about 70 percent of total federal obligations to small, minority-owned businesses in fiscal year 2010. We also included SBA and Commerce's MBDA in our review because of their roles in assisting minority-owned businesses. We are reviewing the efforts of the Office of Minority and Women Inclusion in an ongoing study that will be issued in 2013. Second, Commerce noted that although the dollar amount of federal contracts obligated to small, minority-owned businesses was encouraging, the report did not analyze the number of minority-owned firms that actually secured federal contracts. The department said that it was possible that a handful of minority-owned firms had secured sizable federal contracts but that the majority of minority-owned firms continued to fail in obtaining them. However, data are not available on the total universe of small, minority-owned businesses that entered bids in response to federal contract solicitations. Just as with our reporting of funds obligated for contracts, data on the number of minority-owned businesses that secured federal contracts would not provide information on the number of such businesses that did not obtain them.
Likewise, while we do report MBDA's statistics on contracting assistance provided to minority-owned businesses, again such data do not provide information on how many businesses sought but did not obtain federal contracts. We conducted interviews with officials from MBDA, SBA, contracting offices at the federal agencies in our scope, and advocacy groups to obtain their perspectives on the challenges minority-owned businesses may face in seeking to contract with the federal government. We are sending copies of this report to appropriate congressional committees; the Attorney General; the Secretaries of Defense, Homeland Security, and Health and Human Services; the Acting Secretary of Commerce; and the Administrators of the General Services Administration and Small Business Administration. In addition, this report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions on the matters discussed in this report, please contact me at (202) 512-8678 or by email at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI.

Appendix I: Objectives, Scope, and Methodology

Our objectives were to describe: (1) what federal agency officials and advocacy groups identified as challenges that small, minority-owned businesses may face in seeking to contract with the federal government—including any linguistic or cultural barriers—and agencies' efforts to address them, and (2) what information is available on federal efforts to assist small, minority-owned businesses in contracting with the federal government. To determine which programs and resources to include in our scope, we conducted a web-based search for initial information on programs and resources available from federal government agencies using terms such as "contracting assistance for minorities." We analyzed information on programs that provide federal contracting assistance and on resources about contracting opportunities that are available to minority-owned businesses. We describe programs and resources provided by the Minority Business Development Agency (MBDA), as it is tasked with promoting the growth of minority-owned businesses. We also describe programs and resources available from the Small Business Administration (SBA), as this agency is responsible for providing assistance to small businesses—which can be minority owned—and programs and resources available from other selected federal agencies based on the criteria described below. Finally, we interviewed officials from these selected agencies and advocacy groups that provide assistance to businesses owned by Asian-, Black-, Hispanic-, and Native-Americans. We selected these minority groups because they received the largest share of federal obligations to small, minority-owned businesses based on business owners self-identifying as a member of these groups. To select agencies to include in our scope, we reviewed data from the Federal Procurement Data System-Next Generation (FPDS-NG) on contract awards to small businesses owned by the minority groups in our scope by federal agency. Although we could not independently verify the reliability of these data, we reviewed system documentation and conducted electronic data testing for obvious errors in accuracy and completeness.
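A minimal sketch of this kind of electronic testing follows; the record fields, sample values, and thresholds are hypothetical assumptions for illustration, not the actual FPDS-NG data elements or tests GAO ran.

    # Minimal sketch of electronic testing for obvious errors in accuracy and
    # completeness; field names, values, and ranges are hypothetical.
    records = [
        {"agency": "DOD", "fiscal_year": 2011, "obligated_dollars": 1_250_000, "minority_owned": "Y"},
        {"agency": "HHS", "fiscal_year": 2011, "obligated_dollars": None, "minority_owned": "N"},
    ]

    def completeness_and_accuracy_issues(record):
        issues = []
        # Completeness: required fields should not be missing or blank.
        if any(record.get(field) in (None, "") for field in ("agency", "fiscal_year", "obligated_dollars")):
            issues.append("missing required field")
        # Accuracy: obligation amounts should not be negative.
        if isinstance(record.get("obligated_dollars"), (int, float)) and record["obligated_dollars"] < 0:
            issues.append("negative obligation amount")
        # Accuracy: fiscal years should fall in the expected reporting window.
        if record.get("fiscal_year") not in range(2000, 2013):
            issues.append("fiscal year outside expected range")
        return issues

    for rec in records:
        problems = completeness_and_accuracy_issues(rec)
        if problems:
            print(rec["agency"], problems)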
On the basis of these efforts, we determined that the FPDS-NG data on federal contract dollars to socioeconomic groups by self-reported minority group were sufficiently reliable for purposes of our review. We selected the top four agencies that accounted for about 70 percent of total federal obligations to small, minority-owned businesses in fiscal year 2010—the most recent data available at the time of our selections. These agencies were the Departments of Defense (DOD), Health and Human Services (HHS), and Homeland Security (DHS), and the General Services Administration (GSA). To select a purposive, non-representative sample of contracting offices for purposes of conducting interviews, we first selected the top two divisions within DOD, DHS, and HHS in terms of the percentage of their agency's obligations to small, minority-owned businesses. Those divisions included the Departments of the Army and Navy for DOD; the Bureau of Customs and Border Protection and the United States Coast Guard for DHS; and the National Institutes of Health and the Centers for Medicare and Medicaid Services for HHS. We selected only one division for GSA—the Public Buildings Service—as this division represented over 76 percent of GSA's funds obligated for contracts to small, minority-owned businesses. Using this approach, we selected a total of seven divisions within the four agencies in our scope. Within each division, we selected one of the top contracting offices based on the office's percentage of its division's obligations to businesses owned by the minority groups in our scope. We selected two contracting offices from the Department of the Army because the percentage of obligations to small, minority-owned businesses by any of its top contracting offices was small. Our final sample consisted of eight contracting offices. To describe the challenges that small, minority-owned businesses may face in contracting with the federal government, we interviewed agency officials—including those from contracting offices and the Office of Small and Disadvantaged Business Utilization—from the purposive, non-representative sample of eight contracting offices. We also interviewed officials from MBDA and SBA. Further, we conducted interviews with officials from 12 advocacy groups. We selected groups that provided assistance to businesses owned by the minority groups in our scope based on a web-based search for national organizations that represent and provide assistance to minority-owned businesses in obtaining federal contracts. To describe information on improving access to services for persons with limited English proficiency, we reviewed Executive Order 13,166—Improving Access to Services for Persons with Limited English Proficiency (LEP)—to understand its applicability to outreach activities associated with federal contracting. We reviewed guidance from the Department of Justice (DOJ), as well as existing LEP plans for each agency in our scope. We could not review the LEP plans for DOD and for SBA, as the plan for each agency had not yet been completed. We also obtained and reviewed written responses from DOJ. To describe the information available on the extent of federal efforts to assist small, minority-owned businesses in contracting with the federal government, we reviewed federal government prime contracting and subcontracting goals and SBA procurement scorecards for fiscal year 2011 for DOD, HHS, DHS, and GSA.
We also reviewed documentation for programs that assist small businesses owned and controlled by socially and economically disadvantaged individuals—which can include businesses that are minority-owned—to determine the types of contracting assistance available. We conducted interviews with officials from the selected agencies and contracting offices to identify and obtain available information on their outreach efforts to assist minority-owned businesses. In addition, we conducted interviews with officials from 12 advocacy groups that provide contracting assistance to the minority groups in our scope. For information on the percentage of funds obligated for contracts in fiscal year 2011 to each socioeconomic category of small businesses by minority group—including small disadvantaged, women-owned, service-disabled veteran-owned, and Historically Underutilized Business Zone (HUBZone)—we analyzed data from FPDS-NG, which receives data from the Central Contractor Registration System (CCR)—the system in which all businesses seeking federal government contracts must register. In CCR, registrants (i.e., business owners) can self-identify as minority-owned and can specify one or more minority groups. Registrants can select from the following six categories: Asian Pacific, Subcontinent Asian, Black-American, Hispanic-American, Native-American, and Other. We conducted electronic testing for obvious errors in accuracy and completeness. As a part of this assessment, we analyzed the FPDS-NG data to identify cases in which a contracting firm was identified as belonging to a particular minority group, such as Subcontinent Asian, but was not designated as minority-owned. This occurred in less than 3 percent of the cases. We conducted the same assessment within different socioeconomic categories, such as small disadvantaged business, and found a potential undercount of the minority-owned designation in less than 4 percent of the cases. In addition, businesses that selected "other minority" and those that self-identified as more than one minority group were categorized as other minority. We determined that the minority-owned designation data were sufficiently reliable for the purposes of our report. However, because we cannot verify the minority group that contractors self-report, we characterize these data as self-reported. We conducted this performance audit from November 2011 through September 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives.

Appendix II: Percentage of Obligated Funds for Contracts to Socioeconomic Categories by Minority Group, Fiscal Year 2011

We analyzed data from the Federal Procurement Data System-Next Generation to determine the amount of obligated funds for contracts that federal agencies made to small businesses by minority group for fiscal year 2011. As figure 3 shows, the federal government obligated over $36 billion (35.1 percent) to small, minority-owned businesses in fiscal year 2011. Figure 4 shows the amount of federal obligated funds for contracts to small disadvantaged businesses. For example, about $28.8 billion (85.6 percent) was obligated to small disadvantaged businesses that were minority-owned.
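The designation-consistency check described in the methodology above, flagging records that name a minority group but are not flagged as minority-owned, can be sketched as follows. The records and field names are hypothetical and do not correspond to actual CCR or FPDS-NG data elements.

    # Minimal sketch: flag records that list a minority group but are not
    # designated minority-owned. Records and field names are hypothetical.
    records = [
        {"vendor": "A", "minority_group": "Hispanic-American", "minority_owned_flag": True},
        {"vendor": "B", "minority_group": "Subcontinent Asian", "minority_owned_flag": False},
        {"vendor": "C", "minority_group": None, "minority_owned_flag": False},
    ]

    inconsistent = [
        r for r in records
        if r["minority_group"] is not None and not r["minority_owned_flag"]
    ]
    rate = len(inconsistent) / len(records)
    print(f"Potential undercount of the minority-owned designation: {rate:.1%} of cases")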
Figure 5 shows the amount of federal obligated funds for contracts to small women-owned businesses. For example, $8.2 billion (45.7 percent) was obligated to small women-owned businesses that were minority-owned. Figure 6 shows the amount of federal obligated funds for contracts to small HUBZone businesses. For example, nearly $5.5 billion (54.3 percent) was obligated to small HUBZone businesses that were minority-owned. Figure 7 shows the amount of federal obligated funds for contracts to small service-disabled veteran-owned businesses. For example, nearly $3.9 billion (33 percent) was obligated to small service-disabled veteran-owned businesses that were minority-owned.

Appendix III: Selected Federal Government Contracting Programs, Resources, and Outreach Activities

This table shows programs, resources, outreach activities, and examples of contracting assistance that agencies provide to assist minority-owned businesses in contracting with the federal government.

Appendix IV: Potential Applicability of the Limited English Proficiency Executive Order to Federal Contracting

Executive Order 13,166, Improving Access to Services for Persons with Limited English Proficiency, issued on August 11, 2000, requires federal agencies to prepare a plan to improve access to federally conducted programs and activities for those with limited English proficiency (LEP). Under the order, federal agencies must take reasonable steps to provide meaningful access to persons with LEP for federally conducted programs and activities. In addition, the Department of Justice (DOJ) serves as a central repository for agency plans to address LEP and provides guidance to agencies for developing such plans. According to DOJ guidance issued on August 16, 2000, and available at LEP.GOV, the four factors to be considered in determining what constitutes "reasonable steps to ensure meaningful access" include (1) the number or proportion of such individuals in the eligible population, (2) the frequency with which they come into contact with the program, (3) the importance of the service provided by the program, and (4) the resources available to the recipients. In May 2011, DOJ also issued a Language Access Assessment and Planning Tool for Federally Conducted and Federally Assisted Programs to provide guidance to recipients of federal financial assistance and federal agencies. The first step in the assessment tool is a self-assessment that determines what type of contact an agency has with the LEP population and describes the elements that are part of effective language access policy directives and implementation plans. According to DOJ: "Generally, current practice with regard to announcing federal government contracts and grants would not be altered under the Executive Order. In determining what is required, the focus of the analysis in this situation is on the first factor—the number or proportion of eligible LEP persons. Except, perhaps, in territories, it is reasonable to expect that the number or proportion of eligible contract or grant recipients who are LEP and are themselves attempting to find and respond to announcements of grants and contracts is negligible." Federal agency officials and advocacy groups we spoke with cited linguistic barriers as a challenge on a limited basis. In addition, few agencies had taken action to address possible linguistic barriers, and most told us that they had not taken such action because they had not encountered this challenge.
For example, based on its efforts as of July 2012, GSA found that only one region reported significant contact with persons with LEP.

Appendix V: Comments from the Department of Commerce

Appendix VI: GAO Contact and Staff Acknowledgments

In addition to the contact named above, Marshall Hamlett (Assistant Director), Emily Chalmers, Pamela Davidson, Meredith Graves, Julia Kennon, Shamiah T. Kerney, Katherine Leigey, and Andrew J. Stephens made key contributions to this report.
Each year, the government obligates billions in contracts to businesses—nearly $537 billion in fiscal year 2011. About $104 billion (19.4 percent) was obligated to small businesses, and over $36 billion of this amount was obligated to small businesses that identified themselves as minority-owned (see figure). In this report, GAO describes (1) what federal agency officials and advocacy groups identified as challenges small, minority-owned businesses may face in seeking federal government contracts—including any linguistic or cultural barriers—and agencies' efforts to address them, and (2) what information is available on federal efforts to assist small, minority-owned businesses in contracting with the federal government. For selected agencies, GAO analyzed data on obligations to minority-owned businesses, reviewed information on programs and resources that can assist minority-owned businesses, reviewed relevant information from the Department of Justice on agencies' Limited English Proficiency plans, and interviewed officials from selected federal agencies and advocacy groups that provide assistance to minority-owned businesses. In written comments, Commerce said that GAO had not covered all federal efforts to support small, minority-owned business contracting. As GAO noted in the report, this study focused on selected agencies and contracting activities that accounted for about 70 percent of total federal obligations to small, minority-owned businesses in fiscal year 2010. While their views varied to some degree, federal agency officials and advocacy groups GAO contacted identified a number of challenges that small, minority-owned businesses may face in pursuing federal government contracts. For example, officials and advocacy groups pointed to a lack of performance history and knowledge of the federal contracting process as significant barriers. Officials from advocacy groups cited additional challenges, such as difficulty gaining access to contracting officials and decreased contracting opportunities resulting from contract bundling—the consolidation of requirements previously performed under two or more smaller contracts into a single contract. Officials from agencies that accounted for about 70 percent of federal contracting with small, minority-owned businesses (the Departments of Defense, Health and Human Services, and Homeland Security, and the General Services Administration) told GAO that they conducted outreach to help small, minority-owned businesses with these challenges. Their outreach efforts include one-on-one meetings between contracting office staff and businesses seeking federal contracts. Linguistic and cultural barriers were identified as a challenge on a limited basis. Federal agencies GAO contacted collected and reported some information on the contracting assistance provided to small disadvantaged businesses—including those that are minority-owned. Two agencies GAO reviewed collected and reported data by minority group. The Minority Business Development Agency in the Department of Commerce—created to foster the growth of minority-owned businesses of all sizes—reported that its business centers helped these businesses obtain 1,108 financings and contracts worth over $3.9 billion in fiscal year 2011. For the same fiscal year, the Small Business Administration (SBA) reported that more than 90 percent of its primary business development program participants were minority-owned businesses.
Federal agencies that GAO contacted said that the goals SBA negotiated with federal agencies for contracting with various socioeconomic categories, including small disadvantaged businesses, provided some information on efforts to assist minority-owned businesses. In fiscal year 2011, all four agencies GAO reviewed met their prime contracting goals, and three of the four met their subcontracting goals. GAO generally found limited data on participants in agency outreach efforts because the agencies are not required to, and therefore generally do not, collect data on the minority group or socioeconomic category of businesses that participate in outreach events for federal contracting opportunities.
GAO_GGD-99-120
Background

The administration launched NPR in March 1993, when President Clinton announced a 6-month review of the federal government to be led by Vice President Gore. The first NPR report was released in September 1993 and made recommendations intended to make the government "work better and cost less." The first NPR phase, called NPR I, included recommendations to reinvent individual agencies' programs and organizations. It also included governmentwide recommendations for (1) reducing the size of the federal bureaucracy, (2) reducing procurement costs through streamlining, (3) reengineering processes through the better use of information technology, and (4) reducing administrative costs. The estimates for NPR I savings covered fiscal years 1994 through 1999. Vice President Gore launched the second NPR phase (called NPR II) in December 1994 and reported on this phase's savings estimates in September 1995. According to NPR, this second phase expanded the agenda for governmental reform. NPR II efforts focused on identifying additional programs that could be reinvented, terminated, or privatized, as well as on reinventing the federal regulatory process. The estimates for NPR II savings covered fiscal years 1996 through 2000. As shown in table I, NPR claimed estimated savings of $82.2 billion from NPR I recommendations. NPR similarly reported that $29.6 billion had been "locked into place" from program changes in individual agencies under NPR II. In addition to the $111.8 billion NPR claimed from the NPR I and II recommendations, NPR claimed $24.9 billion in estimated savings based on reinvention principles. These additional claimed savings included, for example, $23.1 billion from the Federal Communications Commission's auctions of wireless licenses. This $24.9 billion brings the total amount of reinvention savings claimed by NPR to about $137 billion. This $137 billion savings figure is the one NPR most commonly cites as the savings it has achieved. NPR relied on OMB to estimate the savings it claimed from its NPR I and II recommendations. OMB's program examiners were responsible for developing the estimates in their role as focal points for all matters pertaining to their specific assignment area. One of the examiners' major duties is to oversee the formulation and execution of the budget process. They are also expected to perform legislative, economic, policy, program, organizational, and management analyses. OMB made the initial estimates for the 1993 and 1995 NPR reports and updated its database on the savings claimed in the summers of 1996 and 1997. These updates, according to an OMB official, were primarily to ensure that all the estimates for recommendations that NPR considered implemented were included in the total amount of savings claimed.

Scope and Methodology

To describe and assess how OMB estimated the savings from NPR, we focused on three agencies (USDA, DOE, and NASA) where relatively large savings were claimed and where a variety of types of actions were taken. At each agency, we selected two recommendations with claims of at least $700 million in savings each. The six recommendations we reviewed accounted for $10.45 billion of the $14.7 billion claimed from changes in individual agencies under NPR I and $19.17 billion of the $29.6 billion claimed from NPR II savings.
Overall, the claimed savings from the recommendations we reviewed accounted for over two-thirds of the $44.3 billion in savings claimed from NPR's recommendations to individual agencies and 22 percent of the total amount of NPR's savings claims. Following is a listing of the six recommendations we reviewed and the estimated savings, in millions of dollars, that NPR claimed for each of those recommendations. (See apps. I through VI for detailed information on each of the recommendations.)

- reorganize USDA to better accomplish its mission, streamline its field structure, and improve service to its customers ($770 million);
- end USDA's wool and mohair subsidy ($702 million);
- redirect DOE laboratories to post-Cold War priorities ($6,996 million);
- realign DOE, including terminating the Clean Coal Technology Program; privatizing the naval petroleum reserves in Elk Hills, CA; selling uranium no longer needed for national defense purposes; reducing costs in DOE's applied research programs; improving program effectiveness and efficiencies in environmental management of nuclear waste cleanups; and strategically aligning headquarters and field operations ($10,649 million);
- strengthen and restructure NASA management, both overall management and management of the space station program ($1,982 million); and
- reinvent NASA, including eliminating duplication and overlap between NASA centers, transferring functions to universities or the private sector, reducing civil service involvement with and expecting more accountability from NASA contractors, emphasizing objective contracting, using private sector capabilities, changing NASA regulations, and returning NASA to its status as a research and development center ($8,519 million).

Since these recommendations are not representative of all NPR recommendations, our findings cannot be generalized to apply to other savings claimed by NPR. As agreed with your office, we did not independently estimate the actual amount of savings achieved from these six recommendations. We interviewed NPR and OMB officials about how they estimated and claimed savings and also requested relevant documentation. We also examined a database OMB maintained showing the amount of savings achieved from the recommendations. The NPR I data covered fiscal years 1994 through 1999, and the NPR II data covered fiscal years 1996 through 2000. Both sets of data were most recently updated in the summer of 1997. We reviewed NPR data, including a description of the status of the recommendations and reports containing background information, on the three agencies and the relevant recommendations where available. We also interviewed officials from these agencies about the savings OMB estimated for the recommendations we reviewed. We conducted our review in Washington, D.C., from April 1998 through February 1999 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from OMB, the Departments of Agriculture and Energy, and NASA. These comments are discussed at the end of this letter.

The Relationship Between NPR Recommendations and Agency Savings Claims Was Not Clear

OMB generally did not distinguish between NPR's and other contributions for the agency-specific recommendations we reviewed. NPR attempted to build on prior management reforms and operated in an atmosphere where other factors, such as agencies' ongoing reforms as well as the political environment, also influenced actions taken to address NPR's recommendations.
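The shares cited in the scope discussion above follow directly from the claimed amounts for the six recommendations. The short Python check below reproduces that arithmetic using only figures stated in this report; it adds no new data.

    # Claimed savings for the six recommendations reviewed, in millions of dollars
    # (figures as stated in this report).
    npr_i = [770, 702, 6_996, 1_982]   # USDA reorganization, wool/mohair, DOE labs, NASA management
    npr_ii = [10_649, 8_519]           # DOE realignment, reinvent NASA

    reviewed_total = sum(npr_i) + sum(npr_ii)   # about $29.6 billion
    agency_specific_claims = 44_300             # $44.3 billion claimed for individual agencies
    total_npr_claims = 137_000                  # about $137 billion claimed overall

    print(f"NPR I savings reviewed: ${sum(npr_i) / 1000:.2f} billion")     # about $10.45 billion
    print(f"NPR II savings reviewed: ${sum(npr_ii) / 1000:.2f} billion")   # about $19.17 billion
    print(f"Share of agency-specific claims: {reviewed_total / agency_specific_claims:.0%}")  # over two-thirds
    print(f"Share of total NPR claims: {reviewed_total / total_npr_claims:.0%}")              # about 22 percent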
The relationship between the recommendations we reviewed and the savings claimed was not clear because OMB attributed a broad range of changes to NPR. Savings estimated from the recommendation to reinvent NASA illustrate how OMB attributed a broad range of changes to NPR and did not distinguish NPR’s contribution from other factors. To estimate savings for that recommendation, OMB consolidated seven somewhat related recommendations into one savings estimate of $8.519 billion. OMB estimated savings by totaling expected reductions to NASA’s entire budget for fiscal years 1996 through 2000. According to a NASA official, NASA’s funding during this period was limited as the result of several initiatives, including direction from the NASA administrator that began before NPR was initiated and congressionally imposed spending caps. Nevertheless, OMB attributed all of the $8.519 billion in savings from estimated reductions in the entire NASA budget to NPR. OMB followed similar procedures in estimating savings from the other NPR recommendation concerning NASA that we reviewed—the recommendation to strengthen and restructure NASA management. The examiner estimated savings of $1.982 billion on the basis of expected reductions in funding levels for one of NASA’s three budget accounts for fiscal years 1994 through 1999. The estimated savings were based on expectations for lower levels of budget authority due to the combined effects of several factors, such as budgetary spending caps and ongoing NASA management reform efforts. In the case of the NPR recommendation for DOE to shift its laboratory facilities’ priorities in response to conditions that accompanied the end of the Cold War, NPR recognized that changes were already under way. For example, as part of this recommendation, NPR called for DOE to “continue” the reduction of funding for nuclear weapons production, research, testing programs, and infrastructure. Considering the comprehensive nuclear test ban treaty agreements and other factors, it was apparent that the DOE laboratories’ priorities would have changed regardless of whether NPR had made the recommendation. However, as figure 1 shows, when OMB estimated savings from this recommendation, it credited all savings from estimated reductions in the weapons activity budget account ($6.996 billion) to NPR. Similarly, efforts related to NPR’s recommendation to reorganize USDA were under way prior to or simultaneously with the NPR recommendation. These efforts included USDA reorganization plans and the introduction of legislation to streamline USDA. For example, the Federal Crop Insurance Reform and Department of Agriculture Reorganization Act of 1994 (P.L. 103-354, Oct. 13, 1994), required USDA to reduce the number of federal staff years by at least 7,500 by the end of fiscal year 1999. Therefore, USDA’s reorganization could be viewed as part of a continuous management improvement effort. OMB attributed the entire $770 million in estimated savings it reported from USDA’s staffing reductions to NPR. In contrast, the relationship between the recommended action and the estimated savings was relatively straightforward for the NPR recommendation to end USDA’s wool and mohair subsidy program. In that case, program costs, primarily subsidy amounts that were reduced by phasing out the program and subsequently eliminated by ending the program, were counted as savings. 
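The baseline-difference arithmetic behind estimates such as the NASA and DOE examples above can be made concrete with a minimal sketch: projected savings are simply the year-by-year gap between a baseline funding path and expected funding, summed over the covered fiscal years, with the entire gap credited to the initiative. All dollar figures below are hypothetical; only the method reflects what OMB described.

```python
# Minimal sketch of the baseline-difference estimating approach described in
# this report. All dollar figures are hypothetical; only the method (baseline
# minus expected funding, summed over fiscal years) reflects what OMB described.
def estimated_savings(baseline_by_year, funding_by_year):
    """Sum the year-by-year gap between a baseline and expected funding."""
    return sum(baseline_by_year[fy] - funding_by_year[fy] for fy in baseline_by_year)

# Hypothetical budget-account figures (in millions) for fiscal years 1996-2000.
baseline = {1996: 14_000, 1997: 14_200, 1998: 14_400, 1999: 14_600, 2000: 14_800}
expected = {1996: 13_400, 1997: 13_300, 1998: 13_200, 1999: 13_100, 2000: 13_000}

# The entire gap is credited to the initiative, even though other factors (such
# as spending caps or ongoing agency reforms) may also explain the lower funding.
print(estimated_savings(baseline, expected))  # 6000 (million)
```

A variant of the same arithmetic appears in the appendixes: for the USDA reorganization, OMB multiplied the gap in full-time equivalent staff by an average salary and subtracted offsetting costs, rather than differencing a budget account.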
OMB Used Standard Budget Estimating Techniques to Estimate Savings According to OMB, the procedures and techniques it used to estimate NPR savings were those that it commonly follows in developing the President’s budget. Therefore, the NPR savings estimates were to provide a “snapshot” showing the amount of savings OMB expected would result from the recommendations. For example, in 1993, OMB projected savings from the NPR recommendation to strengthen and restructure NASA management covering fiscal years 1994 through 1999. OMB characterized these estimated savings as achieved in 1996, and NPR has continued to report these estimated savings (based on the 1993 estimate) as achieved. More generally, OMB’s savings estimates for agency-specific recommendations included about $34.3 billion in savings that were not expected to be realized until fiscal years 1999 and 2000. OMB last updated its estimates in 1997, so any changes that have occurred since then are not reflected in NPR’s claimed savings. OMB’s budget estimating procedures and techniques use policies and economic forecasts in effect at a given time. The estimates OMB prepared for NPR initiatives involved projecting changes from a given baseline and identifying the difference as savings. OMB said that it generally used a fiscal year 1994 current services baseline for the NPR I agency-specific recommendations and a fiscal year 1996 Omnibus Budget Reconciliation Act baseline for the NPR II recommendations. OMB said, however, that in both cases, program examiners could use other savings baselines where the examiners believed it made better sense for a particular program or NPR recommendation. The OMB examiners also had latitude in determining the most appropriate analytical approach to use, based on their knowledge of the agency and the specific characteristics of the individual NPR recommendation. Our prior reviews of budget estimates have shown that it is difficult to reconstruct the specific assumptions used or to track savings for estimates produced several years ago. As we reported in 1996, once an estimate is prepared and time passes, it becomes difficult to retrace the original steps and reconstruct events in order to replicate the original estimate. Moreover, it is often difficult to isolate the impacts of particular proposals on actual savings achieved due to the multiple factors involved. In our 1994 report on issues concerning the 1990 Reconciliation Act, we found that it is generally not possible to identify or track precise savings by isolating the budgetary effects of individual provisions from the effects of other factors such as intervening actions or subsequent legislation. Some Estimated NPR Savings Were Claimed Twice In two instances, OMB counted some of the estimated savings NPR claimed twice. In the first instance, OMB counted the same estimated savings on two different NPR I initiatives—once for agency-specific changes (from reorganizing USDA) and again as part of an NPR governmentwide initiative to reduce the bureaucracy. In the second instance, OMB appears to have counted the same savings twice when separately estimating savings from the two NASA recommendations we reviewed. Therefore, the total estimated savings NPR claimed in both of these instances were overstated. OMB estimated that $770 million in cost savings resulted from NPR’s recommendation to reorganize USDA on the basis of workforce reductions.
OMB also counted these workforce reductions when estimating the total of $54.8 billion in savings NPR claimed from its governmentwide initiative to reduce the size of the bureaucracy. While acknowledging that this occurred, OMB officials stated that the level of double counting appeared to be quite small in relation to total NPR savings—less than 1 percent of the total savings claimed from NPR recommendations. They said that the double counting was small because the recommendation to reorganize USDA was the only agency-specific recommendation with a significant staffing effect. However, OMB officials told us that they had not established a mechanism to prevent double counting from savings claimed for the agency-specific recommendations and from the governmentwide initiative. Officials from the other two agencies we reviewed—DOE and NASA—said that their staff also had been reduced and counted as part of the savings claimed for the agency-specific recommendations to streamline DOE and strengthen NASA management. Therefore, in the absence of OMB processes to guard against including savings from personnel reductions in both agency-specific and governmentwide savings claims, additional double counting of workforce reductions could have occurred. In the second instance, a portion of the estimated savings appears to have been counted twice for two NASA recommendations we reviewed, one from NPR I (to strengthen and restructure NASA management) and the other from NPR II (to reinvent NASA). Some of the actions NPR recommended, such as restructuring NASA centers, were components of both the NPR I and NPR II recommendations. The OMB examiner acknowledged that some of the savings could have been counted twice and that there was no mechanism to distinguish the sweeping changes that were occurring at NASA. She said that the NPR II recommendation built on the NPR I recommendation. NASA officials also said that they were not able to assign savings precisely to one recommendation or the other because the recommendations were similar and there was no clear demarcation where one ended and the other began. OMB estimated savings from the NPR I recommendation for fiscal years 1994 through 1999 and from the NPR II recommendation for fiscal years 1996 through 2000. Estimated savings from both recommendations were included when OMB aggregated total NPR-estimated savings. As figure 2 shows, a portion of the savings claimed from these two recommendations overlapped during fiscal years 1996 through 1999. For those years, claimed savings totaled about $7 billion (about $1.6 billion from the NPR I recommendation and about $5.4 billion from the NPR II recommendation). OMB appears to have counted some portion of that amount twice—potentially up to $1.4 billion. The NPR savings claims in these two instances were overstated. Offsetting Costs May Not Have Been Fully Included OMB and CBO both estimated savings for the recommendation to streamline USDA, and these estimates differed. While OMB estimated the savings to be $770 million, CBO’s estimate was $446 million—a difference of $324 million. We did not evaluate the differences between these estimates. However, according to a November 15, 1993, letter from the CBO director to the then House Minority Leader, CBO’s estimate differed from OMB’s “. . . with respect to the costs associated with severance benefits and relocation.
While the administration assumes that agency baseline budgets are large enough to absorb these costs, CBO assumes that the costs would reduce the potential savings. The administration also estimates larger savings in employee overhead costs.” According to CBO, due to the differences in the consideration of offsetting costs, OMB’s estimate for the 5-year budget period (fiscal years 1994 through 1998) exceeded CBO’s estimate by $324 million. OMB provided documentation showing that, in fiscal year 1996 and again in fiscal year 1997, OMB factored “up-front” costs of $40 million into the estimates it reported. The responsible OMB branch chief stated that although he did not recall what the up-front costs for this recommendation specifically encompassed, these costs typically consist of buyouts (i.e., financial incentives of up to $25,000 for employees who voluntarily leave the federal government), lease breakage costs, and moving expenses. OMB Lacked Documentation and Reported Estimates Incorrectly According to OMB, consistent with its normal budget practices, OMB examiners generally did not retain documentation for NPR savings estimates. The budget examiners were unable to document estimates for four of the six recommendations we reviewed, constituting $21.8 billion in savings claims. Instead, the OMB examiners attempted to reconstruct how they thought the savings had been estimated through approximating rather than replicating savings estimates. OMB did, however, provide documentation on the estimated savings for two of the six recommendations we reviewed. These recommendations were to reorganize USDA (with estimated savings of $770 million) and to redirect the DOE laboratories’ priorities (with estimated savings of $6.996 billion). Even when documentation for the NPR savings estimates was available, OMB could not always provide complete information about how the estimates were calculated. For example, the responsible OMB branch chief could not specifically remember what comprised the up-front costs shown on documentation for the recommendation to reorganize USDA. The NPR savings claims for both cases where OMB provided documentation were reported incorrectly. These errors led NPR to understate the estimated savings from those recommendations. One of the errors involved a math mistake that affected the amount of savings claimed. When updating the estimate, a subtraction error led to $10 million in estimated savings being omitted from the total claimed for the recommendation to redirect the DOE laboratories’ priorities. The other error occurred because savings of $1.859 billion that the examiner estimated would occur from the recommendation to reorganize USDA for fiscal years 1997 through 1999 were not reported. Conclusions NPR claimed savings from agency-specific recommendations that could not be fully attributed to its efforts. In general, the savings estimates we reviewed could not be replicated, and there was no way to substantiate the savings claimed. We also found that some savings were overstated because OMB counted savings twice, and two of the estimates were reported incorrectly, resulting in claims that were understated. Agency Comments and Our Evaluation We requested comments on a draft of this report from the Director of OMB, the Secretaries of Agriculture and Energy, and the NASA Administrator, or their designees. On June 14, 1999, we met with OMB staff who provided clarifying and technical comments on the draft report. 
We incorporated their suggestions in this report where appropriate. We obtained written comments on the draft report from the Director of USDA’s Office of Budget and Program Analysis. He said that a loan program for mohair producers established in fiscal year 1999 provides substantially different incentives from those of the original wool and mohair program. His letter stated that the costs associated with the 1999 program did not negate the savings derived from eliminating the earlier program. As a result, we eliminated our discussion concerning this loan program from the report. We also obtained written comments on the draft report from DOE’s Controller. She said that OMB’s use of the weapons activity budget account to estimate savings from the recommendation to redirect the energy laboratories to post-Cold War priorities is more reasonable than is implied by the report. She explained that while the title of the NPR recommendation suggests that only laboratories would be affected by the recommendation, related NPR information indicates that the recommendation affected facilities beyond just the laboratories. We added language to the report recognizing that the recommendation, although focused on the laboratories, did include actions to reduce the production and testing of nuclear weapons. Secondly, she said that DOE had progressed beyond the status NPR reported for the initiatives included in the recommendation to realign DOE, and we included the updated information in appendix IV. On June 2, 1999, a NASA official reported that NASA had no comments on our draft report. As agreed, unless you announce the contents of this report earlier, we plan no further distribution until 30 days from the date of this letter. At that time, we will send copies of this report to Representative Henry A. Waxman, Ranking Minority Member of the House Government Reform Committee, and to Senator Fred Thompson, Chairman, and Senator Joseph I. Lieberman, Ranking Minority Member, of the Senate Governmental Affairs Committee. We will also send copies to the Honorable Jacob J. Lew, Director of OMB; Mr. Morley Winograd, Director of NPR; the Honorable Daniel R. Glickman, Secretary of Agriculture; the Honorable Bill Richardson, Secretary of Energy; and the Honorable Daniel S. Goldin, Administrator of NASA. We will also make copies available to others on request. Major contributors to this report appear in appendix VII. Please contact me or Susan Ragland, Assistant Director, at (202) 512-8676 if you have questions about this report. Reorganize USDA NPR Recommendation In September 1993, NPR recommended that USDA be reorganized to better accomplish its mission, streamline its field structure, and improve service to its customers. NPR had recommended that USDA reorganize its structure, submit legislation to enact the reorganization, and review its field office structure to eliminate and restructure those elements no longer appropriate. Key Actions Reported NPR reported that USDA has made progress toward reorganizing its headquarters and field office structure. USDA submitted reorganization legislation, and Congress enacted the Federal Crop Insurance Reform and Department of Agriculture Reorganization Act of 1994 (P.L. 103-354) on October 13, 1994. The reorganization at the headquarters level has reduced the number of agencies from 43 to 29 and has established 7 “mission areas” to carry out program responsibilities.
USDA also implemented a field office streamlining plan that consolidates the county-based agencies to provide services to customers from all agencies at one location. This effort is to result in streamlining the number of field office locations from over 3,700 to 2,550. As of May 1998, the total number of field office locations had been reduced to about 2,700. Methodology Used to Estimate Savings OMB officials stated that savings for this recommendation were derived solely from the number of full-time equivalent (FTE) reductions USDA made. OMB took the difference between the fiscal year 1994 current services baseline and actual and updated reductions and then multiplied that amount by an average salary that was based on both headquarters and field office salary data. From that amount, OMB subtracted offsetting costs. OMB officials provided documentation on how these savings were estimated. End the Wool and Mohair Subsidy NPR Recommendation In September 1993, NPR recommended that USDA end this subsidy program, which was implemented in 1954 to increase domestic production of wool by providing direct payments to farmers. At that time, Congress declared wool a strategic commodity to reduce dependence on foreign fibers, which had developed because of the imports needed during World War II and the Korean conflict. NPR said that this subsidy was outdated, since wool was no longer a strategic commodity. Key Actions Reported NPR reported that this subsidy had been eliminated as a result of legislation amending the National Wool Act of 1954 (P.L. 103-130, November 1, 1993). The act mandated the reduction of subsidies during fiscal years 1994 and 1995 and the elimination of subsidies for fiscal year 1996. Payments were reduced by 25 percent in fiscal year 1994, 50 percent in fiscal year 1995, and eliminated entirely beginning in fiscal year 1996. Methodology Used to Estimate Savings In response to our questions, OMB officials were unable to provide documentation on how savings were estimated, but they generally could reconstruct how the savings would have been estimated. This involved subtracting the payments that farmers were receiving as a result of the subsidy reductions mandated in P.L. 103-130 from the amount of subsidies that were projected to have been paid, had the legislation not been enacted, for fiscal years 1994 through 1999. OMB said that the source of the projected subsidy information was 1993 data from the Commodity Credit Corporation, which analyzes budget projections and assumptions. Redirect DOE's Laboratories to Post-Cold War Priorities NPR Recommendation In 1993, NPR recommended that DOE shift laboratory facilities’ priorities to accommodate conditions that accompanied the end of the Cold War—such as the dramatic decrease in the arms race and cutbacks in defense-related energy and nuclear research funding. NPR recommended, among other things, that DOE continue to reduce funding for nuclear weapons production, research, testing programs, and infrastructure that are needed to meet current defense requirements; develop a vision for the DOE laboratory complex; and encourage laboratory managers to work with the private sector on high-priority research and development (R&D) needs. Key Actions Reported NPR reported that DOE is restructuring and refocusing its laboratories by developing new strategic plans and implementing a science-based stockpile stewardship program.
The stockpile stewardship program is designed to ensure the safety and reliability of nuclear weapons without nuclear explosive testing, consistent with the comprehensive nuclear test ban treaty, which banned the testing of nuclear weapons after the Cold War. DOE has also established the Laboratory Operations Board and the R&D Council. These organizations study the use of government/private partnerships to increase productivity of DOE R&D programs. Methodology Used to Estimate Savings OMB calculated savings for redirecting energy laboratories to post-Cold War priorities by taking the difference in the weapons activities budget account between the fiscal year 1994 current services baseline and the actual appropriations for that fiscal year and counting the savings through fiscal year 1999. The DOE laboratories’ budget is subsumed within the weapons activities account of the President’s budget. This account includes R&D to support the safety and reliability of the nuclear weapons stockpile as well as personnel and contractual services for certain defense programs’ missions. OMB considered DOE laboratories as well as the entire weapons complex, of which the laboratories are a component, when estimating savings for this recommendation. OMB officials provided documentation on how these savings were estimated. Summary of Savings From Realigning DOE NPR Recommendation In 1995, NPR had six recommendations concerning realignment of DOE. NPR consolidated reporting on these recommendations for purposes of developing savings estimates. These recommendations were to (1) terminate the Clean Coal Technology Program; (2) privatize the naval petroleum reserves in Elk Hills, CA; (3) sell uranium no longer needed for national defense purposes; (4) reduce costs in DOE’s applied research programs; (5) improve program effectiveness and efficiencies in environmental management of nuclear waste cleanups; and (6) strategically align headquarters and field operations. Key Actions Reported NPR reported that DOE has implemented actions consistent with these recommendations. For instance, NPR reported that DOE is planning to terminate the Clean Coal Technology Program by September 2000. DOE has reorganized the department by implementing the Strategic Alignment Initiative, which is intended to reduce staffing, support service contracting, and travel costs; streamline the National Environmental Policy Act; increase asset sales; and improve information resources management. DOE has also established performance measures to improve effectiveness of nuclear waste cleanups, developed a plan for selling uranium reserves, and is developing ways to reduce administrative costs in DOE’s research programs. More recently, DOE noted that the Elk Hills Naval Petroleum Reserves were sold in February 1998 for $3.1 billion. Similarly, in fiscal year 1998, $0.4 billion was realized because DOE’s uranium was included in the sale of the United States Enrichment Corporation. Methodology Used to Estimate Savings OMB could not reconstruct calculations for the savings estimated for this recommendation. Strengthen and Restructure NASA Management NPR Recommendation In September 1993, NPR recommended that NASA take a number of restructuring steps, both in overall management and in the management of the space station program.
It recommended that NASA, among other things, aggressively complete its overhaul of space station program management, implement the management principles developed for the redesigned space station program, and formally institute its Program Management Council (PMC), an organization charged with improving NASA’s internal management processes. Key Actions Reported NPR reported that NASA has taken and is continuing to take steps to improve the management of the agency and the space station. According to NPR, NASA’s overhaul of space station program management was accomplished through enactment of the fiscal year 1995 Appropriations Act (P.L. 103-327, September 28, 1994). Also, the PMC was established in June 1993 and is fully operational. Methodology Used to Estimate Savings In response to our questions, OMB attempted to reconstruct how savings were estimated, but could not provide documentation to support its calculation. OMB officials said the methodology they would have used to estimate savings for this recommendation was to take the difference between the fiscal year 1994 current services baseline and the actual appropriations for that fiscal year and count the savings through fiscal year 1999. Reinvent NASA NPR Recommendation In 1995, NPR recommended that NASA be reinvented. This recommendation built on the earlier NPR recommendation to strengthen and restructure NASA management. OMB consolidated seven recommendations that related to reinventing NASA for developing savings estimates. These recommendations included (1) eliminating duplication and overlap between NASA centers; (2) transferring functions to universities or the private sector; (3) reducing civil service involvement with and expecting more accountability from NASA contractors; (4) emphasizing objective contracting; (5) using private sector capabilities; (6) changing NASA regulations; and (7) returning NASA to its status as a research and development agency. Key Actions Reported NPR reported that NASA has completed actions consistent with this consolidated recommendation. For instance, NPR reported that NASA has restructured its centers to eliminate overlap and duplication of functions and has implemented techniques, such as forming partnerships and outsourcing functions. NPR also reported that NASA was creating alliances with academic and industrial centers and consolidating all space shuttle operations management under a single, private sector prime contractor. In addition, NPR reported that NASA has implemented a performance-based contracting initiative. Methodology Used to Estimate Savings In response to our questions, OMB attempted to reconstruct how savings were estimated, but could not provide documentation to support its calculation. An OMB official said she took the difference between the fiscal year 1996 current services baseline and the actual appropriations for that fiscal year and counted savings through fiscal year 2000. GAO Contacts and Staff Acknowledgments In addition to those named above, Carole Buncher, Lauren Alpert, and Jenny Kao made key contributions to this report.
Pursuant to a congressional request, GAO reviewed the savings claims from the National Performance Review (NPR), which has been renamed the National Partnership for Reinventing Government, focusing on how the Office of Management and Budget (OMB) estimated savings from selected NPR recommendations targeted toward individual agencies. GAO noted that: (1) NPR claimed savings from agency-specific recommendations that could not be fully attributed to its efforts; (2) OMB generally did not distinguish NPR's contributions from other initiatives or factors that influenced budget reductions at the agencies GAO reviewed; (3) therefore, the relationship between the recommended action and the savings claimed from the recommendations GAO reviewed was not clear; (4) to estimate the savings from the agency-specific recommendations, OMB said it used the same types of procedures and analytic techniques that have long been used in developing the President's budget; (5) as GAO's previous reviews of budget estimates have shown, it is difficult to reconstruct the specific assumptions used and track savings for estimates produced several years ago; (6) moreover, GAO has reported that it is often impossible to isolate the impacts of particular proposals or recommendations on actual savings achieved due to the multiple factors involved; (7) OMB relied on these point-in-time estimates rather than attempting to measure actual savings; (8) OMB last updated its estimates in 1997, so any changes that have occurred since then are not reflected in NPR's claimed savings; (9) GAO identified two instances where OMB counted at least part of the estimated savings twice; (10) therefore, the total estimated savings from the Department of Agriculture (USDA) and National Aeronautics and Space Administration recommendations were overstated; (11) for one recommendation where OMB and the Congressional Budget Office (CBO) both estimated savings, GAO found that offsetting program costs may not have been fully included in OMB's estimates; (12) in estimating savings that resulted from personnel reductions at USDA, OMB and CBO considered different offsetting costs and arrived at different estimates, with CBO's estimate being $324 million less than OMB's $770 million estimate; (13) according to CBO, it assumed that severance benefits and relocation costs would reduce the potential savings, while OMB assumed that the agency could absorb those costs; (14) consistent with OMB's normal budget practices, OMB examiners generally did not retain documentation when estimating NPR savings; (15) instead, the OMB examiners attempted to reconstruct how they thought the savings had been estimated through approximating rather than replicating savings estimates; (16) OMB had documentation for two of the recommendations GAO reviewed; and (17) however, GAO found that the savings were reported incorrectly.
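The offsetting-cost disagreement summarized in items (11) through (13) above comes down to whether transition costs are netted out of a gross savings figure. The sketch below only illustrates that arithmetic; apart from the $770 million, $446 million, and $324 million totals reported above, the component numbers are hypothetical.

```python
# Illustrative only: how the treatment of offsetting costs changes a savings
# estimate. The $770 million (OMB) and $446 million (CBO) totals come from this
# report; the component figures below are hypothetical, because the report does
# not break either estimate down. (Per the CBO letter, differing assumptions
# about employee overhead savings also contributed to the gap.)
def net_savings(gross_savings, offsetting_costs, absorb_costs_in_baseline):
    """Subtract offsetting costs only if they are not assumed to be absorbed."""
    if absorb_costs_in_baseline:
        return gross_savings
    return gross_savings - offsetting_costs

gross = 770                 # hypothetical gross savings, in millions
severance_and_moves = 324   # hypothetical severance/relocation costs, in millions

print(net_savings(gross, severance_and_moves, absorb_costs_in_baseline=True))   # 770
print(net_savings(gross, severance_and_moves, absorb_costs_in_baseline=False))  # 446
```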
GAO_GAO-05-840T
Background Radiation detection equipment can detect radioactive materials used in medicine and industry; in commodities that are sources of naturally occurring radiation, such as kitty litter; and in nuclear materials that could be used in a nuclear weapon. The capability of the equipment to detect nuclear material depends on many factors, including the amount of material, the size and capacity of the detection device, the distance from the detection device to the nuclear material, and whether the material is shielded from detection. Detecting actual cases of illicit trafficking in weapons-usable nuclear material is complicated because one of the materials that is of greatest concern—highly enriched uranium—is among the most difficult materials to detect because of its relatively low level of radioactivity. In contrast, medical and industrial radioactive sources, which could be used in a radiological dispersion device (or “dirty bomb”), are highly radioactive and easier to detect. Because of the complexities of detecting and identifying nuclear material, customs officers and border guards who are responsible for operating detection equipment must also be trained in using handheld radiation detectors to pinpoint the source of an alarm, identify false alarms, and respond to cases of nuclear smuggling. Several U.S. Agencies Have Programs to Combat Nuclear Smuggling Four U.S. agencies have implemented programs to combat nuclear smuggling both domestically and in other countries by providing radiation detection equipment and training to border security personnel. From fiscal year 1994 through fiscal year 2005, the Congress has appropriated about $800 million for these efforts, including about $500 million to DOE, DOD, and State for international efforts and about $300 million to DHS for installing radiation detection equipment at U.S. points of entry. Initial concerns about the threat posed by nuclear smuggling were focused on nuclear materials originating in the former Soviet Union. As a result, the first major initiatives to combat nuclear smuggling during the late 1990s concentrated on deploying radiation detection equipment at borders in countries of the former Soviet Union and in Central and Eastern Europe. Assistance included providing these countries with commercially available radiation detection equipment such as portal monitors (stationary equipment designed to detect radioactive materials carried by pedestrians or vehicles) and smaller, portable radiation detectors. In addition, U.S. agencies provided technical support to promote the development and enforcement of laws and regulations governing the export of nuclear-related technology and other equipment and training to generally improve these countries’ ability to interdict nuclear smuggling. One of the main U.S. efforts providing radiation detection equipment to foreign governments is DOE’s Second Line of Defense program, which began installing equipment at key border crossing sites in Russia in 1998. According to DOE, through the end of fiscal year 2004, the Second Line of Defense program had completed installations at 66 sites, mostly in Russia. Additionally, in 2003, DOE began its Megaports Initiative, which seeks to install radiation detection equipment at major foreign seaports to enable foreign government personnel to screen shipping containers entering and leaving these ports for nuclear and other radioactive material.
In March 2005, we reported that the Megaports Initiative had completed installations at two foreign ports and is currently working to equip five others with radiation detection equipment. Other U.S. agencies also have programs to provide radiation detection equipment and training to foreign governments, including two programs at the Department of State—the Nonproliferation and Disarmament Fund and the Export Control and Related Border Security program—and two programs at DOD—the International Counterproliferation Program and the Weapons of Mass Destruction Proliferation Prevention Initiative. In addition to these efforts at foreign borders, the U.S. Customs Service began providing its inspectors at U.S. borders and points of entry with small handheld radiation detection devices, known as radiation pagers, in fiscal year 1998. After September 11, 2001, this effort was expanded by DHS’s Bureau of Customs and Border Protection. In the spring of 2002, DHS conducted a pilot project to test the use of radiation portal monitors, larger-scale radiation detection equipment that can be used to screen vehicles and cargo. In October 2002, DHS began its deployment of portal monitors at U.S. points of entry. In May 2005, DHS reported that it has installed more than 470 radiation portal monitors nationwide at sites including international mail and package handling facilities, land border crossings, and seaports. U.S. Programs to Combat Nuclear Smuggling in the United States and Other Countries Have Lacked Effective Planning and Coordination A common problem faced by U.S. programs to combat nuclear smuggling both domestically and in other countries is the lack of effective planning and coordination among the agencies responsible for implementing these programs. Regarding assistance to foreign countries, we reported in 2002 that there was no overall governmentwide plan to guide U.S. efforts, some programs were duplicative, and coordination among the U.S. agencies was not effective. We found that the most troubling consequence of this lack of effective planning and coordination was that DOE, State, and DOD were pursuing separate approaches to enhancing other countries’ border crossings. Specifically, radiation portal monitors installed in more than 20 countries by State are less sophisticated than those installed by DOE and DOD. As a result, some border crossings where U.S. agencies have installed radiation detection equipment are more vulnerable to nuclear smuggling than others. We found that there were two offices within DOE that were providing radiation detection equipment and two offices within State that have funded similar types of equipment for various countries. We made several recommendations to correct these problems and, since the issuance of our report, a governmentwide plan encompassing U.S. efforts to combat nuclear smuggling in other countries has been developed; some duplicative programs have been consolidated; and coordination among the agencies, although still a concern, has improved. Regarding efforts to deploy radiation detection equipment at U.S. points of entry, we reported that DHS had not coordinated with other federal agencies and DOE national laboratories on longer-term objectives such as attempting to improve the radiation detection technology used in portal monitors. We also noted that DHS was not sharing data generated by portal monitors installed at U.S.
points of entry with DOE national laboratories other than Pacific Northwest National Laboratory, which is DHS’s primary contractor for deploying radiation detection equipment at U.S. points of entry. Experts from DOE’s national laboratories told us that achieving improvements to existing radiation detection technologies largely depends on analyzing data on the types of radioactive cargo passing through deployed portal monitors. We found that a number of factors hindered coordination, including competition between the DOE national laboratories and the emerging missions of various federal agencies with regard to radiation detection. DHS agreed with our assessment and told us that it would be taking corrective actions. Additionally, other DOE national laboratories and federal agencies are independently testing numerous different radiation portal monitors using a variety of nuclear and radiological materials and simulating possible smuggling scenarios. However, they are not sharing lessons learned or the results of these tests with other federal agencies. For example, DOD’s Defense Threat Reduction Agency has a large testing facility near Sandia National Laboratories in New Mexico and has pilot tested radiation detection equipment at entrances to certain military bases. However, it is unclear how and with whom the results of such testing are shared to facilitate the development of improved radiation detection technologies. In April 2005, DHS announced its intent to create the Domestic Nuclear Detection Office (DNDO) to coordinate U.S. efforts to develop improved radiation detection technologies. DHS has requested over $227 million in fiscal year 2006 to initiate this effort. Through DNDO, DHS plans to lead the development of a national test bed for radiation detection technologies at the Nevada Test Site. Currently Deployed Radiation Detection Equipment Has Limitations Recently, concerns have been raised about the ability of radiation detection equipment to detect illicitly trafficked nuclear material. As we have reported in the past, certain factors can affect the general capability of radiation detection equipment. In particular, nuclear materials are more difficult to detect if lead or other metal is used to shield them. For example, we reported in March 2005 that a cargo container containing a radioactive source passed through radiation detection equipment that DOE had installed at a foreign seaport without being detected because of the presence of large amounts of scrap metal in the container. Additionally, detecting actual cases of illicit trafficking in weapons-usable nuclear material is complicated because one of the materials of greatest concern in terms of proliferation—highly enriched uranium—is among the most difficult materials to detect due to its relatively low level of radioactivity. The manner in which radiation detection equipment is deployed, operated, and maintained can also limit its effectiveness. Given the inherent limitations of currently deployed radiation detection equipment and difficulties in detecting certain nuclear materials, it is important that it be installed, operated, and maintained in a way that optimizes authorities’ ability to interdict illicit nuclear materials. In our past reports, we have noted many problems with the radiation detection equipment currently deployed at U.S. and foreign borders. Specifically, in October 2002, we testified that radiation detection pagers have severe limitations and are inappropriate for some tasks. 
DOE officials told us that the pagers have a limited range and are not designed to detect weapons-usable nuclear material. According to U.S. radiation detection vendors and DOE national laboratory specialists, pagers are more effectively used in conjunction with other radiation detection equipment, such as portal monitors. In addition, the manner in which DHS had deployed radiation detection equipment at some U.S. points of entry reduced its effectiveness. Specifically, we identified a wide range of problems, such as (1) allowing trucks to pass through portal monitors at speeds higher than what experts consider optimal for detecting nuclear material, (2) reducing the sensitivity of the portal monitors in an attempt to limit the number of nuisance alarms from naturally occurring radioactive materials, such as kitty litter and ceramics, and (3) not deploying enough handheld radiation detection equipment to certain border sites, which limited the ability of inspectors to perform secondary inspections on suspicious cargo or vehicles. Regarding problems with the U.S. programs to deploy radiation detection equipment in other countries, we reported that:
About half of the portal monitors provided to one country in the former Soviet Union were never installed or were not operational. Officials from this country told us that they were given more equipment than they could use.
A radiation portal monitor provided to Bulgaria by the Department of State was installed on an unused road that was not expected to be completed for 1-1/2 years.
Mobile vans equipped with radiation detection equipment furnished by the Department of State have limited utility because they cannot operate effectively in cold climates or are otherwise not suitable for conditions in some countries.
DOE has found that environmental conditions at many seaports, such as the existence of high winds and sea spray, can affect radiation detection equipment’s performance and sustainability.
Environmental conditions are not the only challenge facing DOE and DHS in installing radiation detection equipment at seaports in the United States and other countries. One of the biggest challenges at seaports is adapting the equipment to the port environment while minimizing the impact on the flow of commerce and people. DOE’s Megaports Initiative had made limited progress in installing radiation detection equipment at foreign seaports it had identified as highest priority largely due to concerns of some countries about the impact of radiation detection equipment on the flow of commerce through their ports. DHS has faced similar concerns from port operators in the United States. It is important to note that radiation detection equipment is only one of the tools in the toolbox that customs inspectors and border guards must use to combat nuclear smuggling. Combating nuclear smuggling requires an integrated approach that includes equipment, proper training, and intelligence gathering on smuggling operations. In the past, most known interdictions of weapons-usable nuclear materials have resulted from police investigations rather than from detection by radiation detection equipment installed at border crossings. However, there have been recent reports of incidents where radioactive materials were discovered and seized as a result of alarms raised by radiation detection equipment.
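The deployment problems described above (vehicle speed, reduced sensitivity, shielding) are easier to see with a rough back-of-envelope calculation. The sketch below uses a simplified point-source model with exponential-style shielding attenuation; none of the numerical values come from GAO's work, and real portal-monitor performance depends on many factors this model ignores.

```python
import math

# Rough illustration of why distance, shielding, and vehicle speed matter for
# portal monitors. Simplified point-source model with hypothetical values; not
# a model of any actual detector evaluated by GAO.
def counts_collected(emission_rate, distance_m, shield_factor, detector_area_m2,
                     efficiency, dwell_time_s):
    flux = emission_rate * shield_factor / (4 * math.pi * distance_m ** 2)
    return flux * detector_area_m2 * efficiency * dwell_time_s

emission_rate = 2.0e5   # gammas per second escaping the source (hypothetical)
detector_area = 0.3     # square meters of detector panel (hypothetical)
efficiency = 0.2        # fraction of incident gammas registered (hypothetical)
portal_length = 4.0     # meters of roadway covered by the monitor (hypothetical)

for speed_kmh, shield in [(8, 1.0), (8, 0.1), (40, 0.1)]:
    dwell = portal_length / (speed_kmh / 3.6)  # seconds spent in front of the panel
    n = counts_collected(emission_rate, 3.0, shield, detector_area, efficiency, dwell)
    print(f"{speed_kmh} km/h, shielding factor {shield}: ~{n:.0f} counts")
```

Fewer counts above background mean a weaker alarm signal, which is why higher truck speeds, heavy shielding, and detuned sensitivity all work against detection.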
Because of the complexity of detecting nuclear material, the customs officers or border guards who are responsible for operating radiation detection equipment must also be well-trained in using handheld radiation detectors to pinpoint the source of an alarm, identify false alarms, and respond to cases of nuclear smuggling. Without a clear understanding of how radiation detection equipment works and its limitations, inspectors may not be using the equipment as effectively as possible. Although efforts to combat nuclear smuggling through the installation of radiation detection equipment are important, the United States should not and does not rely upon radiation detection equipment at foreign or U.S. borders as its sole means for preventing nuclear materials or a nuclear warhead from reaching the United States. Recognizing the need for a broad approach to the problem, the U.S. government has multiple initiatives that are designed to complement each other. For example, DOE is securing nuclear material at its source through the Material Protection, Control, and Accounting program, which seeks to improve the physical security of nuclear facilities in the former Soviet Union. In addition, DHS has other initiatives to identify containers at foreign seaports that are considered high risk for containing smuggled goods, such as nuclear material and other dangerous materials. Supporting all of these programs is intelligence information that can give us advance notice of nuclear material smuggling and is a critical component to prevent dangerous materials from entering the United States. This concludes my prepared statement. I would be happy to respond to any questions that you or other Members of the Subcommittees may have. Contact and Staff Acknowledgments For further information about this testimony, please contact me at (202) 512-3841 or at [email protected]. R. Stockton Butler, Julie Chamberlain, Nancy Crothers, Christopher Ferencik, Emily Gupta, Jennifer Harman, Winston Le, Glen Levis, F. James Shafer, Jr., and Gene Wisnoski made key contributions to this statement. Related GAO Products
Preventing Nuclear Smuggling: DOE Has Made Limited Progress in Installing Radiation Detection Equipment at Highest Priority Foreign Seaports. GAO-05-375. Washington, D.C.: March 31, 2005.
Weapons of Mass Destruction: Nonproliferation Programs Need Better Integration. GAO-05-157. Washington, D.C.: January 28, 2005.
Customs Service: Acquisition and Deployment of Radiation Detection Equipment. GAO-03-235T. Washington, D.C.: October 17, 2002.
Container Security: Current Efforts to Detect Nuclear Materials, New Initiatives, and Challenges. GAO-03-297T. Washington, D.C.: November 18, 2002.
Nuclear Nonproliferation: U.S. Efforts to Combat Nuclear Smuggling. GAO-02-989T. Washington, D.C.: July 30, 2002.
Nuclear Nonproliferation: U.S. Efforts to Help Other Countries Combat Nuclear Smuggling Need Strengthened Coordination and Planning. GAO-02-426. Washington, D.C.: May 16, 2002.
This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
According to the International Atomic Energy Agency, between 1993 and 2004, there were 650 confirmed cases of illicit trafficking in nuclear and radiological materials worldwide. A significant number of the cases involved material that could be used to produce either a nuclear weapon or a device that uses conventional explosives with radioactive material (known as a "dirty bomb"). Over the past decade, the United States has become increasingly concerned about the danger that unsecured weapons-usable nuclear material could fall into the hands of terrorists or countries of concern. In the aftermath of September 11, 2001, there is heightened concern that terrorists may try to smuggle nuclear materials or a nuclear weapon into the United States. This testimony summarizes the results of our previous reports on various U.S. efforts to combat nuclear smuggling both in the United States and abroad. Specifically, this testimony discusses (1) the different U.S. federal agencies tasked with installing radiation detection equipment both domestically and in other countries, (2) problems with coordination among these agencies and programs, and (3) the effectiveness of radiation detection equipment deployed in the United States and other countries. Four U.S. agencies, the Departments of Energy (DOE), Defense (DOD), State, and Homeland Security (DHS), are implementing programs to combat nuclear smuggling by providing radiation detection equipment and training to border security personnel. From fiscal year 1994 through fiscal year 2005, the Congress has appropriated about $800 million for these efforts, including about $500 million to DOE, DOD, and State for international efforts and about $300 million to DHS for installing radiation detection equipment at U.S. points of entry. The first major initiatives to combat nuclear smuggling concentrated on deploying radiation detection equipment at borders in countries of the former Soviet Union. In particular, in 1998, DOE established the Second Line of Defense program, which has installed equipment at 66 sites mostly in Russia through the end of fiscal year 2004. In 2003, DOE began its Megaports Initiative to focus on the threat posed by nuclear smuggling at major foreign seaports and to date has completed installations at two ports. Regarding efforts at U.S. points of entry, the U.S. Customs Service began providing its inspectors with portable radiation detection devices in 1998 and expanded its efforts to include larger-scale radiation detection equipment after September 11, 2001. This program is continuing under DHS, which reported in May 2005 that it has installed more than 470 radiation portal monitors nationwide at mail facilities, land border crossings, and seaports. A common problem faced by U.S. programs to combat nuclear smuggling is the lack of effective planning and coordination among the responsible agencies. For example, we reported in 2002 that there was no overall governmentwide plan to guide U.S. efforts, some programs were duplicative, and coordination among U.S. agencies was not effective. We found that the most troubling consequence of this lack of effective planning and coordination was that the Department of State had installed less sophisticated equipment in some countries leaving those countries' borders more vulnerable to nuclear smuggling than countries where DOE and DOD had deployed equipment. Since the issuance of our report, the agencies involved have made some progress in addressing these issues. 
Regarding the deployment of equipment in the United States, we reported that DHS had not effectively coordinated with other federal agencies and DOE national laboratories on longer-term objectives, such as attempting to improve the radiation detection technology. We found that a number of factors hindered coordination, including competition between DOE national laboratories and the emerging missions of various federal agencies with regard to radiation detection. The current generation of radiation detection equipment is limited in its ability to detect illicitly trafficked nuclear material, especially if it is shielded by lead or other metal. Given the inherent limitations of radiation detection equipment and difficulties in detecting certain materials, it is important that the equipment be installed, operated, and maintained in a way that optimizes its usefulness. It is also important to note that the deployment of radiation detection equipment--regardless of how well such equipment works--is not a panacea for the problem of nuclear smuggling. Rather, combating nuclear smuggling requires an integrated approach that includes equipment, proper training of border security personnel in the use of radiation detection equipment, and intelligence gathering on potential nuclear smuggling operations.
GAO_GAO-02-160T
The Nature of the Threat Facing the United States The United States and other nations face increasingly diffuse threats in the post-Cold War era. In the future, potential adversaries are more likely to strike vulnerable civilian or military targets in nontraditional ways to avoid direct confrontation with our military forces on the battlefield. The December 2000 national security strategy states that porous borders, rapid technological change, greater information flow, and the destructive power of weapons now within the reach of small states, groups, and individuals make such threats more viable and endanger our values, way of life, and the personal security of our citizens. Hostile nations, terrorist groups, transnational criminals, and individuals may target American people, institutions, and infrastructure with cyber attacks, weapons of mass destruction, or bioterrorism. International criminal activities such as money laundering, arms smuggling, and drug trafficking can undermine the stability of social and financial institutions and the health of our citizens. Other national emergencies may arise from naturally occurring or unintentional sources such as outbreaks of infectious disease. As we witnessed in the tragic events of September 11, 2001, some of the emerging threats can produce mass casualties. They can lead to mass disruption of critical infrastructure, involve the use of biological or chemical weapons, and can have serious implications for both our domestic and the global economy. The integrity of our mail has already been compromised. Terrorists could also attempt to compromise the integrity or delivery of water or electricity to our citizens, compromise the safety of the traveling public, and undermine the soundness of government and commercial data systems supporting many activities. Key Elements to Improve Homeland Security A fundamental role of the federal government under our Constitution is to protect America and its citizens from both foreign and domestic threats. The government must be able to prevent and deter threats to our homeland as well as detect impending danger before attacks or incidents occur. We also must be ready to manage the crises and consequences of an event, to treat casualties, reconstitute damaged infrastructure, and move the nation forward. Finally, the government must be prepared to retaliate against the responsible parties in the event of an attack. To accomplish this role and address our new priority on homeland security, several critical elements must be put in place. First, effective leadership is needed to guide our efforts as well as secure and direct related resources across the many boundaries within and outside of the federal government. Second, a comprehensive homeland security strategy is needed to prevent, deter, and mitigate terrorism and terrorist acts, including the means to measure effectiveness. Third, managing the risks of terrorism and prioritizing the application of resources will require a careful assessment of the threats we face, our vulnerabilities, and the most critical infrastructure within our borders. Leadership Provided by the Office of Homeland Security On September 20, 2001, we issued a report that discussed a range of challenges confronting policymakers in the war on terrorism and offered a series of recommendations. 
We recommended that the government needs clearly defined and effective leadership to develop a comprehensive strategy for combating terrorism, to oversee development of a new national-threat and risk assessment, and to coordinate implementation among federal agencies. In addition, we recommended that the government address the broader issue of homeland security. We also noted that overall leadership and management efforts to combat terrorism are fragmented because no single focal point manages and oversees the many functions conducted by more than 40 different federal departments and agencies. For example, we have reported that many leadership and coordination functions for combating terrorism were not given to the National Coordinator for Security, Infrastructure Protection and Counterterrorism within the Executive Office of the President. Rather, these leadership and coordination functions are spread among several agencies, including the Department of Justice, the Federal Bureau of Investigation (FBI), the Federal Emergency Management Agency, and the Office of Management and Budget. In addition, we reported that federal training programs on preparedness against weapons of mass destruction were not well coordinated among agencies resulting in inefficiencies and concerns among rescue crews in the first responder community. The Department of Defense, Department of Justice, and the Federal Emergency Management Agency have taken steps to reduce duplication and improve coordination. Despite these efforts, state and local officials and organizations representing first responders indicate that there is still confusion about these programs. We made recommendations to consolidate certain activities, but have not received full agreement from the respective agencies on these matters. In his September 20, 2001, address to the Congress, President Bush announced that he was appointing Pennsylvania Governor Thomas Ridge to provide a focus to homeland security. As outlined in the President’s speech and confirmed in a recent executive order, the new Homeland Security Adviser will be responsible for coordinating federal, state, and local efforts and for leading, overseeing, and coordinating a comprehensive national strategy to safeguard the nation against terrorism and respond to any attacks that may occur. Both the focus of the executive order and the appointment of a coordinator within the Executive Office of the President fit the need to act rapidly in response to the threats that surfaced in the events of September 11 and the anthrax issues we continue to face. Although this was a good first step, a number of important questions related to institutionalizing and sustaining the effort over the long term remain, including: What will be included in the definition of homeland security? What are the specific homeland security goals and objectives? How can the coordinator identify and prioritize programs that are spread across numerous agencies at all levels of government? What criteria will be established to determine whether an activity does or does not qualify as related to homeland security? How can the coordinator have a real impact in the budget and resource allocation process? Should the coordinator’s roles and responsibilities be based on specific statutory authority? And if so, what functions should be under the coordinator’s control? 
Depending on the basis, scope, structure, and organizational location of this new position and entity, what are the implications for the Congress and its ability to conduct effective oversight? A similar approach was pursued to address the potential for computer failures at the start of the new millennium, an issue that came to be known as Y2K. A massive mobilization, led by an assistant to the President, was undertaken. This effort coordinated all federal, state, and local activities and established public-private partnerships. In addition, the Congress provided emergency funding to be allocated by the Office of Management and Budget after congressional consideration of the proposed allocations. Many of the lessons learned and practices used in this effort can be applied to the new homeland security effort. At the same time, the Y2K effort was finite and not nearly as extensive in scope or as important and visible to the general public as homeland security. The long-term, expansive nature of the homeland security issue suggests the need for a more sustained and institutionalized approach. Developing a Comprehensive Homeland Security Strategy I would like to discuss some elements that need to be included in the development of the national strategy for homeland security and a means to assign roles to federal, state, and local governments and the private sector. Our national preparedness related to homeland security starts with defense of our homeland but does not stop there. Besides involving military, law enforcement, and intelligence agencies, it also requires all levels of government – federal, state, and local – as well as private individuals and businesses to coordinate their efforts to protect the personal safety and financial interests of United States citizens, businesses, and allies, both at home and throughout the world. To be comprehensive, our strategy should include steps designed to reduce our vulnerability to threats; use intelligence assets and other broad-based information sources to identify threats and share such information as appropriate; stop incidents before they occur; manage the consequences of an incident; and, in the case of terrorist attacks, respond by all means available, including economic, diplomatic, and military actions that, when appropriate, are coordinated with other nations. An effective homeland security strategy must involve all levels of government and the private sector. While the federal government can assign roles to federal agencies under the strategy, it will need to reach consensus with the other levels of government and with the private sector on their respective roles. In pursuing all elements of the strategy, the federal government will also need to closely coordinate with the governments and financial institutions of other nations. As the President has said, we will need their help. This is especially true with regard to the multi-dimensional approach to preventing, deterring, and responding to incidents, which crosses economic, diplomatic, and military lines and is global in nature. Managing Risks to Homeland Security The United States does not currently have a comprehensive risk management approach to help guide federal programs for homeland security and apply our resources efficiently and to best effect. “Risk management” is a systematic, analytical process to determine the likelihood that a threat will harm physical assets or individuals and then to identify actions to reduce risk and mitigate the consequences of an attack. 
The principles of risk management acknowledge that while risk generally cannot be eliminated, enhancing protection from known or potential threats can significantly reduce it. We have identified a risk management approach used by the Department of Defense to defend against terrorism that might have relevance for the entire federal government in enhancing preparedness to respond to national emergencies, whether man-made or unintentional. The approach is based on assessing threats, vulnerabilities, and the importance of assets (criticality). The results of the assessments are used to balance threats and vulnerabilities and to define and prioritize related resource and operational requirements. Threat assessments identify and evaluate potential threats on the basis of such factors as capabilities, intentions, and past activities. These assessments represent a systematic approach to identifying potential threats before they materialize. However, even if updated often, threat assessments might not adequately capture some emerging threats. The risk management approach therefore uses the vulnerability and criticality assessments discussed below as additional input to the decision-making process. Vulnerability assessments identify weaknesses that may be exploited by identified threats and suggest options that address those weaknesses. For example, a vulnerability assessment might reveal weaknesses in an organization’s security systems, financial management processes, computer networks, or unprotected key infrastructure such as water supplies, bridges, and tunnels. In general, teams of experts skilled in such disciplines as structural engineering and physical security conduct these assessments. Criticality assessments evaluate and prioritize important assets and functions in terms of such factors as mission and significance as a target. For example, certain power plants, bridges, computer networks, or population centers might be identified as important to national security, economic security, or public health and safety. Criticality assessments provide a basis for identifying which assets and structures are relatively more important to protect from attack. In so doing, the assessments help determine operational requirements and indicate where to prioritize and target resources while reducing the potential to expend resources on lower-priority assets. We recognize that a national-level risk management approach that includes balanced assessments of threats, vulnerabilities, and criticality will not be a panacea for all the problems in providing homeland security. However, if applied conscientiously and consistently, a balanced approach—consistent with the elements I have described—could provide a framework for action. It would also facilitate multidisciplinary and multi-organizational participation in planning, developing, and implementing programs and strategies to enhance the security of our homeland while applying the resources of the federal government in the most efficient and effective manner possible. Given the tragic events of Tuesday, September 11, 2001, a comprehensive risk management approach that addresses all threats has become an imperative. 
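To make the interplay of the three assessments concrete, the sketch below shows one simple way that threat, vulnerability, and criticality scores might be combined to rank assets for resource allocation. It is purely illustrative and is not the Department of Defense's actual methodology; the assets, the scores, and the multiplicative scoring scheme are hypothetical assumptions.

```python
# Hypothetical illustration only; this is not DOD's actual risk model.
# Each asset receives 1-5 scores for threat, vulnerability, and criticality,
# and a simple composite (their product) suggests where resources might be
# targeted first.

from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    threat: int         # assessed likelihood and capability of attack (1 = low, 5 = high)
    vulnerability: int  # assessed ease of exploiting weaknesses (1 = low, 5 = high)
    criticality: int    # assessed importance of the asset (1 = low, 5 = high)

    def risk_score(self) -> int:
        # Multiplying the factors means a low score on any one factor pulls the
        # composite down: a highly critical but well-protected asset can rank
        # below a moderately critical, highly vulnerable one.
        return self.threat * self.vulnerability * self.criticality

# Hypothetical assets and scores, for illustration only.
assets = [
    Asset("Regional water supply", threat=3, vulnerability=4, criticality=5),
    Asset("Municipal data center", threat=4, vulnerability=2, criticality=3),
    Asset("Highway bridge", threat=2, vulnerability=5, criticality=4),
]

for asset in sorted(assets, key=Asset.risk_score, reverse=True):
    print(f"{asset.name}: composite risk {asset.risk_score()}")
```

In practice, agencies would weight and validate such judgments through expert review rather than a fixed formula; the point is simply that the three assessments feed a common prioritization of where to apply resources.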
As this nation implements a strategy for homeland security, we will encounter many of the long-standing performance and accountability challenges being faced throughout the federal government. For example, we will be challenged to look across the federal government itself to bring more coherence to the operations of many agencies and programs. We must also address human capital issues to determine if we have the right people with the right skills and knowledge in the right places. Coordination across all levels of government will be required, as will adequately defining performance goals and measuring success. In addressing these issues, we will also need to keep in mind that our homeland security priorities will have to be accomplished against the backdrop of the long-term fiscal challenges that loom just over the 10-year budget window. Short- and Long-Term Fiscal Implications The challenges of combating terrorism and otherwise addressing homeland security have come to the fore as urgent claims on the federal budget. As figure 2 shows, history suggests that when our national security or the state of the nation’s economy was at issue, we have incurred sizable deficits. Many would argue that today we are facing both of these challenges. We are fortunate to be facing them at a time when we have some near-term budgetary flexibility. The budget surpluses of recent years, achieved through fiscal discipline and strong economic growth, put us in a stronger position to respond both to the events of September 11 and to the economic slowdown than would otherwise have been the case. I ask you to recall the last recession in the early 1990s, when our triple-digit deficits kept us from considering a major fiscal stimulus to jump-start the economy because of well-founded fears about the impact of such measures on interest rates that were already quite high. In contrast, the fiscal restraint of recent years has given us the flexibility we need both to respond to the security crisis and to consider short-term stimulus efforts. As we respond to the urgent priorities of today, we need to do so with an eye to the significant long-term fiscal challenges we face just over the 10-year budget horizon. I know that you and your counterparts in the Senate have given a great deal of thought to how the Congress and the President might balance today’s immediate needs against our long-term fiscal challenges. This is an important note to sound—while some short-term actions are understandable and necessary, long-term fiscal discipline is still an essential need. As we seek to meet today’s urgent needs, it is important to be mindful of the collective impact of our decisions on the overall short- and long-term fiscal position of the government. For the short term, we should be wary of building in large permanent structural deficits that may drive up interest rates, thereby offsetting the potential economic stimulus Congress provides. For the longer term, known demographic trends (e.g., the aging of our population) and rising health care costs will place increasing claims on future federal budgets; reclaiming the fiscal flexibility necessary to address these and other emerging challenges is a major task facing this generation. None of the changes since September 11 have lessened these long-term pressures on the budget. In fact, the events of September 11 have served to increase our long-range challenges. The baby boom generation is aging and is projected to enjoy greater life expectancy. As the share of the population over 65 climbs, federal spending on the elderly will absorb larger and ultimately unsustainable shares of the federal budget. 
Federal health and retirement spending are expected to surge as people live longer and spend more time in retirement. In addition, advances in medical technology are likely to keep pushing up the cost of providing health care. Absent substantive change in related entitlement programs, we face the potential return of large deficits requiring unprecedented spending cuts in other areas or unprecedented tax increases. As you know, the Director of the Congressional Budget Office (CBO) has recently suggested the possibility of a federal budget deficit in fiscal year 2002, and other budget analysts appear to be in agreement. While we do not know today what the 10-year budget projections will be in the next updates by CBO and the Office of Management and Budget (OMB), we do know the direction: they will be considerably less optimistic than before September 11, and the long-term outlook will look correspondingly worse. For example, if we assume that the 10-year surpluses CBO projected in August are eliminated, then by 2030, absent changes in the structure of Social Security and Medicare, there would be virtually no room for any other federal spending priorities, including national defense, education, and law enforcement. (See fig. 3.) The resource demands that come from the events of September 11—and the need to address the gaps these events surfaced—will require tough choices. Part of that response must be to deal with the threats to our long-term fiscal health. Ultimately, restoring our long-term fiscal flexibility will involve both promoting higher long-term economic growth and reforming the federal entitlement programs. When Congress returns for its next session, these issues should be placed back on the national agenda. With this long-term outlook as a backdrop, an ideal fiscal response to a short-term economic downturn would be temporary and targeted and would avoid worsening the longer-term structural pressures on the budget. However, you have been called upon not merely to respond to a short-term economic downturn but also to the homeland security needs so tragically highlighted on September 11. This response will appropriately consist of both temporary and longer-term commitments. While we might all hope that the struggle against terrorism can be brought to a swift conclusion, prudence dictates that we plan for a longer-term horizon in this complex conflict. Given the long-term fiscal challenge driven by the coming change in our demographics, you might think about the options you face in responding to short-term economic weakness in terms of a range or portfolio of fiscal actions balancing today’s urgent needs with tomorrow’s fiscal challenges. In my testimony last February before the Senate Budget Committee, I suggested that fiscal actions could be arrayed along a continuum according to the degree of long-term fiscal risk they present. At one end, debt reduction and entitlement reform actually increase future fiscal flexibility by freeing up resources. One-time actions—either on the tax or spending side of the budget—may have limited impact on future flexibility. At the other end of the fiscal risk spectrum, permanent or open-ended fiscal actions on the spending or tax side of the budget can reduce future fiscal flexibility—although they may have salutary effects on longer-term economic growth, depending on their design and implementation. I have suggested before that increasing entitlement spending arguably presents the highest risk to our long-range fiscal outlook. 
Whatever choices the Congress decides to make, approaches should be explored to mitigate risks to the long term. For example, provisions with plausible expiration dates—on the spending and/or the tax side—may prompt re-examination in light of any changes in fiscal circumstances. In addition, a mix of temporary and permanent actions can also serve to reduce risk. As we move beyond the immediate threats, it will be important for the Congress and the President to take a hard look at competing claims on the federal fisc. I don’t need to remind this Committee that a big contributor to deficit reduction in the 1990s was the decline in defense spending. Given recent events, it seems clear that the defense budget is not a likely source for future budget reductions. (See fig. 4.) Once the economy rebounds, returning to surpluses will take place against the backdrop of greater competition among claims within the budget. The new commitments that we need to undertake to protect this nation against the threats stemming from terrorism will compete with other priorities. Subjecting both new proposals and existing programs to scrutiny would increase the ability to accommodate any new needs. A fundamental review of existing programs and operations can create much-needed fiscal flexibility to address emerging needs by weeding out programs that have proven to be outdated, poorly targeted, or inefficient in their design and management. Many programs were designed years ago to respond to earlier challenges. Obviously, many things have changed. It should be the norm to reconsider the relevance or “fit” of any federal program or activity in today’s world and for the future. In fact, we have a stewardship responsibility to both today’s taxpayers and tomorrow’s to reexamine and update our priorities, programs, and agency operations. Given the significant events since the last CBO 10-year budget projections, it is clear that the time has come to conduct a comprehensive review of existing agencies and programs—which are often considered to be “in the base”—while exercising continued prudence and fiscal discipline in connection with new initiatives. In particular, agencies will need to reassess their strategic goals and priorities to enable them to better target available resources to address urgent national preparedness needs. The terrorist attacks, in fact, may provide a window of opportunity for certain agencies to rethink approaches to longstanding problems and concerns. For instance, the threat to air travel has already prompted attention to chronic problems with airport security that we and others have been pointing to for years. Moreover, the crisis might prompt a healthy reassessment of our broader transportation policy framework with an eye to improving the integration of air, rail, and highway systems to better move people and goods. Other longstanding problems also take on increased relevance in today’s world. Take, for example, food safety. Problems such as overlapping and duplicative inspections, poor coordination, and the inefficient allocation of resources are not new. However, they take on a new meaning—and could receive increased attention—given increased awareness of bioterrorism issues. GAO has identified a number of areas warranting reconsideration based on program performance, targeting, and costs. Every year, we issue a report identifying specific options, many scored by CBO, for congressional consideration stemming from our audit and evaluation work. 
This report provides opportunities for (1) reassessing the objectives of specific federal programs, (2) improving the targeting of benefits, and (3) improving the efficiency and management of federal initiatives. This same stewardship responsibility applies to our oversight of the funds recently provided to respond to the events of September 11. Rapid action in response to an emergency does not eliminate the need for review of how the funds are used. As you move ahead in the coming years, there will be proposals for new or expanded federal activities, but we must seek to distinguish the infinite variety of “wants” from those investments that have greater promise to effectively address more critical “needs.” In sorting through these proposals, we might apply certain investment criteria in making our choices. Well-chosen enhancements to the nation’s infrastructure are an important part of our national preparedness strategy. Investments in human capital in certain areas, such as intelligence, public health, and airport security, will also be necessary to foster and maintain the skill sets needed to respond to the threats facing us. As we have seen with the airline industry, we may even be called upon to provide targeted and temporary assistance to certain vital sectors of our economy affected by this crisis. A variety of governmental tools will be proposed to address these challenges—grants, loans, tax expenditures, direct federal administration. The involvement of a wide range of third parties—state and local governments, nonprofits, private corporations, and even other nations—will be a vital part of the national response as well. In the short term, we have to do what is necessary to get this nation back on its feet and compassionately deal with the human tragedies left in the attacks’ wake. However, as we think about our longer-term preparedness and develop a comprehensive homeland security strategy, we can and should select those programs and tools that promise to provide the most cost-effective approaches to achieve our goals. Some of the key questions that should be asked include the following: Does the proposed activity address a vital national preparedness mission, and do the benefits of the proposal exceed its costs? To what extent can the participation of other sectors of the economy, including state and local governments, be considered, and how can we select and design tools to best leverage and coordinate the efforts of numerous governmental and private entities? Is the proposal designed to prevent other sectors or governments from reducing their investments as a result of federal involvement? How can we ensure that the various federal tools and programs addressing the objective are coherently designed and integrated so that they work in a synergistic rather than a fragmented fashion? Do proposals to assist critical sectors in the recovery from terrorist attacks appropriately distinguish between temporary losses directly attributable to the crisis and longer-term costs stemming from broader and more enduring shifts in markets and other forces? Are the proposal’s time frames, cost projections, and promises realistic in light of past experience and the capacity of administrators at all levels to implement them? We will face the challenge of sorting out these many claims on the federal budget without the fiscal benchmarks and rules that have guided us through the years of deficit reduction into surplus. Your job therefore has become much more difficult. 
Ultimately, as this Committee recommended on October 4, we should attempt to return to a position of surplus as the economy returns to a higher growth path. Although budget balance may have been the desired fiscal position in past decades, nothing short of surpluses is needed to promote the level of savings and investment necessary to help future generations better afford the commitments of an aging society. As you seek to develop new fiscal benchmarks to guide policy, you may want to look at approaches taken by other countries. Certain nations in the Organization for Economic Cooperation and Development, such as Sweden and Norway, have gone beyond a fiscal policy of balance to one of surplus over the business cycle. Norway has adopted a policy of aiming for budget surpluses to help better prepare for the fiscal challenges stemming from an aging society. Others have established a specific ratio of debt to gross domestic product as a fiscal target. Conclusion The terrorist attack on September 11, 2001, was a defining moment for our nation, our government, and, in some respects, the world. The initial response by the President and the Congress has shown the capacity of our government to act quickly. However, it will be important to follow up on these initial steps to institutionalize and sustain our ability to deal with a threat that is widely recognized as a complex and longer-term challenge. As the President and the Congress—and the American people—recognize, the need to improve homeland security is not a short-term emergency. It will continue even if we are fortunate enough to see these threats move off the front pages of our daily papers. As I noted earlier, implementing a successful homeland security strategy will encounter many of the same performance and accountability challenges that we have identified throughout the federal government. These include bringing more coherence to the operations of many agencies and programs, dealing with human capital issues, and adequately defining performance goals and measuring success. The appointment of former Governor Ridge to head an Office of Homeland Security within the Executive Office of the President is a promising first step in marshalling the resources necessary to address our homeland security requirements. It can be argued, however, that statutory underpinnings and effective congressional oversight are critical to sustaining broad-scale initiatives over the long term. Therefore, as we move beyond the immediate response to the design of a longer-lasting approach to homeland security, I urge you to consider the implications of different structures and statutory frameworks for accountability and your ability to conduct effective oversight. Needless to say, I am also interested in the impact of various approaches on GAO’s ability to assist you in this task. You are faced with a difficult challenge: to respond to legitimate short-term needs while remaining mindful of our significant and continuing long-term fiscal challenges. While the Congress understandably needs to focus on the current urgent priorities of combating international terrorism, securing our homeland, and stimulating our economy, it ultimately needs to return to a variety of other challenges, including our long-range fiscal challenge. Unfortunately, our long-range challenge has become more difficult, and our window of opportunity to address our entitlement challenges is narrowing. 
As a result, it will be important to return to these issues when the Congress reconvenes next year. We in GAO stand ready to help you address these important issues both now and in the future. I would be happy to answer any questions that you may have. Appendix I: Prior GAO Work Related to Homeland Security GAO has completed several congressionally requested efforts on numerous topics related to homeland security. Some of the work that we have done relates to the areas of combating terrorism, aviation security, transnational crime, protection of critical infrastructure, and public health. The summaries describe recommendations made before the President established the Office of Homeland Security. Combating Terrorism Given concerns about the preparedness of the federal government and state and local emergency responders to cope with a large-scale terrorist attack involving the use of weapons of mass destruction, we reviewed the plans, policies, and programs for combating such domestic terrorism that were in place prior to the tragic events of September 11. Our report, Combating Terrorism: Selected Challenges and Related Recommendations, which was issued September 20, 2001, updates our extensive evaluations in recent years of federal programs to combat domestic terrorism and protect critical infrastructure. Progress has been made since we first began looking at these issues in 1995. Interagency coordination has improved, and interagency and intergovernmental command and control is now regularly included in exercises. Agencies have also completed operational guidance and related plans. Federal assistance to state and local governments to prepare for terrorist incidents has resulted in training for thousands of first responders, many of whom went into action at the World Trade Center and at the Pentagon on September 11, 2001. We also recommended that the President designate a single focal point with responsibility and authority for all critical functions necessary to provide overall leadership and coordination of federal programs to combat terrorism. The focal point should oversee a comprehensive national-level threat assessment on likely weapons, including weapons of mass destruction, that might be used by terrorists and should lead the development of a national strategy to combat terrorism and oversee its implementation. With the President’s appointment of the Homeland Security Adviser, that step has been taken. Furthermore, we recommended that the Assistant to the President for Science and Technology complete a strategy to coordinate research and development to improve federal capabilities and avoid duplication. Aviation Security Since 1996, we have presented numerous reports and testimonies identifying weaknesses in the commercial aviation security system. For example, we reported that airport passenger screeners do not perform well in detecting dangerous objects, and Federal Aviation Administration tests showed that as testing gets more realistic—that is, as tests more closely approximate how a terrorist might attempt to penetrate a checkpoint—screener performance declines significantly. In addition, we were able to penetrate airport security ourselves by having our investigators create fake credentials from the Internet and declare themselves law enforcement officers. They were then permitted to bypass security screening and go directly to waiting passenger aircraft. 
In 1996, we outlined a number of steps that required immediate action, including identifying vulnerabilities in the system; developing a short-term approach to correct significant security weaknesses; and developing a long-term, comprehensive national strategy that combines new technology, procedures, and better training for security personnel. Cyber Attacks on Critical Infrastructure Federal critical infrastructure-protection initiatives have focused on preventing the mass disruption that can occur when information systems are compromised by computer-based attacks. Such attacks are of growing concern due to the nation’s increasing reliance on interconnected computer systems that can be accessed remotely and anonymously from virtually anywhere in the world. In accordance with Presidential Decision Directive 63, issued in 1998, and other information-security requirements outlined in laws and federal guidance, an array of efforts has been undertaken to address these risks. However, progress has been slow. For example, federal agencies have taken initial steps to develop critical infrastructure plans, but independent audits continue to identify persistent, significant information security weaknesses that place many major federal agencies’ operations at high risk of tampering and disruption. In addition, while federal outreach efforts have raised awareness and prompted information sharing among government and private sector entities, substantive analysis of infrastructure components to identify interdependencies and related vulnerabilities has been limited. An underlying deficiency impeding progress is the lack of a national plan that fully defines the roles and responsibilities of key participants and establishes interim objectives. Accordingly, we have recommended that the Assistant to the President for National Security Affairs ensure that the government’s critical infrastructure strategy clearly define specific roles and responsibilities, develop interim objectives and milestones for achieving adequate protection, and define performance measures for accountability. The administration has been reviewing and considering adjustments to the government’s critical infrastructure-protection strategy and, last week, announced the appointment of a Special Advisor to the President for Cyberspace Security. International Crime Control On September 20, 2001, we publicly released a report on international crime control and reported that individual federal entities have developed strategies to address a variety of international crime issues and that, for some crimes, integrated mechanisms exist to coordinate efforts across agencies. However, we found that without an up-to-date and integrated strategy and sustained top-level leadership to implement and monitor the strategy, the risk is high that scarce resources will be wasted, overall effectiveness will be limited or not known, and accountability will not be ensured. We recommended that the Assistant to the President for National Security Affairs take appropriate action to ensure sustained executive-level coordination and assessment of multi-agency federal efforts in connection with international crime, including efforts to combat money laundering. 
Some of the individual actions we recommended were to update the existing governmentwide international crime threat assessment, to update or develop a new International Crime Control Strategy that includes prioritized goals as well as implementing objectives, and to designate responsibility for executing the strategy and resolving any jurisdictional issues. Public Health The spread of infectious diseases is a growing concern. Whether a disease outbreak is intentional or naturally occurring, the public health response to determine its causes and contain its spread is largely the same. Because a bioterrorist event could look like a natural outbreak, bioterrorism preparedness rests in large part on public health preparedness. We reported in September 2001 that concerns remain regarding preparedness at state and local levels and that coordination of federal terrorism research, preparedness, and response programs is fragmented. In our review last year of the West Nile virus outbreak in New York, we also found problems related to communication and coordination among federal, state, and local authorities. Although this outbreak was relatively small in terms of the number of human cases, it taxed the resources of one of the nation’s largest local health departments. In 1999, we reported that surveillance for important emerging infectious diseases is not comprehensive in all states, leaving gaps in the nation’s surveillance network. Laboratory capacity could be inadequate in any large outbreak, with insufficient trained personnel to perform laboratory tests and insufficient computer systems to rapidly share information. Earlier this year, we reported that federal agencies have made progress in improving their management of the stockpiles of pharmaceutical and medical supplies that would be needed in a bioterrorist event, but that some problems still remained. There are also widespread concerns that hospital emergency departments generally are not prepared in an organized fashion to treat victims of biological terrorism and that hospital emergency capacity is already strained, with emergency rooms in major metropolitan areas routinely filled and unable to accept patients in need of urgent care. To improve the nation’s public health surveillance of infectious diseases and help ensure adequate public protection, we recommended that the Director of the Centers for Disease Control and Prevention lead an effort to help federal, state, and local public health officials achieve consensus on the core capacities needed at each level of government. We advised that consensus be reached on such matters as the number and qualifications of laboratory and epidemiological staff as well as laboratory and information technology resources.
Related GAO Products
Homeland Security: A Risk Management Approach Can Guide Preparedness Efforts (GAO-02-208T, Oct. 31, 2001).
Homeland Security: Need to Consider VA’s Role in Strengthening Federal Preparedness (GAO-02-145T, Oct. 15, 2001).
Homeland Security: Key Elements of a Risk Management Approach (GAO-02-150T, Oct. 12, 2001).
Homeland Security: A Framework for Addressing the Nation’s Efforts (GAO-01-1158T, Sept. 21, 2001).
Combating Terrorism
Combating Terrorism: Considerations for Investing Resources in Chemical and Biological Preparedness (GAO-02-162T, Oct. 17, 2001).
Combating Terrorism: Selected Challenges and Related Recommendations (GAO-01-822, Sept. 20, 2001).
Combating Terrorism: Actions Needed to Improve DOD’s Antiterrorism Program Implementation and Management (GAO-01-909, Sept. 19, 2001).
Combating Terrorism: Comments on H.R. 525 to Create a President’s Council on Domestic Preparedness (GAO-01-555T, May 9, 2001).
Combating Terrorism: Observations on Options to Improve the Federal Response (GAO-01-660T, Apr. 24, 2001).
Combating Terrorism: Accountability Over Medical Supplies Needs Further Improvement (GAO-01-463, Mar. 30, 2001).
Combating Terrorism: Comments on Counterterrorism Leadership and National Strategy (GAO-01-556T, Mar. 27, 2001).
Combating Terrorism: FEMA Continues to Make Progress in Coordinating Preparedness and Response (GAO-01-15, Mar. 20, 2001).
Combating Terrorism: Federal Response Teams Provide Varied Capabilities; Opportunities Remain to Improve Coordination (GAO-01-14, Nov. 30, 2000).
Combating Terrorism: Linking Threats to Strategies and Resources (GAO/T-NSIAD-00-218, July 26, 2000).
Combating Terrorism: Action Taken but Considerable Risks Remain for Forces Overseas (GAO/NSIAD-00-181, July 19, 2000).
Weapons of Mass Destruction: DOD’s Actions to Combat Weapons Use Should Be More Integrated and Focused (GAO/NSIAD-00-97, May 26, 2000).
Combating Terrorism: Comments on Bill H.R. 4210 to Manage Selected Counterterrorist Programs (GAO/T-NSIAD-00-172, May 4, 2000).
Combating Terrorism: How Five Foreign Countries Are Organized to Combat Terrorism (GAO/NSIAD-00-85, Apr. 7, 2000).
Combating Terrorism: Issues in Managing Counterterrorist Programs (GAO/T-NSIAD-00-145, Apr. 6, 2000).
Combating Terrorism: Need to Eliminate Duplicate Federal Weapons of Mass Destruction Training (GAO/NSIAD-00-64, Mar. 21, 2000).
Combating Terrorism: Chemical and Biological Medical Supplies Are Poorly Managed (GAO/HEHS/AIMD-00-36, Oct. 29, 1999).
Combating Terrorism: Observations on the Threat of Chemical and Biological Terrorism (GAO/T-NSIAD-00-50, Oct. 20, 1999).
Combating Terrorism: Need for Comprehensive Threat and Risk Assessments of Chemical and Biological Attack (GAO/NSIAD-99-163, Sept. 7, 1999).
Combating Terrorism: Analysis of Federal Counterterrorist Exercises (GAO/NSIAD-99-157BR, June 25, 1999).
Combating Terrorism: Observations on Growth in Federal Programs (GAO/T-NSIAD-99-181, June 9, 1999).
Combating Terrorism: Analysis of Potential Emergency Response Equipment and Sustainment Costs (GAO/NSIAD-99-151, June 9, 1999).
Combating Terrorism: Use of National Guard Response Teams Is Unclear (GAO/NSIAD-99-110, May 21, 1999).
Combating Terrorism: Issues to Be Resolved to Improve Counterterrorist Operations (GAO/NSIAD-99-135, May 13, 1999).
Combating Terrorism: Observations on Biological Terrorism and Public Health Initiatives (GAO/T-NSIAD-99-112, Mar. 16, 1999).
Combating Terrorism: Observations on Federal Spending to Combat Terrorism (GAO/T-NSIAD/GGD-99-107, Mar. 11, 1999).
Combating Terrorism: FBI's Use of Federal Funds for Counterterrorism-Related Activities (FYs 1995-98) (GAO/GGD-99-7, Nov. 20, 1998).
Combating Terrorism: Opportunities to Improve Domestic Preparedness Program Focus and Efficiency (GAO/NSIAD-99-3, Nov. 12, 1998).
Combating Terrorism: Observations on the Nunn-Lugar-Domenici Domestic Preparedness Program (GAO/T-NSIAD-99-16, Oct. 2, 1998).
Combating Terrorism: Observations on Crosscutting Issues (GAO/T-NSIAD-98-164, Apr. 23, 1998).
Combating Terrorism: Threat and Risk Assessments Can Help Prioritize and Target Program Investments (GAO/NSIAD-98-74, Apr. 9, 1998).
Combating Terrorism: Spending on Governmentwide Programs Requires Better Management and Coordination (GAO/NSIAD-98-39, Dec. 1, 1997).
Combating Terrorism: Federal Agencies' Efforts to Implement National Policy and Strategy (GAO/NSIAD-97-254, Sept. 26, 1997).
Combating Terrorism: Status of DOD Efforts to Protect Its Forces Overseas (GAO/NSIAD-97-207, July 21, 1997).
Terrorism and Drug Trafficking: Responsibilities for Developing Explosives and Narcotics Detection Technologies (GAO/NSIAD-97-95, Apr. 15, 1997).
Federal Law Enforcement: Investigative Authority and Personnel at 13 Agencies (GAO/GGD-96-154, Sept. 30, 1996).
Terrorism and Drug Trafficking: Technologies for Detecting Explosives and Narcotics (GAO/NSIAD/RCED-96-252, Sept. 4, 1996).
Terrorism and Drug Trafficking: Threats and Roles of Explosives and Narcotics Detection Technology (GAO/NSIAD/RCED-96-76BR, Mar. 27, 1996).
Aviation Security
Aviation Security: Vulnerabilities in, and Alternatives for, Preboard Screening Security Operations (GAO-01-1171T, Sept. 25, 2001).
Aviation Security: Weaknesses in Airport Security and Options for Assigning Screening Responsibilities (GAO-01-1165T, Sept. 21, 2001).
Aviation Security: Terrorist Acts Demonstrate Urgent Need to Improve Security at the Nation’s Airports (GAO-01-1162T, Sept. 20, 2001).
Responses of Federal Agencies and Airports We Surveyed About Access Security Improvements (GAO-01-1069R, Aug. 31, 2001).
Aviation Security: Additional Controls Needed to Address Weaknesses in Carriage of Weapons Regulations (GAO/RCED-00-181, Sept. 29, 2000).
Aviation Security: Long-Standing Problems Impair Airport Screeners’ Performance (GAO/RCED-00-75, June 28, 2000).
Aviation Security: Breaches at Federal Agencies and Airports (GAO/T-OSI-00-10, May 25, 2000).
Aviation Security: Vulnerabilities Still Exist in the Aviation Security System (GAO/T-RCED/AIMD-00-142, Apr. 6, 2000).
Aviation Security: Slow Progress in Addressing Long-Standing Screener Performance Problems (GAO/T-RCED-00-125, Mar. 16, 2000).
Aviation Security: FAA’s Actions to Study Responsibilities and Funding for Airport Security and to Certify Screening Companies (GAO/RCED-99-53, Feb. 25, 1999).
Aviation Security: Progress Being Made, but Long-term Attention Is Needed (GAO/T-RCED-98-190, May 14, 1998).
Aviation Security: FAA's Procurement of Explosives Detection Devices (GAO/RCED-97-111R, May 1, 1997).
Aviation Safety and Security: Challenges to Implementing the Recommendations of the White House Commission on Aviation Safety and Security (GAO/T-RCED-97-90, Mar. 5, 1997).
Aviation Security: Technology’s Role in Addressing Vulnerabilities (GAO/T-RCED/NSIAD-96-262, Sept. 19, 1996).
Aviation Security: Urgent Issues Need to Be Addressed (GAO/T-RCED/NSIAD-96-151, Sept. 11, 1996).
Aviation Security: Immediate Action Needed to Improve Security (GAO/T-RCED/NSIAD-96-237, Aug. 1, 1996).
Aviation Security: Development of New Security Technology Has Not Met Expectations (GAO/RCED-94-142, May 19, 1994).
Aviation Security: Additional Actions Needed to Meet Domestic and International Challenges (GAO/RCED-94-38, Jan. 27, 1994).
Cyber Attacks on Critical Infrastructure
Information Sharing: Practices That Can Benefit Critical Infrastructure Protection (GAO-02-24, Oct. 15, 2001).
Critical Infrastructure Protection: Significant Challenges in Safeguarding Government and Privately-Controlled Systems from Computer-Based Attacks (GAO-01-1168T, Sept. 26, 2001).
Critical Infrastructure Protection: Significant Challenges in Protecting Federal Systems and Developing Analysis and Warning Capabilities (GAO-01-1132T, Sept. 12, 2001).
Information Security: Serious and Widespread Weaknesses Persist at Federal Agencies (GAO/AIMD-00-295, Sept. 6, 2000).
Critical Infrastructure Protection: Significant Challenges in Developing Analysis, Warning, and Response Capabilities (GAO-01-769T, May 22, 2001).
Critical Infrastructure Protection: Significant Challenges in Developing National Capabilities (GAO-01-232, Apr. 25, 2001).
Critical Infrastructure Protection: Challenges to Building a Comprehensive Strategy for Information Sharing and Coordination (GAO/T-AIMD-00-268, July 26, 2000).
Security Protection: Standardization Issues Regarding Protection of Executive Branch Officials (GAO/GGD/OSI-00-139, July 11, 2000, and GAO/T-GGD/OSI-00-177, July 27, 2000).
Critical Infrastructure Protection: Comments on the Proposed Cyber Security Information Act of 2000 (GAO/T-AIMD-00-229, June 22, 2000).
Critical Infrastructure Protection: “I LOVE YOU” Computer Virus Highlights Need for Improved Alert and Coordination Capabilities (GAO/T-AIMD-00-181, May 18, 2000).
Critical Infrastructure Protection: National Plan for Information Systems Protection (GAO/AIMD-00-90R, Feb. 11, 2000).
Critical Infrastructure Protection: Comments on the National Plan for Information Systems Protection (GAO/T-AIMD-00-72, Feb. 1, 2000).
Critical Infrastructure Protection: Fundamental Improvements Needed to Assure Security of Federal Operations (GAO/T-AIMD-00-7, Oct. 6, 1999).
Critical Infrastructure Protection: The Status of Computer Security at the Department of Veterans Affairs (GAO/AIMD-00-5, Oct. 4, 1999).
Critical Infrastructure Protection: Comprehensive Strategy Can Draw on Year 2000 Experiences (GAO/AIMD-00-1, Oct. 1, 1999).
Information Security: The Proposed Computer Security Enhancement Act of 1999 (GAO/T-AIMD-99-302, Sept. 30, 1999).
Information Security: NRC’s Computer Intrusion Detection Capabilities (GAO/AIMD-99-273R, Aug. 27, 1999).
Electricity Supply: Efforts Underway to Improve Federal Electrical Disruption Preparedness (GAO/RCED-92-125, Apr. 20, 1992).
Public Health
Anthrax Vaccine: Changes to the Manufacturing Process (GAO-02-181T, Oct. 23, 2001).
Bioterrorism: Public Health and Medical Preparedness (GAO-02-141T, Oct. 9, 2001).
Bioterrorism: Coordination and Preparedness (GAO-02-129T, Oct. 5, 2001).
Bioterrorism: Federal Research and Preparedness Activities (GAO-01-915, Sept. 28, 2001).
West Nile Virus Outbreak: Lessons for Public Health Preparedness (GAO/HEHS-00-180, Sept. 11, 2000).
Food Safety: Agencies Should Further Test Plans for Responding to Deliberate Contamination (GAO/RCED-00-3, Oct. 27, 1999).
Emerging Infectious Diseases: Consensus on Needed Laboratory Capacity Could Strengthen Surveillance (GAO/HEHS-99-26, Feb. 5, 1999).
International Crime Control
International Crime Controls: Sustained Executive Level Coordination of Federal Response Needed (GAO-01-629, Sept. 20, 2001).
Alien Smuggling: Management and Operational Improvements Needed to Address Growing Problem (GAO/GGD-00-103, May 1, 2000).
Criminal Aliens: INS Efforts to Identify and Remove Imprisoned Aliens Continue to Need Improvement (GAO/T-GGD-99-47, Feb. 25, 1999).
Criminal Aliens: INS Efforts to Remove Imprisoned Aliens Continue to Need Improvement (GAO/GGD-99-3, Oct. 16, 1998).
Immigration and Naturalization
Immigration and Naturalization Service: Overview of Management and Program Challenges (GAO/T-GGD-99-148, July 29, 1999).
Illegal Immigration: Status of Southwest Border Strategy Implementation (GAO/GGD-99-44, May 19, 1999).
Illegal Immigration: Southwest Border Strategy Results Inconclusive; More Evaluation Needed (GAO/GGD-98-21, Dec. 11, 1997).
The United States now confronts a range of diffuse threats that put increased destructive power into the hands of small states, groups, and individuals. These threats include terrorist attacks on critical infrastructure and computer systems, the potential use of weapons of mass destruction, and the spread of infectious diseases. Addressing these challenges will require leadership to develop and implement a homeland security strategy in coordination with all relevant partners, and to marshal and direct the necessary resources. The recent establishment of the Office of Homeland Security is a good first step, but questions remain about how this office will be structured, what authority its Director will have, and how this effort can be institutionalized and sustained over time. Although homeland security is an urgent and vital national priority, the United States still must address short-term and long-term fiscal challenges that were present before September 11.
GAO_GAO-08-845
Background The U.S. airline industry is principally composed of legacy, low-cost, and regional airlines, and while it is largely free of economic regulation, it remains regulated in other respects, most notably safety, security, and operating standards. Legacy airlines—sometimes called network airlines—are essentially those airlines that were in operation before the Airline Deregulation Act of 1978 and whose goal is to provide service from “anywhere to everywhere.” To meet that goal, these airlines support large, complex hub-and-spoke operations with thousands of employees and hundreds of aircraft (of various types), with service at numerous fare levels to domestic communities of all sizes and to international destinations. To enhance revenues without expending capital, legacy airlines have entered into domestic (and international) alliances that give them access to some portion of each other’s networks. Low-cost airlines generally entered the marketplace after deregulation and primarily operate less costly point-to-point service using fewer types of aircraft. Low-cost airlines typically offer simplified fare structures, which were originally aimed at leisure passengers but are increasingly attractive to business passengers because they do not have restrictive ticketing rules, which make it significantly more expensive to purchase tickets within 2 weeks of a flight or to make changes to an existing itinerary. Regional airlines generally operate smaller aircraft—turboprops or regional jets with up to 100 seats—and provide service to smaller communities under code-sharing arrangements with larger legacy airlines on a cost-plus or fee-for-departure basis. Some regional airlines are owned by a legacy parent, while others are independent. For example, American Eagle is the regional partner for American Airlines, while independent SkyWest Airlines operates on a fee-per-departure agreement with Delta Air Lines, United Airlines, and Midwest Airlines. The airline industry has experienced considerable merger and acquisition activity since its early years, especially immediately following deregulation in 1978 (fig. 1 provides a timeline of mergers and acquisitions for the eight largest surviving airlines). There was a flurry of mergers and acquisitions during the 1980s, when Delta Air Lines and Western Airlines merged, United Airlines acquired Pan Am’s Pacific routes, Northwest acquired Republic Airlines, and American and Air California merged. In 1988, merger and acquisition review authority was transferred from DOT to DOJ. Since 1998, and despite tumultuous financial periods, fewer mergers and acquisitions have occurred. In 2001, American Airlines acquired the bankrupt airline TWA, and in 2005 America West acquired US Airways while the latter was in bankruptcy. Certain other attempts at merging during that time period failed because of opposition from DOJ or from employees and creditors. For example, in 2000, an agreement was reached that allowed Northwest Airlines to acquire a 50 percent stake in Continental Airlines (with limited voting power) to resolve the antitrust suit brought by DOJ against Northwest’s proposed acquisition of a controlling interest in Continental. A proposed merger of United Airlines and US Airways in 2000 also drew opposition from DOJ, which concluded that the merger would violate antitrust laws by reducing competition, increasing air fares, and harming consumers on airline routes throughout the United States. 
Although DOJ expressed its intent to sue to block the transaction, the parties abandoned the deal before a suit was filed. More recently, the 2006 proposed merger of US Airways and Delta Air Lines fell apart because of opposition from Delta’s pilots and some of its creditors, as well as its senior management. Since the airline industry was deregulated in 1978, its earnings have been extremely volatile. In fact, despite considerable periods of strong growth and increased earnings, airlines have at times suffered such substantial financial distress that the industry has experienced recurrent bankruptcies and has failed to earn sufficient returns to cover capital costs in the long run. Many analysts view the industry as inherently unstable due to key demand and cost characteristics. In particular, demand for air travel is highly cyclical, not only in relation to the state of the economy, but also with respect to political, international, and even health-related events. Yet the cost characteristics of the industry appear to make it difficult for firms to rapidly contract in the face of declining demand. In particular, aircraft are expensive, long-lived capital assets. As demand declines, airlines cannot easily reduce flight schedules in the very near term because passengers are already booked on flights months in advance, nor can they quickly change their aircraft fleets. That is, airplane costs are largely fixed and unavoidable in the near term. Moreover, even though labor is generally viewed as a variable cost, airline employees are mostly unionized, and airlines find that they cannot reduce employment costs very quickly when demand for air travel slows. These cost characteristics can thus lead to considerable excess capacity in the face of declining demand. Finally, the industry is also susceptible to certain external shocks—such as those caused by fuel price volatility. In 2006 and 2007, the airline industry generally regained profitability after several very difficult years. However, these underlying fundamental characteristics suggest that the industry will remain susceptible to rapid swings in its financial health. Since deregulation in 1978, the financial stability of the airline industry has become a considerable concern for the federal government because of the level of assistance the government has provided to the industry, including assuming terminated pension plans and other forms of aid. Since 1978, there have been over 160 airline bankruptcies. While most of these bankruptcies affected small airlines that were eventually liquidated, 4 of the more recent bankruptcies (Delta, Northwest, United, and US Airways) are among the largest corporate bankruptcies ever, excluding financial services firms. During these bankruptcies, United Airlines and US Airways terminated their pension plans, and $9.7 billion in claims were shifted to the Pension Benefit Guaranty Corporation (PBGC). Further, to respond to the shock to the industry from the September 11, 2001, terrorist attacks, the federal government provided airlines with $7.4 billion in direct assistance and authorized $1.6 billion (of $10 billion available) in loan guarantees to six airlines. Although the airline industry has experienced numerous mergers and bankruptcies since deregulation, growth of existing airlines and the entry of new airlines have contributed to a steady increase in capacity. 
Previously, GAO reported that although one airline may reduce capacity or leave the market, capacity returns relatively quickly. Likewise, while past mergers and acquisitions have, at least in part, sought to reduce capacity, any resulting declines in industry capacity have been short-lived, as existing airlines have expanded or new airlines have entered the market. Capacity growth has slowed or declined just before and during recessions, but not as a result of large airline liquidations. Figure 2 shows capacity trends since 1979 and the dates of major mergers and acquisitions. U.S. Airlines’ Financial Condition Has Improved, but It Appears to Be Short-lived The U.S. passenger airline industry has generally improved its financial condition in recent years, but its recovery appears short-lived because of rapidly increasing fuel prices. The U.S. airline industry recorded net operating profits of $2.2 billion in 2006 and $2.8 billion in 2007, the first profits it had earned since 2000. Legacy airlines—which lost nearly $33 billion between 2001 and 2005—returned to profitability in 2006 owing to increased passenger traffic, restrained capacity, and restructured costs. Meanwhile, low-cost airlines, which also saw increased passenger traffic, remained profitable overall by continuing to keep their costs low relative to those of the legacy airlines and by managing their growth. The airline industry’s financial future remains uncertain and vulnerable to a number of internal and external events—particularly the rapidly increasing cost of fuel. Both Legacy and Low-Cost Airlines Improved Their Financial Positions in 2006 and 2007 The airline industry achieved modest profitability in 2006 and continued that trend through 2007. The seven legacy airlines had operating profits of $1.1 billion in 2006 and $1.8 billion in 2007, after losses totaling nearly $33 billion from 2001 through 2005. The seven low-cost airlines, after reaching an operating profit low of nearly $55 million in 2004, also saw improvement, posting operating profits of almost $958 million in 2006 and $1 billion in 2007. Figure 3 shows U.S. airline operating profits since 1998. Increased Passenger Traffic and Capacity Restraint Have Improved Airline Revenues An increase in passenger traffic since 2003 has helped improve airline revenues. Passenger traffic—as measured by revenue passenger miles (RPM)—increased for both legacy and low-cost airlines, as illustrated by figure 4. Legacy airlines’ RPMs rose 11 percent from 2003 through 2007, while low-cost airlines’ RPMs grew 24 percent during the same period. Airline revenues have also improved owing to domestic capacity restraint. Some past airline industry recoveries have stalled because airlines grew their capacity too quickly in an effort to gain market share, and too much capacity undermined their ability to charge profitable fares. Total domestic capacity, as measured by available seat miles (ASM), increased 9 percent, from 696 billion ASMs in 2003 to 757 billion ASMs in 2007. However, legacy airlines’ ASMs declined 18 percent, from 460 billion in 2003 to 375 billion in 2007, as illustrated by figure 5. Industry experts and airline officials told us that legacy airlines reduced their domestic capacity, in part, by shifting capacity to their regional airline partners and to international routes. Even the faster-growing low-cost airline segment saw a decline in ASMs in 2006 and 2007. 
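The capacity figures just cited follow directly from the standard percent-change formula applied to the ASM totals in the text. The short sketch below simply reproduces that arithmetic; the only inputs are the 2003 and 2007 ASM values reported above, and rounding to whole percentages matches the report's 9 percent and 18 percent figures.

```python
# Minimal arithmetic check of the capacity changes cited above.
# The ASM values (in billions) come from the report text; the percent
# change formula is the standard (end - start) / start.

def pct_change(start: float, end: float) -> float:
    """Return the percentage change from start to end."""
    return (end - start) / start * 100

total_asm_2003, total_asm_2007 = 696, 757      # billions of ASMs, all U.S. airlines
legacy_asm_2003, legacy_asm_2007 = 460, 375    # billions of ASMs, legacy airlines

print(f"Total domestic ASMs: {pct_change(total_asm_2003, total_asm_2007):+.1f}%")    # about +8.8%
print(f"Legacy airline ASMs: {pct_change(legacy_asm_2003, legacy_asm_2007):+.1f}%")  # about -18.5%
```

The same formula underlies the RPM growth rates cited for legacy and low-cost airlines, although those underlying RPM totals are presented in figure 4 rather than in the text.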
Since 2004, legacy airlines have shifted portions of their domestic capacity to more profitable international routes. From 1998 through 2003, the legacy airlines maintained virtually the same 30/70 percent capacity allocation split between international and domestic capacity. However, during the period from 2004 to 2007, legacy airlines increased their international capacity by 7 percentage points, to a 37/63 percent split between international and domestic capacity. International expansion has proven to be a source of substantial new revenues for the legacy airlines because they often face less competition on international routes. Moreover, international routes generate additional passenger flow (and revenues) through the legacy airlines’ domestic networks, helping to support service over routes where competition from low-cost airlines has otherwise reduced legacy airlines’ domestic revenues. Cost Reduction and Bankruptcy Restructuring Efforts Have Also Improved Airline Financial Positions The airlines have also undertaken cost reduction efforts—many of which occurred through the bankruptcy process—in an attempt to improve their financial positions and better insulate themselves from the cyclical nature of the industry. Excluding fuel, unit operating costs for the industry, typically measured by cost per available seat mile, have decreased 16 percent since reaching peak levels around 2001. A number of experts have pointed out that the legacy airlines have likely made most of the cost reductions that can be made without affecting safety or service; however, as figure 6 illustrates, a significant gap remains between legacy and low-cost airlines’ unit costs. A recent expert study examining industry trends in competition and financial condition found similar results, also noting that the cost gap between legacy and low-cost airlines still exists. Many airlines achieved dramatic cuts in their operational costs by negotiating contract and pay concessions from their labor unions and through bankruptcy restructuring and personnel reductions. For example, Northwest Airlines pilots agreed to two pay cuts—15 percent in 2004 and an additional 23.9 percent in 2006, while in bankruptcy—to help the airline dramatically reduce operating expenses. Bankruptcy also allowed several airlines to significantly reduce their pension expenses, as some airlines terminated and shifted their pension obligations to PBGC. Legacy airlines in particular reduced personnel as another means of reducing costs. The average number of employees per legacy airline decreased 26 percent, from 42,558 in 1998 to 31,346 in 2006. Low-cost airlines, on the other hand, have added personnel; however, they have done so in keeping with their increases in capacity. In fact, although total low-cost airline labor costs (including salaries and benefits) steadily increased from 1998 through 2007—from $2.8 billion to $5.0 billion—labor costs have accounted for roughly the same percentage (33 percent) of total operating expenses (including fuel) throughout the period. Although cost restructuring—achieved both through Chapter 11 bankruptcy reorganizations and outside of that process—has enabled most legacy airlines to improve their balance sheets in recent years, it still leaves the industry highly leveraged. Legacy airlines have significantly increased their total cash reserves, from $2.7 billion in 1998 to $24 billion in 2007, thereby strengthening their cash and liquidity positions. Low-cost airlines also increased their total cash reserves. 
Industry experts we spoke with stated that this buildup of cash reserves is a strategic move to help the airlines withstand future industry shocks, as well as to pay down debts or return value to stockholders. Experts, however, also agreed that debt is still a problem within the industry, particularly for the legacy airlines. For example, legacy airlines’ assets-to-liabilities ratio (a measure of a firm’s long-term solvency) is still less than 1 (assets less than liabilities). In 1998, legacy airlines’ average ratio was 0.70, which improved only slightly to 0.74 in 2007. In contrast, while low-cost airlines have also added significant liabilities owing to their growth, their assets-to-liabilities ratio remains better than that of legacy airlines, increasing from 0.75 in 1998 to 1 in 2007. Airlines’ Financial Turnaround May Be Short-lived Because the financial condition of the airline industry remains vulnerable to external shocks—such as the rising cost of fuel, economic downturns, or terrorist attacks—the near-term and longer-term financial health of the industry remains uncertain. In light of increased fuel prices and softening passenger demand, the profit and earnings outlook has reversed itself, and airlines may incur record losses in 2008. Although the industry saw profits in 2007 and some were predicting even larger profits in 2008, experts and industry analysts now estimate that the industry could incur significant losses in 2008. In fact, although estimates vary, one analyst recently projected $2.8 billion in industry losses, while another analyst put industrywide losses between $4 billion and $9 billion for the year, depending on demand trends. More recently, the airline trade association, the Air Transport Association, estimated losses of between $5 billion and $10 billion this year, primarily due to escalating fuel prices. For the first quarter of 2008, airlines reported net operating losses of more than $1.4 billion. Fuel Costs Are Increasing and Other Costs May Increase Many experts cite rising fuel costs as a key obstacle facing the airlines for the foreseeable future. The cost of jet fuel has become an ever-increasing challenge for airlines, as jet fuel climbed to over $2.85 per gallon in early 2008, and has continued to increase. By comparison, jet fuel was $1.11 per gallon in 2000, in 2008 dollars (Fig. 7 illustrates the increase in jet fuel prices since 2000). Some airlines, particularly Southwest Airlines, reduced the impact of rising fuel prices on their costs through fuel hedges; however, most of those airlines’ hedges are limited or, in the case of Southwest, will expire within the next few years and may be replaced with new but more expensive hedges. In an attempt to curtail operating losses linked to higher fuel costs, most of the largest airlines have already announced plans to trim domestic capacity during 2008, and some have added baggage and other fees to their fares. Additionally, nine airlines have already filed for bankruptcy or ceased operations since December 2007, with many citing the significant increase in fuel costs as a contributing factor. In addition to rising fuel costs, other factors may strain airlines’ financial health in the coming years. Labor contract issues are building at several of the legacy airlines, as labor groups seek to reverse some of the financial sacrifices that they made to help the airlines avoid or emerge from bankruptcy. 
Additionally, because bankruptcies required the airlines to reduce capital expenditures in order to bolster their balance sheets, needed investments in fleet renewal, new technologies, and product enhancements were delayed. Despite their generally sound financial condition as a group, some low-cost airlines may be facing cost increases as well. Airline analysts told us that some low-cost airline cost advantages may diminish as low-cost airlines begin to face cost pressures similar to those of the legacy airlines, including aging fleets—and their associated increased maintenance costs—and workforces with growing experience and seniority demanding higher pay. Industry Faces Challenging Revenue Environment from Economic Downturns and Consumer Fare Expectations The recent economic downturn and the long-term downward trend in fares create a challenging environment for revenue generation. Macroeconomic troubles—such as the recent tightening credit market and housing slump—have generally served as early indicators of reduced airline passenger demand. Currently, airlines are anticipating reduced demand by the fall of 2008. Additionally, domestic expansion of low-cost airline operations, as well as an increased ability of consumers to shop for lower fares more easily in recent years, has not only led to lower fares in general, but has also contributed to fare “compression”—that is, fewer very high-priced tickets are sold today than in the past. The downward pressure on ticket prices created by the increase in low-cost airline offerings is pervasive, according to a recent study and DOT testimony. Experts we spoke with explained that the increased penetration of low-fare airlines, combined with much greater transparency in fare pricing, has increased consumer resistance to higher fares. Domestic Airline Competition Increased from 1998 through 2006, as Low-Cost Airlines Expanded Competition within the U.S. domestic airline market increased from 1998 through 2006 as reflected by an increase in the average number of competitors in the top 5,000 city-pair markets, the presence of low-cost airlines in more of these markets, lower fares, fewer dominated city-pair markets, and a shrinking dominance by a single airline at some of the nation’s largest airports. The average number of competitors has increased in these markets from 2.9 in 1998 to 3.3 in 2006. The number of these markets served by low-cost airlines increased by nearly 60 percent, from nearly 1,300 to approximately 2,000 from 1998 through 2006. Average round trip fares fell 20 percent, after adjusting for inflation, during the same period. Furthermore, approximately 500 fewer city-pair markets (15 percent) are dominated by a single airline. Similarly, competition has increased at the nation’s 30 largest airports. Average Number of Competitors and Low-Cost Airline Penetration Have Increased in the Top 5,000 Markets The average number of competitors in the largest 5,000 city-pair markets has increased since 1998. Overall, the average number of effective competitors—any airline that carries at least 5 percent of the traffic in that market—in the top 5,000 markets rose from 2.9 in 1998 to 3.3 in 2006. As figure 8 shows, the number of single-airline (monopoly) markets decreased to less than 10 percent of the top 5,000 markets, while the number of markets with three or more airlines grew to almost 70 percent in 2006. Monopoly markets are generally the smallest city-pair markets, which lack enough traffic to support more than one airline.
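The effective-competitor measure used above is a straightforward share-based count. The following is a minimal sketch, in Python, of how that count might be computed for a single city-pair market; the airline names and passenger counts are hypothetical and are not drawn from the DOT ticket sample used in this report.

```python
# Minimal sketch of the effective-competitor count: an airline is an effective
# competitor in a city-pair market if it carries at least 5 percent of that
# market's passengers. The market data below are hypothetical.

SHARE_THRESHOLD = 0.05

def effective_competitors(passengers_by_airline):
    """Count airlines with at least a 5 percent share of a market's passengers."""
    total = sum(passengers_by_airline.values())
    return sum(1 for p in passengers_by_airline.values() if p / total >= SHARE_THRESHOLD)

# Hypothetical city-pair market: passengers carried by each airline
market = {"Airline A": 52_000, "Airline B": 31_000, "Airline C": 9_000, "Airline D": 2_000}
print(effective_competitors(market))  # 3 -- Airline D falls below the 5 percent threshold
```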
Longer-distance markets are more competitive than shorter-distance markets. For example, among the top 5,000 markets in 2006, longer-distance markets (greater than 1,000 miles) had on average 3.9 competitors, while routes of less than 250 miles had on average only 1.7 competitors (fig. 9). The difference exists in large part because longer-distance markets have more viable options for connecting over more hubs. For example, a passenger on a long-haul flight from Allentown, Pennsylvania, to Los Angeles, California—a distance of over 2,300 miles—would have options of connecting through 10 different hubs, including Cincinnati, Chicago, and Detroit. By comparison, a passenger from Seattle to Portland, Oregon—a distance of just under 300 miles—has no connection options, nor would connections be as attractive to passengers in short-haul markets. Low-Cost Airlines Have Increased Their Presence among the Top 5,000 Markets Low-cost airlines have increased the number of markets and passengers served and their overall market share since 1998. The number of the top 5,000 markets served by a low-cost airline jumped from approximately 1,300 to over 2,000 from 1998 through 2006, an increase of nearly 60 percent. Most of that increase is the result of low-cost airlines expanding their service into longer-haul markets than they typically served in 1998. Specifically, the number of markets served by low-cost airlines that were longer than 1,000 miles has increased by nearly 45 percent since 1998. For example, in 1998 Southwest Airlines served about 360 markets over 1,000 miles, and by 2006 it served over 670 such markets. Low-cost airlines’ expansion increased the extent to which they competed directly with legacy airlines. In 1998, low-cost airlines operated in 25 percent of the top 5,000 markets served by legacy airlines and provided a low-cost alternative to approximately 60 percent of passengers. By 2006, low-cost airlines were competing directly with legacy airlines in 42 percent of the top 5,000 markets (an additional 756 markets) and provided a low-cost alternative to approximately 80 percent of passengers. In all, the growth of low-cost airlines into more markets and their service to more passengers contributed to the shift in passenger traffic between legacy and low-cost airlines. Overall, low-cost airlines’ share of passenger traffic increased from 25 percent in 1998 to 33 percent in 2006, while legacy airlines’ domestic share of passenger traffic fell from 70 percent to 65 percent from 1998 through 2006 (see fig. 10). Low-cost airlines carried 78 million passengers in 1998 and 125 million in 2006—an increase of 59 percent. Average Fares Have Declined for Both Legacy and Low-Cost Airlines Airfares in the top 5,000 markets, one of the key gauges of competition, have fallen in real terms since 1998. From 1998 through 2006, the round-trip average airfare fell from $198 to $161 (in 2006 dollars), a decrease of nearly 20 percent. As figure 11 shows, average fares have fallen across all distances. In 1998, average fares ranged from $257 for trips longer than 1,000 miles to $129 for trips of 250 miles or less. Since that time, however, fares have fallen considerably on the longest trips, and as of 2006, averaged just $183, a drop of 29 percent since 1998. Average fares for the shortest trips have not fallen as much. For trips of 250 miles or less, average fares as of 2006 have fallen 6 percent, to $121. Average fares tend to be lower in markets where low-cost airlines are present.
Prior studies have shown that the presence of low-cost airlines in a market is associated with lower fares for all passengers in that market. In 1998, over 1,300 of the top 5,000 markets had a low-cost airline present, with an average fare of $167, as opposed to the 3,800 markets without low-cost competition, where fares averaged around $250. This same relationship was maintained in 2006, when low-cost airlines’ presence grew to over 2,000 markets, and the average fare in these markets was $153, while the average fare in the markets served only by legacy airlines was $194. Fewer Markets Are Dominated by a Single Airline The number of the top 5,000 markets dominated by a single airline has declined. Since 1998, the number of dominated markets—markets in which one airline carries more than 50 percent of passengers—declined as competitors expanded into more markets. The number of dominated markets declined by approximately 500 markets, from 3,500 to 3,000 (or 15 percent) from 1998 through 2006, while the number of nondominated markets correspondingly rose by approximately 500, from approximately 1,400 to 1,900 markets (or 37 percent). (See fig. 12.) Although there are fewer dominated markets among the top 5,000 markets, further analysis shows that low-cost airlines have increased their share of dominated markets while legacy airlines lost share. In 1998 legacy airlines dominated approximately 3,000 of the top 5,000 markets, but in 2006 that number fell to approximately 2,400. At the same time, low-cost airlines increased their share of dominated markets from about 300 markets in 1998 to approximately 500 markets. Appendix III shows the number of dominated markets by airline in 2006. Low-cost airlines tend to operate in larger dominated markets than legacy airlines. For example, in 2006, legacy airlines carried an average of 55,000 passengers per dominated market, while low-cost airlines carried an average of 165,000 passengers per dominated market. This difference reflects the low-cost airlines’ targeting of high-density markets and the nature of hub-and-spoke networks operated by legacy airlines. Competition Has Increased at the Nation’s Largest Airports Competition has generally increased at the nation’s largest airports. Airline dominance at many of the largest domestic airports in the United States has decreased as competition has increased in the industry. Although a single airline holds a dominant position—carrying at least 50 percent of passenger traffic—at 16 of the nation’s 30 largest airports, one-half of these 16 dominated airports saw a decline in passenger traffic from 1998 through 2006 (see app. III). Of the 16 airports dominated by a single airline, 14 were dominated by legacy airlines. At 9 of these airports, the second largest airline carried less than 10 percent of passenger traffic, while at the other 5 airports a low-cost airline carried 10 percent or more of passenger traffic. Airlines Seek to Combine to Increase Profits and Improve Financial Viability, but Challenges Exist Airlines seek mergers and acquisitions as a means to increase profitability and long-term financial viability, but must weigh those potential benefits against the operational and regulatory costs and challenges posed by combinations. A merger’s or acquisition’s potential to increase short-term profitability and long-term financial viability stems from both anticipated cost reductions and increased revenues.
Cost reductions may be achieved through merger-generated operating efficiencies—for example, through the elimination of duplicative operations. Cost savings may also flow from adjusting or reducing the combined airline’s capacity and adjusting its mix of aircraft. Airlines may also seek mergers and acquisitions as a means to increase their revenues through increased fares in some markets—stemming from capacity reductions and increased market share in existing markets—and an expanded network, which creates more market pairs both domestically and internationally. Nonetheless, increased fares in these markets may be temporary because other airlines could enter the affected markets and drive fares back down. Mergers and acquisitions also present several potential challenges to airline partners, including labor and other integration issues—which may not only delay (or even preclude) consolidation, but also offset intended gains. DOJ antitrust review is another potential challenge, and one that we discuss in greater detail in the next section. Airline Mergers and Acquisitions Aim to Increase Profitability by Reducing Costs and Increasing Revenues A merger or acquisition may produce cost savings by enabling an airline to reduce or eliminate duplicative operating costs. Based on past mergers and acquisitions and the views of experts we consulted, a range of potential cost reductions can result, such as the elimination of duplicative service, labor, and operations—including inefficient (or redundant) hubs or routes—and operational efficiencies from the integration of computer systems and similar airline fleets. Other cost savings may stem from facility consolidation, procurement savings, and working capital and balance sheet restructuring, such as renegotiating aircraft leases. According to US Airways officials and analyst reports, for example, the merger of America West and US Airways generated $750 million in cost savings through the integration of information technology, combined overhead operations, and facilities closings. Airlines may also pursue mergers or acquisitions to more efficiently manage capacity—both to reduce operating costs and to generate revenue—in their networks. A number of experts we spoke with stated that given recent economic pressures, particularly increased fuel costs, one motive for mergers and acquisitions is the opportunity to lower costs by reducing redundant capacity. Experts have said that industry mergers and acquisitions could lay the foundation for more rational capacity reductions in highly competitive domestic markets and could help mitigate the impact of economic cycles on airline cash flow. In addition, capacity reductions from a merger or acquisition could also serve to generate additional revenue through increased fares on some routes; over the long term, however, those increased fares may be brought down because other airlines, especially low-cost airlines, could enter the affected markets and drive prices back down. In the absence of mergers and acquisitions and facing ongoing cost pressures, airlines have already begun to reduce their capacity in 2008. Airlines may also seek to merge with or acquire an airline as a way to generate greater revenues from an expanded network, which serves more city-pair markets, better serves passengers, and thus enhances competition. Mergers and acquisitions may generate additional demand by providing consumers more domestic and international city-pair destinations.
Airlines with expansive domestic and international networks and frequent flier benefits particularly appeal to business traffic, especially corporate accounts. Results from a recent Business Traveler Coalition (BTC) survey indicate that about 53 percent of the respondents were likely to choose a particular airline based upon the extent of its route network. Therefore, airlines may use a merger or acquisition to enhance their networks and gain complementary routes, potentially giving the combined airline a stronger platform from which to compete in highly profitable markets. Mergers and acquisitions can also be used to generate greater revenues through increased market share and fares on some routes. For example, some studies of airline mergers and acquisitions during the 1980s showed that prices were higher on some routes from the airline’s hubs after the combination was completed. At the same time, even if the combined airline is able to increase prices in some markets, the increase may be transitory if other airlines enter the markets with sufficient presence to counteract the price increase. In an empirical study of airline mergers and acquisitions up to 1992, Winston and Morrison suggest that being able to raise prices or stifle competition does not play a large role in airlines’ merger and acquisition decisions. Numerous studies have shown, though, that increased airline dominance at an airport results in increased fare premiums, in part because of competitive barriers to entry. Several recent merger and acquisition attempts (United and US Airways in 2000, Northwest and Continental in 1998) were blocked by DOJ opposition stemming from concerns about anticompetitive impacts. Ultimately, however, each merger and acquisition differs in the extent to which cost reductions and revenue increases are factors. Cost reductions and the opportunity to obtain increased revenue could serve to bolster a merged airline’s financial condition, enabling the airline to better compete in a highly competitive international environment. For example, officials from US Airways stated that as a result of its merger with America West, the airline achieved a significant financial transformation, and they cited this as a reason why airlines merge. Many industry experts believe that the United States will need larger, more economically stable airlines to be able to compete with the merging and larger foreign airlines that are emerging in the global economy. The airline industry is becoming increasingly global; for example, the Open Skies agreement between the United States and the European Union became effective in March 2008. Open Skies has eliminated previous government controls on these routes (especially to and from London’s Heathrow Airport), presenting U.S. and European Union airlines with great opportunities as well as competition. In order to become better prepared to compete under Open Skies, airlines have already filed global alliance antitrust immunity applications with DOT. Antitrust immune alliances differ from current code-share agreements or alliance group partnerships because they allow partners not only to code-share but also to jointly plan and market their routes and schedules, share revenue, and possibly even jointly operate flights. According to one industry analyst, this close global cooperation may facilitate domestic consolidation as global alliance partners focus on maximizing synergies for both increasing revenues and reducing costs with their global alliance teams.
Potential Challenges to Mergers and Acquisitions Include Integration Issues and Regulatory Challenges We identified a number of potential barriers to consummating a combination, especially in terms of operational challenges that could offset a merger’s or acquisition’s intended gains. The most significant operational challenges involve the integration of workforces, organizational cultures, aircraft fleets, and information technology systems and processes. Indeed, past airline mergers and acquisitions have proven to be difficult, disruptive, and expensive, with costs in some cases increasing in the short term as the airlines integrate. Airlines also face potential challenges to mergers and acquisitions from DOJ’s antitrust review, discussed in the next section. Workforce integration is often particularly challenging and expensive, and involves negotiation of new labor contracts. Labor groups—including pilots, flight attendants, and mechanics—may be able to demand concessions from the merging airlines during these negotiations, several experts explained, because labor support would likely be required in order for a merger or acquisition to be successful. Some experts also note that labor has typically failed to support mergers, fearing employment or salary reductions. Obtaining agreement from each airline’s pilots’ union on an integrated pilot seniority list—which determines pilots’ salaries, as well as what equipment they can fly—may be particularly difficult. According to some experts, as a result of these labor integration issues and the challenges of merging two work cultures, airline mergers have generally been unsuccessful. For example, although the 2005 America West–US Airways merger has been termed a successful merger by many industry observers, labor disagreements regarding employee seniority, and especially pilot seniority, remain unresolved. More recently, labor integration issues derailed merger talks—albeit temporarily—between Northwest Airlines and Delta Air Lines in early 2008, when the airlines’ labor unions were unable to agree on pilot seniority list integration. Recently, the Consolidated Appropriations Act of 2008 included a labor protective provision that applies to the integration of employees of covered air carriers, and could affect this issue. Furthermore, the existence of distinct corporate cultures can influence whether two firms will be able to merge their operations successfully. For example, merger discussions between United Airlines and US Airways broke down in 1995 because the employee-owners of United feared that the airlines’ corporate cultures would clash. The integration of two disparate aircraft fleets may also be costly. Combining two fleets may increase costs associated with pilot training, maintenance, and spare parts. For example, a merger between Northwest and Delta would result in an airline with 10 different aircraft types. These costs may, however, be reduced post-merger by phasing out certain aircraft from the fleet mix. Pioneered by Southwest and copied by other low-cost airlines, simplified fleets have enabled airlines to lower costs by streamlining maintenance operations and reducing training times. If an airline can establish a simplified fleet, or “fleet commonality”—particularly by achieving an efficient scale in a particular aircraft—then many of the cost efficiencies of a merger or acquisition may be set in motion by facilitating pilot training, crew scheduling, maintenance integration, and inventory rationalization. 
Finally, integrating information technology processes and systems can also be problematic and time-consuming for a merging airline. For example, officials at US Airways told us that while some cost reductions were achieved within 3 to 6 months of its merger with America West, the integration of information technology processes has taken nearly 2 ½ years. Systems integration issues are increasingly daunting as airlines attempt to integrate a complex mix of modern in-house systems, dated mainframe systems, and outsourced information technology. The US Airways-America West merger highlighted the potential challenges associated with combining reservations systems, as there were initial integration problems. The Department of Justice’s Antitrust Review Is a Critical Step in the Airline Merger and Acquisition Process DOJ’s review of airline mergers and acquisitions is a key step for airlines hoping to consummate a merger. The Guidelines provide a five-part integrated process under which mergers and acquisitions are assessed by DOJ. In addition, DOT plays an advisory role for DOJ and, if the combination is consummated, may conduct financial and safety reviews of the combined entity under its regulatory authority. Public statements by DOJ officials and a review of the few airline mergers and acquisitions evaluated by DOJ over the last 10 years also provide some insight into how DOJ applies the Guidelines to the airline industry. While each merger and acquisition review is case specific, our analysis shows that changes in the airline industry, such as increased competition in international and domestic markets, could lead to entry being more likely than in the past. Additionally, the Guidelines have evolved to provide clarity as to the consideration of efficiencies, an important factor in airline mergers. The Department of Justice Uses the Guidelines to Identify Antitrust Concerns Most proposed mergers or acquisitions must be reviewed by DOJ. In particular, under the Hart-Scott-Rodino Act, an acquisition of voting securities and/or assets above a set monetary amount must be reported to DOJ (or the Federal Trade Commission for certain industries) so the department can determine whether the merger or acquisition poses any antitrust concerns. To analyze whether a proposed merger or acquisition raises antitrust concerns—whether the proposal will create or enhance market power or facilitate its exercise—DOJ follows an integrated five-part analytical process set forth in the Guidelines. First, DOJ defines the relevant product and geographic markets in which the companies operate and determines whether the merger is likely to significantly increase concentration in those markets. Second, DOJ examines potential adverse competitive effects of the merger, such as whether the merged airline will be able to charge higher prices or restrict output for the product or service it sells. Third, DOJ considers whether other competitors are likely to enter the affected markets and whether they would counteract any potential anticompetitive effects that the merger might have posed. Fourth, DOJ examines the verified “merger specific” efficiencies or other competitive benefits that may be generated by the merger and that cannot be obtained through any other practical means. Fifth, DOJ considers whether, absent the merger or acquisition, one of the firms is likely to fail, causing its assets to exit the market.
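To make the market-overlap aspect of this inquiry concrete, the sketch below flags city-pair markets in which two hypothetical merging airlines are both effective competitors, using the 5 percent passenger-share definition applied elsewhere in this report; those are the markets in which a combination would eliminate a competitor. This is purely illustrative and is not DOJ’s screening methodology, and the airline names and passenger counts are invented.

```python
# Illustrative sketch only -- not DOJ's methodology. It flags city-pair markets in
# which two hypothetical merging airlines each carry at least a 5 percent passenger
# share (the effective-competitor definition used elsewhere in this report), since
# those are the markets where a combination would remove a competitor.

SHARE_THRESHOLD = 0.05

def overlap_markets(markets, airline_a, airline_b):
    """Return market names where both merging airlines hold at least a 5 percent share."""
    flagged = []
    for name, passengers in markets.items():
        total = sum(passengers.values())
        share_a = passengers.get(airline_a, 0) / total
        share_b = passengers.get(airline_b, 0) / total
        if share_a >= SHARE_THRESHOLD and share_b >= SHARE_THRESHOLD:
            flagged.append(name)
    return flagged

# Hypothetical data: passengers by airline in two city-pair markets
markets = {
    "City1-City2": {"Airline A": 40_000, "Airline B": 35_000, "Airline C": 25_000},
    "City1-City3": {"Airline A": 70_000, "Airline C": 30_000},
}
print(overlap_markets(markets, "Airline A", "Airline B"))  # ['City1-City2']
```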
The commentary to the Guidelines makes clear that DOJ does not apply the Guidelines as a step-by-step progression, but rather as an integrated approach in deciding whether the proposed merger or acquisition would create antitrust concerns. DOJ first assesses competitive effects at a city-pair market level. In its review of past airline mergers and acquisitions, DOJ defined the relevant market as scheduled airline service between individual city-pair markets because, according to DOJ, that is where airlines compete for passengers. Second, DOJ assesses likely potential adverse competitive effects—specifically, whether a merged airline is likely to exert market power (maintain prices above competitive levels for a significant period of time) in particular city-pair markets. Generally, a merger or acquisition raises anticompetitive concerns to the extent it eliminates a competitor from the markets that both airlines competed in. When United Airlines and US Airways proposed merging in 2000, DOJ concluded that the proposed merger would create monopolies or duopolies in 30 markets with $1.6 billion in revenues, lead to higher fares, and harm consumers on airline routes throughout the United States and on some international routes. The department was particularly concerned about reduced competition in certain markets—nonstop city-pair markets comprising the two airlines’ hub airports, certain other nonstop markets on the East Coast that were served by both airlines, some markets served via connecting service by these airlines along the East Coast, and certain other markets previously dominated by one or both of these airlines. DOJ estimated that the merger would have resulted in higher air fares for businesses and millions of customers. Similarly, in 2000 DOJ sought divestiture by Northwest Airlines of shares in Continental Airlines after the airline had acquired more than 50 percent of the voting interest in Continental. DOJ argued that the acquisition would particularly harm consumers in 7 city-pair markets that linked Northwest and Continental airport hubs, where the two airlines had a virtual duopoly. DOJ also pointed to potential systemwide effects of removing a large competitor. Although DOJ objected to the proposed merger of United and US Airways and the acquisition of Continental by Northwest, it did not challenge a merger between America West and US Airways in 2005 because it found little overlap between city-pair markets served by the two airlines. DOJ, under the Guidelines’ third element, assesses whether new entry would counter the increased market power of a merged airline. If DOJ determines that the merger is likely to give the merging airlines the ability to raise prices or curtail service in a city-pair market, DOJ assesses whether a new entrant would likely begin serving the city-pair in response to a potential price increase to replace the lost competition and deter or counter the price increase. For such entry to resolve concerns about a market, the Guidelines require that it be “timely, likely, and sufficient” to counteract the likely anticompetitive effects presented by the merger. According to DOJ, the inquiry considers an entry time horizon of 2 years and is fact specific rather than based on theory.
Some factors that may be considered in assessing likelihood of entry include whether a potential entrant has a hub in one of the cities in a city-pair market of concern so that the potential entrant is well placed to begin service, whether there are constraints (such as slot controls or shortage of gates) that could limit effective entry, and whether the potential entrant would be able to provide the frequency of service that would be required to counteract the merged firm’s presence. For example, if the merging parties operate the only hubs at both end points of a market, it is unlikely that a new entrant airline would find it profitable to offer an effective level of service. In its complaint challenging Northwest Airlines’ attempted acquisition of a controlling interest in Continental, DOJ alleged that significant entry barriers limited new competition for the specific city-pair markets of issue. For example, the complaint alleged that airlines without a hub at one of the end points of the affected hub-to-hub markets were unlikely to enter due to the cost advantages of the incumbents serving that market. In city-pair markets where the merging airlines would have a large share of passengers traveling on connecting flights, DOJ asserted that other airlines were unlikely to enter due to factors such as the light traffic on these routes and the proximity of Northwest’s and Continental’s hubs to the markets as compared to other airlines’ more distant hubs. Fourth, DOJ considers whether merger-specific efficiencies are “cognizable,” that is, whether they can be verified and do not arise from anticompetitive reductions in output or services. Cognizable efficiencies, while not specifically defined under the Guidelines, could include any consumer benefit resulting from a merger—including enhanced service through an expanded route network and more seamless travel—as well as cost savings accruing to the merged airline (for example, from reducing overhead or increased purchasing power that may ultimately benefit the consumer). Because efficiencies are difficult to quantify and verify, DOJ requires merger partners to substantiate merger benefits. DOJ considers only those efficiencies likely to be accomplished by the proposed merger and unlikely to be achieved through practical, less restrictive alternatives, such as code-sharing agreements or alliances. For example, in its October 2000 complaint against Northwest Airlines for its acquisition of a controlling interest in Continental, DOJ noted that Northwest had not adequately demonstrated that the efficiencies it claimed from the merger could not be gained from other, less anticompetitive means, particularly the airlines’ marketing alliance, which DOJ did not challenge. Finally, DOJ considers the financial standing of the merger partners—whether one of the partners is likely to fail without the merger and its assets would exit the market. According to the Guidelines, a merger is not likely to create or enhance market power or facilitate its exercise if imminent failure of one of the merging firms would cause the assets of that firm to exit the relevant market. For instance, the acquisition of TWA by American Airlines in 2001 was cleared because TWA was not likely to emerge from its third bankruptcy and there was no less anticompetitive purchaser.
In making its decision as to whether the proposed merger is likely anticompetitive—whether it is likely to create or enhance market power or facilitate its exercise—DOJ considers the particular circumstances of the merger as they relate to the Guidelines’ five-part inquiry. The greater the potential anticompetitive effects, the greater must be the offsetting verifiable efficiencies for DOJ to clear a merger. However, according to the Guidelines, efficiencies almost never justify a merger if it would create a monopoly or near monopoly. If DOJ concludes that a merger threatens to deprive consumers of the benefits of competitive air service, then it will seek injunctive relief in a court proceeding to block the merger from being consummated. In some cases, the parties may agree to modify the proposal to address anticompetitive concerns identified by DOJ—for example, selling airport assets or giving up slots at congested airports—in which case DOJ ordinarily files a complaint along with a consent decree that embodies the agreed-upon changes. The Department of Transportation Also Reviews Proposed Mergers to Ensure That They Are in the Public Interest DOT conducts its own analyses of airline mergers and acquisitions. While DOJ is responsible for upholding antitrust laws, DOT conducts its own competitive analysis and provides it to DOJ in an advisory capacity. In addition, presuming the merger moves forward after DOJ review, DOT can undertake several other reviews if the situation warrants it. Before commencing operations, any new, acquired, or merged airlines must obtain separate authorizations from DOT—“economic” authority from the Office of the Secretary and “safety” authority from the Federal Aviation Administration (FAA). The Office of the Secretary is responsible for deciding whether applicants are fit, willing, and able to perform the service or provide transportation. To make this decision, the Secretary assesses whether the applicants have the managerial competence, disposition to comply with regulations, and financial resources necessary to operate a new airline. FAA is responsible for certifying that the aircraft and operations conform to the safety standards prescribed by the Administrator, for instance, that the applicants’ manuals, aircraft, facilities, and personnel meet federal safety standards. Also, if a merger or other corporate transaction involves the transfer of international route authority, DOT is responsible for assessing and approving all transfers to ensure that they are consistent with the public interest. Finally, DOT also reviews the merits of any airline merger or acquisition and submits its views and relevant information in its possession to DOJ. DOT also provides some essential data that DOJ uses in its review. Changes in the Airline Industry and in the Guidelines May Affect the Factors Considered in DOJ’s Merger Review Process Changes in the airline industry’s structure and in the Guidelines may affect the factors considered in DOJ’s merger review process. DOJ’s review is not static, as it considers both market conditions and current antitrust thinking at the time of the merger review.
According to our own analysis and other studies, the industry has grown more competitive in recent years, and if that trend is not reversed by increased fuel prices, it will become more likely that market entry by other airlines, and possibly low-cost airlines, will bring fares back down in markets in which competition is initially reduced due to a merger. In addition, the ongoing liberalization of international markets and, in particular, cross-Atlantic routes under the U.S.-European Union Open Skies agreement, has led to increased competition on these routes. Finally, as DOJ and the Federal Trade Commission have evolved in their understanding of how to integrate merger-specific efficiencies into the evaluation process, the Guidelines have also changed. Increased Competition Indicates That Airline Entry May Be More Likely than in the Past A variety of characteristics of the current airline marketplace indicate that airline entry into markets vacated by a merger partner may be more likely than in the past, unless higher fuel prices substantially alter recent competitive trends in the industry. First, as we have noted, competition on airline routes—spurred by the growth and penetration of low-cost airlines—has increased, while the dominance of legacy airlines has been mitigated in recent years. According to our study, about 80 percent of passengers are now flying routes on which at least one low-cost airline is present. Moreover, some academic studies suggest that low-cost airline presence has become a key factor in competition and pricing in the industry in recent years. Two articles suggest that the presence of Southwest Airlines on routes leads to lower fares and that even its presence at, or entry into, the end-point airports of a market pair may be associated with lower prices on routes. Another recent study found that fare differentials between hub and nonhub airports—once measured to be quite substantial—are not as great as they used to be, which suggests a declining relevance of market power stemming from airline hub dominance. The study did find, however, that when there is little presence of low-cost airlines at a major carrier’s hub airport, the hub premium continues to remain substantial. However, our competition analysis and these studies predate the considerable increase in fuel prices that has occurred this year; if that increase is permanent, it could affect competition and airlines’ willingness to expand into new markets. In some past cases, DOJ rejected the contention that new entry would be timely, likely, and sufficient to counter potential anticompetitive effects. For example, in 2000, when DOJ challenged Northwest Airlines’ proposed acquisition of a controlling interest in Continental Airlines, a DOJ official explained that the department considered it unrealistic to assume that the prospect of potential competition—meaning the possibility of entry into affected markets by other airlines—would fully address anticompetitive concerns, given network airline hub economics at the time. Merger Guidelines Have Evolved to Reflect Federal Antitrust Authorities’ Greater Understanding of Efficiencies The Guidelines have been revised several times over the years, and the most recent revision, in 1997, in particular reflects a greater understanding by federal antitrust authorities of how to assess and weigh efficiencies. In 1968, the consideration of efficiencies was allowed only as a defense in exceptional circumstances.
In 1984, the Guidelines were revised to incorporate efficiencies as part of the competitive effects analysis, rather than as a defense. However, the 1984 Guidelines also required “clear and convincing” evidence that a merger would achieve significant net efficiencies. In 1992, the Guidelines were revised again, eliminating the “clear and convincing” standard. The 1997 revision explains that efficiencies must be “cognizable,” that is, merger-specific efficiencies that can be verified, are net of any costs, and do not result solely from a reduction in service or output. In considering the efficiencies, DOJ weighs whether the efficiencies may offset the anticompetitive effects in each market. According to the Guidelines, in some cases, merger efficiencies are not strictly in the relevant market, but are so inextricably linked with it that a partial divestiture or other remedy could not feasibly eliminate the anticompetitive effect in the relevant market without sacrificing the efficiencies in other markets. Under those circumstances, DOJ will take into account across-the-board efficiencies or efficiencies that are realized in markets other than those in which the harm occurs. According to DOJ and outside experts, the evolution of the Guidelines reflects an attempt to provide clarity as to the consideration of efficiencies, an important factor in the merger review process. Agency Comments We provided a draft of this report to DOT and DOJ for their review and comment. Both DOT and DOJ officials provided some clarifying and technical comments that we incorporated where appropriate. We provided copies of this report to the Attorney General, the Secretary of Transportation, and other interested parties and will make copies available to others upon request. In addition, this report will be available at no charge on our Web site at http://www.gao.gov. If you or your staff have any questions on matters discussed in this report, please contact me at (202) 512-2834 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report can be found in appendix IV. Appendix I: Scope and Methodology To review the financial condition of the U.S. airline industry, we analyzed financial and operational data, reviewed relevant studies, and interviewed industry experts. We analyzed DOT Form 41 financial and operational data submitted to DOT by airlines for the years 1998 through 2007. We obtained these data from BACK Aviation Solutions, a private contractor that provides online access to U.S. airline financial, operational, and passenger data with a query-based user interface. To assess the reliability of these data, we reviewed the quality control procedures used by BACK Aviation and DOT and subsequently determined that the data were sufficiently reliable for our purposes. We also reviewed government and expert data analyses, research, and studies, as well as our own previous studies. The expert research and studies, where applicable, were reviewed by a GAO economist or were corroborated with additional sources to determine that they were sufficiently reliable for our purposes. Finally, we conducted interviews with government officials, airlines and their trade associations, credit and equity analysts, industry experts, and academics. The analysts, experts, and academics were identified and selected based on literature review, prior GAO work, and recommendations from within the industry.
To determine if and how the competitiveness of the U.S. airline industry has changed since 1998, we obtained and stratified DOT quarterly data on the 5,000 largest city-pair markets for calendar years 1998 through 2006. These data are collected by DOT based on a 10 percent random sampling of tickets and identify the origin and destination airports. These markets accounted for about 90 percent of all passengers in 2006. We excluded tickets with interlined flights—a flight in which a passenger transfers from one airline to another, unaffiliated airline—and tickets with international, Alaskan, or Hawaiian destinations. Since only the airline issuing the ticket is identified, regional airline traffic is counted under the legacy parent or partner airline. To assess the reliability of these data, we reviewed the quality control procedures DOT applies and subsequently determined that the data were sufficiently reliable for our purposes. To analyze changes in competition based on the size of the passenger markets, we divided the markets into four groupings. Each group is composed of one-quarter of the total passenger traffic in each year. To stratify these markets by the number of effective competitors operating in a market, we used the following categories: one, two, three, four, and five or more effective competitors, where an airline needed to have at least a 5 percent share of the passengers in the city-pair market to be considered an effective competitor in that market. To stratify the data by market distance, we obtained the great circle distance for each market using the DOT ticket data via BACK Aviation and then grouped the markets into five distance categories: up to 250 miles, 251-500 miles, 501-750 miles, 751-1,000 miles, and 1,001 miles and over. For the purposes of this study, we divided the airline industry into legacy and low-cost airlines. While there is variation in the size and financial condition of the airlines in each of these groups, there are more similarities than differences for airlines in each group. Each of the legacy airlines predates the airline deregulation of 1978, and all have adopted a hub-and-spoke network model, which can be more expensive to operate than a simple point-to-point service model. Low-cost airlines have generally entered interstate competition since 1978, are smaller, and generally employ a less costly point-to-point service model. Furthermore, the seven low-cost airlines (Air Tran, America West, ATA, Frontier, JetBlue, Southwest, and Spirit) had consistently lower unit costs than the seven legacy airlines (Alaska, American, Continental, Delta, Northwest, United, and US Airways). For this analysis, we continued to categorize US Airways as a legacy airline following its merger with America West in 2005 and included the data for both airlines for 2006 and 2007 with the legacy airlines; from 1998 through 2005, we categorized America West as a low-cost airline. To determine if competition has changed at the 30 largest airports, we analyzed DOT T-100 enplanement data for 1998 and 2006 to examine the changes in passenger traffic among the airlines at each airport. The T-100 database includes traffic data (passenger and cargo), capacity data, and other operational data for U.S. airlines and foreign airlines operating to and from the United States. The T-100 and T-100(f) data files are not based on sampled data or data surveys, but represent a 100 percent census of the data.
To assess the reliability of these data, we reviewed the quality control procedures DOT applies and subsequently determined that the data were sufficiently reliable for our purposes. To determine the potential effects on competition of the merger between Delta Air Lines and Northwest Airlines, explained in appendix II, we examined whether the merger might reduce competition within given airline markets. We defined an effective competitor as an airline that has a market share of at least 5 percent. To examine the potential loss of competition under the merger, we determined the extent to which each airline’s routes overlap by analyzing 2006 data from DOT on the 5,000 busiest domestic city-pair origin and destination markets. To determine the potential loss of competition in small communities, we analyzed origin and destination data (OD1B) for the third quarter of 2007 to determine the extent to which airlines’ routes overlap. We defined small communities as those communities with airports that are defined as “nonhubs” by statute in 49 U.S.C. § 47102(13). To identify the key factors that airlines consider in deciding whether to merge with or acquire another airline, we reviewed relevant studies and interviewed industry experts. We reviewed relevant studies and documentation on past and prospective airline mergers in order to identify the factors contributing to (or inhibiting) those transactions. We also met with DOT and Department of Justice (DOJ) officials, airline executives, financial analysts, academic researchers, and industry consultants to discuss these factors and their relative importance. To understand the process and approach used by federal authorities in considering airline mergers and acquisitions, we reviewed past and present versions of the Guidelines, DOT statutes and regulations, and other relevant guidance. We also analyzed legal documents from past airline mergers and published statements by DOT and DOJ officials to provide additional insight into how DOJ and DOT evaluate merger transactions. Finally, we discussed the merger review process with DOJ and DOT officials and legal experts. We conducted this performance audit from May 2007 through July 2008 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Delta and Northwest Merger Appendix III: Number and Size of Dominated Markets by Airline in the Top 5,000 Markets, 2006 Appendix IV: GAO Contact and Staff Acknowledgments Staff Acknowledgments In addition to the contact named above, Paul Aussendorf, Assistant Director; Amy Abramowitz; Lauren Calhoun; Jessica Evans; Dave Hooper; Delwen Jones; Mitchell Karpman; Molly Laster; Sara Ann Moessbauer; Nick Nadarski; and Josh Ormond made key contributions to this report. Related GAO Products Airline Deregulation: Reregulating the Airline Industry Would Likely Reverse Consumer Benefits and Not Save Airline Pensions. GAO-06-630. Washington, D.C.: June 9, 2006. Commercial Aviation: Bankruptcy and Pension Problems Are Symptoms of Underlying Structural Issues. GAO-05-945. Washington, D.C.: September 30, 2005. Private Pensions: The Pension Benefit Guaranty Corporation and Long-Term Budgetary Challenges. GAO-05-772T.
Washington, D.C.: June 9, 2005. Private Pensions: Government Actions Could Improve the Timeliness and Content of Form 5500 Pension Information. GAO-05-294. Washington, D.C.: June 3, 2005. Private Pensions: Recent Experiences of Large Defined Benefit Plans Illustrate Weaknesses in Funding Rules. GAO-05-294. Washington, D.C.: May 31, 2005. Commercial Aviation: Legacy Airlines Must Further Reduce Costs to Restore Profitability. GAO-04-836. Washington, D.C.: August 11, 2004. Private Pensions: Publicly Available Reports Provide Useful but Limited Information on Plans’ Financial Condition. GAO-04-395. Washington, D.C.: March 31, 2004. Private Pensions: Multiemployer Plans Face Short- and Long-Term Challenges. GAO-04-423. Washington, D.C.: March 26, 2004. Private Pensions: Timely and Accurate Information Is Needed to Identify and Track Frozen Defined Benefit Plans. GAO-04-200R. Washington, D.C.: December 17, 2003. Pension Benefit Guaranty Corporation: Single-Employer Pension Insurance Program Faces Significant Long-Term Risks. GAO-04-90. Washington, D.C.: October 29, 2003. Commercial Aviation: Air Service Trends at Small Communities since October 2000. GAO-02-432. Washington, D.C.: March 29, 2002.
The airline industry is vital to the U.S. economy, generating operating revenues of nearly $172 billion in 2007, amounting to over 1 percent of the U.S. gross domestic product. It serves as an important engine for economic growth and a critical link in the nation's transportation infrastructure, carrying more than 700 million passengers in 2007. Airline deregulation in 1978 led, at least in part, to increasingly volatile airline profitability, resulting in periods of significant losses and bankruptcies. In response, some airlines have proposed or are considering merging with or acquiring another airline. GAO was asked to help prepare Congress for possible airline mergers or acquisitions. This report describes (1) the financial condition of the U.S. passenger airline industry, (2) whether the industry is becoming more or less competitive, (3) why airlines seek to merge with or acquire other airlines, and (4) the role of federal authorities in reviewing proposed airline mergers and acquisitions. To answer these objectives, we analyzed Department of Transportation (DOT) financial and operating data; interviewed agency officials, airline managers, and industry experts; and reviewed the Horizontal Merger Guidelines and spoke with antitrust experts. DOT and the Department of Justice (DOJ) provided technical comments, which were incorporated as appropriate. The U.S. passenger airline industry was profitable in 2006 and 2007 for the first time since 2000, but this recovery appears short-lived because of rapidly increasing fuel costs. Legacy airlines (airlines that predate deregulation in 1978) generally returned to modest profitability in 2006 and 2007 by reducing domestic capacity, focusing on more profitable markets, and reducing long-term debt. Low-cost airlines (airlines that entered after deregulation), meanwhile, continued to be profitable. Airlines, particularly legacy airlines, were also able to reduce costs, especially through bankruptcy- and near-bankruptcy-related employee contract, pay, and pension plan changes. Recent industry forecasts indicate that the industry is likely to incur substantial losses in 2008 owing to high fuel prices. Competition within the U.S. domestic airline industry increased from 1998 through 2006, as reflected by an increase in the number of competitors in city-to-city (city-pair) markets, the presence of low-cost airlines in more of those markets, lower air fares, fewer dominated markets, and a shrinking dominance by a single airline at some of the nation's largest airports. The average number of competitors in the largest 5,000 city-pair markets rose to 3.3 in 2006 from 2.9 in 1998. This growth is largely attributable to low-cost airlines, whose presence in these markets increased nearly 60 percent. In addition, the number of the largest 5,000 markets dominated by a single airline declined by 15 percent. Airlines seek to merge with or acquire other airlines with the intention of increasing their profitability and financial sustainability, but must weigh these potential benefits against operational and regulatory costs and challenges. The principal benefits airlines consider are cost reductions—by combining complementary assets, eliminating duplicate activities, and reducing capacity—and increased revenues from higher fares in existing markets and increased demand for more seamless travel to more destinations. Balanced against these potential benefits are operational costs of integrating workforces, aircraft fleets, and systems.
In addition, because most airline mergers and acquisitions are reviewed by DOJ, the relevant antitrust enforcement agency, airlines must consider the risks of DOJ opposition. Both DOJ and DOT play a role in reviewing airline mergers and acquisitions, but DOJ's determination as to whether a proposed merger is likely to substantially lessen competition is key. DOJ uses an integrated analytical framework set forth in the Horizontal Merger Guidelines to make its determination. Under that process, DOJ assesses the extent of likely anticompetitive effects in the relevant markets, in this case, airline city-pair markets. DOJ further considers the likelihood that airlines entering these markets would counteract any anticompetitive effects. It also considers any efficiencies that a merger or acquisition could bring—for example, consumer benefits from an expanded route network. Our analysis of changes in the airline industry, such as increased competition and the growth of low-cost airlines, indicates that airline entry may be more likely now than in the past, provided recent increases in fuel costs do not reverse these conditions. Additionally, the Horizontal Merger Guidelines have evolved to provide clarity as to the consideration of efficiencies, an important factor in airline mergers.
Background The Emergency Relief Program, authorized by section 125 of title 23 of the U.S. Code, provides assistance to repair or reconstruct federal-aid highways and roads on federal lands that have sustained serious damage from natural disasters or catastrophic failures. Congress has provided funds for this purpose since at least 1938. Examples of natural disasters include floods, hurricanes, earthquakes, tornadoes, tsunamis, severe storms, and landslides. Catastrophic failures qualify if they result from an external cause that leads to the sudden and complete failure of a major element or segment of the highway system that has a disastrous impact on transportation. Examples of qualifying causes of catastrophic failures include acts of terrorism or incidents such as a barge striking a bridge pier causing the sudden collapse of the structure or a truck crash resulting in a fire that damages the roadway. For natural disasters or other events to be eligible for emergency relief funding, the President must declare the event to be an "emergency" or a "major disaster" under the Robert T. Stafford Disaster Relief and Emergency Assistance Act or the governor must declare an emergency with the concurrence of the Secretary of Transportation. Since 1972, Congress has authorized $100 million annually in contract authority for the Emergency Relief Program to be paid from the Highway Trust Fund. Accordingly, FHWA may obligate up to $100 million in any one fiscal year for the program. Any unobligated balance remains available until expended. Additionally, obligations to a single state resulting from a single natural disaster or a single catastrophic failure may not exceed $100 million. In some cases, Congress has enacted legislation lifting this cap for large-scale disasters. Moreover, as provided in FHWA's regulations, states are eligible for assistance under the Emergency Relief Program if the cost of eligible damage from a single event exceeds $700,000 across all damage sites in any state affected by the disaster. According to FHWA guidance, each prospective damage site must have at least $5,000 of repair costs to qualify for funding assistance—a threshold intended to distinguish unusually large expenses eligible for emergency relief funding from costs that should be covered by normal state maintenance funding. Supplemental Appropriations Comprise Most Emergency Relief Funding Provided to States, and a Backlog of Funding Requests Remains From Fiscal Years 2007 through 2010, Congress Provided More than $2.3 Billion for Emergency Relief Events and to Address a Backlog of Unfunded Requests From fiscal years 2007 through 2010, Congress provided more than $2.3 billion to the Emergency Relief Program, including more than $1.9 billion in three supplemental appropriations from general revenues and about $400 million in contract authority paid from the Highway Trust Fund (see fig. 3). The supplemental appropriations represented 83 percent of the program's funding over that time period. This percentage has been fairly consistent over time: 86 percent of the total Emergency Relief Program funding provided from fiscal years 1990 through 2006 came from supplemental appropriations. Two of the supplemental appropriations that Congress provided to the Emergency Relief Program since fiscal year 2007 were used to address the backlog of unfunded emergency relief requests from states. In May 2007, Congress provided $871 million to help clear a backlog of $736 million in funding requests from 46 states.
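The dollar thresholds described in the Background above lend themselves to a brief illustration. The following minimal Python sketch is hypothetical; the function names and inputs are assumptions, not FHWA's actual screening logic. It shows how the $5,000 per-site minimum, the $700,000 statewide per-event minimum, and the $100 million per-state, per-event cap interact.

# Minimal sketch of the Emergency Relief Program dollar thresholds described above.
# Field names and structure are hypothetical; this is not FHWA's screening logic.

SITE_MINIMUM = 5_000          # minimum repair cost for a damage site to qualify
EVENT_MINIMUM = 700_000       # minimum statewide damage cost for a single event
PER_EVENT_CAP = 100_000_000   # cap on obligations to one state for one event

def screen_event(site_repair_costs):
    """Return (eligible_sites, statewide_total, meets_event_minimum)."""
    eligible_sites = [c for c in site_repair_costs if c >= SITE_MINIMUM]
    statewide_total = sum(eligible_sites)
    return eligible_sites, statewide_total, statewide_total > EVENT_MINIMUM

def capped_obligation(requested, cap_lifted=False):
    """Apply the $100 million per-state, per-event cap unless Congress lifts it."""
    return requested if cap_lifted else min(requested, PER_EVENT_CAP)

# Example: three damage sites, one below the $5,000 site threshold.
sites = [4_000, 450_000, 300_000]
eligible, total, qualifies = screen_event(sites)
print(eligible, total, qualifies)   # [450000, 300000] 750000 True
print(capped_obligation(total))     # 750000 (well under the cap)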
In September 2008, when the backlog list reached more than $560 million, Congress provided $850 million to address this backlog and provide additional funds for future requests. In December 2007, Congress provided $195 million for the reconstruction of the Interstate 35 West Bridge in Minnesota. FHWA has allocated all of the $2.3 billion provided to the program since fiscal year 2007, as well as an additional $100 million carried over from previously provided program funding, among 42 states and three territories. Sixty-five percent of the allocations (almost $1.6 billion) went to six states—California, Louisiana, Minnesota, North Dakota, Texas, and Washington state (see fig. 4). California received almost $538 million, the most of all states, and most of this was a result of the 2005–2006 winter storms. Washington state was allocated almost $166 million in response to 10 events, ranging from a single event estimated to cost $1 million to about $58 million for flooding caused by severe rains in December 2007. Of the $2.4 billion that FHWA allocated to states from fiscal years 2007 through 2010, about 59 percent ($1.4 billion) was allocated for events that occurred during those years. FHWA allocated the remaining 41 percent ($988 million) for events that occurred from fiscal years 2001 through 2006. This amount includes $195 million made available through the December 2007 supplemental appropriation. Devils Lake in North Dakota lies in a large natural basin and lacks a natural outlet for rising water to flow out of the lake. Starting in the early 1990s, the lake level has risen dramatically, threatening adjacent roadways. Although Emergency Relief Program regulations define a natural disaster as a sudden and unusual natural occurrence, FHWA determined that the gradual and predictable basin flooding at Devils Lake is eligible for Emergency Relief Program funding. In 2005, through SAFETEA-LU, Congress authorized up to $10 million of Emergency Relief Program funds to be expended annually, up to a total of $70 million, to address an additional problem at Devils Lake and make repairs to certain roads which were impounding water and acting as dams. In the absence of other authority, this funding must come out of the $100 million annual authorization of contract authority, effectively reducing the annual emergency relief funding available to other states. As of March 2010, the Emergency Relief Program had provided more than $256 million for projects related to Devils Lake flooding. Emergency Relief Faces Risk from Escalating Costs of Events Occurring in Past Years In recent years, Congress has provided significant supplemental funding to the Emergency Relief Program, but as of June 2011, a $485 million backlog of funding requests from states remained. This backlog did not include funding requests for August 2011 damages from Hurricane Irene. The backlog list provides a snapshot of states' funding requests at a given time and is subject to change as states experience new eligible events. According to guidance in FHWA's Emergency Relief Manual, requested amounts are based on the states' anticipated need for emergency relief for the current fiscal year and may be less than the total emergency relief needs for any specific event. The June 2011 backlog list contained almost $90 million in formal funding requests for several events that occurred between 1983 and 1993 that were previously determined to be eligible by FHWA.
Specifically, California requested almost $83 million for a single, long-term project in response to a 1983 rockslide, known as Devil's Slide, and an additional $6.5 million for four other events from fiscal years 1990 through 1993. According to FHWA, these requests are for approved emergency relief events with projects that have had delays due to environmental issues or cost overruns. Once an event has been approved for emergency relief by FHWA, current program rules do not establish a time limit in which states must submit all funding requests for repairs. Although FHWA requires states to submit a list of projects within three months of approving a state's application for emergency relief, eligibility stemming from an approved event does not lapse, and a state's list of projects may be amended at any time to add new work. Consequently, FHWA faces the risk of receiving reimbursement requests from states for projects years after an event occurs, including requests for projects that have experienced significant delays and cost increases over time due to environmental or community concerns. The June 2011 backlog list included project funding requests for two events that occurred more than 10 years ago and which demonstrate FHWA's risk of escalating long-term costs due to older events. Devil's Slide tunnel project in California. After more than two decades of delays to complete lengthy environmental reviews and address community concerns over the proposed relocation of S.R.1 away from the Devil's Slide area, the tunnel alternative was selected in 2002 and construction of a pair of 4,200-foot-long, 30-foot-wide tunnels began in 2006—23 years after the originating emergency relief event. Construction of the tunnel is ongoing, with a planned completion in March 2013. To date, FHWA has obligated about $555 million in emergency relief funds to the Devil's Slide tunnel project out of an estimated cost of $631 million. The $631 million total project cost estimate includes the $83 million requested on the June 2011 backlog list—which is for work completed during fiscal year 2011—as well as an additional $120 million to be requested in the future to fully reimburse Caltrans to complete the project. Alaskan Way Viaduct in Seattle, Washington. The June 2011 backlog list also contained a "pending" request of $40.5 million from Washington state in response to a February 2001 earthquake which damaged the Alaskan Way Viaduct—a 2-mile double-deck highway running along Seattle's waterfront. In the months after the event, FHWA approved $3.6 million for emergency relief repairs to cracks in several piers supporting a section of the viaduct, which were completed by December 2004. At the time of the earthquake, the Washington State Department of Transportation (WSDOT) had begun considering options for replacing the viaduct, which was approaching the end of its design life. After continued monitoring, WSDOT found that the viaduct had experienced accelerated deterioration as a result of the earthquake and requested $2 billion in emergency relief to replace the viaduct. Congress directed FHWA and state and local agencies to determine the specific damages caused by the earthquake and the amount eligible for emergency relief. In response, FHWA found that while the replacement of the entire viaduct was not eligible for emergency relief, the project was eligible to receive $45 million to replace the section of the viaduct damaged by the earthquake. FHWA further found that if WSDOT decided to move forward with a more comprehensive replacement project for the entire facility, the estimated amount of emergency relief eligibility could be applied to that project.
WSDOT now plans to replace the entire viaduct with a bored tunnel under downtown Seattle, with an estimated cost of almost $2 billion. According to FHWA's Washington state division office, the $40.5 million listed on the June 2011 emergency relief backlog list will be obligated toward the construction of the larger replacement project for the viaduct. The lack of a time limit for states to submit emergency relief funding requests raises the risk of states filing claims for additional funding years after an event's occurrence, particularly for projects that grow significantly in cost or scope over time. States may have good reasons for submitting funding requests years after an event—particularly for larger-scale permanent repairs that may take years to complete—but such projects can grow unpredictably. The example of the relocation of S.R.1 away from Devil's Slide, and the cost and scope increases that resulted from more than two decades of delays to complete lengthy environmental reviews and address community concerns, is a case in point. The absence of a time limit for states to submit funding requests hinders FHWA's ability to manage future claims to the program and creates a situation where Congress may be asked to provide additional supplemental appropriations for emergency relief years after an event occurs. Furthermore, states' requests for emergency relief funds for projects many years after an event raise questions as to whether the repairs involved meet the goal of the Emergency Relief Program to restore damaged facilities to predisaster conditions. In 2007, we recommended that FHWA revise its regulations to tighten program eligibility criteria, which could include limitations on the use of emergency relief funds to fully finance projects that grew in scope and cost as a result of environmental and community concerns. In July 2011, DOT's regulatory agenda included a planned rulemaking for the Emergency Relief Program that would, among other actions, consider specific time restrictions for states when filing a claim for emergency relief eligible work. However, in October 2011, FHWA withdrew this planned item from its agenda. According to an FHWA official, the planned rulemaking was withdrawn because it was premature and because FHWA is still determining what changes, if any, are needed to address GAO's 2007 recommendations. FHWA's Program Revisions Have Not Fully Addressed Prior Concerns FHWA Now Has Procedures to Withdraw Some Unused Emergency Relief Allocations from States, But Lacks Information to Verify Whether Additional Unused Allocations Are Still Needed Since our 2007 report, FHWA has implemented a process to withdraw unused allocations and reallocate funding to benefit other states. FHWA undertook these actions in response to our recommendation to require division offices to annually coordinate with states to identify and withdraw unused allocations that are no longer needed so funds may be used to reduce the backlog of other program requests. Prior to fiscal year 2007, FHWA's policy was to allocate the full amount of each state's emergency relief request, based on total available program funds. Since fiscal year 2007, FHWA has based its allocations on a state's estimate of anticipated emergency relief obligations for the fiscal year.
In fiscal years 2010 and 2011, FHWA division offices coordinated with states to identify and withdraw unused allocations representing approximately $367 million in emergency relief funds from a total of 25 states and 2 territories. To withdraw unused funds from states, FHWA reviews its financial database, FMIS, to identify the amount allocated to each state that has not been obligated to specific projects. FHWA then asks each state to identify remaining fiscal year need for new obligations and the amount of any allocations that will no longer be needed. FHWA then withdraws the amount determined by the state to be no longer needed and reallocates that amount to other nationwide emergency relief needs, such as unfunded requests on the backlog list. Most of the withdrawn allocations were originally allocated to states from fiscal years 2003 to 2006, as shown in figure 5. Of the $299 million that was withdrawn for events occurring from fiscal years 2003 to 2006, about $230 million was withdrawn from Florida. FHWA reallocated $295 million of the $367 million withdrawn from states for other nationwide requests. According to FHWA, the remaining $72 million that was withdrawn but not yet reallocated will be made available to states in future allocations. As of the end of May 2011, $493 million that FHWA allocated to states in response to events occurring since 1989 remains unobligated. A significant portion of this amount likely reflects the recent allocation of $320 million in April 2011. However, at least $63 million of the unobligated balance is for older allocations, provided prior to fiscal year 2007. Specifically, New York's unobligated balance includes almost $52 million provided after the September 11, 2001, terrorist attacks for roadway repairs delayed due to ongoing building construction around the World Trade Center site. FHWA's New York division reported that these repairs are not expected to be completed until 2014. In addition, California maintained an unobligated balance of more than $11 million from the October 1989 Loma Prieta earthquake. According to FHWA California division officials, FHWA sought to withdraw some of this allocation, but Caltrans and local officials indicated that this allocation was necessary to complete environmental mitigation and bike path projects that were part of the reconstruction of the collapsed Bay Bridge connecting San Francisco and Oakland in California. Although the Emergency Relief Manual states that FHWA division offices are to identify and withdraw unused program funding allocations annually, we found several instances in which division offices applied unused allocations from existing events to new events in the same state without requesting a new allocation. Specifically, our file review at the FHWA Washington state and New York state division offices identified three events from fiscal years 2009 and 2010 that the division offices approved as eligible and funded with allocations that were no longer needed from previous events. This practice, which was permitted in the 1989 version of the Emergency Relief Manual, limits FHWA's ability to track unobligated balances for specific events and determine whether those funds are no longer needed and may be withdrawn. FHWA took steps to limit divisions' use of this practice by removing the language permitting it from the 2009 Emergency Relief Manual.
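The annual withdrawal review described above reduces, for each state, to arithmetic on three figures: the amount allocated, the amount obligated to projects, and the state's reported remaining need for the fiscal year. The sketch below is an illustration under assumed data; the records, field order, and dollar amounts are hypothetical and are not drawn from FMIS.

# Illustrative sketch of the annual withdrawal review described above.
# The records below are hypothetical; FMIS stores far more detail.

def unobligated_balance(allocated, obligated):
    """Allocation not yet tied to specific projects."""
    return allocated - obligated

def withdrawable(allocated, obligated, remaining_need):
    """Amount the state no longer needs and FHWA could withdraw and reallocate."""
    return max(unobligated_balance(allocated, obligated) - remaining_need, 0)

states = [
    # (state, allocated, obligated, state-reported remaining need for the year)
    ("State A", 50_000_000, 35_000_000, 5_000_000),
    ("State B", 20_000_000, 20_000_000, 0),
    ("State C", 12_000_000, 4_000_000, 8_000_000),
]

for name, alloc, oblig, need in states:
    print(name, unobligated_balance(alloc, oblig), withdrawable(alloc, oblig, need))
# State A: 15,000,000 unobligated, 10,000,000 withdrawable
# State B: fully obligated, nothing to withdraw
# State C: 8,000,000 unobligated, all of it still needed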
According to FHWA, this change was made so that funds could be more equitably distributed across the nation to address the backlog of funding requests, rather than allowing states to hold unused funds in reserve for future events. Although FHWA removed the language permitting this practice from the manual, FHWA has not provided written guidance to its divisions to prohibit them from applying unused allocations to new events in the same state, and the practice is still being used. For example, in February 2011, FHWA’s headquarters allowed the Washington state division to shift unused funds from a prior event to a new event, and in doing so, the division office did not submit a request for an allocation of funds for those new events and FHWA headquarters did not provide an allocation for those events. Consequently, FHWA headquarters did not have a record for the events, nor did it know the amount of funds made available by the division for these events. Furthermore, FHWA headquarters officials were unable to determine how prevalent this practice was across division offices. As a result, FHWA headquarters lacks information on what funding was made available and remains unobligated to states for specific events. Because Emergency Relief Program funding is not subject to the annual limits that the regular federal-aid highway program is, states have an incentive to retain as much emergency relief funding as possible by not returning unused funds. The lack of information on the amount of funds that could be made available for specific events could prevent FHWA from verifying whether allocations provided to states are still needed or may be withdrawn and used to meet current needs. In Addition to Unused Allocations, Obligated Funds Remain Unexpended In addition to the unused allocations, substantial amounts of obligated emergency relief funding have not been expended. About $642 million in emergency relief funding obligated for states from fiscal years 2001 through 2010 remains unexpended as of May 2011—including about $341 million in emergency relief funds obligated from fiscal years 2001 through 2006. In total for the Emergency Relief Program, 8 percent of all funding obligated from fiscal years 2001 through 2006 has yet to be expended (see table 2). Almost half of the unexpended balance from fiscal years 2001 through 2006 is for projects in response to several extraordinary events that occurred during those years, including the September 11, 2001, terrorist attacks in New York and Gulf Coast Hurricanes Katrina, Rita, and Wilma in 2005. Specifically, about $45 million of the $46 million that remains unexpended for fiscal year 2001 is for repair projects to facilities around the World Trade Center site in New York City. Of the $188 million that remains unexpended for fiscal year 2005, about $118 million is for projects in Louisiana in response to Hurricane Katrina. As of the end of May 2011, FHWA obligated about $952 million to 155 emergency relief projects in Louisiana for this event and has since made reimbursements to the state for all but 1 of these projects, providing approximately 88 percent of the amount obligated. Although substantial unexpended obligated funding remains, FHWA lacks information to determine the amount that is unneeded and could be deobligated because there is no time frame for closing out completed emergency relief projects. 
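Because FMIS tracks obligations and reimbursements rather than construction status, identifying deobligation candidates requires pairing unexpended balances with completion information reported by the states. The following sketch complements the allocation example above; it is hypothetical (the record fields are assumptions) and simply flags obligated-but-unexpended balances on projects a state reports as complete.

# Hypothetical sketch: flag obligated-but-unexpended balances on projects the state
# reports as complete, as candidates for final vouchering and possible deobligation.

def closeout_candidates(projects):
    flagged = []
    for p in projects:
        unexpended = p["obligated"] - p["expended"]
        if p["state_reports_complete"] and unexpended > 0:
            flagged.append((p["id"], unexpended))
    return flagged

projects = [
    {"id": "TX-101", "obligated": 2_000_000, "expended": 1_900_000, "state_reports_complete": True},
    {"id": "TX-102", "obligated": 750_000, "expended": 200_000, "state_reports_complete": False},
    {"id": "NY-201", "obligated": 5_000_000, "expended": 5_000_000, "state_reports_complete": True},
]

print(closeout_candidates(projects))  # [('TX-101', 100000)]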
FHWA division officials in New York and Texas reported that many emergency relief projects are administered by local public agencies, including towns and counties, and these entities are often slow to process their reimbursement requests through the state department of transportation. As such, FHWA lacks information on the status of these projects and whether projects are ongoing or have been completed. For example, in Texas, 28 of the 30 projects since 2007 included in our file review were listed as active in FHWA's national database, FMIS. However, according to Texas Department of Transportation (TXDOT) officials, construction on 23 of the 28 active projects was in fact completed and waiting to be closed out. FHWA division office officials reported that FMIS is not a project management system and does not provide the actual status of the construction of projects. As such, states may have completed some emergency relief projects but not processed reimbursement requests from local public agencies or completed final project financial audits. Projects remain active in FMIS until final vouchers have been processed to reimburse states. DOT's Office of Inspector General and external independent auditors have both identified inactive or unexpended obligations as a significant concern within FHWA. Without clear time frames for states to close out completed emergency relief projects, FHWA lacks important information on the status of projects and whether unexpended project funds are no longer needed and may be deobligated to be made available for other emergency relief projects. Prior Concerns about Project Eligibility Have Yet to Be Addressed FHWA has yet to address our longstanding concern about, and our 2007 recommendation for addressing, the use of emergency relief funds to finance projects that have grown in scope beyond the original intent of the program, which is to restore damaged facilities to predisaster conditions. In 1996, we questioned FHWA's decision to use more than $1 billion in emergency relief funds to replace the Cypress Viaduct in Oakland, California, which collapsed as a result of the Loma Prieta earthquake in October 1989. FHWA engineers initially estimated that replacing the destroyed structure along its predisaster alignment would cost $306 million. In response to public concern, Caltrans identified several alternative alignments that it studied in a 2-year environmental review. In 1991, Caltrans and FHWA decided to replace the destroyed 1.5-mile structure, which had bisected a residential area, with a new 5-mile structure running through active rail yards. This cost estimate later increased to more than $1.1 billion at the time of our 1996 report—an increase of almost $800 million from FHWA's initial estimate of $306 million to restore the facility to its predisaster condition. As such, we questioned whether the improvements and costs resulting from the significant relocation and changes in scope should have been funded through the Emergency Relief Program rather than the regular federal-aid highway program. We recommended that FHWA modify its guidance to clearly define what costs can be funded through the Emergency Relief Program, particularly when an environmental review recommends improvements or changes to the features of a facility from its predisaster condition in a manner that adds costs and risks to the project.
In response to our recommendation, FHWA amended its guidance to more clearly indicate when limits should be placed on emergency relief funding, and when full funding is appropriate, and we closed this recommendation. In our 2007 report, we identified two additional projects whose scope and cost grew beyond the original intent of the program, which is to restore damaged facilities to predisaster conditions. First, we noted that relocating California S.R.1 at Devil's Slide could have been addressed through the state's regular federal-aid highway program, rather than through the Emergency Relief Program. If the regular federal-aid highway program had been used, the project would not have been eligible for 100 percent federal funding, and the federal government would have saved an estimated $73 million. Second, we reported that the reconstruction of the U.S. Highway 90 Biloxi Bay Bridge in Mississippi—which was destroyed in August 2005 during Hurricane Katrina—grew in scope and cost by $64 million as a result of community concerns. Specifically, in response to a concern raised by a local shipbuilder about the proposed height of the new bridge, the Mississippi Department of Transportation expanded the scope of the bridge reconstruction to increase the bridge height to allow for future ships to pass under the bridge. The original design was to provide an 85-foot clearance at a cost of $275 million, but this scope was expanded to its current design to provide a 95-foot clearance at a cost of $339 million. FHWA has clarified its definition of an eligible damage site, as we recommended in 2007, through its revisions to the Emergency Relief Manual in 2009. Specifically, FHWA's 2009 revisions clarified that grouping damages to form an eligible site based solely on a political subdivision (i.e., county or city boundaries) should not be accepted. This change addressed our concern that FHWA division offices had different interpretations of what constituted a site, such that damage sites that were treated as eligible for emergency relief in one state may have not been eligible in another state. Incomplete Information in Emergency Relief Project Files in Three States Raises Concerns about FHWA's Eligibility Decisions and Program Oversight Documentation for Many Project Files We Reviewed Was Missing, Incomplete, or Inconsistent In our review of 83 selected emergency relief project files in three FHWA division offices, we found that many of the project files reviewed did not contain documentation called for in the Emergency Relief Manual to support FHWA decisions that projects met program eligibility requirements. Of the 83 projects in our review (totaling about $198.5 million in federal funds), 81 projects (about $192.8 million in federal funds) had at least one instance of missing or incomplete documentation. As a result of this missing information, we were unable to determine the basis of FHWA's eligibility decisions for many of the projects in our file review. The Emergency Relief Manual directs FHWA division offices to maintain files containing information on the methods used to evaluate disasters and FHWA's assessment of damages and estimates of cost. According to the Emergency Relief Program regulations, program data should be sufficient to identify the approved disaster and permit FHWA to determine the eligibility of the proposed work. In our file review, we identified several areas of concern with FHWA's eligibility determinations based on missing, incomplete, or inconsistent documentation, as illustrated in table 3 and described below (see app. III for detailed results of our file review).
Forty-seven of 83 project files (57 percent) lacked documentation for on-site damage inspections. In particular, they did not include a detailed damage inspection report (DDIR) or the DDIR was not complete. According to the Emergency Relief Manual, on-site detailed damage inspections are conducted by the applicant or a state department of transportation representative if the applicant is a local public agency, and an FHWA representative, if available, to determine the extent of damage, scope of repair work, preliminary estimate of the repair cost, and whether a project is eligible for emergency relief funding. FHWA provides its division offices with a DDIR form that states may use to document their inspections and provide critical information necessary for determining project eligibility, such as a listing of preliminary repair cost estimates for equipment, labor, and materials for both emergency and permanent repairs. Without such information on file for some projects, we could not confirm that FHWA had that information to make emergency relief project eligibility determinations. These documents may be missing due to a lack of clear requirements from FHWA. FHWA requires documented on-site damage inspections but does not have a clear requirement for how states submit the inspections to FHWA officials or for how FHWA officials approve inspection reports; as a result, the three division offices we visited applied the Emergency Relief Manual guidelines differently. For example, none of the 28 project files we reviewed in Texas included a DDIR because FHWA's Texas division office relies instead on a "program of projects," which is a spreadsheet of all projects requesting emergency relief funds. In response to a draft version of this report, FHWA's Office of Program Administration explained that state departments of transportation may use any format to submit the data necessary for FHWA to make an eligibility determination. FHWA's Texas division officials stated that they find the program of projects useful and believed it to be an FHWA requirement; however, we found that the Emergency Relief Manual guidance was ambiguous and did not directly state that this document can be used in place of DDIRs. One section of the Emergency Relief Manual indicates that the state department of transportation is to submit the program of projects to the FHWA division office, but it also states that the program of projects should relate the damage to that described in the DDIRs. Furthermore, the manual suggests in an appendix that the program of projects is actually a package of all DDIRs resulting from the detailed damage inspections. In addition, our file review found that the project descriptions in the program of projects did not always provide the detailed information regarding damages and proposed repairs outlined in the Emergency Relief Manual and found on a DDIR. For example, for one Texas project totaling close to $1.7 million in both emergency and permanent repairs, the project description was the same for both emergency and permanent repairs and did not indicate what specific repair activities were conducted for each repair type. Differentiation between emergency and permanent repairs is important because emergency repairs are eligible for a higher federal share and do not require prior FHWA authorization.
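Conceptually, the review's test for damage-inspection documentation is a required-fields check against the elements a DDIR is meant to capture. The sketch below is illustrative only; the field names are assumptions based on the elements described in the manual (damage description, repair scope, separate emergency and permanent cost estimates), not FHWA's actual form layout.

# Hypothetical completeness check for the damage-inspection information a DDIR captures.
# Field names are illustrative, not FHWA's form layout.

REQUIRED_FIELDS = [
    "damage_description",
    "repair_scope",
    "emergency_repair_estimate",
    "permanent_repair_estimate",
    "inspection_date",
]

def missing_fields(project_file):
    return [f for f in REQUIRED_FIELDS if not project_file.get(f)]

file_from_review = {
    "damage_description": "Roadway shoulder washout, county road 12",
    "repair_scope": "",                       # emergency vs. permanent work not distinguished
    "emergency_repair_estimate": 850_000,
    "permanent_repair_estimate": None,
    "inspection_date": "2009-10-14",
}

print(missing_fields(file_from_review))
# ['repair_scope', 'permanent_repair_estimate']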
Without documentation showing a clear distinction between the emergency and permanent repairs—information that should be identified and documented on a DDIR per program guidance—we could not determine the basis for FHWA’s decision that this project met the eligibility requirements for both repair types. Overall, we found the program of projects was less useful than the DDIR for evaluating the full range of information necessary to determine the basis for FHWA’s eligibility determinations. We found that about half of the projects in our sample (42 of 83) did not include repair cost estimates. The Emergency Relief Manual states that at a minimum the division office’s project file should contain copies of the FHWA field engineer’s assessments on damage and estimates of cost. Officials in each of the FHWA division offices that we visited reported that the state’s department of transportation is responsible for preparing repair cost estimates, but that FHWA area engineers also conduct some on-site inspections to verify the cost estimates provided. In total, 42 projects in our sample did not include any repair cost estimates; thus, we could not confirm that FHWA officials had this information to make eligibility determinations for those projects. For example, a portion of two projects in our sample for emergency and permanent repairs was to remove sand from drainage ditches and was initially approved by the FHWA Texas division office for reimbursement of up to $1.3 million, although the project file included no repair cost estimate for any of the work associated with the project. Additionally, no information was available in the project file to explain the FHWA Texas division office’s decision to later approve a nearly 40 percent increase from $1.3 million to the final approved amount of $1.85 million. In responding to a draft of this report, DOT stated that the cost of the project increased because more sand was removed from the drainage ditches than originally estimated. However, no documentation of this change was included in FHWA’s project files. FHWA officials reported that the division office in Texas reviews a sample of preliminary cost estimates based on risk, among other factors, prior to making any eligibility decisions. According to the officials, FHWA’s Texas division office reviewed preliminary cost estimates of at least 10 of the 30 projects included in our file review before determining eligibility. The officials also reported that this sampling approach is consistent with FHWA’s stewardship agreement with TXDOT and the fact that states have assumed oversight responsibility for design and construction of many federal-aid highway projects, including emergency relief projects. FHWA also reported that TXDOT’s oversight responsibilities do not extend to determining whether particular projects are eligible for federal funds. Furthermore, the Emergency Relief Manual states that Emergency Relief Program eligibility determinations reside with FHWA, and estimated repair costs should be documented to determine eligibility. As such, the practice of reviewing a sample of preliminary cost estimates does not appear to be consistent with the requirements in the Emergency Relief Manual, and as a result, we could not determine the basis of FHWA’s eligibility decisions for those project cost estimates it did not review. We found other cases in which cost increases were not documented according to the internal policies established by each of the division offices we visited. 
In New York and Texas, FHWA division officials stated they require additional documentation to justify cost increases of 25 percent or more. In Washington state, FHWA division office officials stated they require additional documentation if costs increase by 10 percent or more. Yet 14 percent of the project files we reviewed (12 of 83) showed total cost increases that exceeded the limits established by the three division offices, and no additional documentation was on file to support the increases. The majority of the emergency repair project files that we reviewed did not include documentation demonstrating that emergency repairs were completed within 180 days from the event to be eligible for 100 percent federal reimbursement. Fifty-eight of the 83 projects in our review included emergency repairs approved to receive 100 percent federal funding reimbursement if repairs were completed within 180 days of the event occurrence. However, 39 of the 58 (67 percent) did not have documentation on file to show the completion date of those repairs (see table 3). In total, only 14 of 58 (24 percent) emergency repair projects provided a completion date that was within 180 days of the event's occurrence. For the majority (39 of 58) of projects, we were unable to confirm whether the emergency repairs were completed within 180 days and whether these projects were eligible to receive 100 percent federal reimbursement. (Emergency repairs must be completed within 180 days from the event to be eligible for 100 percent federal funding; see 23 U.S.C. § 120(e) and the FHWA regulation at 23 C.F.R. § 668.107(a).) More broadly, FHWA lacks a standardized process for verifying the completion of emergency repairs within 180 days on projects for which it does not exercise full oversight. By law, states assume oversight responsibility for the design and construction of many federal-aid highway projects, including the vast majority of emergency relief projects in the three divisions we visited. As such, the states—rather than FHWA—were responsible for conducting final inspections of emergency relief projects. States are required to conduct a final inspection for all federal-aid highway projects under state oversight, and these inspections could be useful to determine federal share eligibility of emergency repairs if they provide project completion dates. While officials in each of the three state departments of transportation told us that they conduct final inspections of emergency repairs, we found only two final inspection reports prepared by states in FHWA's records to confirm the completion of emergency repairs within the required time frame. In addition, when we reviewed final inspection reports from one of the state departments of transportation in our review, we were frequently unable to verify completion dates. Specifically, 11 of the 12 final inspections performed by officials at the New York State Department of Transportation for projects in our review did not include project completion dates. Although the Emergency Relief Manual states that FHWA division offices reserve the right to conduct a final inspection of any emergency relief project, only the FHWA Texas division reported conducting spot inspections for a sample of emergency relief projects. In commenting on a draft of this report, DOT stated that the FHWA New York state division office uses other means to verify completion of emergency repairs within 180 days.
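Where a completion date is documented, the share determination itself is mechanical: emergency repairs completed within 180 days of the event may receive 100 percent federal funding, while other work receives the normal pro rata share. The sketch below is a simplified, hypothetical illustration of that rule; the dates are invented, and 86.5 percent is used only because it is the pro rata share cited for one project later in this report.

# Minimal sketch of the 180-day federal-share rule (23 U.S.C. § 120(e)). Dates and the
# pro rata share are illustrative assumptions; the review's core problem was that
# completion dates were often not documented at all.

from datetime import date

def federal_share(repair_type, event_date, completion_date, pro_rata_share=0.865):
    """Return 1.0 only for emergency repairs with a documented completion within 180 days."""
    if repair_type == "emergency" and completion_date is not None:
        if (completion_date - event_date).days <= 180:
            return 1.0
    return pro_rata_share

event = date(2009, 4, 3)
print(federal_share("emergency", event, date(2009, 9, 1)))    # 1.0   (151 days after the event)
print(federal_share("emergency", event, None))                # 0.865 (completion not documented)
print(federal_share("permanent", event, date(2009, 7, 1)))    # 0.865 (permanent repair work)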
According to DOT, the state often submits its DDIRs to FHWA after emergency repairs are completed, which allows FHWA to verify the eligibility and completion of an emergency repair when it reviews the DDIR. DOT reported that the FHWA division office does not sign the DDIR until it confirms the work is completed, and that its signature indicates verification that the work was performed within the required time frame. However, our file review found that 14 of the 18 emergency repair projects in New York that were approved for 100 percent federal funding did not have an FHWA signature on the DDIR. In addition to a lack of documentation, we found eight instances in which permanent repair projects may have incorrectly received 100 percent federal share reimbursement. According to the Emergency Relief Manual, absent specific legislative approval, permanent repair work is not to be considered emergency repair work even if it is completed within 180 days. However, we found instances in which projects were determined to be permanent repairs based on information in the project files, but were later authorized to receive 100 percent federal share. For example, in one project in our review, FHWA's Washington state division office approved permanent repairs to a state highway for $2.6 million in estimated damages caused by a landslide. Our review of FHWA financial records for this project indicates that FHWA later authorized a federal reimbursement of $5.3 million, roughly 99 percent of the total project cost of nearly $5.4 million. FHWA Washington state division officials reported that this project was considered to be a permanent repair performed as an incidental part of emergency repair work. However, the project files did not include any emergency repair work to accompany the approved permanent repairs. According to these officials, the FHWA Washington state division interpreted the 2003 version of the Emergency Relief Manual as allowing incidental permanent work to be funded at 100 percent federal share either with or as emergency repair work. However, the manual states that during the 180-day period following the disaster, permanent repair work is reimbursed at the normal pro rata share unless performed as an incidental part of emergency repair work. As such, based on the program guidance, this project should have been reimbursed at 86.5 percent federal share. A primary purpose of the Emergency Relief Program is to restore highway facilities to predisaster conditions, not to provide improvements or added protective features to highway facilities. However, according to FHWA regulations and the Emergency Relief Manual, such improvements may be considered eligible betterments if the state provides economic justification, such as a benefit-cost analysis that weighs the cost of the betterment against the risk of eligible recurring damage and the cost of future repair through the Emergency Relief Program. In our file review, we identified two areas of concern regarding betterments, including instances of missing documentation of benefit-cost analyses: Lack of documentation of required benefit-cost analyses. Six of the 15 projects (40 percent) identified as betterments in our review did not contain the required benefit-cost analyses in their files to justify the betterment. As a result, we were unable to determine the basis on which FHWA approved these six betterments.
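The economic justification described above weighs a betterment's cost against the eligible repair costs it is expected to avoid. The sketch below is a simplified, hypothetical version of that comparison: it assumes an annual damage probability and a fixed horizon, omits discounting, and uses invented figures solely to show the structure of the calculation, not the analysis the program actually requires.

# Hypothetical sketch of the benefit-cost comparison for a proposed betterment:
# betterment cost vs. expected future Emergency Relief repair costs avoided.

def expected_avoided_repairs(annual_damage_probability, repair_cost_if_damaged, horizon_years):
    """Expected eligible repair costs over the horizon if the betterment is NOT built."""
    return annual_damage_probability * repair_cost_if_damaged * horizon_years

def betterment_justified(betterment_cost, annual_damage_probability,
                         repair_cost_if_damaged, horizon_years):
    avoided = expected_avoided_repairs(annual_damage_probability,
                                       repair_cost_if_damaged, horizon_years)
    ratio = avoided / betterment_cost
    return ratio, ratio > 1.0

# Example: a $1.6 million protective improvement vs. a 20 percent annual chance of a
# $600,000 eligible repair over a 20-year horizon.
ratio, justified = betterment_justified(1_600_000, 0.20, 600_000, 20)
print(round(ratio, 2), justified)   # 1.5 True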
We also found one instance in which the benefit-cost analyses used to justify an approved betterment did not meet Emergency Relief Program requirements. Specifically, FHWA’s New York division office approved a betterment of almost $1.6 million to repair and improve a damaged roadway and shoulder caused by an April 2007 storm. However, we found that the report prepared to justify the betterment did not weigh the cost of the proposed betterment against the risk of future damages and repair costs to the Emergency Relief Program, as required by program regulations. Consequently, we were unable to determine the basis on which FHWA approved the $1.6 million betterment. Lack of documentation indicating whether projects include betterments. We found that it was often difficult to determine which projects included betterments, as FHWA lacks a standard process for where and how betterments should be identified in project documentation. The Emergency Relief Manual states that betterments must receive prior FHWA approval and that further development of contemplated betterments should be accomplished with FHWA involvement, necessitating that proposed betterments are specifically identified. We found eight project files with indications that the projects may have included betterments that were not identified explicitly in project documentation or by FHWA officials. For example, following the completion of emergency repairs to remove debris and protect a bridge against erosion caused by a landslide, the FHWA Washington state division office approved an additional $3.7 million in permanent repairs in response to continued erosion and movement of the hillside. The documentation in the project file indicated that this permanent work was added to stabilize the slide area in anticipation of future flooding. According to officials from the FHWA Washington state division, this slide stabilization project was a betterment, but the project file did not contain documentation to indicate that this project was in fact a betterment. FHWA provides considerable discretion to its division offices to tailor the Emergency Relief Program within states and lacks a standard mechanism to specifically identify whether a project includes a betterment. FHWA’s Office of Asset Management has developed an Economic Analysis Primer for FHWA division offices to use when evaluating benefit-cost analyses for federal-aid program projects. However, neither the Emergency Relief Manual nor the Economic Analysis Primer provide sample benefit-cost analyses or specific guidance on what information should be included in the benefit-cost analysis to demonstrate that the proposed betterment will result in a savings in future recurring repair costs under the Emergency Relief Program. Because we had found betterments without documentation of the required benefit-cost analyses on file and identified possible betterments that were not explicitly identified as such, we could not confirm that federal funds were being reimbursed in accordance with the requirements of the Emergency Relief Program. Further, absent specific guidance for identifying and approving betterments to its division offices, FHWA cannot be assured that the Emergency Relief Program is being administered consistently. Conclusions The federal government plays a critical role in providing financial assistance to states in response to natural disasters and other catastrophic events. 
Given the costs of these events and the significant fiscal challenges facing both states and the federal government, it is increasingly necessary that federal financial support be delivered in an effective, transparent, and accountable manner so that limited funds are put to their best use. FHWA’s stewardship of the Emergency Relief Program could be better structured to meet that necessity. First, because some emergency relief projects can be delayed for many years due to environmental or community concerns and projects can grow significantly in scope and cost, the federal government faces the risk of incurring long-term costs for such projects. FHWA has limited tools to control its exposure to the costs of older events and ensure that as projects grow in scope and cost that they do not go beyond the original intent of the program, which is to assist states to restore damaged facilities to their predisaster conditions. Once an event has been approved for emergency relief by FHWA, the Emergency Relief Program as currently structured does not limit the time during which states may request additional funds and add projects, which increase the size of FHWA’s backlog list. Because Emergency Relief Program funding is not subject to the annual limits of the regular federal-aid highway program, states have an incentive to seek as much emergency relief funding as possible. Consequently, without reasonable time limits for states to submit funding requests for such older events, FHWA’s ability to anticipate and manage future costs to the Emergency Relief Program is hindered, as is Congress’ ability to oversee the program. Furthermore, without specific action by FHWA to address the recommendation from our 2007 report that it revise its emergency relief regulations to tighten eligibility criteria, the Emergency Relief Program will continue to face the risk of funding projects with scopes that have expanded beyond the goal of emergency relief and may be more appropriately funded through the regular federal-aid highway program. Second, while FHWA has taken some important steps in response to our 2007 report to manage program funding by withdrawing unobligated balances from states, it faces challenges in tracking allocations that have been provided to states. In particular, because FHWA division offices have allowed states to transfer unobligated allocations from an existing event to new events, and because FHWA headquarters is not tracking which divisions have done so, FHWA headquarters does not have the information needed to identify and withdraw all unneeded funds. In addition, without time frames to expedite the close-out of completed emergency relief projects, FHWA lacks useful information to help determine whether obligated but unexpended program funds are no longer needed and could be deobligated. Finally, the fact that we could not determine the basis of FHWA’s eligibility decisions in three states on projects costing more than $190 million raises questions about whether emergency relief funds are being put to their intended use and whether these issues could be indicative of larger problems nationwide. While federal law allows states to assume oversight over design and construction of much of the federal-aid highway program, including many emergency relief projects, FHWA is ultimately responsible for ensuring that federal funds are efficiently and effectively managed and that projects receiving scarce emergency relief funds are in fact eligible. 
This is especially important in light of the fact that emergency relief funds have been derived principally from general revenues in recent years and that the funds that states receive are above and beyond the funding limits for their regular federal-aid highway program funds. Without clear and standardized procedures for divisions to make and document eligibility decisions—including documenting damage inspections and cost estimates, verifying and documenting the completion of emergency repair projects within the required time frame, and evaluating information provided to justify proposed betterments—FHWA lacks assurance that only eligible projects are approved, and that its eligibility decisions are being made and documented in a clear, consistent, and transparent manner. Recommendations for Executive Action To improve the accountability of federal funds, ensure that FHWA’s eligibility decisions are applied consistently, and enhance oversight of the Emergency Relief Program, we recommend that the Secretary of Transportation direct the FHWA Administrator to take the following four actions: Establish specific time frames to limit states’ ability to request emergency relief funds years after an event’s occurrence, so that FHWA can better manage the financial risk of reimbursing states for projects that have grown in scope and cost. Instruct FHWA division offices to no longer permit states to transfer unobligated allocations from a prior emergency relief event to a new event so that allocations that are no longer needed may be identified and withdrawn by FHWA. Establish clear time frames for states to close out completed projects in order to improve FHWA’s ability to assess whether unexpended program funds are no longer needed and could be deobligated. Establish standardized procedures for FHWA division offices to follow in reviewing emergency relief documentation and making eligibility decisions. Such standardized procedures should include: clear requirements that FHWA approve and retain detailed damage inspection reports for each project and include detailed repair cost estimates; a requirement that division offices verify and document the completion of emergency repairs within 180 days of an event to ensure that only emergency work completed within that time frame receives 100 percent federal funding; and consistent standards for approving betterments, including guidance on what information the benefit-cost analyses should include to demonstrate that the proposed betterment will result in a savings to the Emergency Relief Program, and a requirement that FHWA approval of funding for betterments be clearly documented. Agency Comments and Our Evaluation We provided a draft of this report to DOT for review and comment. DOT officials provided technical comments by email which we incorporated into the report, as appropriate. In response to our finding that the Emergency Relief Program lacks a time limit for states to submit emergency relief funding requests, and our recommendation to establish specific time frames to limit states’ ability to request emergency relief funds years after an event’s occurrence, DOT noted that the program does include general time frames for states to submit an application and have work approved. 
We incorporated this information into the final report; however, since a state’s list of projects may be amended at any time to add new work, we continue to believe that FHWA’s ability to anticipate and manage future costs to the Emergency Relief Program is hindered absent specific time frames to limit states’ requests for additional funds years after an event’s occurrence. Such time frames would provide FHWA with an important tool to better manage program costs. DOT also commented that its ability to control the costs of some of the projects cited in the report that have grown in scope and cost over the years is limited in some cases by the fact that DOT received statutory direction from Congress to fund these projects. For example, Congress directed FHWA to provide100 percent federal funding for all emergency relief projects resulting from Hurricane Katrina in 2005. We incorporated additional information to recognize this statutory direction; however, a determination by Congress that a particular event should qualify for relief under the Emergency Relief Program, or for other individual actions, does not relieve FHWA of its stewardship and oversight responsibilities. Except as Congress otherwise provides, this includes its responsibility to determine whether enhancements to projects or betterments are consistent with its regulations and the intent of the Emergency Relief Program to restore damaged facilities to predisaster conditions. We continue to believe that, as a steward of public funds, FHWA generally has the discretion to take reasonable steps to limit the federal government’s exposure to escalating costs from projects that grow in scope over time. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution of this report until 30 days from the report date. At that time, we will send copies of this report to the appropriate congressional committees and the Secretary of Transportation. In addition, this report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2834 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology To identify Emergency Relief Program funding trends since our 2007 report, we reviewed federal statutes, including supplemental appropriations to the Emergency Relief Program made since 2007, and Federal Highway Administration (FHWA) documentation on annual funding authorizations to the program. We also reviewed FHWA data on emergency relief funds allocated to states in response to emergency relief events from fiscal years 2007 through 2010, as provided by FHWA’s Office of Program Administration. We interviewed FHWA officials in the Office of Program Administration to gather specific information on how data on allocations was collected and we also reviewed FHWA financial data on total allocations to states from FHWA’s fiscal management information system (FMIS). We interviewed officials from FHWA Federal Lands Highway, FHWA’s North Dakota Division Office, and the North Dakota state department of transportation concerning funding and project activities for the Devils Lake, North Dakota, emergency relief projects. 
To gather additional information on the Devil’s Slide project in California, we interviewed the FHWA California division office and reviewed information on the estimated project costs. To identify key changes to the Emergency Relief Program implemented in response to concerns raised in our 2007 report, we reviewed recommendations made to FHWA in our 2007 report and FHWA Emergency Relief Program regulations and guidance, including FHWA’s Emergency Relief Manual, as revised in 2009. We compared information in the current version of the Emergency Relief Manual with information in the previous version to determine which elements were revised. We interviewed FHWA officials in the Office of Program Administration to determine why specific changes were made, and we interviewed officials in three FHWA division offices to determine how program changes were implemented. To corroborate information provided by FHWA regarding its process of withdrawing unused Emergency Relief Program funds from states, we reviewed FMIS data on the emergency relief funds that were allocated among all states and territories, obligated to specific projects, and the remaining unobligated balance for all active Emergency Relief Program codes as of May 31, 2011. To determine other amounts of program funding that remained unused, we reviewed data in FMIS on the amount of emergency relief funding obligated to specific projects and expended by all states and territories for events occurring from fiscal years 2001 through 2010. We provided FHWA officials with our methodology for gathering data from FMIS to ensure that our data queries were accurate. To ensure the reliability of data collected in FMIS we interviewed FHWA officials on the procedures used by FHWA and states’ departments of transportation to enter and verify financial information entered into FMIS. We found these data to be sufficiently reliable for our purposes. To determine the extent to which selected emergency relief projects were awarded in compliance with program eligibility requirements, we reviewed federal statutes and regulations, and FHWA guidance on emergency relief eligibility requirements. We selected a nongeneralizable sample of state department of transportation and FHWA division offices in three states—New York, Texas, and Washington state. The states selected are not representative of the conditions in all states, the state departments of transportation, or FHWA division offices, but are intended to be examples of the range of practices and projects being funded by the Emergency Relief Program across the country. These states were selected based on several criteria: 1. The overall amount of emergency relief funding allocated to a state from fiscal years 2007 through 2010, to identify those states that were allocated the most funding (at least $15 million) over that period, based on allocation data provided by FHWA headquarters. 2. Frequency of funding requests to identify those states that requested funds for three or more fiscal years from 2007 through 2010. 3. The occurrence of an eligible emergency relief event since FHWA updated its Emergency Relief Manual in November 2009. For our purposes, we used emergency relief eligible events beginning October 1, 2009, as a proxy for identifying states with emergency relief events since the November 2009 manual update. A total of 10 states met all three criteria. 
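The three selection criteria can be expressed as a simple filter over per-state summary data. The sketch below uses hypothetical records to illustrate the screen; the actual selection relied on FHWA allocation data and the November 2009 date of the revised Emergency Relief Manual.

# Hypothetical illustration of the three state-selection criteria described above.

from datetime import date

def meets_criteria(state):
    enough_funding = state["allocated_fy2007_2010"] >= 15_000_000
    frequent_requests = state["fiscal_years_with_requests"] >= 3
    recent_event = any(d >= date(2009, 10, 1) for d in state["event_dates"])
    return enough_funding and frequent_requests and recent_event

candidates = [
    {"name": "State X", "allocated_fy2007_2010": 40_000_000,
     "fiscal_years_with_requests": 4, "event_dates": [date(2008, 6, 1), date(2010, 3, 15)]},
    {"name": "State Y", "allocated_fy2007_2010": 9_000_000,
     "fiscal_years_with_requests": 3, "event_dates": [date(2010, 5, 2)]},
]

print([s["name"] for s in candidates if meets_criteria(s)])   # ['State X']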
We narrowed our selection down by eliminating those states that experienced outlier events, such as North Dakota's recurring basin flooding at Devils Lake and the catastrophic failure of the Interstate 35 West bridge in Minnesota. We judgmentally selected New York, Texas, and Washington state to reflect a geographic dispersion of states. We reviewed a sample of emergency relief project files in the FHWA division office in each of these states to determine whether the project files included required or recommended documentation cited in federal statute, regulations, and FHWA program guidance. Such documentation included the President or state governors' proclamation of a disaster, detailed damage inspection reports, cost estimates for repairs, photographs of the damage, and other information. Across the three division offices, we selected a nongeneralizable sample of 88 Emergency Relief Program files out of a total universe of 618 project files for emergency relief projects approved by FHWA in those states from fiscal years 2007 through 2010. Among the 88 projects in our review, 5 projects had been withdrawn by states because FHWA had determined them ineligible for emergency relief funds or they had been reimbursed through a third-party insurance settlement, bringing the total number of projects reviewed to 83. The project files we reviewed represented approximately 67 percent of all emergency relief funds obligated to those states during that time period. Those projects were selected based on the following criteria:
1. All projects with more than $1 million in obligated federal funds between fiscal years 2007 and 2010, including a mix of active and closed projects and various event or disaster types.
2. Projects with more than $1 million in obligated federal funds for events from fiscal years 2001 through 2006 on the list of formal emergency relief funding requests as of March 7, 2011, that were either currently active or were completed more than five years after the event occurred.
3. Projects that had other characteristics that we determined to warrant further review, such as events with $0 amounts listed in FHWA's FMIS database for total cost or which had expended relatively small amounts of funding compared with the obligated amounts in FMIS.
Prior to our site visits, we requested that the division offices provide all documentation they maintain for each of the projects selected in our sample. We reviewed all the documentation provided during our site visits, and requested follow-up information as necessary. In conducting our file review, a GAO analyst independently reviewed each file and completed a data collection instrument to document the eligibility documentation that was included for each file. A second reviewer independently reviewed the file to verify whether the specific information identified by the first reviewer was present in the file. The analysts met to discuss and resolve any areas of disagreement until a consensus was reached on whether the required information was included in the file. To gather additional information on the project files we reviewed and the procedures used to manage and oversee emergency relief projects, we interviewed officials in the FHWA division offices and the departments of transportation in our three selected states. We provided the results of our file review to FHWA for its comment and incorporated its responses as necessary within our analysis. 
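The file-selection criteria above can be expressed as a simple filter over project records. The sketch below is illustrative only: the field names, the sample records, and the 10 percent expenditure threshold used for the "relatively small amounts expended" criterion are hypothetical assumptions, not drawn from FHWA's FMIS schema or from GAO's actual data collection instrument, and criterion 2 is simplified to an active/closed flag.

```python
# Hypothetical illustration of the project-selection criteria; not FHWA's FMIS schema.
from dataclasses import dataclass

@dataclass
class Project:
    project_id: str
    fiscal_year: int      # fiscal year of the underlying event
    obligated: float      # federal funds obligated to the project
    expended: float       # federal funds expended to date
    total_cost: float     # total cost recorded in the database
    active: bool          # project not yet closed out

def select_for_review(projects):
    """Apply a simplified version of the three file-selection criteria."""
    selected = []
    for p in projects:
        large_recent = p.obligated > 1_000_000 and 2007 <= p.fiscal_year <= 2010
        large_older_open = (p.obligated > 1_000_000
                            and 2001 <= p.fiscal_year <= 2006
                            and p.active)
        anomalous = (p.total_cost == 0
                     or (p.obligated > 0 and p.expended / p.obligated < 0.10))
        if large_recent or large_older_open or anomalous:
            selected.append(p)
    return selected

# A small hypothetical universe of project records.
universe = [
    Project("NY-001", 2008, 2_400_000, 2_300_000, 2_500_000, False),
    Project("TX-014", 2004, 1_700_000, 400_000, 1_800_000, True),
    Project("WA-101", 2009, 350_000, 20_000, 0.0, True),  # $0 total cost flag
]
print([p.project_id for p in select_for_review(universe)])  # all three qualify
```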
Lastly, we contacted state and local audit organizations through the National Association of State Auditors, Comptrollers, and Treasurers for the three states we reviewed, as well as North Dakota, to obtain reports or analyses that were conducted on FHWA's Emergency Relief Program. None of the states in our review had conducted substantive work on the Emergency Relief Program. We conducted this performance audit from November 2010 to November 2011 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Summary of Emergency Relief Funding for Projects at Devils Lake, North Dakota Devils Lake in North Dakota lies in a large natural basin and lacks a natural outlet for rising water to flow out of the lake. Starting in the early 1990s, the lake level has risen dramatically—nearly 30 feet since 1992—which has threatened the roadways near the lake, which were built in the 1930s and 1940s when lake water levels were lower. In April 2000, FHWA issued a memorandum that authorized raising the roads at Devils Lake in response to a predicted rise in the water level of the lake that was within 3 feet of causing inundation, as forecasted by the National Weather Service or U.S. Geological Survey. This allowance to repair roadways before an event causes damage is a unique provision of the FHWA Emergency Relief Program, which otherwise funds only post-disaster repair or restoration. The basin flooding events also precipitated a related problem at Devils Lake, as some communities around the lake plugged culverts under roadways to impound rising water and protect property from flooding, which increased the roadways' risk of failure. These roads were subsequently referred to as "roads-acting-as-dams," which required additional improvements to ensure their structural integrity to serve as dams. Devils Lake projects involve multiple stakeholders, depending on the location and type of roadway. FHWA's North Dakota division office is responsible for overseeing the Emergency Relief Program projects administered by the North Dakota department of transportation. FHWA's Office of Federal Lands Highway is responsible for the oversight of the Emergency Relief on Federally Owned Roads program, which covers projects on the Spirit Lake Tribe Indian Reservation. The Central Division of Federal Lands Highway leads the overall coordination among the federal, state, and local agencies. FHWA reported that the two FHWA offices are working together to address the roads-acting-as-dams projects, which affect state highways and roads on the Spirit Lake Tribe Indian Reservation. The North Dakota department of transportation and the Spirit Lake Tribe are responsible for administering the construction projects on their respective roads. To ensure the integrity of the roads at Devils Lake, Congress included funding provisions in the Safe, Accountable, Flexible, Efficient Transportation Equity Act: A Legacy for Users (SAFETEA-LU) to raise the roadways and make improvements to roads-acting-as-dams. 
Through SAFETEA-LU, Congress authorized up to $10 million of Emergency Relief Program funds to be expended annually, up to a total of $70 million, for work in the Devils Lake region of North Dakota to address the roads-acting-as-dams situation. These funds are known as section 1937 funds for the provision in SAFETEA-LU which authorized them. In the absence of other authority, this $10 million must come out of the $100 million annual authorization of contract authority that funds the Emergency Relief Program, effectively reducing the annual emergency relief funding available to other states to $90 million. SAFETEA-LU also included language that exempted the work in the Devils Lake area from the need for further emergency declarations to qualify for emergency relief funding. According to a June 24, 2011, FHWA policy memo, the final allocation of section 1937 funds was made on March 16, 2011, and the $70 million limit has been reached. Although rising water levels at Devils Lake are expected to continue into the future, no further federal-aid highway funds are eligible to raise roads-acting-as-dams or to construct flood control and prevention facilities to protect adjacent roads and lands. Appendix III: Results of GAO’s File Review of Emergency Relief Project Documentation Available in Three FHWA Division Offices Figure 6 represents the results of our review of 88 selected project files from FHWA’s division offices in New York, Texas, and Washington state. Our data collection instrument was used to collect the values for each field during our file review, and that information was summarized and analyzed by at least two GAO analysts (see app. I for a complete discussion of our file review methodology). Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the individual named above, other key contributors to this report were Steve Cohen, Assistant Director; Hiwotte Amare; Matt Barranca; Melinda Cordero; Lorraine Ettaro; Colin Fallon; Bert Japikse; Catherine Kim; Hannah Laufe; Kelly Liptan; Scott McNulty; Josh Ormond; and Tina Won Sherman.
The Federal Highway Administration (FHWA), within the U.S. Department of Transportation (DOT), administers the Emergency Relief Program to provide funds to states to repair roads damaged by natural disasters and catastrophic failures. In 2007, GAO reported that in recent years states' annual demand for emergency relief funds often exceeded the program's $100 million annual authorization from the Highway Trust Fund and required supplemental appropriations from general revenues to address a backlog of funding requests from states. GAO recommended that FHWA tighten eligibility standards and coordinate with states to withdraw unneeded emergency relief funds, among other actions. For this report, GAO reviewed (1) Emergency Relief Program funding trends since 2007, (2) key program changes made in response to GAO's 2007 report, and (3) the extent to which selected emergency relief projects were approved in compliance with program eligibility requirements. GAO reviewed projects in New York, Texas, and Washington state, states selected based on the amount and frequency of funding allocations since 2007, among other factors. From fiscal years 2007 through 2010, the Emergency Relief Program received about $2.3 billion, of which $1.9 billion came from three supplemental appropriations, compared with about $400 million authorized from the Highway Trust Fund. FHWA allocated this funding to 42 states and 3 territories to reduce the backlog of funding requests, with $485 million in unfunded requests remaining as of June 2011. This backlog list did not include funding requests for August 2011 damages from Hurricane Irene. Because the program lacks time frames to limit states from requesting funds years after events occur, the June 2011 backlog list includes about $90 million for events that occurred prior to fiscal year 1994. Without time limits for emergency relief funding requests, FHWA's ability to anticipate and manage future program costs is hindered. In response to GAO's 2007 report, FHWA withdrew about $367 million of unobligated emergency relief funds from states and redistributed most of this funding for other emergency relief needs. However, additional funding remains unused, including (1) at least $63 million allocated to states before fiscal year 2007 that has yet to be obligated to projects and (2) $341 million obligated between fiscal years 2001 and 2006 that remains unexpended. Due to a lack of time frames for states to close out completed projects, FHWA lacks project status information to determine whether unexpended funding is no longer needed and could be deobligated. FHWA has not addressed GAO's 2007 recommendation to revise its regulations to limit the use of emergency relief to fully fund projects that have grown in scope and cost as a result of environmental or community concerns. The Emergency Relief Program faces the continued risk of escalating costs due to projects that have grown in scope beyond the program's goal of restoring damaged facilities to predisaster conditions. GAO's review of 83 emergency relief project files in three FHWA state offices found many instances of missing or incomplete documentation--as such, GAO was unable to determine the basis by which FHWA made many eligibility determinations. 
For example, about half of the project files did not include required repair cost estimates, and 39 of 58 (67 percent) emergency repair projects approved for 100 percent federal funding did not contain documentation of completion within 180 days--a requirement for states to receive 100 percent federal funding. FHWA lacks clear requirements for how states submit and FHWA approves key project documentation, which has resulted in FHWA state offices applying eligibility guidelines differently. Establishing standardized procedures for reviewing emergency relief documentation and making eligibility decisions would provide greater assurance that projects are in fact eligible and that FHWA makes eligibility determinations consistently and transparently.
Background Capital reassures an institution’s depositors, creditors, and counterparties that unanticipated losses or decreased earnings will not impair a financial institution’s ability to repay its creditors or protect the savings of depositors. In general, capital represents the share of an institution’s assets with no obligation for repayment, although this condition varies for less traditional forms of capital such as some hybrid instruments. Because capital generally does not have to be repaid, it can serve as a buffer against declines in asset values without subjecting an institution to default or insolvency. Capital typically is provided by a banking institution’s owners or through earnings that are retained by the firm. When institutions experience financial losses, the value of the firm represented by the owner’s stake (including retained earnings) is reduced first, thus protecting bank depositors and other creditors from loss. Capital instruments vary in structure and their ability to absorb loss while preventing a banking institution from defaulting on its contractual repayment obligations. The strongest form of capital is common equity (or common stock), which carries no repayment obligation for principal or dividends, has the lowest payment priority in bankruptcy, and has no maturity date. Debt instruments are a weaker form of capital funding than common equity, as they require periodic interest payments and repayment of principal at maturity. Debt also has a higher claim than common equity in bankruptcy. Some debt instruments may qualify as capital if they contain certain equitylike characteristics such as a long maturity, subordination to other creditors, or ability to defer payments. Some hybrid instruments fall into this category, while others share more of the characteristics of common equity. Three federal regulators oversee what we refer to as banking institutions in this report (that is, banks, savings associations (thrifts), and their holding companies). The Federal Reserve is the primary regulator for state-chartered member banks (i.e., state-chartered banks that are members of the Federal Reserve System) and bank and thrift holding companies. OCC is the primary regulator of federally chartered banks and thrifts, and FDIC is the primary regulator for state-chartered nonmember banks (i.e., state-chartered banks that are not members of the Federal Reserve System) and state-chartered thrifts. In addition, FDIC insures the deposits of all federally insured banks, generally up to $250,000 per depositor. Prior to July 21, 2011, OTS was the primary regulator of federally and state-chartered thrifts and thrift holding companies. Because of capital’s important role in absorbing losses, promoting confidence, and protecting depositors, federal banking law requires banking institutions to maintain adequate capital. Federal banking regulators set the minimum capital levels to ensure that the institutions they regulate maintain adequate capital. Federal law also authorizes banking regulators to take a variety of actions to ensure capital adequacy, including informal and formal enforcement actions. In implementing the statutory requirements, regulators generally expect institutions to hold capital at levels higher than regulatory minimums, with specific expectations based on institutions’ risk profiles. 
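A minimal numerical sketch of the loss-absorption order described above follows; the balance-sheet layers and dollar figures are hypothetical illustrations, not values from the report.

```python
# Illustrative loss-absorption waterfall: equity absorbs losses first,
# then subordinated debt, protecting depositors. Figures are hypothetical.
def absorb_loss(loss, layers):
    """Apply a loss to funding layers in order, returning each layer's remaining value."""
    remaining = []
    for name, amount in layers:
        hit = min(loss, amount)
        remaining.append((name, amount - hit))
        loss -= hit
    return remaining, loss  # residual loss would fall on remaining creditors or the insurer

layers = [("common equity", 8.0), ("subordinated debt", 3.0), ("deposits", 89.0)]
for total_loss in (5.0, 10.0):
    after, unabsorbed = absorb_loss(total_loss, layers)
    print(total_loss, after, unabsorbed)
# A 5 loss is borne entirely by equity; a 10 loss wipes out equity and begins
# to impair subordinated debt, while deposits remain untouched.
```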
The United States, along with nearly all other major economies, agrees to comply with international capital standards set by the Basel Committee on Banking Supervision (the Basel Committee). The Basel Committee, which comprises representatives of central banks and banking regulators from 26 countries, issued its first set of international guidelines on bank capital (commonly known as “Basel I”) in 1988. These guidelines included standards for the amount of capital banks should hold and the nature of the capital instruments that banks could count toward meeting these amounts. In 1998, the Basel Committee specified the characteristics of instruments that would either qualify as the highest quality (Tier 1) capital or would not meet this standard but could be eligible as lesser-quality (Tier 2) regulatory capital. For example, in order for instruments to be considered Tier 1 capital, the Basel Committee stated that instruments would need to meet certain criteria including deferability of dividends on a noncumulative basis, ability to absorb losses before the bank entered bankruptcy, permanence, and discretion over the amount and timing of distributions. Common equity best meets all of the qualifications of Tier 1 capital and thus should comprise the predominant share of Tier 1 under Basel guidelines. The Basel Committee standards have been revised several times since 1988, including most recently with the Basel III reforms released in 2010. Banking institutions subject to the Basel agreements are due to begin implementing some of the recently revised standards in 2013. Regulatory Treatment of Hybrid Instruments While definitions of hybrid capital vary, in this report we use “hybrid capital” and “hybrid instruments” to refer to those instruments that comprise what the Federal Reserve calls “restricted core capital elements”: cumulative perpetual preferred stock, trust preferred securities, certain types of minority interest, and mandatory convertible trust preferred securities (see table 1). These instruments include some but not all of the characteristics that the Basel Committee identified in 1998 as necessary for Tier 1 capital. Nonetheless, the Federal Reserve has allowed bank holding companies to include these instruments as Tier 1 capital. Specifically, the Federal Reserve permits restricted core capital elements in Tier 1 capital in an amount of up to 25 percent of a bank holding company’s total core capital elements after deducting goodwill. Other than these hybrid instruments, the Federal Reserve subjects bank holding companies to capital requirements that are generally similar to those for depository institutions. For example, in addition to common equity, all U.S. banking regulators recognize noncumulative perpetual preferred securities as a component of Tier 1 capital. Similarly, minority interests relating to common equity or noncumulative perpetual preferred securities are recognized as Tier 1 capital. As figure 1 illustrates, provisions in the Dodd-Frank Act require banking regulators to establish rules that will effectively subject bank and thrift holding companies to regulatory capital requirements that are at least as stringent as those applicable to insured depository institutions, thereby effectively eliminating hybrid capital instruments from Tier 1 capital. The restrictions will apply immediately to capital instruments issued on or after May 19, 2010. 
Only bank holding companies with less than $500 million in total assets are exempt from the Dodd-Frank Act hybrid capital exclusion. Bank holding companies with between $500 million and $15 billion in total assets and thrift holding companies with less than $15 billion in total assets will be allowed to continue including hybrid instruments issued prior to May 19, 2010, in Tier 1 capital but may not use any hybrid instruments issued on or after that date. Bank and thrift holding companies with more than $15 billion in total assets will be required to phase out all of their Tier 1 hybrid capital issued prior to May 19, 2010, over a 3-year period from 2013 to 2016. The act subjects thrift holding companies to the same provisions as bank holding companies, except for thrift holding companies with less than $500 million in total assets. These institutions are treated the same as bank holding companies with between $500 million and $15 billion in total assets. Trust preferred securities have been the most common form of Tier 1 hybrid instrument among bank holding companies. The Internal Revenue Service (IRS) typically treats these securities as tax deductible, making them cheaper than other forms of Tier 1 capital. As figure 2 shows, the holding company establishes a special-purpose entity, usually in the form of a trust that holds all of the common equity. The trust issues undated cumulative preferred securities to outside investors and uses the proceeds to purchase a deeply subordinated unsecured note issued by the bank holding company. Thus, the issuing trust serves as a conduit for exchanging funds between the bank holding company and the preferred equity investors. The subordinated note issued by the bank holding company is the trust’s sole asset and is senior only to the bank holding company’s common and preferred equity. The note has terms that generally replicate those of the trust preferred securities, except that the junior subordinated note has a fixed maturity of at least 30 years. Most trust agreements provide for the trust to terminate when the subordinated note matures. When the trust terminates, the trust preferred securities must be redeemed. The trust collects interest payments on the subordinated note from the bank holding company that it uses to pay dividends to holders of the trust preferred securities. The bank holding company can treat the interest payments on the subordinated note as a tax-deductible interest expense. The terms of the trust preferred securities allow dividends to be deferred for at least 5 years without creating an event of default or acceleration of the principal and accrued interest. After the 5-year dividend deferral period, if the trust fails to pay the cumulative dividend amount owed to investors, an event of default and acceleration occurs, giving investors the right to take the subordinated note issued by the bank holding company. At the same time, the bank holding company’s obligation to pay principal and interest on the underlying junior subordinated note accelerates, and the note becomes immediately due and payable. Hybrid Instruments, While Popular with Holding Companies, Do Not Absorb Losses Well Bank Holding Companies Relied on Trust Preferred Securities for Tier 1 Hybrid Capital Many bank holding companies have used hybrid instruments— predominantly trust preferred securities—to help meet Tier 1 capital requirements. 
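One reason for this popularity is the tax treatment described above: because the interest on the underlying subordinated note is generally deductible while dividends on common equity are not, trust preferred securities have carried a lower after-tax cost to issuers. The back-of-the-envelope comparison below is a sketch only; the coupon, tax rate, and cost-of-equity figures are hypothetical assumptions, not values from the report.

```python
# Hypothetical after-tax funding cost comparison; rates are illustrative only.
def after_tax_cost(pretax_rate, tax_rate, deductible):
    """Effective annual cost of a capital instrument to the issuer."""
    return pretax_rate * (1 - tax_rate) if deductible else pretax_rate

trust_preferred = after_tax_cost(0.07, 0.35, deductible=True)   # deductible interest on the note
common_equity = after_tax_cost(0.11, 0.35, deductible=False)    # dividends are not deductible
print(f"trust preferred: {trust_preferred:.2%}, common equity: {common_equity:.2%}")
# 4.55% vs 11.00% under these assumed rates, which is why tax deductibility has
# made hybrids a cheaper way to satisfy Tier 1 requirements, all else equal.
```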
In recent years, approximately two-thirds of all top-level bank holding companies that are subject to capital requirements (generally those with more than $500 million in total assets) have included hybrid instruments in their Tier 1 capital (see fig. 3). For example, December 2010 data filed with the Federal Reserve showed that 85 percent of bank holding companies with more than $10 billion in total assets and 100 percent of bank holding companies with over $100 billion in total assets included hybrid instruments in their Tier 1 capital. These hybrid instruments had a total value of $157 billion, representing 13 percent of all bank holding company Tier 1 capital. Of the total $157 billion in Tier 1 hybrid instruments, trust preferred securities accounted for $128 billion (82 percent). For bank holding companies with over $100 billion in total assets, trust preferred securities made up 97 percent of the total value of Tier 1 hybrid instruments. Excluding Tier 1 Hybrid Capital Likely Will Have Limited Negative Effects With the exclusion of hybrid instruments from Tier 1, few banking institutions will fall below minimum amounts of regulatory capital, and greater reliance on common equity should improve the overall safety and soundness of banking institutions. To identify and resolve problems at banks and thrifts, the prompt corrective action provisions require depository institution regulators to classify insured depository institutions into one of five capital categories—well capitalized, adequately capitalized, undercapitalized, significantly undercapitalized, and critically undercapitalized—using different capital measures. Among these are the Tier 1 risk-based capital ratio (Tier 1 ratio), which measures Tier 1 capital as a share of risk-weighted assets, and the Tier 1 leverage ratio (leverage ratio), which measures Tier 1 capital as a share of average total consolidated assets. Well-capitalized banks have a Tier 1 ratio of 6 percent or more, adequately capitalized banks a ratio of 4 percent or more, and undercapitalized banks a ratio of less than 4 percent. The minimum leverage ratio is 4 percent for most banks, and well-capitalized banks have a leverage ratio of 5 percent or more. The Federal Reserve applies similar minimum levels when assessing the capital adequacy of bank holding companies but generally does not identify specific criteria for adequately or well-capitalized institutions or use the term undercapitalized (see table 2). We evaluated the impact of the Tier 1 hybrid capital exclusion on bank holding companies' capital levels using the explicit criteria for well capitalized that apply to banks and thrifts and found that most bank holding companies would experience a reduction in Tier 1 capital but maintain well-capitalized status without Tier 1 hybrids. Of the 969 top-level bank holding companies filing consolidated regulatory financial reports for 2010, 615 had hybrid instruments that the Dodd-Frank Act would exclude from Tier 1 capital. However, the amount of other Tier 1 capital instruments was large enough that 587 (95 percent) of these institutions would see no change in the capital adequacy category of their Tier 1 ratio without those instruments. As table 3 shows, 554 (90 percent) would maintain a Tier 1 ratio of well capitalized. 
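The two capital measures and the category thresholds just described can be illustrated with a short calculation. The holding company figures below are hypothetical; the thresholds are those cited above for banks and thrifts.

```python
# Sketch of the Tier 1 ratio, leverage ratio, and capital categories; figures hypothetical.
def tier1_ratio(tier1, risk_weighted_assets):
    return tier1 / risk_weighted_assets

def leverage_ratio(tier1, average_total_assets):
    return tier1 / average_total_assets

def tier1_category(ratio):
    if ratio >= 0.06:
        return "well capitalized"
    if ratio >= 0.04:
        return "adequately capitalized"
    return "undercapitalized"

# Hypothetical holding company: $120 of Tier 1 capital, of which $15 is hybrid capital.
tier1, hybrids = 120.0, 15.0
rwa, avg_assets = 1_000.0, 1_500.0

for label, t1 in (("with hybrids", tier1), ("without hybrids", tier1 - hybrids)):
    r = tier1_ratio(t1, rwa)
    print(f"{label}: Tier 1 ratio {r:.1%} ({tier1_category(r)}), "
          f"leverage ratio {leverage_ratio(t1, avg_assets):.1%}")
# 12.0% falls to 10.5% on the risk-based measure: a drop, but still well above the
# 6 percent well-capitalized threshold, mirroring the pattern described for table 3.
```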
The average Tier 1 ratio of all top-level bank holding companies would fall from 13.5 percent to 12.2 percent after excluding Tier 1 hybrid instruments, and the average ratio of institutions with Tier 1 hybrids would decrease by around 2 percentage points but remain considerably higher than the minimum level for the well-capitalized category. While the Dodd-Frank Act includes exemptions from the Tier 1 hybrid capital exclusion for the two categories of smaller institutions, our analysis revealed that some smaller institutions with Tier 1 hybrid instruments would not fare as well as larger institutions if they had to exclude hybrid instruments from Tier 1 capital. Specifically, 20 institutions would see the capital category of their Tier 1 ratio fall below the well-capitalized criteria but remain above the minimum level, and an additional 8 would fall below the minimum level. All of these institutions had less than $2.5 billion in total assets and 7 had less than $500 million as of December 31, 2010. The average Tier 1 ratio of these 28 institutions, including Tier 1 hybrid instruments, was well below the overall average at 6.3 percent and would fall to 4.7 percent without Tier 1 hybrid instruments. Our analysis also found that more institutions would not meet the higher minimum Tier 1 ratio under Basel III, particularly smaller firms. However, how U.S. regulators will implement Basel III is unclear, including how they will determine which institutions will have to meet the higher standards or set the time frames for implementation. The Basel III capital framework—which all member countries, including the United States, have approved—increases the Tier 1 capital ratio and excludes the same hybrid instruments from Tier 1 as the Dodd-Frank Act. Effects of the hybrid capital exclusion on institutions' leverage ratio capital categories would also be modest, although more institutions would fall below minimum levels. Almost all of the 615 institutions with Tier 1 hybrid capital would be in the same category of leverage ratio capital without those instruments. For example, about the same number of institutions would maintain Tier 1 and leverage ratios of well capitalized (see table 4). However, leverage ratios for 20 institutions would fall below the minimum level. All of the institutions that would see their leverage ratio capital categories fall had less than $4 billion in total assets and 9 had less than $500 million as of December 31, 2010. After excluding Tier 1 hybrid instruments, the average leverage ratio for all top-level bank holding companies would fall from 9.2 percent to 8.2 percent. The average ratio of institutions with Tier 1 hybrids would decrease from 8.9 percent to 7.3 percent, also considerably higher than the minimum level for the well-capitalized category. Exceptions to the hybrid exclusion will help limit potential negative effects on institutions' capital levels. For example, as discussed earlier, the Dodd-Frank Act includes grandfathering provisions for bank holding companies with less than $15 billion in total assets that will allow these institutions to continue including hybrid instruments issued before May 19, 2010, in Tier 1 capital. 
Thus, all of these institutions that would have had their capital adequacy categories reduced based on their year-end 2010 Tier 1 or leverage ratios will not experience a reduction in Tier 1 levels as a result of the hybrid capital exclusion. Furthermore, as discussed earlier, the Dodd-Frank Act largely exempts institutions with less than $500 million in total assets from the exclusion of Tier 1 hybrid capital. In addition, although none of the larger bank holding companies not subject to the grandfathering provisions would fall below minimum capital levels or experience a reduction in capital categories based on the Tier 1 ratio or leverage ratio, a phase-in period will help limit the immediate effects of the hybrid capital exclusion. For institutions with $15 billion or more in total assets, hybrid capital deductions from Tier 1 must be phased in over 3 years beginning on January 1, 2013, almost 2-1/2 years after passage of the Dodd-Frank Act. Further, increased reliance on stronger forms of capital should increase institutions' financial stability. Some institutions may have difficulty replacing hybrid instruments or choose not to replace them with other forms of Tier 1 capital. But to the extent that banking institutions replace hybrid capital instruments with capital that has a higher capacity to absorb unexpected losses—such as common equity—institutions' financial resiliency should improve. Some market participants identified likely safety and soundness benefits for banking institutions that increase their share of common equity or other stronger capital sources. A few market participants noted that some institutions may respond to increased capital costs by increasing lending and investment risks, including activities for which increased risk may not require additional capital under existing risk-based capital requirements, to generate higher returns. However, bank regulators have the discretion to require higher levels of capital for institutions with heightened risk profiles, and recent Basel Committee reforms include enhancing the risk coverage of the capital framework by strengthening capital requirements for trading activities, complex securitization exposures, and counterparty credit exposures. One market participant argued that the safety and soundness effects could be negative if institutions decided to hold less capital overall rather than increasing the share of common equity. Another market participant noted that the hybrid capital exclusion limits institutions' options for raising capital in times of financial distress. However, institutions that decide not to replace Tier 1 hybrid capital could retain the instruments in their capital structure, and the hybrid capital may qualify as Tier 2 capital. Furthermore, institutions will still be required to have capital levels sufficient to support safety and soundness, and this capital will be of higher quality, better able to absorb unexpected losses and improve institutions' ability to withstand periods of financial distress. Effects on the Cost and Availability of Credit Likely Will Be Small The exclusion of hybrid instruments from Tier 1 capital likely will have modest immediate and long-term effects on the cost and availability of credit. In general, the hybrid capital exclusion could negatively affect the cost and availability of credit in two ways. 
First, if institutions view their Tier 1 capital positions as insufficient without existing hybrid instruments, they may take actions to maintain consistent regulatory capital levels, creating a negative capital shock that empirical studies suggest could have an impact on lending activity. For example, institutions could choose to replace excluded hybrid capital with other Tier 1 instruments such as common equity, increase capital through retained earnings, or reduce risk-weighted assets. Second, regardless of whether institutions take such actions, those that had previously relied on Tier 1 hybrid instruments as a cheaper form of capital could experience higher overall capital costs when raising Tier 1 capital in the future. Loan rates could increase if institutions choose to and are able to pass on any increased capital costs to borrowers. The terms of the hybrid capital exclusion and the relationship between lending activity and changes to capital levels will limit the exclusion's immediate consequences for institutions' lending decisions. As previously discussed, most bank holding companies would not experience reductions in capital levels from the Tier 1 hybrid capital exclusion because the Dodd-Frank Act exempted existing hybrid instruments for most smaller institutions and gradually introduced the exclusions for the remaining institutions with $15 billion or more in assets. Also, institutions with more than $15 billion in assets generally have Tier 1 capital in excess of regulatory minimums, potentially further limiting their need for an immediate response to the hybrid capital exclusion. Finally, institutions generally will be able to include excluded Tier 1 hybrid instruments in Tier 2 capital up to allowable limits, potentially minimizing effects on their total capital positions. To estimate the potential near-term effects on lending, we used a dynamic econometric framework. Specifically, the dynamic framework we use is known as a vector autoregression (VAR) methodology. Following Cara Lown and Donald P. Morgan, "The Credit Cycle and the Business Cycle: New Findings Using the Loan Officer Opinion Survey," Journal of Money, Credit, and Banking, vol. 38 (2006): 1575–97; and Jose M. Berrospide and Rochelle M. Edge, "The Effects of Bank Capital on Lending: What Do We Know, and What Does It Mean?," International Journal of Central Banking, vol. 6 (December 2010), our model is a version of existing VAR models extended to include a banking sector. Our model includes four variables that capture supply, demand, output, and prices that comprise the "macroeconomy." We extend the model to include the credit market using various proxies for loan volumes, bank capital, loan spreads, and information on lending standards. The econometric approach has specific limitations but is considered a reasonable alternative to other types of models, including more sophisticated models. See appendix II for a fuller discussion of the methodology, assumptions, and limitations. In this framework, the size of the implied capital shock depends on the extent to which institutions seek to maintain their existing Tier 1 leverage ratios without the excluded Tier 1 hybrid instruments. Alternatively, institutions that are satisfied with a lower Tier 1 leverage ratio after the exclusions will have a smaller perceived capital deficit. As a result, we were able to analyze the impact under various assumptions about the institutions' collective desire to rebuild capital buffers. Although considerable uncertainty exists, our model suggests that the immediate effects of the Tier 1 hybrid exclusion on the cost and availability of credit likely will be modest. 
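To make the VAR approach described above concrete, the sketch below fits a small VAR on synthetic data and traces the response of loan growth to an innovation in the capital ratio. It is a simplified illustration only: the variables, lag structure, identification, and data all differ from the model actually used in this analysis, so the numbers carry no economic meaning.

```python
# Minimal VAR sketch in the spirit of the methodology note above; synthetic data only.
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

rng = np.random.default_rng(0)
n = 120  # quarters of synthetic data
df = pd.DataFrame({
    "loan_growth":   rng.normal(1.0, 0.5, n),
    "capital_ratio": rng.normal(9.0, 0.3, n),
    "loan_spread":   rng.normal(3.0, 0.2, n),
    "output_growth": rng.normal(0.6, 0.4, n),
})

model = VAR(df)
results = model.fit(maxlags=4, ic="aic")   # lag length chosen by AIC
irf = results.irf(12)                      # impulse responses over 12 quarters

# Response of loan growth to a unit innovation in the capital-ratio equation
# (non-orthogonalized); in the report's application, the shock is scaled to the
# Tier 1 reduction implied by the hybrid exclusion.
print(irf.irfs[:, df.columns.get_loc("loan_growth"),
               df.columns.get_loc("capital_ratio")])
```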
In the model, a negative capital shock related to implementation of the hybrid capital exclusion causes loan volumes to fall, lending standards to tighten, and lending spreads to rise. However, the implied shocks are relatively small, and the sensitivity of lending activity to changes in capital levels is moderate. Our results are generally consistent with other studies we identified in our review. As table 5 shows, in the scenario in which institutions restore 100 percent of excluded hybrid capital to maintain consistent Tier 1 capital ratios, our model estimates an average 1.12 percentage point peak decline in loan growth between two quarters and a year after the exclusion goes into effect. For lending spreads in this scenario, the model estimates an average 0.15 percentage point peak increase occurring about two to three quarters after the hybrid exclusion goes into effect. These effects on the cost and availability of credit are relatively modest and are even more so under less extreme scenarios that consider the amount of excluded hybrid capital that institutions replace. (See appendix II for a complete list of studies we identified in our review. Table 5 reports, for each scenario, the implied change in the Tier 1 leverage ratio, the peak change in loan growth, and the peak change in lending spreads, all in percentage points.) Our results assume that banking institutions immediately address capital reductions resulting from Tier 1 hybrid exclusions. Given that these institutions would continue to meet minimum capital requirements and expect the change to the use of Tier 1 hybrid capital, they are more likely to replace—if they elected to do so—any hybrid instruments slowly over a number of years. The Dodd-Frank Act excludes existing Tier 1 hybrid instruments over 3 years beginning in 2013 (for institutions with more than $15 billion in assets), also implying more limited effects on lending activity. However, the immediate effects on overall lending activity may be more significant for certain loan types. For example, the hybrid capital exclusion could affect volumes of commercial and industrial loans more significantly than other types of loans because markets for these loans appear more sensitive to changes in bank capital. The peak decline in loan growth from a negative capital shock is roughly 2.26 percentage points for commercial and industrial loans, or about two times larger than the impact suggested for aggregate loan volumes (1.12 percentage points). Other studies also have found that commercial and industrial loans are more strongly affected by capital ratios than other types of loans. In addition, our model parameters are aggregate estimates and may not generalize to the specific circumstances of some banks. For example, our model suggests that banks will adjust lending spreads and loan volumes in response to the hybrid restriction. However, some banks may not be able to raise rates and would likely have to take other actions, including reducing loan volumes by more than is suggested here. Although other studies found similar results, our estimates generally should be interpreted with caution, given the methodological and other limitations inherent in this type of analysis. For example, many of the specific estimates are not statistically significant with respect to the actual size of the hybrid capital exclusion's effects, if any—meaning that they are not statistically different from zero. We also compared the results of our model to a wider body of empirical literature. 
In general, these sources also found small to moderate effects on lending activity from changes to bank capital. In addition, our estimates have wide confidence intervals, suggesting considerable uncertainty in the results (see app. II for limitations), and this uncertainty about the exclusion's effects results in a range of estimates. For example, when we assume that institutions target capital ratios equivalent to replacing 70 percent of excluded hybrid instruments, estimates range from a decline of 0.13 to 1.81 percentage points for loan volumes and from an increase of 0.06 to 0.43 percentage points for lending spreads. Although exact comparisons are not always possible, the averages of the estimates are generally consistent with our model results. Moreover, these studies generally examine the impact of a generalized shock to capital. In the case of the Tier 1 hybrid capital exclusion, however, the capital shock is specific to bank holding companies with assets of $15 billion or more. Given the large number of banking institutions with assets of less than $15 billion, the ability of affected institutions to raise loan rates significantly may be limited by competitive forces, and the decline in loan growth may be mitigated by substitution across institutions. Long-term effects of the hybrid capital exclusion on loan rates will likely also be small, although the exact impact is unknown. Without Tier 1 hybrid instruments, loan rates could increase if capital costs rise for institutions that have relied on these instruments as a cheaper source of regulatory capital. To assess the potential impact on loan rates for these institutions, we used a modified version of an existing loan pricing model. Banking institutions have multiple options for adjusting to more costly forms of Tier 1 capital—such as shifting lending activity to lower-risk borrowers, reducing returns to shareholders, increasing efficiency, or raising lending rates—and the loan pricing model allowed us to consider these different scenarios. For all of the scenarios we examined, our model indicated minimal potential loan rate increases from institutions' use of higher-cost, higher-quality Tier 1 capital and, as a result, modest effects on loan volumes (table 6). Even if institutions are assumed to adjust solely by raising lending rates, rates would increase by 0.12 percentage points. Other scenarios assuming that institutions' adjustments also occurred in other areas led to smaller increases in lending rates. The long-term effects on lending rates may be more significant for certain institutions. For example, customers of smaller institutions could experience larger increases in loan rates, but even these effects likely will remain modest. Again, the effects on lending rates would likely be mitigated since it may be difficult for the affected institutions to pass the higher cost on to borrowers without losing market share. The lack of tax-deductible Tier 1 hybrid capital instruments could result in a cost disadvantage for U.S. institutions relative to their foreign peers, although the overall competitive effects are unclear. Hybrid capital instruments, in particular trust preferred securities and real estate investment trust (REIT) preferred securities, generally have been the primary Tier 1 capital instruments for which U.S. institutions have received tax-deductible treatment. 
In the United States, debt instruments receive favorable tax treatment compared to equity. The tax code generally allows interest expenses on debt instruments to be deducted from income, but not dividends or other payments to equity holders. According to market participants, other Tier 1 capital instruments such as preferred stock generally have not qualified for tax advantages because their equitylike features, such as a perpetual maturity or noncumulative dividends, disqualify them from IRS consideration as debt instruments. Market participants said that a favorable tax treatment is one of the primary reasons banking institutions use hybrid capital. In addition, the Federal Reserve has identified the importance of trust preferred securities' tax advantages to the competitiveness of U.S. banking institutions as a reason for allowing the instruments as Tier 1 capital. Changes to the definition of Tier 1 capital resulting from the Dodd-Frank Act and Basel III effectively eliminate hybrid capital instruments that qualify for tax-deductible status in the United States. Both the Dodd-Frank Act and Basel III prevent the use of trust preferred securities in Tier 1 capital, and Basel III restricts the use of REIT preferred securities for large banking institutions. In addition to the Dodd-Frank Act's exclusion of trust preferred securities, Basel III contains provisions that would eliminate the instruments' use as Tier 1 capital, although over a longer period. The Basel III framework requires Tier 1 instruments that are not common equity to meet certain criteria—including having a perpetual maturity and discretionary, noncumulative dividends—that effectively exclude trust preferred securities. Furthermore, Basel III limits the amount of Tier 1 capital credit for instruments such as REIT preferred securities, hindering their use as a tax-advantaged source of Tier 1 capital, according to market participants. The Basel III standards provide a single global definition of bank regulatory capital, but how those standards are adopted and implemented depends on statutory and regulatory action by national authorities. To promote complete and globally consistent implementation, the Basel Committee established a framework to monitor and review implementation of Basel III capital requirements. However, some foreign jurisdictions have tax codes that may allow tax advantages for hybrid instruments that would still qualify as Tier 1 under the new Basel III definition, potentially leaving U.S. institutions with a cost-of-capital disadvantage. The tax treatment of capital instruments—such as the ability to deduct interest or dividend payments—differs across countries based on their domestic tax regimes, potentially resulting in varied after-tax costs of Tier 1 instruments across countries. According to market participants, some foreign jurisdictions—particularly in Europe—allow tax deductibility of some perpetual, noncumulative capital instruments that would still meet Basel III Tier 1 criteria. For example, a 2006 report by the Committee of European Bank Supervisors indicated that European countries such as France, Germany, Spain, and the United Kingdom allow tax deductibility of some types of noncumulative, perpetual instruments that are not tax deductible in the United States. In a 2007 report on the use of hybrid capital instruments in Europe, the same organization found that almost all Tier 1 hybrid instruments in Europe were perpetual (95 percent) and noncumulative (93 percent). 
Market participants indicated that the lack of tax-deductible Tier 1 capital could result in a cost of capital disadvantage for U.S. institutions relative to their international peers. The longer time frame for excluding trust preferred securities and other Tier 1 hybrid instruments under Basel III rules also could present a cost-of-capital disadvantage for U.S. banking institutions during the extended phase-out period. The international competitive effects of any such disadvantage for U.S. institutions are uncertain given the scope and significance of other regulatory reforms occurring domestically and globally. Basel III and the Dodd-Frank Act include many significant changes to capital requirements and financial regulation that may have consequences for the international competitiveness of U.S. banking institutions—consequences that are equal to or greater than the consequences of the changes to Tier 1 hybrid capital rules. For example, Basel III increases required capital levels; introduces additional capital buffers; expands its coverage of risks, including those from securitizations and trading counterparties; and introduces a leverage ratio requirement and liquidity standards. The Dodd-Frank Act introduced fundamental reforms across the banking and financial regulatory systems, including changes to the regulation of systemic risks, the trading and investment activities of banking institutions, the use and trading of derivatives, securities regulation, and the structure of bank supervision. The extent to which these regulatory reforms may interact to present additional competitive advantages or disadvantages to U.S. banking institutions relative to their foreign peers will determine the ultimate significance of any tax disadvantage from hybrid instruments. In addition, market participants identified other factors that might affect any international competitiveness implications of not permitting tax-deductible Tier 1 hybrid instruments. First, U.S. regulators have not yet proposed rules for implementing Basel III, and their decisions on how, when, and for which institutions the provisions will apply may limit potential tax disadvantages. For example, one banking institution said that U.S. regulators could choose not to apply Basel III minority interest deductions to REIT preferred securities because regulators can require that the instruments be converted to preferred shares when necessary to absorb financial losses more effectively. Second, concerns about a cost-of-capital disadvantage would apply only to the largest U.S. banks that compete globally rather than to the many smaller banking institutions that compete with each other domestically. Third, institutions in some foreign jurisdictions may face competitive disadvantages from a more stringent application of Basel III rules for hybrid instruments. For example, European authorities said that draft rules for European institutions require an explicit loss absorption mechanism—such as the ability to write down or convert the hybrid instrument to equity—for all Tier 1 hybrid instruments, while the Basel III rules require such features only for some instruments (not including preferred shares). Finally, hybrid instruments will have a more limited role than in the past because of increased regulatory requirements for the amount of common equity in Tier 1, potentially moderating any competitive disadvantages from differences in the cost of Tier 1 hybrid capital. 
For example, the common equity requirement under Basel III represents over 80 percent of the overall Tier 1 capital requirement, and the share is higher for systemically important institutions. A higher requirement for common equity results in a smaller scope for using hybrid instruments to meet overall Tier 1 levels. Previous Basel guidelines called for common equity to make up only the predominant share of overall Tier 1—effectively 51 percent. Thus, any cost-of-capital disadvantages from Tier 1 hybrid instruments may be relatively less significant than under prior international regulatory capital frameworks. Smaller Banking Institutions Face Limited Capital-Raising Options but Report Little Unmet Capital Need Smaller banking institutions generally had limited options for raising capital, and one important form of capital—trust preferred securities—is now largely unavailable to these banks. According to market participants we interviewed, around 2000 or earlier, smaller institutions had little to no access to public capital markets, in part because their offerings were not large enough to attract investors. Starting in 2000, investment banks began pooling the trust preferred securities of many smaller institutions and selling shares of those pools to investors. This pooling of trust preferred securities expanded smaller institutions' access to capital by removing many of the previous obstacles to attracting investors. For example, the pooled structures received combined credit ratings for all of the underlying issuers, while many smaller institutions did not receive individual ratings. As a result, for the first time, smaller institutions were able to access significant amounts of capital from investors who required credit ratings. Trust preferred securities quickly became a popular option for smaller institutions to access capital. Available data show that, from 2000 to 2007, trust preferred securities accounted for over half of all regulatory capital offerings made by smaller institutions and totaled more than $23 billion (see fig. 8). Based on our nationally representative survey, we estimate that 30 percent of smaller institutions considered their ability to issue trust preferred securities (including pools of trust preferred securities) prior to January 1, 2008, to have been beneficial to their ability to access regulatory capital. About half of institutions did not issue any trust preferred securities, and 10 percent considered their ability to issue trust preferred securities not at all beneficial. During and following the financial crisis, however, offerings of trust preferred securities dropped considerably. According to market participants, investors were no longer interested in purchasing trust preferred securities, partly because of their performance during the financial crisis and concerns about new regulatory restrictions such as those under the Dodd-Frank Act. Specifically, many investors in trust preferred securities found that the instruments did not meet their expectations during the crisis. For example, more institutions deferred dividends than investors had expected, particularly smaller institutions. Additionally, pools of trust preferred securities did not prove to be as diversified as anticipated. After 2007, trust preferred securities accounted for a much smaller share of smaller institutions' regulatory capital offerings—just 3 percent from 2008 through 2010—and no smaller institutions offered trust preferred securities in the first half of 2011. 
Based on our survey, an estimated 12 percent of smaller institutions would likely be able to raise trust preferred securities within the next year. With trust preferred securities largely unavailable, smaller institutions increased their reliance on other types of preferred shares as a capital source, largely through investments from the Treasury Department's Troubled Asset Relief Program (TARP). Prior to the financial crisis, smaller banking institutions rarely issued preferred shares that were not pooled into trust preferred securities. For example, between 2000 and 2007, preferred shares accounted for 4 percent of the number of regulatory capital offerings of smaller institutions. However, in 2008 and 2009, when TARP made its investments in hundreds of banking institutions, over half (58 percent) of smaller institutions' capital offerings were in the form of preferred shares. Of these, 82 percent were offered through TARP. Because the federal government is no longer making new capital investments in banking institutions, smaller institutions will likely face more limited access to preferred shares in the future. For example, preferred shares accounted for only 17 percent of smaller institutions' capital offerings in the first half of 2011. Common equity now predominates, and the most available source of capital for smaller institutions is equity investments from board members or the local community. In 2010 and 2011, most capital offerings by smaller institutions (70 percent) were in the form of common equity. In 2010, smaller institutions raised more common equity—$7 billion—than in any year between 2000 and 2009, a period when the average amount raised annually was $3.4 billion. Based on our survey results, we estimate that 70 percent of smaller institutions would likely be able to raise equity capital from board members or their local community within the next year. However, smaller institutions were considerably less likely to be able to raise capital in other forms during this time. For example, we estimate that about 30 percent of institutions would likely be able to raise preferred equity from a private placement, subordinated debt, or common equity from a public offering, and the estimated percentages are lower for preferred equity from a public offering and trust preferred securities (see fig. 9). (Percentages are estimates based on the results of our nationally representative sample of smaller banking institutions; all estimates have a margin of error of less than 7 percentage points.) Smaller Institutions' Ability to Raise Capital Varies by Financial Condition and Other Factors Smaller institutions consider their financial condition and performance (for example, profitability, debt and capital levels, and asset quality) as the most important factor in their ability to successfully access capital. Based on our survey results, we estimate that 87 percent of smaller institutions consider financial condition and performance as a very important factor in their ability to raise capital. Market participants noted that investors may be concerned about smaller institutions' loan portfolios and concentrations in commercial real estate. They explained that smaller institutions tend to have greater geographic concentration and fewer business lines and tend also to focus on traditional lending, which has not been profitable recently. Management quality was the second most important factor, with 74 percent considering it very important to raising capital. 
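The survey percentages cited in this discussion are estimates from a probability sample and therefore carry margins of error. The sketch below shows the standard 95 percent confidence calculation for a proportion; the sample size of 200 is a hypothetical stand-in, and the survey's actual design (stratification, weighting, nonresponse adjustment, any finite-population correction) would change the exact figures.

```python
# Illustrative margin-of-error calculation for a survey proportion; n is hypothetical.
import math

def margin_of_error(p, n, z=1.96):
    """Approximate 95 percent margin of error for an estimated proportion p."""
    return z * math.sqrt(p * (1 - p) / n)

n = 200
for p in (0.30, 0.70, 0.87):   # e.g., estimates cited above
    print(f"estimate {p:.0%}: +/- {margin_of_error(p, n):.1%}")
# Proportions near 50 percent have the widest intervals; with n = 200 the margins
# here stay under 7 percentage points, consistent in spirit with the precision
# the report describes.
```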
One smaller institution with less than $100 million in total assets noted that it could raise capital fairly easily from existing investors and local customers but added that they would have to perceive the bank’s performance and management as satisfactory. Smaller institutions rated several other factors as important to their ability to successfully raise capital, including growth potential, the economic environment in their lending area, and familiarity with investors. Additionally, results from our survey showed that smaller banking institutions’ ability to raise different forms of capital varied somewhat by factors such as asset size, ownership type (public or private), institution type (bank or thrift), and organization structure (holding company or stand-alone). For example, we estimate that a larger proportion of public institutions and institutions with total assets of between $500 million and $10 billion were likely to be able to raise common equity from a private capital offering than were private institutions and institutions with less than $500 million in total assets. Also, a larger proportion of banks and holding companies were likely to be able to raise subordinated debt than were thrifts and stand-alone institutions without a holding company. However, most of these groups saw equity investments from board members or the local community as the most available form of capital. The current regulatory capital-raising environment was described as very challenging for an estimated 44 percent of smaller banking institutions and moderately challenging for an additional 32 percent, for several reasons. Smaller institutions most often considered the economic climate and laws and regulations as challenges to their institutions’ ability to raise capital. Specifically, 89 percent of smaller institutions found the economic climate, market conditions, or both to be challenging to their ability to raise capital, and 86 percent found laws and regulations to be challenging. Several respondents identified SEC rules that apply additional reporting requirements to institutions exceeding 500 shareholders as a constraint on their ability to raise capital from new investors. Other factors that the majority of smaller institutions identified as challenging included the transaction costs of conducting a public offering, lack of access to public capital markets, and investors’ preference for large offerings. Market participants also identified several factors that inhibited smaller institutions’ access to public capital markets. For example, some investors have minimum investment requirements and cannot make investments below a certain size. At the same time, limitations on the share of ownership of banks—beyond which investors would have to register as a bank holding company—restrict the share of equity securities that most investors are willing to purchase. According to some market participants, the minimum investment size requirements, along with ownership limitations, eliminate many investors as a potential capital source for small institutions. Additionally, market participants said that potential investors generally were not willing to devote resources to researching offerings of relatively small banks because the research required for a small offering was nearly the same as it would be for a larger offering that would provide more potential for a higher absolute return. 
Also, market participants noted that investors generally required that the securities they purchased be liquid—that is, easily resold at a reasonable price. The capital offerings of smaller institutions typically have less liquidity than those of larger institutions because a more limited group of investors is able and willing to purchase the instruments, and they are traded less frequently. Finally, market participants reported that credit rating agencies generally did not rate the offerings of smaller institutions, which can restrict access to public capital markets. A Majority of Smaller Institutions Report No Unmet Capital Need Most smaller institutions have not raised capital since January 1, 2008, and the majority of those reported no need for or interest in additional capital (see fig. 10). Specifically, we estimate that 65 percent of smaller institutions have not raised capital since January 1, 2008, and 88 percent of those did not need or want to raise more regulatory capital. Only 3 percent of smaller institutions that had not raised capital since January 1, 2008, attempted to raise capital but were unable to do so. The smaller institutions that had raised capital since January 1, 2008, were generally satisfied with the capital they had raised. We estimate that 35 percent of smaller institutions had raised regulatory capital since January 1, 2008. Of these institutions, 82 percent reported that the amount of regulatory capital raised met their goal, and 93 percent reported that it met their initial terms and conditions. Institutions whose financial condition was relatively strong generally had a more favorable view of the capital-raising environment. Supervisory examination ratings assigned by a banking institution's primary regulator generally assess the institution's financial condition and performance. According to our survey results, institutions that found the current regulatory capital-raising environment challenging had weaker supervisory ratings on average than institutions that did not find the environment challenging (see fig. 11). Furthermore, among smaller institutions that raised capital, the institutions that met their initial targets had significantly stronger supervisory ratings than institutions that did not meet their target amounts. Consistent with our survey results, market participants noted that capital was available for relatively healthy institutions that sought capital to support growth opportunities but was largely unavailable to weaker institutions seeking capital to address problems with their financial condition and performance. Agency Comments We provided a draft of this report to FDIC, the Federal Reserve, and OCC for their review and comment. FDIC and the Federal Reserve provided technical comments that were incorporated, as appropriate. We are sending copies of this report to appropriate congressional committees, FDIC, the Federal Reserve, OCC, and other interested parties. In addition, the report is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-2642 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV.
Appendix I: Objectives, Scope, and Methodology The objectives of our report were to examine (1) the use of hybrid capital instruments as Tier 1 capital and the benefits and risks of including them in this category, (2) the potential effects on banking institutions and the economy of prohibiting the use of hybrid instruments to meet Tier 1 capital requirements, and (3) options that exist for smaller banking institutions to access regulatory capital. Use of Tier 1 Hybrid Capital Instruments and Their Benefits and Risks To describe the use of Tier 1 hybrid capital instruments, we analyzed data from banking institutions' regulatory financial filings and reviewed relevant federal banking regulations. To determine the instruments that were eligible for Tier 1 capital treatment for various banking institutions, we reviewed the statutes and regulations concerning capital requirements for banks (including national banks, state member banks, and state nonmember banks), thrifts, and bank and thrift holding companies. We also interviewed federal banking regulators—specifically, from the Federal Deposit Insurance Corporation (FDIC), the Board of Governors of the Federal Reserve System (Federal Reserve), the Office of the Comptroller of the Currency (OCC), and the Office of Thrift Supervision (OTS)—to determine the regulatory treatment of hybrid capital instruments for different banking institutions. We defined the scope of this report to focus on instruments that the Federal Reserve made eligible for limited inclusion in Tier 1 capital for bank holding companies but were not allowed for other types of banking institutions. These instruments—defined by the Federal Reserve as restricted core capital elements—will be excluded from Tier 1 capital by the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) and include trust preferred securities, which are widely recognized as the most common hybrid capital instrument. Because federal banking regulators did not allow these instruments as Tier 1 capital for depository institutions—banks and thrifts—our review focused on the use of hybrid instruments by holding companies. To assess the use of hybrid instruments by bank holding companies, we analyzed data that these institutions report to the Federal Reserve annually on form FR Y-9C, "Consolidated Financial Statements for Bank Holding Companies." This report is filed by all top-level bank holding companies with $500 million or more in consolidated total assets and by select institutions with less than $500 million in total assets. The Federal Reserve supervises approximately 5,000 top-level bank holding companies, although most of these do not file form FR Y-9C and are not subject to Tier 1 capital requirements because of their small asset size. For December 31, 2010, our data included 969 top-level bank holding companies. We removed a small number of institutions from our data that reported "NA" for total assets. We also removed a small number of domestic subsidiaries of foreign banking institutions that the Federal Reserve exempted from Tier 1 capital requirements. We analyzed year-end data from 1997—the first full year following the Federal Reserve's decision to allow trust preferred securities as Tier 1 capital—through 2010, the most recent year with complete data available. We collected the FR Y-9C data using SNL Financial, a private data provider, and calculated the amount of hybrid instruments eligible for inclusion in Tier 1 in consultation with the Federal Reserve.
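To make the data work concrete, the following is a minimal sketch of the kind of year-by-year tabulation described above. The file name and column names are hypothetical placeholders, not the actual SNL Financial or FR Y-9C field names, and the screening shown is only a rough stand-in for the steps described in the text.

```python
import pandas as pd

# Hypothetical year-end extract of FR Y-9C data pulled through SNL Financial.
# Column names are illustrative, not the actual SNL/FR Y-9C field names.
y9c = pd.read_csv("fr_y9c_yearend.csv")  # columns: year, rssd_id, total_assets,
                                         #          tier1_capital, restricted_core_elements

# Drop records with no reported total assets, mirroring the screening
# steps described above (exempt subsidiaries would also be removed here).
y9c = y9c.dropna(subset=["total_assets"])

# Aggregate hybrid (restricted core) elements and their share of Tier 1 by year.
by_year = y9c.groupby("year")[["tier1_capital", "restricted_core_elements"]].sum()
by_year["hybrid_share_of_tier1"] = (
    by_year["restricted_core_elements"] / by_year["tier1_capital"]
)
print(by_year)
```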
We also assessed the use of hybrid capital instruments by thrift holding companies. Although thrift holding companies were not subject to uniform Tier 1 capital requirements, they were informally assessed under rules similar to the Federal Reserve's rules for bank holding companies, according to OTS officials. As such, we included thrift holding companies in our analysis using a proxy Tier 1 calculation described in the OTS examiner's handbook for thrift holding companies. We obtained data on thrift holding companies from the holding company schedule in the Thrift Financial Report filed by OTS-supervised institutions. The data included some institutions that were not top-level consolidated thrift holding companies, and we removed them from our analysis after discussions with a former OTS official who is now at FDIC. We used SNL Financial and the Federal Reserve's National Information Center to determine which records to remove from our analysis. Because data from the Thrift Financial Report includes fewer fields than the FR Y-9C, we limited our analysis to thrift holding companies' use of trust preferred securities from year-end 2004 to year-end 2010. Data on thrift holding companies' use of trust preferred securities were not available prior to 2004. To describe the benefits and risks of including hybrid instruments as Tier 1 capital, we collected and reviewed studies and other documentary evidence from federal regulators, industry participants and observers, and academic sources. We conducted interviews with market participants, including banking institutions, investment banks, credit rating agencies, law firms, industry associations, and each of the federal banking regulators. We also reviewed data on the recent default and dividend deferral activity of trust preferred securities provided to us by a major credit rating agency. Effects of Excluding Tier 1 Hybrid Instruments on Capital Adequacy and International Competitiveness To assess the effects of excluding Tier 1 hybrid capital on the capital adequacy of financial institutions, we analyzed regulatory capital data to determine the extent to which bank holding companies may fall below minimum regulatory capital levels without Tier 1 hybrid instruments. We used year-end 2010 data from the FR Y-9C regulatory filing discussed previously as a baseline to compare institutions' Tier 1 capital levels before and after the hybrid capital exclusion. We assessed potential reductions in institutions' capital categories based on the Tier 1 risk-based capital ratio and Tier 1 leverage ratio. For the risk-based capital and leverage ratios, we used the capital adequacy category of well capitalized based on the levels that FDIC has identified for depository institutions under prompt corrective action standards. For the minimum capital levels for these ratios, we used benchmarks based on the Federal Reserve's bank holding company capital adequacy regulations. To consider the most significant potential effects, our analysis removed all Tier 1 hybrid instruments from all bank holding companies' Tier 1 capital. In reality, any effects will be mitigated by grandfathering, exemptions, and phase-in periods. We also collected information from interviews with regulators and industry participants and observers on the potential effects of the hybrid capital exclusion on the safety and soundness of banking institutions.
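A simplified sketch of the ratio recalculation described above follows. Column names are again hypothetical, and the thresholds in the comments are illustrative stand-ins for the well-capitalized and minimum levels; the actual benchmarks should be taken from the prompt corrective action standards and Federal Reserve regulations cited in the text.

```python
import pandas as pd

# Hypothetical year-end 2010 extract; column names are illustrative.
bhc = pd.read_csv("fr_y9c_2010q4.csv")  # columns: rssd_id, tier1_capital,
                                        #          restricted_core_elements,
                                        #          risk_weighted_assets, avg_total_assets

# Remove all Tier 1 hybrid instruments, the most severe case (no grandfathering,
# exemptions, or phase-in), as described above.
bhc["tier1_ex_hybrids"] = bhc["tier1_capital"] - bhc["restricted_core_elements"]

for label, numerator in [("baseline", "tier1_capital"),
                         ("excluding hybrids", "tier1_ex_hybrids")]:
    rbc = bhc[numerator] / bhc["risk_weighted_assets"]   # Tier 1 risk-based ratio
    lev = bhc[numerator] / bhc["avg_total_assets"]       # Tier 1 leverage ratio
    # Illustrative thresholds only: well capitalized (6% risk-based, 5% leverage)
    # and minimum (4% for both); verify against the applicable regulations.
    below_well = ((rbc < 0.06) | (lev < 0.05)).sum()
    below_min = ((rbc < 0.04) | (lev < 0.04)).sum()
    print(f"{label}: {below_well} below well capitalized, {below_min} below minimum")
```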
To evaluate the potential implications for international competitiveness of restricting Tier 1 hybrid capital, we reviewed studies and other documentary evidence and compared international rules on hybrid capital proposed by the Basel Committee on Banking Supervision with U.S. regulatory policy, including the Dodd-Frank Act. We also reviewed proposed rules to implement the new Basel Committee standards in Europe and reports by the Committee of European Banking Supervisors on the use of hybrid capital in Europe. We interviewed regulators, industry participants and observers, and European regulatory organizations to gather information on the effects of the hybrid capital exclusion on the international competitiveness of U.S. institutions. For information on our analysis of the hybrid capital exclusion's potential effects on the cost and availability of credit, see appendix II. Smaller Banking Institution Access to Regulatory Capital To address our third objective, we conducted a nationally representative web-based survey of executives of banks, thrifts, and bank and thrift holding companies with less than $10 billion in total assets. Based on information on banking institutions provided by FDIC, OTS, and the Federal Reserve, we identified 6,733 institutions with less than $10 billion in total assets that would serve as the population for this survey. This population included all stand-alone banks and thrifts (banks and thrifts that do not have a holding company), as well as all top-level consolidated bank and thrift holding companies. We included top-level holding companies in our population rather than the subsidiary banks or thrifts because industry participants and regulators said that the holding company typically raised capital for its subsidiaries. We selected a stratified random sample of 794 institutions from the population of 6,733. We divided the population into four strata based on the amount of assets and the entity's status—that is, whether it was part of a holding company or a stand-alone bank or thrift. We designed the sample size to produce a proportion estimate within each stratum that would achieve a precision of plus or minus 7 percentage points or less at the 95-percent confidence level. We then inflated the sample size for an expected response rate of 50 percent. Because of the small number of banks and holding companies with assets greater than $5 billion and less than $10 billion, we selected all of these with certainty. We received valid responses from 510 (64 percent) of the 794 sampled banking institutions. The weighted response rate, which accounts for the differential sampling fractions within strata, is 66 percent. We identified eight banking institutions in our sample that were either closed or were improperly included in the sampling frame. We classified these as out-of-scope institutions and adjusted our estimates so that they were generalized only to the 6,659 (+/- 58) institutions estimated to be in-scope institutions in the population. We analyzed our survey results to identify potential sources of nonresponse bias using two methods. First, we examined the response propensity of the sampled banking institutions by several demographic characteristics, including asset size, type of institution, region, regulator, and ownership status. Second, we compared weighted estimates from respondents and nonrespondents to known population values for four measures that were related to the survey outcomes we were most interested in.
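The stratum-level sample-size arithmetic described above can be illustrated with a short calculation. The population figure used here is a made-up example rather than one of the actual strata, and the formula is the standard one for a proportion estimate with a finite population correction; it is a sketch of the general approach, not GAO's exact design computation.

```python
import math

def stratum_sample_size(population, margin=0.07, z=1.96, p=0.5, response_rate=0.5):
    """Sample size for estimating a proportion within +/- `margin` at the
    confidence level implied by `z`, with a finite population correction,
    inflated for the expected response rate."""
    n0 = (z ** 2) * p * (1 - p) / margin ** 2      # infinite-population size
    n = n0 / (1 + (n0 - 1) / population)           # finite population correction
    return math.ceil(n / response_rate)            # inflate for nonresponse

# Example: a hypothetical stratum of 2,000 institutions.
print(stratum_sample_size(2000))
```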
We conducted statistical tests of differences, at the 95-percent confidence level, between estimates and known population values, and between respondents and nonrespondents. We determined that weighting adjustments within strata would be sufficient to mitigate any potential nonresponse bias. We did not observe any significant differences between weighted estimates and known population values or between respondents and nonrespondents. The web-based survey was administered from June 15, 2011, to August 15, 2011. We sent banking institution executives an e-mail invitation to complete the survey on a GAO web server using a unique username and password. Nonrespondents received several reminder e-mails and a letter from GAO asking them to complete the survey. The practical difficulties of conducting any survey may introduce additional nonsampling errors, such as difficulties interpreting a particular question, which can introduce unwanted variability into the survey results. We took steps to minimize nonsampling errors by pretesting the questionnaire with four banks in April 2011. We conducted pretests to make sure that the questions were clear and unbiased and that the questionnaire did not place an undue burden on respondents. An independent reviewer within GAO also reviewed a draft of the questionnaire prior to its administration. We made appropriate revisions to the content and format of the questionnaire after the pretests and independent review. All data analysis programs were independently verified for accuracy. See appendix III for responses to survey questions. We also collected information on supervisory examination ratings from the Federal Reserve and FDIC to supplement information from our survey. To identify trends in the amount and types of regulatory capital raised by smaller banking institutions since 2000, we analyzed data on capital issuances. We obtained data from SNL Financial, which collects capital issuance data from Securities and Exchange Commission filings and press releases. We limited our review to the issuance of instruments that may be counted as Tier 1 or Tier 2 regulatory capital by an institution's primary federal regulator. These included common equity, preferred stock, trust preferred securities, and subordinated debt. We discussed the data with SNL Financial representatives to confirm our understanding of what the data represented and what types of capital issuances were not included. The data included offerings on public and private exchanges but did not reflect capital raises that were not publicly offered, such as equity investments in small institutions made by board members or local communities. Comprehensive data on the raising of private capital were unavailable. We also interviewed market participants, including banking institutions, investment banks, industry associations, and federal banking regulators, to collect information on how smaller banking institutions access regulatory capital and challenges they face in raising capital. For parts of our methodology that involved the analysis of computer-processed data, we assessed the reliability of these data and determined that they were sufficiently reliable for our purposes. Specifically, we conducted reliability assessments on the SNL Financial data and on data from OTS's Thrift Financial Reports. To assess the reliability of these data, we reviewed factors such as the timeliness, accuracy, and completeness of the data.
We conducted electronic testing and manual review to identify missing and out-of-range data and other anomalies and compared computer-generated data to source documents for a selected sample of companies. We conducted this performance audit from December 2010 to January 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: GAO Analysis of the Economic Effects of the Hybrid Capital Exclusion To assess the effects on banking institutions and the economy of prohibiting the use of hybrid instruments to meet Tier 1 capital requirements, we analyzed the potential impact of this change on the cost and availability of credit. Specifically, we designed a modified version of an established econometric model to estimate the effect of a change in Tier 1 capital levels on key credit market variables, including loan volume growth and lending spreads. We also used a modified version of an existing loan pricing model to assess the impact on loan rates of banking institutions' inability to include newly issued hybrid securities as Tier 1 capital. Vector Autoregression Model To estimate the effect of changes to banking institutions' capital ratios on the cost and availability of credit, we estimated a modified version of a vector autoregression (VAR) model commonly used in macroeconomic and monetary research. Our VAR model consists of eight variables, including variables that serve as a proxy for the banking sector. We conducted analysis known as "innovation accounting" to trace a temporary shock to bank capital through the banking system. These techniques allowed us to form estimates of the impact of changes in capital ratios on loan growth, loan spreads, and lending standards. Our model closely follows similar analysis by Berrospide and Edge (2010), Lown and Morgan (2006), and Bernanke and Gertler (1997). We found that a 1 percentage point decline in the capital ratio results in a 1.2 percentage point decline in loan volume growth and a 0.16 percentage point (16 basis points) increase in loan spreads. We calibrated these estimates to the capital shock resulting from the hybrid capital exclusion, assuming that banks have a particular capital target. The VAR methodology provides a systematic method to capture dynamics in multiple time series and provide empirical evidence on the response of macroeconomic variables to various exogenous changes (called shocks or impulses within the framework). In contrast to structural models, VARs do not rely on detailed ex ante modeling of the relationships among the variables of interest. So long as they are present in the data during the period over which the model is estimated, many of the factors that need to be modeled separately by other estimation approaches—including international spillovers, impacts of competition or market power, and the stabilizing role of monetary policy—are incorporated implicitly. The VAR methodology advanced by Sims (1980) treats all variables symmetrically and as potentially endogenous. That is, each variable in the model is treated as if it is influenced by other variables in the system.
No structure is imposed on the variables in the model, and instead any existing causal relations are determined purely by the data itself. Each variable is expressed as a linear function of its own past values and the past values of all other variables included in the system. The equations are estimated by ordinary least squares (OLS) with the error terms representing surprise/unexpected movements in the variables after taking past values into account. The VAR methodology can be transformed to examine the dynamic reaction of each of the endogenous variables to shocks to the system. This technique is often referred to as innovation accounting and involves the construction of impulse response functions. Impulse responses trace the effects of shocks or innovations to one variable through the system and examine their impacts on the other included variables. In tracing out the response of current and future values of each of the variables to a shock in the current value of one of the VAR errors, we assume that this error returns to zero in subsequent periods and that all other errors are equal to zero. Consequently, the shock is designed to be temporary. To exploit the innovation accounting framework and identify the impulse response function, we must impose some structure on the model that takes the form of simplifying restrictions. These restrictions result in causal priority given to some variables over others and are generally driven by theory. As a result, although the system incorporates feedback between all of the variables, some variables are expected to impact on others without contemporaneous feedback. As we discuss later, the ordering of variables is critically important and can impact the results in material ways. The residuals obtained from each of the estimated OLS regressions in the VAR system are combinations of underlying structural innovations. We augmented a standard macroeconomic VAR (real GDP, the GDP deflator, the federal funds rate, and commodity spot prices) with four variables: (1) real loan volume growth—commercial bank and thrift loan growth in our base models, (2) changes in lending spreads—commercial and industrial loan rate relative to a benchmark, (3) lending standards as measured by the net fraction of loan officers at commercial banks reporting a tightening of credit standards for commercial and industrial loans (C&I) in the Federal Reserve's Senior Loan Officer Opinion Survey, and (4) the aggregate capital-to-assets ratio for the commercial banking sector. The addition of the latter four variables allows us to investigate the dynamic interaction between banks and the macroeconomy. We assembled the data from Thomson Reuters Datastream and the Federal Reserve System (table 8). We have relied on these data in our past reports, and neither Thomson Reuters Datastream nor the Federal Reserve has changed its methods for collecting or reporting data since we last relied on them. We consider these data to be reliable for our purposes. We transformed all of the variables into growth rates except for the capital ratio and lending standards in our base models. We adjusted loan volumes for inflation as suggested by the Basel Committee on Banking Supervision's Macroeconomic Assessment Group. Using the estimated VAR system for the third quarter of 1990 through the fourth quarter of 2010, we traced out the dynamic responses of loan volumes, lending spreads, and other macroeconomic variables to shocks to the bank capital ratio. As a result, we can obtain quantitative estimates of how bank "innovations" or "shocks" affect the cost and availability of credit.
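The estimation and innovation accounting steps described above can be sketched as follows using the statsmodels VAR implementation. The data here are random placeholders rather than the actual Datastream and Federal Reserve series, so the printed responses are meaningless; the sketch only shows the structure of the exercise, including how the column order sets the Cholesky (causal) ordering used to orthogonalize the shocks.

```python
import numpy as np
import pandas as pd
from statsmodels.tsa.api import VAR

# Placeholder quarterly series standing in for 1990Q3-2010Q4 (82 observations).
rng = np.random.default_rng(0)
cols = ["gdp_growth", "inflation", "fed_funds", "commodity_growth",
        "loan_growth", "capital_ratio", "loan_spread", "lending_standards"]
data = pd.DataFrame(rng.normal(size=(82, len(cols))), columns=cols)

model = VAR(data)        # column order determines the Cholesky causal ordering
results = model.fit(2)   # lag length would normally be chosen by a formal test

irf = results.irf(12)    # impulse responses traced over 12 quarters
# Orthogonalized response of loan growth to a shock in the capital ratio.
loan_idx = cols.index("loan_growth")
capital_idx = cols.index("capital_ratio")
print(irf.orth_irfs[:, loan_idx, capital_idx])
```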
To model the relationship as validly as possible, we transformed the variables to ensure that they were stationary, selected the appropriate lag length using a formal test, tested formally for the stability of the system, determined a reasonable ordering of the variables, and conducted sensitivity tests. Our base results rely on impulse response functions using the following causal ordering of the variables: GDP, GDP deflator (inflation), federal funds rate, commodity spot prices, loan volumes, capital ratio, loan spreads, and lending standards. However, we also obtained impulse response functions using an alternative ordering that gave causal priority to the banking sector variables. Although these are two extremely different ordering schemes, we found that the results were only mildly sensitive to the decision to give causal priority to the macroeconomic variables. For example, using the standard ordering of the variables, we found that a 1 percentage point shock to the capital ratio yields peak effects on loan volumes and lending spreads of 0.96 percentage points and 14 basis points, while the alternative ordering produced peak effects of 1.4 percentage points and 17 basis points, respectively. Nevertheless, our base estimates use the average of the outcomes for the two different orderings of the variables: (1) where the macro variables are given causal priority and (2) where the bank variables are given causal priority. We also varied the functional form in some sensitivity tests, including changing the time period analyzed and using different proxies for loan volumes and bank capital. In some sensitivity tests, we excluded the effects of the global financial crisis by running the model on the time period from the third quarter of 1990 to the third quarter of 2008. The parameters estimated over this shorter period generally resulted in smaller effects on loans but larger effects on loan spreads. One finding in the literature is that C&I loans are more sensitive to changes in capital. As a result, we looked directly at the response of C&I loans to a capital shock. Our results were consistent with the literature, and we found an impact of capital changes on C&I loan volumes of about twice the size of the impact for aggregate loans. Specifically, for C&I loans, we found that a 1 percentage point decline in the capital ratio results in a 2.4 percentage point decline in loan volume growth and a 21 basis point increase in loan spreads. The VAR methodology, while containing some advantages over other modeling techniques, has particular limitations, and therefore results using this approach should be interpreted with caution. First, the methodology potentially overstates the quantitative effects of shocks on the economy and can be difficult to interpret. Second, the results are heavily influenced by market and macroeconomic conditions in place during past periods of large changes in the modeled variables, so they may not be informative if similar shifts take place under different circumstances. Also, because the statistical relationships are estimated from aggregate historical data, the model may not be fully informative about how economic actors will respond to future policy changes. Third, the model parameters are aggregate estimates and may not generalize to the specific circumstances of some banks. Fourth, causal priority is given to some variables over others in order to conduct meaningful assessments of the impacts of shocks to the system.
Our results, however, are not particularly sensitive to this ordering, although we do obtain larger impacts of bank capital on lending activity with some alternative orderings. To minimize this limitation, our estimates are an average of a model where causal priority is given to the macroeconomic variables and a model where causal priority is given to the bank variables. It should also be noted that VAR shocks reflect omitted variables. If the omitted variables (factors or information) correlate with included variables, then the estimates will contain omitted variable bias. Lastly, in our particular case, the impulse response functions have wide confidence intervals, suggesting considerable uncertainty in the results. Despite these limitations, the VAR approach is considered to be a reasonable alternative to other types of models. Users of the report should be aware that the VAR methodology represents one approach to analyzing the effect of bank capital on lending activity. As a result, we believe the results should be analyzed in the context of the wider body of literature on the issue. Table 9 identifies studies that we used to compare our results for reliability and consistency. To assess the impact of the inability of banking institutions with greater than $500 million in assets to include newly issued hybrid securities as Tier 1 capital, we utilized a modified version of a loan pricing model following Elliott (2009, 2010). This methodology is designed to illustrate that banking institutions have multiple options for adjusting to more costly forms of Tier 1 capital and allows us to consider these different scenarios and show the implied change in lending rates. Given the variety of ways that banks can adjust and the degree of competition in loan markets, we found that the impact on lending rates will likely be modest. Our framework is a simple mathematical model that is based on a loan pricing equation where the price of the loan is such that it must at least cover the weighted cost of capital, expected credit losses, and administrative expenses. We augment the equation found in Elliott (2009, 2010) by decomposing equity into common equity and equitylike instruments (hybrid capital) that qualify as Tier 1 capital. Assuming that the loan is priced so that the rate charged at least covers the weighted cost of capital and that institutions hold common equity and hybrid capital as equity, we can write the following:

L*(1-t) >= E*(EK*rce + EH*rtps*(1-t)) + (D*r + C + A - O)*(1-t)

where
L = effective interest rate on the loan
t = marginal tax rate for the bank
E = proportion of equity backing the loan
rtps = required rate of return (yield) on the marginal hybrid securities (trust preferred securities)
rce = required rate of return (yield) on the marginal common equity
EK = proportion of equity held as common equity
EH = proportion of equity held as hybrid securities (trust preferred securities)
D = proportion of debt and deposits funding the loan
r = effective marginal interest rate on D
C = the credit spread (equal to the probability-weighted expected loss on the loan portfolio)
A = administrative and other expenses related to the loan
O = other offsetting benefits to the bank of making the loan

This formula is used to capture the lower cost of hybrid securities, including the associated tax benefits (the EH*rtps*(1-t) term). In practice these instruments are largely trust preferred securities. As a result, we use the yield on trust preferred securities as our proxy for the yield on the class of hybrid instruments.
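The loan pricing equation can be turned into a small calculation that solves for the break-even loan rate and compares funding with and without hybrid capital. The parameter values below are illustrative placeholders only, not the calibration described in the text.

```python
def breakeven_loan_rate(t, E, EK, EH, rce, rtps, D, r, C, A, O):
    """Smallest loan rate L satisfying
    L*(1-t) >= E*(EK*rce + EH*rtps*(1-t)) + (D*r + C + A - O)*(1-t)."""
    equity_cost = E * (EK * rce + EH * rtps * (1 - t))
    other_cost = (D * r + C + A - O) * (1 - t)
    return (equity_cost + other_cost) / (1 - t)

# Illustrative values only -- not the report's calibration.
base = dict(t=0.35, E=0.10, EK=0.88, EH=0.12, rce=0.12, rtps=0.0865,
            D=0.90, r=0.02, C=0.01, A=0.015, O=0.005)
with_hybrids = breakeven_loan_rate(**base)

# Replace hybrid capital with common equity (EH -> 0, EK -> 1.0).
no_hybrids = breakeven_loan_rate(**{**base, "EK": 1.0, "EH": 0.0})
print(f"implied change in loan rate: {(no_hybrids - with_hybrids) * 100:.2f} percentage points")
```

Under these placeholder values, replacing hybrid capital with common equity raises the implied break-even loan rate by roughly a tenth of a percentage point, which is consistent in spirit with the modest effects described above, though the actual magnitude depends entirely on the assumptions used.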
We assume that the yield on hybrid capital is 8.65 percent based on our review of a small sample of actual trust preferred securities. For smaller banking institutions, we increase the yield on hybrid capital slightly to 9 percent. For the aggregate banking sector, we assume that institutions hold 12 percent of their Tier 1 equity in the form of hybrid securities based on our analysis of banking data from SNL Financial. Similarly, based on our analysis, we assume that smaller institutions hold a larger percentage of hybrid securities as equity—19 percent. We initiated our model using the assumptions laid out in Elliott (2009, 2010) but then made modest adjustments to calibrate the loan rate to the actual yield on loans for the commercial banking sector (5.6 percent). For smaller banking institutions, we used Elliott's (2010) assumptions for banks with $1 billion to $10 billion in assets with minor modifications. For example, we assumed that smaller institutions had a higher probability-weighted loss on loan portfolios. The remaining assumptions not discussed here are contained in table 6. Our scenario analysis is designed to illustrate how the loan rate might be affected given various assumptions about banking institutions' responses and other mitigating factors. However, because there is limited empirical foundation for many of our initial values, the assumptions underlying the analysis and estimates for the loan rate should not be considered definitive. Our analysis is designed to illustrate how the cost of credit might change given various assumptions about institutions' responses and other factors, rather than arrive at precise estimates for the level of loan rates. Moreover, because we focused our analysis on the aggregate banking sector, the actual impact on and response by individual institutions can differ depending on a number of dynamics. For example, we have assumed that banking institutions have the ability to pass on higher costs to borrowers in the form of higher lending rates, to some degree. However, some institutions may have to resort to asset sales, thereby reducing the total amount of their risk-weighted assets, or undertake other actions because of their inability to pass on the higher cost of capital to customers. Appendix III: Responses to Questions from GAO's Survey of Smaller Banking Institutions We sampled 794 stand-alone banks and thrifts (those with no holding company) and top-level bank holding companies and thrift holding companies with total assets of less than $10 billion from the population of 6,733 to examine the options these smaller institutions have for raising capital. We received valid responses from 510 (64 percent) out of the 794 sampled institutions. Tables 10-24 show the responses to questions from the survey. Because we followed a probability procedure based on random selections, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we also provide the lower and upper bound estimates at a 95 percent confidence interval. The weighted response rate, which accounts for the differential sampling fractions within strata, is 66 percent. For more information about our methodology for designing and distributing the survey, see appendix I. Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Daniel Garcia-Diaz (Acting Director), James Ashley, Kevin Averyt, Emily Chalmers, William R.
Chatlos, Rachel DeMarcus, M’Baye Diagne, Lawrance Evans Jr., Richard Krashevski, Jill Lacey, Courtney LaFountain, Marc Molino, Patricia Moye, Michael Pahr, and Maria Soriano made key contributions to this report.
Hybrid capital instruments are securities that have characteristics of both equity and debt. The Federal Reserve allowed bank holding companies to include limited amounts of hybrid instruments known as trust preferred securities in the highest level of required capital (Tier 1), although other federal banking regulators never approved these or other hybrid instruments for this purpose. Responding to concerns that these instruments did not perform well during the 2007-2009 financial crisis, in 2010 the Dodd-Frank Wall Street Reform and Consumer Protection Act (Dodd-Frank Act) required regulators to establish rules that will exclude the instruments from Tier 1 capital and required GAO to study the possible effects of this provision. This report addresses (1) the use, benefits, and risks of hybrid instruments as Tier 1 capital; (2) the potential effects of the exclusion on banking institutions and the economy; and (3) options for smaller banking institutions to access regulatory capital. For this work, GAO analyzed data from financial regulatory filings and other sources, interviewed regulators and market participants, conducted economic analysis, and surveyed smaller banking institutions. GAO makes no recommendations in this report. GAO provided a draft to the banking regulators for their review and comment. FDIC and the Federal Reserve provided technical comments that were incorporated, as appropriate. Tier 1 hybrid capital instruments, particularly trust preferred securities, have been heavily used by bank holding companies because of their financial advantages, but they are not as effective in absorbing losses as traditional forms of Tier 1 capital, such as common equity. As of December 31, 2010, almost two-thirds of all top-level bank holding companies that were subject to capital requirements included hybrid instruments in their Tier 1 capital, for a total value of $157 billion. Hybrid instruments such as trust preferred securities have offered institutions the benefit of lower-cost capital, largely because of their debt-related features—including tax-deductible dividends. These instruments also are accessible to a broader range of potential investors. However, trust preferred securities do not absorb losses like other Tier 1 instruments because of their obligation to repay principal and dividends. Trust preferred securities may provide limited financial flexibility in times of stress, but they may also hinder efforts to recapitalize troubled banking institutions. Eliminating Tier 1 hybrid capital likely will have modest negative effects on the existing capital measures of individual banking institutions and lending and could improve institutions’ financial stability. Few institutions will fall below minimum regulatory capital levels without Tier 1 hybrid instruments, and banking institutions’ overall safety and soundness should improve with higher reliance on common equity. GAO’s analysis of the relationship between bank regulatory capital and lending activity suggests that any negative effects on the cost and availability of credit should be small, but the exact impact is unknown. Market participants said that losing access to tax-advantaged Tier 1 instruments could place U.S. institutions at a competitive disadvantage, as some foreign banks may still have access to such instruments. The international competitive effects are unclear, however, given the scope of ongoing worldwide regulatory reforms. 
Smaller banking institutions, which often had larger proportions of hybrid instruments as Tier 1 capital, have limited options for raising regulatory capital but indicated little unmet need for it. These smaller institutions now have access primarily to common equity raised from private sources. GAO’s survey results showed that smaller institutions consider their financial condition and performance as the most important factor affecting their ability to raise capital. Market participants identified challenges that could impact smaller institutions’ ability to raise capital, including limitations related to the size of capital raised, liquidity, and return potential for investors. However, GAO estimated that most smaller institutions (65 percent) had not raised regulatory capital since January 1, 2008, and of these, a large majority (88 percent) indicated that they had no need or interest in raising more. Further, most smaller institutions that had raised capital since 2008 were satisfied with the amount and terms involved. Only a small percentage of institutions (3 percent) that had attempted to raise capital since January 1, 2008, were unable to do so. Institutions with a stronger financial condition generally had a more favorable view of the capital raising environment.
Introduction In recent years the claims adjudication process within the Department of Veterans Affairs (VA) has been the subject of much concern and attention, both within VA and from others, including members of the Congress and veterans service organizations. In May 1995, veterans were awaiting decisions on over 450,000 compensation and pension claims and VA was taking over 5 months on average to process original disability compensation claims. Although these numbers represent some improvement over the immediately preceding years, no such improvement has been shown in appeals processing. At the end of 1994 over 47,000 appeals were before the Board of Veterans' Appeals. On average, veterans could expect to wait almost 2-1/2 years from the date of their original claim to receive a decision on these appeals. Many veterans will, therefore, experience a significant delay in receiving benefits they are entitled to. Over the last 5 years, numerous studies have made recommendations to improve VA claims processing and many of these recommendations have focused specifically on appeals processing. In response to these studies, recent legislation, and other management initiatives, VA has implemented many changes and plans more efforts to address problems at both its regional offices (VARO) and the Board. In spite of these actions, however, VA officials foresee continued problems with appeals processing. Although the recommendations were wide-ranging, three areas were most frequently targeted. Appendix I summarizes recommendations and VA actions in two of these areas: (1) improving staff performance through training, guidance, and standards and (2) increasing process efficiency through actions such as automation, revised procedures, and increased staffing. Recommendations and VA actions in the third area—ensuring that VA organizations work together to identify and resolve problems—are the focus of this report. Claims Adjudication Is Complex The adjudication of VA disability compensation and pension claims, including adjudication of appeals, is a fairly complicated process, involving many organizations in VA and often a veterans service organization representative as well as the veteran. The process is one that seeks to ensure every opportunity for the veteran to substantiate the claim. One of the 58 VAROs in the Veterans Benefits Administration (VBA) first makes a decision about a claim after obtaining all available pertinent evidence, including oral evidence at a hearing if the veteran so requests. An important part of the evidence is often a physical examination conducted by physicians in the Veterans Health Administration (VHA), which operates the many medical centers and other health facilities in VA. If dissatisfied with the VARO's decision, the veteran may file an appeal, which is decided by the Board, an independent organization within VA. The Board also may conduct a hearing if the veteran requests. The Board can allow or deny benefits or remand (return) the claim to the VARO to develop further evidence and reconsider the claim. Before 1989, the Board's decision on an appeal was final. In that year the Court of Veterans Appeals, established by the Veterans' Judicial Review Act (P.L. 100-687, Nov. 18, 1988), began to hear cases. With the Court in place, the Board is no longer the final step in the claims adjudication process. Veterans who disagree with Board decisions may now appeal to the Court.
Additionally, under some limited circumstances, either veterans or VA may appeal Court decisions to the Court of Appeals for the Federal Circuit. Veterans appeal relatively few VARO decisions. In 1994, veterans appealed 8 percent of compensation and pension cases decided by the VAROs. Of the cases appealed to the Board, about 17 percent are ultimately allowed by the VARO or Board when they consider the appeal. Adjudication Problems a Continuing Concern VA officials have recognized the critical problems they face in appeals processing and are making many changes in an effort to solve them (see app. I). Such changes could improve claims adjudication quality and timeliness; however, Board officials are skeptical about the extent to which implemented and planned changes will reduce the appeals backlog and processing time. In fact, the Chairman of the Board suggested that to solve the problems either significant additional resources must be committed or the process itself must be altered. The Congress' continued concern about VA's ability to process claims led it to create, in November 1994, the Veterans' Claims Adjudication Commission. The Commission, with representation from both inside and outside VA, is charged with studying the entire adjudication process, from the beginning through final appeal, and making recommendations for improvements. The Commission is to submit its preliminary conclusions to the Congress in November 1995 and its final report with recommendations in May 1996. Objectives, Scope, and Methodology Senator John D. Rockefeller IV, then Chairman, Senate Committee on Veterans' Affairs, and Senator Ben Nighthorse Campbell asked us to review several aspects of the Board's processing of appeals. During our preliminary work we identified several recently completed reviews of VA's appeals process. We also determined that VA had made progress in implementing two types of recommendations frequently cited in these studies—those dealing with improving staff performance and with making the process more efficient—but substantially less progress in a third area, interaction among VA organizations responsible for appeals processing. In subsequent discussions with the requesters' staffs we agreed to focus our review on (1) the current state of appeals processing, including trends in timeliness and backlog, and (2) the extent to which VA has implemented study recommendations, especially those designed to help VA organizations work together to improve claims and appeals processing to better serve the veteran. Through discussions with Board officials and others, we identified seven studies that included adjudication of veterans' appeals. The studies were completed between May 1990 and March 1995. The studies contained 89 recommendations. To better understand the intent of some recommendations and obtain further insights from those who had participated in the studies, in some cases we also discussed the recommendations with members of the study groups. We reviewed the following seven studies that evaluated aspects of the appeals process:
The 1990 GAO report, Veterans' Benefits: Improved Management Needed to Reduce Waiting Time for Appeal Decisions (GAO/HRD-90-62, May 25, 1990), discussed the lack of timeliness in VA appeals processing.
The 1992 Task Force on the Impact of the Judicial Review Act of 1988 evaluated the effect of the Court on workload and timeliness.
The 1993 Blue Ribbon Panel on Claims Processing made recommendations to shorten the time it takes to make disability decisions, including appeals.
The 1994 Analysis of Board of Veterans' Appeals Remands (VBA's study of almost 700 remanded cases) sought to improve the timeliness of the service VA provides to customers who appeal a decision.
The 1994 Board of Veterans' Appeals Select Panel on Productivity Improvement developed recommendations to reduce appeal processing time.
The 1995 VA's Compliance With the Court of Veterans Appeals (a committee appointed by the Secretary of VA) investigated issues raised by the Court's Chief Judge.
The 1995 VA Inspector General Audit of Appeals Processing Impact on Claims for Veterans' Benefits reviewed the impact of appeals and the appeals process.
We also included in our review actions taken beyond specific study recommendations, including the Board's plans for reorganization and implementation of two key pieces of legislation that mandated or allowed changes in claims processing: the Board of Veterans' Appeals Administrative Improvement Act of 1994 (P.L. 103-271, July 1, 1994) and the Veterans' Benefits Improvements Act of 1994 (P.L. 103-446, Nov. 2, 1994). Among other things, these laws allowed single-member Board decisions and required VAROs to place a priority on adjudicating appeals remanded by the Board. To determine the status of the recommendations and other actions, we interviewed officials of three VA organizations directly involved in appeals processing—VBA, the Board, and the VA Office of the General Counsel—in Washington, D.C. We also reviewed pertinent documents and records obtained from these organizations, as well as relevant VHA policies; VHA publishes the guidance used by its physicians in performing disability determination examinations. Additionally, to provide examples of the adjudication process and problems identified in the studies, we reviewed a small sample (39) of appeals that the Board remanded during the period January 27 to 31, 1995. We selected the cases from the 290 that were remanded during this period. Although the Board decides appeals about all aspects of veterans programs, including, for example, health care, home loans, and education, over 90 percent of appeals concern claims for disability compensation and pension benefits originally decided by VAROs. Therefore, we focused on these types of appeals. The Board remands appeals to the agency of original jurisdiction, which could be organizations within VA other than VBA. However, because we focused on compensation and pension appeals, we refer throughout this report to VBA, which is the agency of original jurisdiction for these types of claims. We did not assess either the efficiency of VA's claims and appeals adjudication operations or the impact that implementing the study recommendations would have on operations. Also, we did not independently verify, beyond reviewing VA central office documents, the extent or manner in which recommendations have been implemented. We did our work in accordance with generally accepted government auditing standards from December 1994 through July 1995. Appeals Process Is Increasingly Bogged Down Veterans are waiting increasingly long times for their appeals to be decided. Both the number of appeals waiting to be processed and appeal processing times have grown substantially in recent years. Similarly, the percentage of cases remanded to VAROs for additional action has increased.
The Board attributes much of this increase to additional responsibilities placed on VA and the Board in particular by the act and Court rulings. These responsibilities are seen as especially difficult to integrate into an already complicated and lengthy process. Backlog and Processing Times Greatly Increased The backlog of cases awaiting Board action has been increasing since at least 1985. However, as shown in figure 2.1, this increase began to skyrocket after 1991. Overall, the backlog increased by almost 175 percent, from about 17,000 in 1991 to over 47,000 in 1994. In large part this increase is due to the decrease in the Board's productivity—the number of decisions rendered annually. This decrease began about the time the Court began remanding cases to the Board for lack of completeness. Although the number of appeals received by the Board annually has decreased slightly in recent years, the backlog has grown substantially as the number of decisions the Board rendered decreased. The number of decisions dropped from about 45,000 in 1991 to about 22,000 in 1994. Given that the number of decisions rendered annually is significantly lower than the number of appeals being filed (about 35,500 in 1994), the backlog can be expected to continue to increase. Appeals processing time is also on an upward course. On average, appeals decided in 1990 that were not remanded took about 16 months from the time the veteran notified the VARO that he or she disagreed with the VARO's decision until the Board made its decision. In 1994, an appeal took over 24 months to process, a 50-percent increase in 4 years. During a large part of that time the appeal is "in queue" at the Board, awaiting its turn. The portion of time the appeal was before the Board—after the VARO had reconsidered the appeal based on the veteran's notice of disagreement and certified the appeal to the Board—increased 100 percent, from 6 months in 1990 to 12 months in 1994. Board officials acknowledged that as the backlog increases the processing time will also rise. Unlike VBA, the Board has not established claims processing timeliness goals. VBA has established goals for some types of claims. For example, its goal for original compensation claims is 106 days. However, the Board has not established any similar types of goals. Another measure of this increase is the Board's response time, defined as the number of days it will take the Board to render decisions on all pending appeals. The Board's response time rose from 240 days in 1992 to 781 days in 1994; as of July 1995, the response time for fiscal year 1995 was estimated at 694 days. These statistics are even more alarming because about one-half the Board's decisions are not final. Increasingly, the Board is remanding cases to the VAROs for additional development and reconsideration. The percentage of cases remanded has increased substantially. Before 1991, about 20 percent of cases were remanded annually. This percentage began rising in 1991 to its current rate of about 50 percent. The number of remands peaked at about 17,000 in 1992; there were about 11,000 remands in 1994. (See fig. 2.2.) Additionally, in 1994, over 34 percent of appeals being remanded to the VAROs had already been remanded one or more times. Remands can add substantially to Board workload. Board data show that about 75 percent of the cases remanded to a VARO are again denied and returned to the Board.
Thus, about 8,000 of the 11,000 claims remanded by the Board in 1994 can be expected to be returned; this is about 20 percent of the 35,500 appeals the Board received in 1994. Furthermore, average processing time is higher for remanded cases and also has been rising. In 1990, the Board averaged just over 17 months to process a remanded appeal, but by 1994 the time had increased by over 60 percent to almost 28 months.

Claims Adjudication Process Can Be Cumbersome

The claims adjudication process in VA has evolved over many years and seeks to provide the veteran every opportunity to prove his or her claim and to obtain benefits. The basic process, which starts with filing a claim with a VARO, allows each claim up to five considerations, as follows:

1. VARO staff decide whether to grant the benefits after obtaining and considering all relevant information, such as military service and medical records, and in most cases a physical examination by a VHA physician.
2. If the veteran notifies the VARO that he or she disagrees with the decision, the VARO will reconsider the case and, if it does not grant the requested benefits, will issue a statement of the case summarizing the reasons.
3. If the veteran still disagrees, he or she can file an appeal. The VARO will again reconsider the case and, if all claimed benefits still are not granted, will send the appeal to the Board. However, any amount granted by the VARO is received by the veteran while the amount under appeal is being decided by the Board.
4. The Board makes its decision.
5. If benefits are still denied, the veteran can take his or her claim to the Court.

But this process can become increasingly complex. At any time before a claim is appealed to the Court, the veteran can ask for, and will be given, an opportunity for a hearing. If the hearing is before a VARO hearing officer, the hearing officer may also consider the claim and grant benefits if the evidence presented at the hearing warrants it. If the claim has already been appealed, a Board member would conduct the hearing and consider the evidence presented in arriving at a Board decision. Much of this process, such as when veterans may receive hearings and from whom, is set forth in legislation and regulations. The veteran may also introduce new evidence during the process, requiring VA to then obtain that evidence and decide if it is relevant and, if so, consider it in the decision. Under VA regulations, unless the veteran shows good cause, all new evidence must be submitted within 90 days after the VARO notifies the veteran that his or her appeal has been forwarded to the Board. The Board is usually liberal in its interpretation of good cause, according to a Board official; thus, veterans frequently submit new evidence after the 90-day period. Also, the veteran may claim a new or increased disability after the appeal is in process, in which case, unless the veteran waives his or her right for reconsideration, the VARO must complete any required additional development, often including a new physical examination, and then reconsider the claim. Again, the veteran’s right to revise a claim during the adjudication process is set forth in legislation, regulations, and Court decisions. Figure 2.3 illustrates this process. If the Board remands the claim to the VARO, much of the process may be repeated, since the remand usually will require additional development and reconsideration.
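The figures in this chapter lend themselves to a rough back-of-the-envelope check. The sketch below is a minimal illustration, not VA's or the Board's own workload model: the input numbers are those cited above, but the assumption of constant remand and return rates, and the resulting "reviews per appeal" multiplier, are ours.

# Illustrative sketch (not VA's own model): rough arithmetic behind the
# workload figures cited in this chapter. Inputs are taken from the report's
# text; the "reviews per appeal" multiplier is a simplification that assumes
# the remand and return rates stay constant.

appeals_received_1994 = 35500   # appeals filed with the Board in 1994
decisions_1994 = 22000          # Board decisions rendered in 1994
backlog_end_1994 = 47000        # appeals awaiting Board action
remands_1994 = 11000            # cases remanded to VAROs in 1994
remand_rate = 0.50              # share of Board decisions that are remands
return_rate = 0.75              # share of remanded cases denied again and returned

# The backlog grows whenever receipts exceed decisions.
net_backlog_growth = appeals_received_1994 - decisions_1994        # about 13,500 per year

# "Response time" is the number of days needed to decide everything now
# pending, assuming decisions continue at the current pace.
response_time_days = backlog_end_1994 / (decisions_1994 / 365)     # about 780 days (report: 781)

# Remands feed work back into the queue.
expected_returns = remands_1994 * return_rate                      # about 8,250 cases

# Simplified steady-state multiplier: each appeal generates, on average,
# 1 / (1 - remand_rate * return_rate) Board reviews.
reviews_per_appeal = 1 / (1 - remand_rate * return_rate)           # about 1.6 reviews

print(net_backlog_growth, round(response_time_days), round(expected_returns), round(reviews_per_appeal, 1))

Computed this way, the 1994 data imply a response time of roughly 780 days, consistent with the 781 days reported above, and suggest that the remand loop alone causes the Board to handle a typical appeal about 1.6 times before the case is finally resolved.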
Additionally, even if the veteran has received a final decision on a claim he or she may seek to reopen the claim with the submission of new and material evidence. This is considered a new claim in VA’s processing system. Board officials pointed out that even if the evidence is not new and material a decision by both VBA and, if the decision is appealed, by the Board is required to adjudicate the claim. Officials Cite the Act and Court Decisions as Key Factors in Increased Board Workload VA officials stated that increased responsibilities imposed on VA by the 1988 act and Court decisions have contributed directly to the substantial increase in claims and appeals processing times as well as to the increased number of remanded decisions. Board officials stated that the impact of the act and the Court began to be felt in 1991, after the Court began issuing decisions in 1990. Officials cited increased responsibilities in two areas as having the greatest impact: the need to fully explain the reasons and bases for decisions and VA’s duty to assist the veteran in filing a claim. Board officials point to the expanded requirements to explain the reasons and bases for their decisions as one of the key reasons for the 50-percent reduction in Board productivity. They noted, for example, that these requirements, such as specifically discussing the merit and weight given to each item of evidence, have substantially increased the complexity of each decision in terms of both its length and the time it takes to prepare it. The decrease in the Board’s productivity is also reflected in a substantial growth in the per-case costs that the Board incurs to process appeals. As figure 2.4 shows, the Board’s per-case cost was relatively stable until 1991, when it began to increase rapidly. The costs grew from about $400 in 1990 to $1,250 in 1994. The increased duty-to-assist requirements fall heaviest on the VAROs, which have primary responsibility for developing claims. However, given that the failure to comply with some aspect of duty to assist is a reason for the majority of remanded cases, the Board, too, feels the impact. Data from the February 1994 VBA remand study show that 60 percent of the remands involved the need to accomplish additional case development; that is, in the Board’s view, VAROs had failed to satisfy their duty-to-assist responsibilities by not making sufficient effort to obtain some type of evidence in support of appellants’ claims. Board data also point to the substantial impact of the duty-to-assist requirement. The data show that claims are remanded, on average, for 2.6 reasons. The Board categorizes these data into 20 reasons; only 4 are cited in over 20 percent of the cases and 3 of those directly relate to duty-to-assist requirements. Officials also noted that both the act and Court decisions have placed greater emphasis on procedural requirements, resulting in remands for failure to follow technical procedures. The following case serves as an example. A veteran’s ex-wife was appealing a VARO decision not to apportion part of the veteran’s pension for the period of time they were separated before their divorce. The VARO denied the ex-wife’s claim because the veteran had provided financial support during that time and apportioning his benefits would have caused an undue financial hardship. The Board remanded the appeal, directing the VARO to comply with contested claim procedures by notifying the veteran of the appeal. 
Yet the claim file indicates that the veteran was aware of the appeal, because his representative provided evidence to the Board at an informal hearing on the appeal. Board and VBA officials agree that in many of these cases the likelihood of a change in the decision—that is, in the veteran being granted benefits—is virtually nonexistent. In the past, officials said, the Board often would not have remanded such cases because the final result would have been the same. Now, with the possibility of judicial review before them, the Board does remand the case to ensure all necessary procedures are followed. Finally, both Board and VBA officials said that cases often are remanded so that the VARO can apply a new rule—that is, a rule articulated by the Court after the VARO makes its decision but before the Board closes the case. Court decisions must be applied to all pending cases. Thus, even if the VARO applied the law appropriately at the time that its decision was made, if an applicable Court decision is issued before the Board makes a final decision, the Board will remand the case to the VARO to apply the newly stated requirements. For example, in one case a VARO denied death compensation to a veteran’s wife based on VA regulations that were in effect at the time. While the case was pending before the Board, the Court held that a portion of the regulation used in deciding the case was invalid. Therefore, the Board remanded the case. Data available from the Board indicate, however, that this is a relatively infrequent occurrence. No more than 6 percent of the cases remanded were because of laws, regulations, or Court decisions that became effective after the VARO’s decision. Current Process May Contribute to Problems Some officials we spoke to see the process itself as part of the problem, and the studies we reviewed made several recommendations to change various aspects of the process. Legislative or regulatory changes are needed before these recommendations can be implemented, however, and in some cases concerns have been raised about the possible negative impact on veterans seeking benefits. One recommendation limits the time in which the veteran may introduce new issues to the appeal. The Inspector General found that allowing veterans to introduce new evidence and issues at virtually any time in the appeals process caused significant delays in many cases. In March 1995 he recommended that veterans not be allowed to add new issues or evidence once the VARO certified the appeal to the Board; a new appeal would have to be filed on new issues. A recent VBA study of a small sample of appeals seems to support the potential benefit of this recommendation. The study found that in all cases the hearing officer’s decision was correct (over one-half the cases had a VARO hearing); only after the hearing and after new evidence and issues had been included were problems found. VBA is currently considering how to implement the Inspector General’s recommendation, including deciding if a legislative change is necessary. Another recommendation seeks to change the process in another way. The Board’s Select Panel on Productivity Improvement called for the Board to obtain the evidence needed to decide appeals. Thus if additional information were needed, the Board would obtain it directly from the source rather than remanding the claim back to the VARO to get it. 
This recommendation is intended to reduce the time spent preparing cases by eliminating the need for formal statements between the Board and VAROs about additional required evidence. In addition, the chairman of the panel noted that the suggested change would use Board members and staff attorneys to prepare decisions, rather than VARO staff who are not attorneys, and thus better ensure that the decisions meet the Court’s requirements. VA is drafting revised regulations to enable it to test this approach. Another recommendation by both the Select Panel and VA’s Inspector General is to allow VARO hearing officers to conduct hearings for the Board. The studies point to several positive benefits of this change, including a reduction of travel costs for Board members and the opportunity for more cases to be resolved in VAROs. A legislative change is needed to implement this recommendation. The Inspector General also recommended eliminating the requirement for statements of the case for disability claims. Currently, VAROs must prepare a statement of the case for each claimant who initiates an appeal. The purpose of this statement is to aid the claimant by describing the issues on appeal and summarizing the evidence of record, applicable laws and regulations, and reasons for the decision. The report suggested that VA’s responsibility to provide the claimant with the legal citations on which the decision is based could be met by including those citations in the rating decision rather than in the statement of the case as is currently done. Again, a legislative change is needed before VA can implement this recommendation. Concerns have been raised about some of these recommendations, most frequently focused on the potential negative effect on those seeking benefits. Both veterans service organizations and VA’s Inspector General disagree with the recommendation to have the Board obtain information directly, concerned that already overburdened Board staff would get further behind. While this concern might be overcome by adding staff to the Board, perhaps from VAROs, some veterans service organizations were also concerned that this recommendation would eliminate one step in the process that could allow cases to be granted by VAROs. Likewise, the Disabled American Veterans representative on the Select Panel disagreed with the recommendation concerning hearing officers, stating that it would prevent the appellant from having the hearing before the person who would decide the case. Conclusion The appeals process is very cumbersome and additional efforts to identify ways to streamline it are warranted. The studies of the appeals process included in our scope did not fully review the process itself, much of which is legally mandated. In addition, some of the process changes recommended in recent studies may be seen by many as reducing veterans’ rights. Decisionmakers will have to weigh the benefits of providing individual veterans with unlimited access to the system against the impact that access has on the system as a whole and all veterans seeking benefits from VA. Interaction Among VA Organizations Needed to Ensure Effective Service to Veterans Since 1990, at least four studies have made recommendations aimed at ensuring that VA organizations work together to improve claims and appeals processing to better serve veterans and their families. VA officials point to a number of formal and informal meetings and working groups that they believe meet the intention of the studies’ recommendations. 
However, we found evidence that existing mechanisms do not always identify, or are slow to resolve, important problems in adjudication—problems that require input from several organizations to attain resolution. Our work, for example, identified several areas in which different organizations either interpreted or applied laws and regulations differently. These types of differences not only contribute to inefficient adjudication, but also inhibit VA’s ability to clearly define its responsibilities and the resources necessary to carry them out. Lack of agencywide consensus about its responsibilities, in turn, inhibits VA’s ability to identify meaningful solutions to the current claims and appeals processing difficulties. Meanwhile, veterans wait longer and longer for decisions on their claims.

VA Organizations Operate Independently

The Board, VBA, and VHA are independent of each other and report directly to the Secretary. All are bound by the same laws and regulations, but they also issue independent policy and procedural guidance. Guidance issued by one does not apply to the others. Although VHA and VBA provide guidance for claims adjudication by the 58 VAROs through VBA’s claims processing manual and VHA’s physicians’ guide for conducting examinations, the Board is not involved in developing or reviewing that guidance. VA’s Office of General Counsel, which also reports directly to the Secretary, may issue precedent opinions that also are binding on all VA organizations, including the Board. (See fig. 3.1.) There is no one place where the Board’s interpretation of laws, regulations, and Court decisions is set forth and available for other VA organizations to use in establishing policies and procedures. The Board’s role is to render decisions about the legal adequacy of VA’s implementation of laws and regulations, including Court decisions, in specific cases. Its legal authority does not include setting policy or issuing rules for VA claims adjudication. As legally constituted, each of the Board’s approximately 60 members is responsible for drawing individual conclusions from Court decisions and laws. To state specific interpretations about adjudication requirements to which all Board members are bound would, according to Board officials, be beyond the Board’s authority.

“Articulating the reason(s) for providing the claims folder to the [physician conducting the disability examination] will better enable the [VARO] to discern the situations in which it is necessary to do so in other cases, thereby eliminating the necessity for remand for reexamination when the claims folder had not been available to the [physician].”

VA headquarters does not have a central point that reviews Board remands or reversals on an ongoing basis and could identify trends in Board interpretations of Court decisions that signal the need for changes or clarification in adjudication policies and procedures. Soon after a Court decision affecting how VAROs adjudicate claims is issued, a teleconference is held with staff from each VARO and representatives of VBA, General Counsel, and the Board. Additionally, VBA, in consultation with General Counsel staff, puts out written guidance shortly after such Court decisions. This written guidance has limitations, however. First, the Board does not participate in developing it. Additionally, the full impact of Court decisions may not be immediately clear, so the written guidance may not capture all needed changes.
Board interpretations of Court decisions can evolve over time as individual Board decisions more fully or clearly articulate the principles included in specific Court decisions. Currently, these Board decisions are sent directly to the VARO responsible for the initial decision. Under current procedures, therefore, staff in each of the 58 VAROs independently implement Board decisions. One of the conclusions of the recent report from the Secretary’s Court of Veterans Appeals Fact-Finding Committee—that VARO staff do not understand the underlying legal principles of Court decisions and, therefore, cannot apply them in “similar” cases—suggests the current approach has not been effective. Figure 3.2 depicts the complex framework of the adjudication process. Many different organizations or agencies are involved, including 58 VAROs and nearly 60 individual Board members, and they have requirements or guidance placed on them from many different sources. Recommendations for More Interaction Not Implemented, Other Actions Taken Four of the studies that we reviewed raised issues that related to the need for organizations within VA to work together to improve claims and appeals processing. VA does have a variety of communication mechanisms and is cognizant of the importance of having all the organizations working together to support the veteran. However, VA has not implemented specific recommendations aimed at improved interaction, believing that existing mechanisms fulfill the intent of those recommendations. Studies Recommend Formal and Ongoing Interaction Among VA Organizations As early as 1990, we identified and reported unresolved problems in appeals processing that involved more than one VA organization. We recommended that a focal point be established to lead efforts to resolve problems and obtain cooperation among the VA organizations involved in the appeals process. In 1992, VA’s Task Force on the Impact of Judicial Review recommended that representatives from organizations in VA work together in a small group to develop strategic, coordinated initiatives responding to judicial review. This type of concern was echoed more recently by both the May 1994 VBA Analysis of Board of Veterans’ Appeals Remands and the June 1994 Board of Veterans’ Appeals Select Panel on Productivity Improvement. The VBA analysis recommended that an active working group be established with representatives from the Board, VBA, and where appropriate VHA to address problem areas so that VA as a whole could provide better service to the veteran. Likewise, the Select Panel recommended that a steering group composed of VBA, VHA, the Board, and veterans service organizations meet at least quarterly to recommend changes concerning the quality and timeliness of claims processing. Some officials we spoke to also suggested that the organizations may have difficulty working together to resolve issues, especially those that directly affect individual organization resources. Some suggested that a broader organizational restructuring may be needed. For example, staff familiar with VBA and Board activities suggested that working groups comprised of representatives of equal status—that is, without designating someone to be in charge—are ineffective. Likewise, the chairman of the Select Panel on Board Productivity noted that someone above each of the organizations needed to be involved to ensure resolution of problems. Others familiar with the system also suggested a variety of changes. 
The Chief Judge of the Court, expressing dissatisfaction with VARO implementation of Court decisions, suggested that VAROs should be placed directly within the chain of authority of the Court. Likewise, an official in VA’s General Counsel suggested that Board members be stationed in the VAROs, improving communication and reducing processing time. Similarly, the chairman of the Select Panel on Board Productivity suggested that the introduction of the Court created an entirely new set of circumstances and that the Board’s role may need to be fundamentally changed. He pointed out, for example, that as the process is currently structured, the Board and VBA hand appeals back and forth to each other and neither organization is held accountable for the total action. The Chairman of the Board has also noted that since the Court became operational the Board’s role is a hybrid—on the one hand judging VARO decisions and on the other making its own decisions about appealed cases. While not suggesting a specific resolution of the issue, he has suggested that it may be necessary to redefine the Board’s role. VA Officials Point to Extensive Communication Among Organizations Officials pointed to a variety of formal and informal communications involving organization officials and staff. Most involve at least VBA and the Board and concern claims adjudication. In general, VA officials noted that representatives of the VA organizations meet frequently as the need arises. They also pointed out that many of the authors of the studies we reviewed were representatives from several key VA organizations, thereby assuring that they worked together to identify problems and solutions. More specifically, in the fall of 1994, the Deputy Secretary of VA started holding monthly meetings with the Chairman of the Board and the Under Secretary for Benefits to discuss claims processing issues. The Chairman of the Board also noted that he meets with VA’s General Counsel and the Under Secretary for Benefits as needed to iron out any problems that come up. Although VA never established a focal point as we recommended, the Chairman said that the meetings between key officials, in essence, ensure a focal point for agency actions in appeals processing. He could not identify specific policy or procedural changes resulting from these meetings, explaining that the meetings would be only a part of the process for developing such changes. VA officials told us that the ongoing, formal strategic planning process—preparing a written strategic plan—does not include a focus on responding to requirements of judicial review. They said that this recommendation, which was made in September 1992, was not considered pertinent when the new administration took over in January 1994 with new strategic planning priorities. However, the Chairman again pointed to the meetings of key organization officials as, in essence, constituting a strategic planning forum. The Chairman suggested that under earlier administrations the key VA organizations were, perhaps, less willing to work together, but that the current Secretary has put a priority on VA organizations working together to serve the veteran. He also said that actions such as those taken as part of the budget process and in response to the Select Panel recommendations demonstrate that VA is, in fact, addressing issues resulting from judicial review. 
Likewise, the working groups recommended by the VBA Remand Analysis and the Select Panel were not established, even though officials in both VBA and the Board had agreed to do so. VA officials reiterated that many meetings already occur between representatives of the organizations. Officials cited triad meetings—monthly meetings of staff from the Board, VBA, and General Counsel—as especially significant. Officials said that these meetings are intended to resolve intra-agency issues. Staff members who attend these meetings told us that the primary focus of the meetings is on individual cases, for example, agreeing on a Department position on a claim appealed to the Court, rather than on more general procedural or policy issues. VA officials pointed out, however, that the need for policy or procedural changes can be identified from discussions of individual cases and provided an example of a procedure that was worked out between the Board and VBA at those meetings. (It dealt with whether the Board or VBA would notify the claimant about a particular type of Board decision.) VA officials also said that they were reluctant to establish working groups such as that recommended in the Select Panel report to identify problems because doing so would duplicate the efforts of the Veterans Claims Adjudication Commission. The Commission is charged with making recommendations to improve VA’s claims adjudication process. Commission officials told us that they were sympathetic to VA’s reluctance to devote resources to yet another study of the claims processing issue or to develop solutions now that might be overtaken by recommendations of the Commission. However, they were uncertain about the extent to which their study and recommendations would include the overall organizational structure or interaction among the various VA organizations responsible for claims adjudication. They also said that it may be inappropriate for VA to postpone actions to identify and resolve ongoing problems pending the Commission’s report.

Organizations’ Interpretations of Some VA Responsibilities Are Inconsistent

A key area requiring cooperation and communication among VA organizations is ensuring consistent interpretation and application of legal requirements within VA. Doing so is critical to fair and efficient claims adjudication. Yet despite the extent of ongoing discussions, we found evidence that organizations continue to interpret some requirements differently. For example, we found differences in interpretations in several aspects of VA’s duty to assist veterans in filing claims. Two major areas of concern relative to VA’s duty to assist are the sufficiency of documentation and the adequacy of medical examinations required to support a veteran’s claim.

Officials Have Different Views About Requirements for Documentation

VBA officials have indicated concerns about the Board’s interpretation of VA’s responsibility to assist veterans in developing their claims as stated in some cases remanded to VAROs. In discussions with us, for example, they questioned Board directions to (1) try repeatedly to obtain information from sources such as the Social Security Administration and the military services, (2) solicit from the veteran information about all records that might possibly exist rather than relying on the veteran to identify pertinent information and records, and (3) obtain records that officials know will not support the veteran’s claim.
Case Study Example

One case we discussed with VBA and Board officials demonstrates some of the concerns that VBA raised. The veteran maintained he was struck in the back by shrapnel during an explosion on a ship in November 1944 or February 1945. He said that he sought treatment in sick bay and has suffered from back problems since his discharge. The veteran filed for disability in 1946, but was denied service connection because the condition was not shown during service. The decision was not appealed and became final. In October 1991 (45 years after the initial claim was denied) the veteran filed to reopen his claim. He requested a VA examination but did not submit additional evidence. The claim was denied in November 1991. The veteran appealed in July 1992, at which time he submitted new evidence, including service personnel records that indicated that he had a scar on his back before entrance into the service. At a Board hearing in December 1992, the veteran submitted additional evidence, including a statement from a service friend and evidence of other care received from private medical providers. The friend said that he remembered hearing that the veteran was hit with shrapnel during an explosion. The Board remanded the case to the VARO in January 1995. The following summarizes the actions required by the Board in this remanded case and comments by VBA and Board officials. Those comments highlight some of the potential areas of disagreement.

Board Remand Action 1

The VARO should request from the veteran as much detail as possible about the explosion and injury, such as dates, times, units, and the names of others involved. The VARO should prepare a summary of the veteran’s account and send it to the service department or the official depository of U.S. Navy records. The VARO should request a search to ascertain whether there is evidence to corroborate the explosion and location of the veteran at the time of the explosion.

Officials’ Comments

VBA officials said that they normally would not request this detail from the veteran. If the veteran volunteered the information, they would record it and verify it, if possible. Board officials said that although the records requested do not specifically support the claim, they could help support the veteran’s general credibility.

Board Remand Action 2

The VARO should make additional attempts to obtain copies of Daily Sick Reports or any other document from the veteran’s service unit for the dates of treatment he reports.

Officials’ Comments

VBA officials said they would not normally obtain these records; Daily Sick Reports only show that the service member reported to sick bay, but do not give details about the case. Again Board officials said the information would help establish the veteran’s credibility.

Board Remand Action 3

The VARO should request from the veteran the names and addresses of all health care providers who have treated him for a back disorder since his discharge. After securing the necessary releases, the VARO should obtain copies of all treatment reports and hospital treatment folders not already on file. If any records identified by the veteran are not available, that fact and the reason(s) should be annotated in the claims folder.

Officials’ Comments

VBA officials said that they would not seek this information and normally would not obtain these records. They believe that the issue is service-connection, not whether the veteran currently has a disability.
Board officials noted that under Court decisions, VA has a duty to assist the veteran in developing the facts pertinent to a well-grounded claim, including all medical evidence, even where the additional evidence submitted is inadequate to reopen the claim.

Board Remand Action 4

After actions 1 through 3 have been completed, the veteran should receive a VA examination to determine whether he has residuals of a shell fragment wound to the back. A copy of this remand and the claim folder must be made available to and reviewed by the examiner before the examination. The examiner should render an opinion as to the probability that any current back disorder is the result of a shrapnel wound to the veteran’s back during service. The examiner should provide complete rationale for all opinions and conclusions reached.

Officials’ Comments

VBA officials said they do not believe a new exam is pertinent to the issue and that, if ordering an exam, they would not normally provide the claim file or seek the medical opinion specified by the Board. Board officials said that the veteran requested a VA exam and VA’s duty to assist includes the duty to conduct a thorough and contemporaneous exam. They also said that because the Court has ruled that the Board must have independent medical opinions, they need the VHA physician to state an opinion about the relationship of any current disability to the alleged service incident. Also, a thorough exam, as defined by the Court, includes the review of the entire file by the physician.

Board Remand Action 5

The VARO should formally adjudicate whether new and material evidence has been submitted to support reopening the claim.

Officials’ Comments

Both VBA and Board officials agreed that before the 1988 act and several Court decisions, the Board would not have remanded this case. The Board would have determined if the evidence was new and material and, if not, would have upheld the regional office’s decision, even though the VARO may have neglected to articulate its decision as to whether the evidence was new and material. VBA officials said that the type of development required by the Board in this and other similar remand decisions is, in many cases, either futile (since the requested records will not be found) or not pertinent to the specific issue at hand. Board officials agreed and also said that even when the additional requested evidence is obtained, the benefits are often not granted. However, they stated that the additional development is needed to meet the act’s requirements for duty to assist as interpreted by the Court. They stated that VA cannot tell before reviewing the documents if the benefits would be granted and, further, that Court decisions do not limit VA’s responsibility for development (part of VA’s duty to assist) to cases where the benefits might be granted. They said that they could not tell which cases might be appealed to the Court and, therefore, they believed VA must “Court-proof” all decisions. Thus, this level of development is needed to ensure that the Court will not remand a decision to the Board. In a meeting with both VBA and Board officials, VBA officials stated that they did not know of any general disagreements they had with the Board’s interpretations but that they did disagree in individual cases such as that cited above. However, they also told us in a separate meeting that although this case is somewhat atypical in terms of the number of actions required, it is typical of the types of actions seen in many of the remand decisions.
They said that they would not follow the procedures outlined in this remand in other cases unless again directed to do so in a remand decision; current regulations do not require them to do so. Given this, it seems unlikely that the goal stated in the 1991 memorandum by the Chairman—that VBA staff will use remanded decisions to understand and apply the legal principles to like cases—will be met in these types of cases.

Officials Disagree About Requirements for Medical Examinations

One aspect of duty-to-assist requirements demonstrates the existence of this disagreement and its impact—the requirements for medical examinations. As discussed in the case study above, Board officials indicated that they have interpreted Court decisions since 1991 to require that, in the majority of cases, physicians conducting disability compensation medical examinations have the entire claim file so that the examination can be a fully informed (thorough) one. In contrast to this requirement, however, guidance used by VBA adjudicators in requesting examinations specifically states that, normally, physicians will not be given the claim file. Officials from VBA said that no one has clearly defined what is necessary for a thorough examination and that VAROs may be accepting a somewhat lower standard for “thorough” than the Board.

“Physicians clearly must be assisted to better understand their role in the adjudicative process and the legal implications of their statements.”

This example demonstrates another troubling aspect of interaction among the organizations. VBA officials were apparently unaware that the guidance did not conform to the Board’s interpretation of an adequate examination. When we raised this issue during our review, VBA officials said that they had not heard the Board indicate that the law required the claim file to be available for virtually every examination or that an opinion about the origin of any disability be stated. Other medical examination issues involving inconsistent interpretations have also been long-standing concerns. One relates to how current a medical exam needs to be for cases sent to the Board. In 1990, we reported that VAROs were using different standards; some delayed decisions to obtain new exams for appealed claims, others did not. We recommended that VA provide VAROs guidance on this issue. The 1991 Chairman’s memorandum to Board members pointed to additional issues related to inconsistencies between Board interpretations of requirements and VARO staff practices. These included the need for examinations by specialists, the need for the physician to review the entire claim file before completing a physical examination, and the need for VAROs to “try again” to obtain information. And, again in 1994, the VBA Analysis of Remands recommended that the Board and VBA work together with an immediate aim of developing guidance on medical examinations, including guidance on age of examinations and need for specialty examinations, issues raised in 1990 and 1991. Having VBA policies and practices that differ from Board interpretations contributes to inefficient claims adjudication and to the remand rate. However, Board officials cautioned that appeals may be remanded for several reasons and changing policies and practices in the areas cited above may not affect the remand rate overall. For example, even if the claim file has been provided and an opinion expressed, if the examination was inadequate for other reasons, such as needing a specialty examination, the case would be remanded.
Concerns and Inconsistencies Are Resolved Slowly, If at All VA’s Office of General Counsel can issue interpretations of VA’s responsibilities that are binding on all VA organizations, including VBA and the Board. However, VA organizations have been slow to identify questions needing General Counsel opinions and slow to seek resolution once they have been identified. If questions about interpretations arise between the Board and VBA or VHA or if requirements need to be clarified, the organizations involved can request a precedent opinion from the VA General Counsel. In addition to the laws and regulations, each of the organizations is bound by these General Counsel precedent opinions. However, for issues to be raised to the General Counsel, they must first be identified as problems. As discussed above, these inconsistencies are not always identified under the current organizational structure and adjudication process. But even after problems are identified, the agencies are slow to react. For example, although we pointed out the need for guidance on currency of examinations in 1990, VBA did not request and receive a General Counsel decision on this issue until 1995. A Board official indicated that although the issue of age of the examination was raised in 1990, the issue did not have any significant impact on claims and appeals processing until much later, at which time a General Counsel opinion was requested and obtained. However, the May 1994 VBA Remand Analysis raised concerns about Board directions concerning age of the examinations (that is, the Board was remanding cases because the examination was old), yet it took until February 1995 for VBA to seek a General Counsel opinion. The 1994 VBA Remand Analysis itself demonstrates other difficulties in interaction among VA organizations. Although the study was completed in May 1994, VBA did not send a copy of the report to the Chairman of the Board until late August 1994, with a letter apologizing for forgetting to send the study. The Chairman immediately responded, agreeing to the recommendation to establish a working group to develop guidance on adequacy of examinations, but no such group was formed. Neither Board nor VBA officials could provide a reason why it was not established but suggested, perhaps, that no one took the lead. Likewise, some issues raised in the Chairman’s 1991 memorandum apparently are still not resolved. For example, the Chairman indicated that procedures about obtaining specialty examinations were not always consistent with Board interpretations, yet one recommendation of the 1994 VBA remand study was that VBA and the Board needed to develop guidance on this issue. Lack of Clearly Defined Requirements Hinders VA’s Ability to Assess and Solve Appeals Processing Difficulties A clear and consistent agencywide interpretation of VA’s responsibilities is essential to resolving the claims adjudication dilemma. Additionally, a clear understanding of VA’s responsibilities is necessary for a meaningful analysis of the resources needed to meet those responsibilities and, in turn, for developing solutions to overcome problems, including resource constraints. Both Board and VBA officials have indicated concern that their organizations do not have the resources to meet the requirements set forth in the act, especially Court and Board interpretations of those requirements. 
An official in the Office of the Secretary stated that he is not yet convinced that resource limitations cannot be overcome by improved efficiencies and other initiatives, but other officials in the organizations responsible for claims processing are less optimistic. A meaningful assessment of needed resources, however, is not feasible without a clear understanding of responsibilities. Board officials emphasized to us that although continued and improved interaction among VA organizations is important, they believe it is unlikely to solve the appeals adjudication backlog. Board officials are concerned about the resources the Board must invest to meet requirements set forth in the act and Court decisions. They pointed to increased responsibilities, especially the requirement to fully explain reasons and bases, for the Board’s substantially reduced productivity. The Chairman stated that in spite of the many initiatives under way to improve Board efficiency (see app. I), without substantial increases in staffing, he did not expect the backlog of appeals to be appreciably reduced. He said that even with added resources it would take many years to get the backlog to a manageable size. Likewise, VBA officials stated that their organization does not have the resources to apply the requirements set forth in some Board remands to all like cases. According to officials, the remands often require substantial time and resources for development. For example, officials noted that the development required in the above case study would require substantial resources even for that one case. Similarly, during our discussion with Board and VBA officials concerning the need for the VHA physician to have the claim file, VBA officials said that doing that in most cases raises resource issues, including the time it takes to transfer the file and the difficulty in keeping track of the file once it leaves the VARO. VHA also may have difficulty meeting this requirement. For example, the Secretary’s Court of Veterans Appeals Fact-Finding Committee reported in February 1995 that VHA often uses contract physicians to perform the physical examinations on veterans claiming disability compensation. The report noted one such physician as saying that VA did not pay him enough to spend the time it would take to review the claim file, even if VBA sent it. The impact that various interpretations of requirements has on resources underscores the need for organizations to agree on what those requirements are. Board decisions directly affect VARO and VHA workloads; for example, remanded decisions specify activities for VAROs in individual cases and can require additional examinations by VHA. More generally, Board decisions—interpreting requirements set forth in law and Court decisions—can expand the level of effort VAROs must expend in assisting veterans in filing claims. Similarly, VARO actions directly affect Board workloads. For example, the increase in the number of remands increases the Board workload in the form of claims returned for a second Board review. Yet under the current legal and organizational structure, neither organization has any responsibility for the amount of work it “causes” for the other. A Board official said that the Board—like the Court—decides cases on their merit and the Board cannot legally change its interpretation of the law because it is administratively inconvenient or infeasible for VBA. 
Recent Actions Could Serve as Critical First Steps In May 1995, after several discussions we had with VBA and Board officials, VBA initiated two actions that could represent important first steps in improving interaction among the VA organizations and, in turn, service to veterans. But much remains to be done to bring effective closure to these efforts. First, following a suggestion we made in April 1995, VBA conducted a study, with assistance from the Board, to determine how well VAROs are complying with Court precedent and procedural guidance. The study, completed in June 1995, identified areas where VARO actions were not consistent with Board interpretations and recommended that guidance to VAROs be improved. But the study also confirmed our concerns about the complexity of the appeals process and supported the need for continued cooperation between the Board and VBA staff in developing guidance and interpreting requirements. VBA’s second action may provide a forum for this cooperation. On May 15, 1995, the Under Secretary for Benefits asked the Under Secretary for Health, the Chairman of the Board, and the General Counsel to appoint representatives to a permanent working group to address and resolve claims processing problems. The Under Secretary pointed to information we supplied about failure to implement previous recommendations calling for intra-agency efforts as the impetus for establishing this group. The working group held its first meeting on July 13, 1995. Study Finds No Difference in Organizations’ Interpretations but Sees Need to Clarify Guidance For the May 1995 study, Board staff attorneys and VBA staff independently reviewed a small sample of appeals recently certified by VARO staff to be ready for Board consideration. (With the current backlog of appeals, the Board would not expect to officially review these claims for another 2 years.) VBA recognizes that this was not a statistically representative sample, but sees it as a sufficient number to identify any frequently occurring problems. Both staffs reviewed 115 appeals. A single claim can be appealed on several issues. VBA staff identified 185 separately appealed issues in the 115 claims; Board staff identified 193 issues. There were 166 issues that both staffs identified and reported on. They reviewed the appeals separately, then met to discuss general observations. VBA concluded, based on the Board staff results, that if these cases came to the Board now, about 42 percent of the appeals would be remanded, a somewhat lower percentage than the 1994 rate of about 48 percent, but still troubling. The reviewers identified areas in which they believe guidance to VAROs needs to be clarified. Key areas include failure to address issues, inadequate development, inadequate examinations, failure to assess all medical evidence, inadequate handling of new and material evidence issues, failure to issue supplemental statements of the case when appropriate, and improper identification of issues. Officials stated that the problems occurred because VARO and VHA staff failed to follow existing guidance, not because guidance was inconsistent with Board interpretation of requirements or Board and VBA staffs’ judgment varied. The study recommended that guidance be improved and, in several instances, sought participation by organizations other than VBA to do so. 
Two recommendations involving the Board are for development of (1) a checklist for use by VARO staff in certifying an appeal as ready for Board review and (2) a training program on appeals. The Chairman of the Board has agreed that to the extent resources permit the Board will work with VBA in implementing the recommendations that included the Board. The study also recommended that a focus group of VHA physicians and staff review the issue of inadequate examinations, which this study again found to be a significant problem. The study report notes that using the expertise of VHA physicians will provide VBA and the Board valuable information on the examination process and that working in unison will result in the quality examination necessary. Continued Focus on Interpreting Responsibilities and Resolving Problems Is Critical The VBA study of certified appeals identified significant issues and could serve as a solid basis for intra-agency action and an initial focus for the newly established permanent working group. But the study results themselves point to the difficulties that VA may face in providing an agencywide interpretation of key responsibilities; they indicate that VBA and Board reviewers may have seen requirements with regard to some responsibilities differently. Likewise, the study itself was of limited scope and may not be sufficient to bring to light all significant problem areas. Although Board and VBA reviewers apparently agreed on key areas needing attention, their views of specific appeals were in some cases very different. For example, for almost one-third of the 166 issues both groups assessed, one of the groups cited a high or moderate chance of remand and the other cited no chance. For 17 percent of the issues the difference was between a high chance and no chance. Similarly, although both Board and VBA reviewers individually concluded that about one-half of the issues had no chance of being remanded, they were often different issues. They both saw no chance in only 26 percent of the same issues. In some cases the reviewers even identified different issues as being under appeal; overall, VBA staff identified 19 issues that the Board did not identify and Board staff identified 27 issues that VBA did not identify. Although in some cases this resulted from one or the other separating out issues that the other combined (4 individual orthopedic issues as opposed to 1 combined issue), in others, very different issues were involved. These results raise questions about the consistency with which VBA headquarters staff and Board staff view legal requirements. Although the two groups agree on the broad requirements, such as the need to better develop claims, meet duty-to-assist responsibilities, and ensure adequate examinations, they may not agree on the specifics of how or when to apply these requirements. For example, both raised issues of how the VAROs handled new and material evidence issues, but not always on the same appeals. These are important issues. If staff work together to discover the reasons for different assessments of cases, they may be able to more clearly identify and communicate what is required in different types of situations. Their review of the same cases may also provide a consistent basis for discussion to help to develop guidance that can be clearly understood. However, this study by itself may not be sufficient. 
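Before turning to the study's limitations, the following sketch gives a sense of scale for the reviewer disagreements described above by converting the reported percentages into approximate issue counts; the arithmetic and rounding are ours, not part of the VBA study.

# Illustrative conversion (our arithmetic, not part of the VBA study):
# approximate issue counts implied by the agreement percentages reported above.

issues_reviewed_by_both = 166

# One reviewer saw a high or moderate chance of remand, the other saw none.
high_or_moderate_vs_none = round(issues_reviewed_by_both / 3)   # about 55 issues ("almost one-third")

# The sharper split: one saw a high chance, the other no chance.
high_vs_none = round(0.17 * issues_reviewed_by_both)            # about 28 issues

# Issues on which both reviewers agreed there was no chance of remand.
agreed_no_chance = round(0.26 * issues_reviewed_by_both)        # about 43 issues

print(high_or_moderate_vs_none, high_vs_none, agreed_no_chance)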
This one-time effort will not identify problems that might arise over time as Court and Board interpretations are further developed through individual decisions. Additionally, review of completed Board remands also may be needed. A review of recently certified appeals, as was done in this study, can provide a better picture of the adequacy of current VARO actions but may not surface differences between the organizations with regard to all issues. The case study cited earlier serves to demonstrate this possibility. Even if both parties agreed that the case should be remanded and an examination obtained, VBA officials disagreed with some of the specific actions the Board required as part of the remand, such as soliciting information on all past health care providers and obtaining those records. We found disagreement about these types of actions—which may not be the cause of the remand but are specified in Board decisions as actions required. Actions such as these can have a significant impact on the resources as well as the time necessary to adjudicate the appeal. These resource and timeliness issues also have implications for other claims if they are to be applied in like cases. Conclusions VA’s current legal and organizational structure, the complex nature of the claims adjudication process, and the current fluid environment brought about by the Veterans’ Judicial Review Act and the Court make effective interaction among VA organizations imperative. Although the Board, VBA, and VHA have unique roles, they are intimately linked in the claims and appeals adjudication process. VA has many internal forums for interaction among these organizations, but greater effort is needed. There are many procedural and interpretive areas in which multiple VA organizations are involved. Ensuring clear and consistent interpretation of VA’s adjudication responsibilities, such as duty-to-assist requirements, is one obvious area needing the involvement of all the organizations involved in adjudication. If VBA and VHA policies and practices and Board interpretations of legislative and judicial requirements are not consistent, the likelihood of remands and reversals and the resulting inefficient claims processing increases. VA needs to clearly define what it believes is required and do so in a way that ensures that the Board, VHA, and VBA follow requirements consistently. Because Court and Board interpretations evolve over time, this must be an ongoing process. The recent VBA and Board review of appeals certified as ready for Board review by VAROs is a solid first step in identifying issues that currently require clarification. Ensuring resolution of these issues could be an important focus of the recently established permanent working group. The Claims Adjudication Commission also offers the possibility of recommendations for significant improvement, having looked at the adjudication process in its entirety, not just from one organization’s perspective. But difficulties are likely to remain. VA organizations have agreed to work together and to clarify guidance in the past, but these actions were not always completed. It is also unlikely that the Adjudication Commission recommendations will address the detailed problems we found in interpretation of VA’s responsibilities. 
And, short of action to substantially alter the organizational structure of VA—for example, to abolish the Board or consolidate claims adjudication functions into a limited number of VAROs—the need for effective interaction among VA organizations will continue. Both the historical and current difficulties in interaction reinforce our 1990 recommendation that a focal point be established for the appeals process as a whole. The nature of the current problems and possible solutions suggests that the only way to ensure that intra-agency issues are identified and resolved may be for that focal point to be designated at the Department level. Such a focal point could help the Department ensure that promised cooperation leads to closure, now and in the future. More importantly, the appeals process faces significant difficulties; clearly defining responsibilities is a beginning step, not a full cure. If responsibilities, once clearly defined, are such that one or more of the organizations does not have the resources to carry them out, new solutions will be needed. They could include amending legislation to reduce or at least better define VA’s responsibilities, obtaining more resources, or reconfiguring the agency. The nature of the problems and of their possible solutions suggests the need for active involvement above the level of the autonomous organizations—each of which views the claims and appeals process in terms of its unique responsibilities and capabilities.

Recommendation

The Secretary of Veterans Affairs should designate a Department-level official to monitor actions by the Board, VBA, and VHA to identify and resolve intra-agency impediments to efficient claims and appeals processing. This individual should be charged with making recommendations to the Secretary, if necessary, to ensure resolution of problems. The recently established permanent working group could serve as the focus of these monitoring efforts, and the designated official could report to the Secretary and make recommendations about problems that the working group is unable to effectively resolve. A first priority of this official should be monitoring the progress of the organizations in implementing the recommendations of the VBA study of recently certified appeals and ensuring that VBA and VHA policies and practices are consistent with Court and Board decisions. These efforts should be ongoing to ensure such consistency, to obtain General Counsel opinions where needed, and to ensure that any resource or organizational difficulties are identified and resolved.

Agency Comments

In a meeting on August 15, 1995, VA’s Chief of Staff, the Under Secretary for Benefits, the Chairman of the Board of Veterans’ Appeals, and other key officials commented on a draft of this report. The VA officials acknowledged that appeals processing is one of the most serious problems currently facing the agency. They stated that they concurred with our recommendation and that the Deputy Secretary sees clearly identifying and resolving problems with the claims adjudication process as his responsibility. They see the recommendation as reemphasizing the importance of efforts under way by the Deputy Secretary and other key officials to solve this problem. The officials said that they have been focused on the overall issue of VBA/Board timeliness for some time and that efforts to improve the timeliness of VARO claims processing have been successful.
They said that emphasis and resources will continue to be devoted to improving VBA timeliness, but that increased attention now has been placed on the Board and on the VBA/Board interface. The officials indicated that they are committed to reducing appeal processing time significantly. They noted that three presidential appointees confirmed by the Senate are directly responsible to the Secretary and Deputy Secretary for taking the steps necessary to improve timeliness and that the Deputy Secretary is actively involved in ensuring that each facet of the adjudication structure works with the common goal of putting veterans first. Each official understands the necessity for early identification and resolution of inconsistent interpretations of law. They also noted that specific actions to ensure identification and resolution of problems have been suggested in the many studies already done. Officials said that many of these ideas have been or will be implemented and that those discussed in our report are good ideas. They also said that the actions that are not being done need to be done, and those that are not being done well must be improved. They believe that the necessary mechanisms are in place to identify any inconsistent interpretations and resolve them through the General Counsel. Officials said that time will tell if this increased awareness and focus will resolve the problem. We agree that the active involvement of the Deputy Secretary, especially with an increased focus on appeals, could have a positive impact on resolution of appeals problems. This is especially true if actions previously recommended are implemented and if, through existing mechanisms, such as the permanent working group, VA actively pursues the issue of inconsistent interpretations and other interface problems. VA officials, however, did not offer any details of actions expected to be taken.
Pursuant to a congressional request, GAO examined the need for organizations within the Department of Veterans Affairs (VA) that are involved in processing claims to increase cooperation and coordination so that impediments to processing appeals can be identified and resolved. GAO found that: (1) legislation and Court of Veterans Appeals' rulings have forced VA to integrate new adjudication responsibilities into its already unwieldy adjudication process; (2) since 1991, the number of appeals awaiting Board of Veterans' Appeals action has increased by over 175 percent and the average processing time has increased by over 50 percent; (3) the VA legal and organizational framework makes effective interaction among autonomous VA claims adjudication organizations essential to fair and efficient claims processing; (4) although VA believes it has implemented efficient problem-solving mechanisms, problems are going unidentified and unresolved; (5) unless consistent Board interpretations are developed, VA decisions will continue to be remanded, delaying benefits for some veterans and increasing VA workloads; and (6) unless VA clearly defines its adjudication responsibilities, it cannot determine whether it has adequate resources to meet those responsibilities and whether new solutions may be needed.
GAO_GAO-13-150
Background CPSC's Authorities and Mission CPSC was created in 1972 under the Consumer Product Safety Act to regulate certain consumer products and address those that pose an unreasonable risk of injury; assist consumers in evaluating the comparative safety of consumer products; and promote research and investigation into the causes and prevention of product-related deaths, injuries, and illnesses. CPSC's jurisdiction is broad, covering thousands of types of manufacturers and consumer products used in and around the home and in sports, recreation, and schools. CPSC does not have jurisdiction over some categories of products, including automobiles and other on-road vehicles, tires, boats, alcohol, tobacco, firearms, food, drugs, cosmetics, medical devices, and pesticides. Other federal agencies—including the National Highway Traffic Safety Administration, Coast Guard, Department of Justice, Department of Agriculture, Food and Drug Administration (FDA), and Environmental Protection Agency (EPA)—have jurisdiction over these products. CPSC has broad authorities for identifying, assessing, and addressing risks associated with consumer products. The Consumer Product Safety Act (CPSA) consolidated federal safety regulatory activity relating to consumer products within CPSC. As a result, in addition to its responsibilities for protecting against product hazards in general, CPSC administers the following laws that authorize various performance standards for specific consumer products: the Flammable Fabrics Act, which, among other things, authorizes CPSC to prescribe flammability standards for clothing, upholstery, and other fabrics; the Federal Hazardous Substances Act, which establishes the framework for the regulation of substances that are toxic, corrosive, combustible, or otherwise hazardous; the Poison Prevention Packaging Act of 1970, which authorizes CPSC to prescribe special packaging requirements to protect children from injury resulting from handling, using, or ingesting certain drugs and other household substances; the Refrigerator Safety Act of 1956, which mandates CPSC to prescribe safety standards for household refrigerators to ensure that the doors can be opened easily from the inside; the Virginia Graeme Baker Pool and Spa Safety Act of 2007, which establishes mandatory safety standards for swimming pool and spa drain covers, as well as a grant program to provide states with incentives to adopt pool and spa safety standards; the Children's Gasoline Burn Prevention Act of 2008, which establishes safety standards for child-resistant closures on all portable gasoline containers; and the Child Safety Protection Act of 1994, which requires the banning or labeling of toys that pose a choking risk to small children and the reporting of certain choking incidents to CPSC. In 2008, the Consumer Product Safety Improvement Act (CPSIA) mandated that CPSC develop an approach, not later than August 2010, to identify products imported into the United States that are most likely to violate consumer product safety statutes enforced by the Commission. CPSIA specifically requires that CPSC develop this methodology in partnership with U.S. Customs and Border Protection (CBP) using information from shipment data from the International Trade Data System and other databases. CPSC was required to incorporate this approach into its information technology (IT) modernization plan, which is to move CPSC to a single integrated data system intended to upgrade the data systems that support its regulatory activities. 
The act also required that CPSC use this information to examine ways to identify possible shipments of violative consumer products and share this information with CBP to prevent such items from entering the marketplace. CPSC has subsequently reported on its efforts to develop this approach for import surveillance. These efforts are discussed in greater detail later in this report. While CPSC has statutory authority to regulate many types of products, it does not have authority to require pre-approval of products before they enter the U.S. market. Because CPSC regulates consumer products after they enter the market, identifying new products and any new hazards that may be associated with new products is difficult. Generally, CPSC can require every manufacturer of an imported product subject to a consumer product safety rule to issue a certificate that certifies, based on reasonable laboratory testing, that the product complies with all rules, bans, standards, or regulations. Under several of the acts that it administers, CPSC's primary mission is to protect consumers from unreasonable risk of injury or death from consumer products under its jurisdiction. To achieve its mission, CPSC uses various approaches captured under five strategic goals: (1) to provide leadership in safety; (2) to reinforce a commitment to prevention; (3) to engage in rigorous hazard identification; (4) to provide a decisive response to identified product hazards; and (5) to raise awareness of safety issues and CPSC capabilities. Under the Consumer Product Safety Act, CPSC is authorized to evaluate a consumer product to determine whether the product creates what the act calls a "substantial product hazard" or whether the Commission should issue a consumer product safety standard or ban by regulation to prevent or reduce an unreasonable risk. CPSC considers the risks associated with a consumer product and assesses whether a particular risk is known or is a new or emerging hazard. New hazards can be associated with either a new or existing product. For example, a new hazard could materialize in the form of new material used to manufacture a type of product already in existence. To address product hazards, CPSC can issue regulations that establish performance or labeling standards for consumer products, often referred to as mandatory standards. CPSC refers to products subject to such mandatory standards as regulated products. Those regulated products that do not comply with mandatory standards are referred to as violative products. In contrast, many consumer products that are under CPSC's jurisdiction are subject to voluntary standards, which are generally determined by standard-setting organizations, with input from government representatives and industry groups, and are also referred to as consensus standards. Unregulated products are those products not subject to any mandatory standards and may include those covered by voluntary standards, which do not have the force of law. However, many voluntary standards are widely accepted by industry. The 1981 amendments to the Consumer Product Safety Act require CPSC to defer to a voluntary standard—rather than issue a mandatory standard—if CPSC determines that the voluntary standard adequately addresses the hazard and that there is likely to be substantial compliance with the voluntary standard. As a result, voluntary standard development is an important tool in CPSC's hazard-reduction efforts. 
In some cases, Congress has enacted a specific statutory requirement for CPSC to create a mandatory standard, or convert a voluntary standard to a mandatory standard. For instance, CPSA, as amended by CPSIA, mandated the conversion of voluntary standards for durable infant and toddler products, all-terrain vehicles, and children's toys to mandatory standards. CPSC's Criteria for Establishing Agency Priorities CPSC has established criteria for setting agency priorities and selecting potential hazards to address. These criteria, which are incorporated into the agency regulations, include the following: the frequency and severity of injuries resulting from the hazard; the cause of the hazard, which should be analyzed to help determine the extent to which injuries can reasonably be expected to be reduced or eliminated through CPSC action; the number of chronic illnesses and future injuries predicted to result from the hazard; preliminary estimates of the costs and benefits to society resulting from CPSC action; the unforeseen nature of the risk, which refers to the degree to which consumers are aware of the hazard and its consequences; the vulnerability of the population at risk (such as children and the elderly); the probability of consumer exposure to the product hazard; and other additional criteria to be considered at the discretion of CPSC. CPSC's regulations do not specify whether any particular criterion should be given more weight than the others or whether all criteria must be applied to every potential hazard. However, CPSC officials have noted that a product hazard that could result in death is typically granted the highest priority. CPSC's Organizational Structure for Managing Risks Risk management is a primary function throughout the Commission, but certain offices have specific responsibilities for identifying, assessing, and addressing product hazards. CPSC's Office of Hazard Identification and Reduction is tasked with responsibility for identifying emerging hazards that can be addressed by agency projects, warnings, mandatory or voluntary standards, and public awareness campaigns. This office also provides technical support to the Office of Compliance and Field Operations, which is responsible for capturing information about regulated products and substantial product hazards and conducts compliance and administrative enforcement activities under the acts that CPSC administers. The Office of Compliance and Field Operations has responsibility for identifying and addressing safety hazards for consumer products already in commerce, promoting industry compliance with existing safety rules, and conducting administrative litigation seeking remedies that may include public notice and refund. The office receives information about potential product hazards through industry reporting requirements and through its own investigation of defective products. CPSC's Information System Modernization The CPSIA required that CPSC establish and maintain a database on the safety of consumer products and other products or substances regulated by the Commission and that it improve its IT architecture. In response, CPSC created a public database, which is accessible through the Internet at SaferProducts.gov and allows consumers to directly report product-related incidents. SaferProducts.gov was launched in March 2011 and is integrated with CPSC's larger, internal Consumer Product Safety Risk Management System (CPSRMS). 
To address the requirement to upgrade its IT architecture, CPSC is currently implementing improvements to CPSRMS. CPSC officials have described this system as a centralized, integrated data environment that upgrades its legacy systems to support multiple efforts at the agency, such as its case management and investigative processes. When fully integrated, CPSRMS will replace CPSC's historically segmented data systems with a unified information technology system. The updated system is intended to allow CPSC to analyze data from multiple sources in a centralized location to identify emerging consumer product safety hazards. The purpose of this centralization component of CPSC's IT modernization effort is to improve its ability to collect and analyze the hazard information it receives from consumers and other data sources. CPSC has reported that modernizing its IT systems will improve efficiency by connecting separate data systems, reducing or eliminating manual and redundant processing, and eliminating redundant and inefficient steps required to code the information and to share the information with businesses. In addition to this modernization effort, CPSC is developing an automated system to improve its ability to target imported products by integrating data from both CPSC and CBP. This system will also be integrated into CPSRMS. CPSC Uses Several Mechanisms to Stay Informed about New Product Hazards, but Statutory Provisions Constrain Its Ability to Identify Risks CPSC gathers information about new and emerging risks through several means, such as surveilling retail markets and coordinating with other agencies. CPSC could also potentially obtain nonpublic information on product-related hazards from its foreign counterparts, but statutory restrictions on the public disclosure of information have hampered its ability to establish information-sharing agreements. Further, CPSC collects data on product-related injuries and deaths from a variety of sources, such as consumer reports and death certificates, and as discussed above is currently working to improve the system it uses to manage these data. Finally, CPSC has another effort under way to improve its surveillance of imported products, which could prevent violative products from entering the U.S. market. CPSC Uses Various Means to Stay Informed about New Product Risks CPSC uses multiple mechanisms to stay informed about new and emerging risks from consumer products, especially new products entering the market. CPSC's market surveillance activities are one primary mechanism staff use to track new products entering the markets, including surveillance of imported products entering the United States, retail stores, and the Internet: Import surveillance, which is discussed in greater detail later in this report, targets products before they enter the market and is CPSC's stated key activity to address the challenge of overseeing and regulating the thousands of product types under its jurisdiction. Import surveillance activities include scrutiny of import documentation and physical screening of products at the ports. CPSC field program surveillance includes monitoring specified products for compliance with CPSC requirements to ensure conformance. Surveillance and inspections are done at manufacturer, importer, and retail locations. 
CPSC's retail surveillance includes targeted activities to identify potentially unsafe products, such as children's products with unsafe lead content and unsafe electrical products, as well as some products subject to mandatory standards. This retail surveillance includes in-store screening of products to ensure they are appropriately labeled and are contained in proper child-resistant packaging when required. At times, such as for holiday sales, CPSC field staff also screen certain products to find out if they meet generally accepted industry voluntary standards. CPSC compliance staff also conduct searches of the Internet to monitor the compliance of certain product sales. Since many firms sell their products exclusively from Internet websites, this surveillance functions as the primary CPSC oversight of these sellers. Staff also attend trade shows to target possible products of interest by observing what new products are coming to market. These visits may be announced or unannounced. Another mechanism CPSC has relied on for keeping informed about new and emerging risks is its agreements with other federal and state agencies to research various emerging issues. For example, CPSC participates in a federal effort to leverage its limited staff resources with larger research efforts under way on nanomaterials, as part of the National Nanotechnology Initiative. CPSC has a joint agreement with EPA to research the health effects of nanotechnology in consumer products. This effort is part of a larger international research project intended to provide a systematic, multidisciplinary approach, including both experimental and computational tools and projects, for predicting potential human and environmental risks associated with a range of nanomaterials (e.g., silver and titanium dioxide). Nanomaterials represent a wide range of compounds that may vary significantly in their structural, physical, and chemical properties, and potentially in their behavior in the environment and in the human body. Because of the wide variation in potential health effects and the lack of data on exposure and toxicity of specific nanomaterials, CPSC has been unable to make any general statements about the potential consumer exposures to or the health effects that may result from exposure to nanomaterials during consumer use and disposal. CPSC signed an interagency agreement with the National Institute of Standards and Technology (NIST) in 2011 to develop protocols to assess the potential release of nanoparticles into the indoor air from various consumer products and determine the potential exposure to people. Measurement protocols do not yet exist to characterize these particle emissions or to assess the properties of the emitted particles that may relate to any health impacts. Under this agreement, NIST will begin testing to assess the properties of nano-sized particles. At the completion of this project, CPSC staff expect to complete a status report on the measurement protocols developed for laboratory testing for the release of nanoparticles from consumer products, as well as for testing in actual residences. Additionally, CPSC is working with the National Library of Medicine to identify approaches to expand and improve a database to provide information on nanomaterials in consumer products. One researcher emphasized that this database is quite important to further research efforts because companies are not required to report whether nanomaterials are used in their products. 
Staff also use other channels to exchange information about consumer products with other federal agencies, including the National Institutes of Health (NIH), the Centers for Disease Control and Prevention (CDC), and FDA within the Department of Health and Human Services; the Department of Labor's Occupational Safety and Health Administration; EPA; and the Department of Housing and Urban Development (HUD). CPSC staff participate in product safety committees with these agencies. For example, staff serve on the Chemical Selection Working Group sponsored by NIH/National Cancer Institute, as well as the Federal Liaison Group on Asthma and the National Cancer Advisory Board. Staff also participate in multiple working groups sponsored by the National Institute of Environmental Health Sciences and the National Toxicology Program. CPSC staff co-chair the Interagency Lead-based Paint Task Force, working with EPA and HUD on human exposure to lead. CPSC staff also serve on the Core Committee at the Center for Evaluation of Risks to Human Reproduction under the National Toxicology Program. Staff participate in interagency committees that develop U.S. positions for international harmonization on test guidelines developed by the Organisation for Economic Co-operation and Development, guidance documents, and the globally harmonized system for the classification and labeling of chemicals. Staff also use their professional connections, subscribe to professional journals, and attend scientific and consumer product safety conferences. For example, CPSC staff maintain contacts with individual scientists at FDA on multiple issues, such as phthalates, lead, and nanotechnology. Furthermore, CPSC has authority to establish advisory committees to advise it on new and emerging risks. Such advisory committees can be appointed to advise the agency on chronic hazards that may contribute to cancer, birth defects, and gene mutations associated with consumer products. As required by CPSIA, in 2010 CPSC appointed a Chronic Hazard Advisory Panel (CHAP) to review the potential effects on children's health of phthalates and phthalate alternatives in children's toys and child care articles. The CHAP is currently the only operating advisory committee to CPSC. It is to consider the cumulative effects of exposure to multiple phthalates from all sources, including personal care products. The CHAP was required by CPSIA to submit a final report based on its examination by April 2012. The CHAP examination is still ongoing and the report is expected to be completed in fiscal year 2013. The CHAP must recommend to the Commission whether any additional phthalates or phthalate alternatives should be declared banned hazardous substances. Within 180 days after this recommendation is made, CPSIA requires CPSC to promulgate a final rule based on the report. Pending completion of the report, staff are to provide a briefing package to the Commission for its consideration of whether to continue the interim ban that CPSIA established (effective Feb. 10, 2009) for certain phthalates, or whether to regulate other phthalates or phthalate substitutes. Statutory Restrictions Hamper CPSC's Information-Sharing Efforts with Foreign Counterparts Several of CPSC's strategic goals emphasize working with other federal agencies, as well as agencies of state and foreign governments. This cooperation is important to the Commission's effectiveness, particularly in light of the large volume of imported products that enter the United States each year. 
One key aspect of interagency cooperation is sharing information with CPSC's counterparts in other countries. CPSC has memorandums of understanding (MOU) with several foreign counterparts to share publicly available information about unsafe consumer products. These agreements provide a formal mechanism for general exchanges of information on consumer product safety, and in some cases include plans for informational seminars and training programs. For example, CPSC has taken the lead with several MOU partners on an international initiative to work towards harmonizing global consumer product standards or developing similar mechanisms to enhance product safety, known as the Pilot Alignment Initiative. This initiative involves staff from the central consumer product safety authorities of Australia, Canada, the European Union, and the United States. The initiative seeks to reach consensus positions among the participants on the hazards to children and potential solutions for three products: corded window coverings (i.e., window blinds), chair-top booster seats, and baby slings. CPSC's existing MOUs do not permit the exchange of nonpublic information because of specific statutory limitations. When we reported on CPSC's authorities in August 2009 (GAO-09-803), we concluded that CPSC had adequate authorities to perform its mission, and we made no recommendations to change its authorities. CPSIA amended section 29 of CPSA to allow the Commission to make information available to any federal, state, local, or foreign government agency upon prior certification or agreement that the information will be maintained in confidence, as defined in the act. At that time, CPSC was working with its foreign counterparts to implement its new authorities under CPSIA that allow it to share nonpublic information with them. In the course of this review, however, we found that when attempting to implement these authorities, CPSC has faced certain legal constraints in sharing information with its foreign counterparts and has not completed any new agreements concerning the exchange of nonpublic information, as it had expected at the time of our 2009 report. Before disclosing information that would readily identify a manufacturer, CPSC must afford the manufacturer the opportunity to designate the information as business confidential—that is, information a company considers and designates to be proprietary or confidential—and barred from disclosure. The CPSA contains an additional restriction on the public disclosure of certain regulatory information, such as information that identifies a product manufacturer or private labeler. Specifically, section 6(b)(1) generally prohibits CPSC from publicly disclosing information that would readily identify the product manufacturer unless it first takes reasonable steps to assure that the information is accurate and that the disclosure is fair in the circumstances and reasonably related to carrying out CPSC's purposes under its jurisdiction. The inclusion of section 6(b) grew out of concern about damage that manufacturers would incur if the agency released inaccurate information about the manufacturers' products. Before publicly disclosing the information, CPSC must give the manufacturer advance notice and the opportunity to comment on the disclosure of the information, which adds more time before CPSC can publicly respond to a potential product hazard. 
If CPSC decides to disclose information that the manufacturer claims to be inaccurate, it generally must provide 5 days' advance notice of the disclosure, and the manufacturer may bring suit to prevent the disclosure. CPSC has issued a rule that interprets the public disclosure restrictions of section 6(b) as covering disclosures to any person unless specified exceptions apply. Section 29(e) of CPSA permits CPSC to disclose accident or investigation reports to officials of other federal, state, and local agencies engaged in health, safety, or consumer protection activities, but only if business-confidential information is removed and the recipient agency agrees to maintain certain confidentiality restrictions. Section 29(f) of CPSA, as amended by CPSIA, authorizes CPSC to disclose certain information to foreign government agencies, in addition to federal, state, and local government agencies, if the recipient agency certifies in writing in advance that the information will be kept confidential. In addition, it provides that CPSC generally is not required to disclose under the Freedom of Information Act or other law confidential information it has received from a foreign agency (although this provision does not authorize withholding of information from Congress or a court in an action commenced by the United States or CPSC). Both Senate and House of Representatives committee reports on CPSIA legislation provided the rationale and expectation underlying the provisions enacted as section 29(f). Specifically, the Senate report noted that goods made overseas are sold not only in the United States but also in Europe, Africa, and other continents. Additionally, the Senate report noted, "To the extent that the European Union bans an unsafe product and the United States does not, shipments to Europe may well be diverted to American shores. Once in the United States, the products may move from state to state." Both the Senate and House committees' reports noted expectations that CPSC would work closely with any other federal, state, local, or foreign governments to share information, so long as those entities have established the ability to protect such information from premature public disclosure. The House report further noted that "The Committee expects that the CPSC will revisit and renegotiate, where necessary, existing memoranda of understanding with foreign governments and negotiate new agreements with other governments as necessary." Although the addition of section 29(f) was intended to encourage information sharing, in our discussions with CPSC staff, they expressed concern that restrictive language in section 29(f) has hindered their ability to share information. Specifically, CPSC explained that during the interagency review process to address this new authority, the Department of State (State) reviewed CPSC's suggested language for an agreement to implement information sharing under section 29(f). According to CPSC, State identified that, because of certain language in section 29(f), CPSC could not agree to allow a foreign agency to further disclose information it had received under a confidentiality agreement, even under tightly controlled circumstances. As a result, CPSC cannot approve text in the information-sharing agreement that allows for further disclosures. For example, CPSC could not permit Health Canada to disclose information it received from CPSC under a section 29(f) agreement to a sister agency or provincial-level safety agency. 
Likewise, CPSC cannot grant approval to the European Commission to disclose such information to member states. In contrast, the confidentiality restrictions section 29(f) imposes on information CPSC receives from a foreign agency are less severe than those that apply when a foreign agency receives information from CPSC—that is, CPSC has greater freedom to disclose information than it may grant to its foreign counterparts. CPSC is required to make available to Congress and the courts information it receives, but its foreign counterparts would not be allowed to make similar disclosures to their own governing bodies or court systems. According to CPSC staff, this lack of reciprocity has made foreign agencies unwilling to enter into agreements with the United States to share nonpublic information. In August 2012, CPSC staff told us that the Commission has been unable to enter into any international agreements pursuant to section 29(f) because CPSC's foreign counterparts will only share information if the terms are reciprocal. In contrast to this difficulty in completing agreements with foreign counterparts, CPSC has on occasion been able to share information it has gathered with U.S. state and local agencies. For example, in dealing with hazards associated with defective Chinese drywall, CPSC was able to share information from the investigation involving the Chinese government with U.S. state and local agencies, which is discussed in greater detail in appendix II. According to CPSC staff and our further analysis of the statute, section 29(f) has not achieved the results expected by Congress when it enacted this provision, as expressed in the previously cited committee reports. The primary reason for this, according to CPSC staff, is that section 29(f) does not contain a provision allowing foreign agencies to further disclose the information CPSC provides to a foreign agency pursuant to a section 29(f) agreement—even disclosures required by the foreign agency's laws or to other agencies within the same nation or administrative area. This inability to establish information-sharing agreements may hinder CPSC's ability to respond to a potential hazard in a timely manner because of the delay that might occur between when a foreign counterpart decides to take action in response to a product hazard and when that action becomes public. This delay may allow injuries and deaths to occur from the unsafe product's use in the United States. CPSC Faces Challenges in Identifying Risks Associated with New Products, but Is Taking Steps to Improve Data Systems CPSC uses information from a number of sources to identify specific risks associated with both new and existing products. However, many of these sources have limitations, such as missing details. CPSC's Emerging Hazards Team and Integrated Product Teams review the collected data to identify patterns of new hazards, but analyzing large quantities of information presents challenges. To address these challenges, CPSC is currently implementing upgrades to CPSRMS, its data management system, as required by CPSIA. CPSC has authority to identify and act on a wide range of consumer product hazards. However, obtaining useful and timely information about products involved in injuries and fatalities is an ongoing challenge for CPSC. Additionally, according to CPSC officials, it faces challenges in identifying risks from new and emerging products largely because, by statute, CPSC was established to respond to risks after products have been introduced into the market. 
To fulfill its mission of protecting the public against unreasonable risks of injuries associated with consumer products, CPSC collects, reviews, and analyzes information on consumer-product-related injuries and deaths from many sources, such as the National Electronic Injury Surveillance System (NEISS), consumer incident reports, death reports, and reports from manufacturers (see table 1). CPSC uses this information to identify a hazard or hazard pattern. CPSC obtains most of its injury information from NEISS reports. According to CPSC staff, this information is timely and useful in projecting national injury estimates and monitoring historical trends in product-related injuries and is immediately accessible to CPSC staff once hospital staff input information into the database. However, staff told us that the information contained in the reports has limitations. As noted in CPSC's 2011 annual report, while the reports may indicate that a consumer product was involved in an incident, a product may not necessarily have caused the incident. Nonetheless, the reports provide an important source of information concerning the nature of the incidents and injuries and the product associated with the incident. To obtain more specific information, CPSC sometimes supplements the NEISS information by conducting further investigations. CPSC also identifies risks through incident reports received from consumers and others, such as health care professionals and child service providers, through its websites, telephone hotline, e-mail, fax, or postal service. According to CPSC officials, information in the incident reports is not always complete. Furthermore, the reports may not identify the risk associated with the incident; thus, CPSC may conduct a more in-depth review of the incident. Not every incident report CPSC receives involves a hazardous incident. In some instances, consumers report concern that a potential hazard might exist. Death certificate data also involve a lag before the mortality data become available. CPSC supplements information from the NEISS system, death certificates, and reports from individual consumers with reports from medical examiners and coroners. These reports are also limited because they do not typically contain information that specifically identifies the product (such as brand name, model, or serial number) or manufacturer. CPSC also receives information from manufacturers, distributors, and retailers about products distributed in commerce that the manufacturers conclude are potential substantial product hazards. Manufacturers of consumer products must notify the Commission immediately if they obtain information that reasonably supports the conclusion that a product fails to comply with a product safety standard the Commission has relied upon; fails to comply with any rule, regulation, standard, or ban under CPSA or any other act enforced by the Commission; contains a defect that could create a substantial product hazard; or creates an unreasonable risk of serious injury or death. However, CPSC does not rely solely on manufacturers to report a product defect in order to identify and address hazards because CPSC sometimes obtains information on a product defect before the manufacturer becomes aware of the problem. For example, according to CPSC staff, retailers may provide CPSC with reports of safety-related information, and CPSC uses this retailer information in identifying and assessing risks. 
The hazard type or category classifies the general nature of the actual or potential hazard presented by the incident, such as a chemical or mechanical hazard. Because they already include certain data, such as the submitter's information and relationship to the victim, reports consumers submit through the public database reduce some of the manual tasks, such as rekeying of incident data. According to CPSC staff, for reports received through the hotline, staff use a template to enter information directly into the database. Other reports continue to be manually coded by staff. According to CPSC officials, staff must review incident reports daily to identify pertinent information to "code" the reports in the database. This work requires staff to read the narrative and extract the information, such as a description of the incident, location where the incident occurred, number of victims, severity of the injury, the source of the incident report, and a description of the product involved in the incident. After the coding is completed, the incident reports advance to the Emerging Hazards Team. The Emerging Hazards Team is composed of statisticians, whose responsibilities include reviewing incident reports to identify new and emerging product-associated hazards, performing product safety assessments, directing new reports to appropriate Integrated Product Teams, and sending out daily death notifications. The Emerging Hazards Team's review is CPSC's first step in identifying a hazard and determining whether the hazard is new and emerging. According to CPSC staff, the Emerging Hazards Team reviews all incident reports daily, including those stored in the data management system, to identify trends and patterns. They said that this review is intended to determine whether reports should be forwarded to one of six Integrated Product Teams, which are composed of subject-matter experts from the Office of Hazard Identification and Reduction, the Office of Compliance and Field Operations, and staff from other CPSC offices and are organized by type of hazard. (We discuss the Integrated Product Teams' role in CPSC's assessment of risk in greater detail later in this report.) CPSC officials told us that in making their determination, the Emerging Hazards Team considers the criteria set forth in 16 CFR 1009.8, such as the frequency and severity of the injury and the vulnerability of the population at risk. These criteria are considered at each step of the risk process and in setting agency priorities. CPSC officials also told us that the Emerging Hazards Team uses criteria provided to it by the Integrated Product Teams to classify reports within the system as needing no further review. Reports requiring no further review are stored in the database (see fig. 1). According to CPSC officials, incidents involving a death, particularly if it involves a vulnerable population, are granted the highest priority and are immediately forwarded to the appropriate Integrated Product Team for action. In performing its review, the Emerging Hazards Team said it uses the historical data to identify trends and patterns of potentially new and emerging hazards while at the same time forwarding the reports to the appropriate Integrated Product Team. Specifically, incidents that are unusual or that appear to be similar to previously reported incidents are analyzed more closely to determine whether they need to be assessed by both the Emerging Hazards and Integrated Product Teams (a simplified sketch of this coding-and-screening flow follows). 
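The coding-and-screening workflow described above can be illustrated with a compact sketch. The code below is not CPSC's actual system; the field names, hazard codes, and routing rule are hypothetical assumptions, intended only to show how a coded incident report might be screened and routed to an Integrated Product Team, with death-related reports given the highest priority.

from dataclasses import dataclass

# Hypothetical hazard codes mapped to Integrated Product Teams -- illustrative only.
TEAMS = {
    "MECH": "Mechanical Hazards Team",
    "FIRE": "Fire and Electrical Hazards Team",
    "CHEM": "Chemical Hazards Team",
}

@dataclass
class IncidentReport:
    """A coded incident report; fields mirror those described in the text."""
    description: str   # narrative description of the incident
    location: str      # where the incident occurred
    victims: int       # number of victims
    severity: str      # e.g., "death", "injury", "no injury"
    source: str        # e.g., "hotline", "SaferProducts.gov", "news clip"
    product: str       # description of the product involved
    hazard_code: str   # coded hazard type, e.g., "MECH", "FIRE", "CHEM"

def screen(report: IncidentReport) -> str:
    """First-pass screening, loosely modeled on the routing described in the text."""
    team = TEAMS.get(report.hazard_code)
    if team is None:
        return "store in database; no further review"
    if report.severity == "death":
        # Deaths, especially those involving vulnerable populations, get highest priority.
        return f"forward immediately to {team}"
    return f"route to {team} for trend and pattern review"

if __name__ == "__main__":
    r = IncidentReport(
        description="Mirror cover detached from toy, exposing a small part",
        location="home", victims=1, severity="injury",
        source="news clip", product="toy with attached mirror", hazard_code="MECH",
    )
    print(screen(r))  # route to Mechanical Hazards Team for trend and pattern review

In the actual process described above, screening also draws on the criteria in 16 CFR 1009.8 and on criteria supplied by the Integrated Product Teams rather than on a single severity field; the sketch shows only the basic triage pattern.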
For instance, according to the staff, in April 2012 CPSC received a news clip that detailed an incident involving a toy with a mirror that was attached but protected by a plastic cover. The staff conducted a search of CPSC's database and identified a similar incident in August 2011. In both cases, the child was able to remove the cover and gain access to the hazardous component within it. Based on this finding, the team determined that the toy posed a choking hazard, and the reports were forwarded to the appropriate Integrated Product Team for a more in-depth review. According to an agency official, identifying patterns of risk is particularly challenging in situations involving many different makes and models of a particular product category. For example, CPSC staff completed a comprehensive review of crib-related infant fatalities reported to the agency between January 2000 and May 2010 involving drop-side crib hazards. During that period, staff were aware of 32 fatalities and hundreds of incidents that were caused by or related to brackets that detached from the drop-side cribs made by various manufacturers. According to the CPSC official, because the fatalities occurred across several different makes and models of cribs, it was difficult for CPSC to identify a pattern. In 2007 CPSC launched its Early Warning System to look for patterns in order to identify emerging hazards in a specific group of children's products—including bassinets, cribs, and play yards—quickly and efficiently. This system relied on the integration of timely input from technical experts and technology to rapidly identify emerging hazards and led to millions of products being recalled. According to a CPSC news release issued in October 2008, since the creation of its Early Warning System, the agency has conducted five crib recalls. Because of the success of the Early Warning System in identifying hazards in these children's products, CPSC expanded the use of new technologies to address hazards in other product areas through its system upgrade and the Integrated Product Team concept. In fiscal year 2011, staff within the Office of Hazard Identification and Reduction implemented a new business process building upon the existing NEISS coding system. The new process required that all incident reports be reviewed and screened by the Emerging Hazards Team and that all incident reports associated with certain product codes be reviewed and analyzed by the appropriate Integrated Product Teams. However, according to agency officials, before they can fully implement this process, more automation of the screening process in the data-management system remains to be completed so that the technical experts can focus their attention on those incidents that could indicate a potential new hazard needing further analysis. To improve the processing of the voluminous data it receives, CPSC is upgrading its data-management system—CPSRMS—as previously discussed. According to CPSC, the upgraded system is designed to enhance CPSC's efficiency and effectiveness, enable a more rapid dissemination of information, and allow consumers to search the database through a publicly available portal. CPSC officials expect the system upgrades to be completed in fiscal year 2013 and fully operational in fiscal year 2014. 
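The drop-side crib example above shows why pattern detection benefits from aggregating incidents across makes and models. The following sketch is purely illustrative (it is not CPSC's Early Warning System or CPSRMS, and the field names are assumptions); it shows one simple way coded reports could be grouped by product category and hazard so that a pattern spread thinly across many manufacturers still becomes visible.

from collections import Counter
from typing import Iterable, Mapping

def pattern_counts(reports: Iterable[Mapping[str, str]]) -> Counter:
    """Count incidents by (product_category, hazard), ignoring make and model.

    Aggregating at the category level lets a hazard that is spread across
    many manufacturers show up as a single, larger pattern.
    """
    return Counter((r["product_category"], r["hazard"]) for r in reports)

if __name__ == "__main__":
    # Hypothetical coded incident reports: makes and models differ, but the hazard repeats.
    reports = [
        {"product_category": "drop-side crib", "make": "A", "model": "1", "hazard": "side detachment"},
        {"product_category": "drop-side crib", "make": "B", "model": "7", "hazard": "side detachment"},
        {"product_category": "drop-side crib", "make": "C", "model": "3", "hazard": "side detachment"},
        {"product_category": "toy", "make": "D", "model": "9", "hazard": "choking"},
    ]
    for (category, hazard), count in pattern_counts(reports).most_common():
        print(f"{category}: {hazard} -> {count} incident(s)")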
Further, CPSC anticipates that staff will be able to create electronic files of related incidents, investigations, assessments, and other information to manage the high volume of incident reports the agency receives in order to identify emerging hazards more quickly. Finally, as part of the data system upgrade, CPSC expects to automate the process for determining which incident reports will be assigned for investigation. As previously noted, CPSC's incident reports contain information that CPSC enters into the data system using standardized codes. However, CPSC officials told us that in order to be more efficient in identifying patterns and trends, the Integrated Product Teams need additional standardized codes built into the system for identifying product hazards. According to CPSC staff, they are in the process of developing additional standardized codes and eventually algorithms to conduct searches using keywords, such as product manufacturer or country of origin. While the officials said it will take 3 to 5 years to develop the standardized language for the system, they added that the goal of this new capability is to help the agency achieve consistency as it loses institutional knowledge due to attrition and retirement. Ultimately, they expect the upgraded system to expedite the process for identifying emerging hazards. CPSC officials told us that before this upgraded database system, staff turnover had a more dramatic impact on CPSC's ability to identify patterns or trends in the incident information it analyzed. In addition, the Commission did not have the capability to monitor the incidents in such a way that one person could see all the historical data, which interrupted the continuity in staff analysis. Furthermore, reviewing incident reports requires individual judgment, and automating the screening process is expected to allow the technical experts the opportunity to focus their efforts on specific records. As a result of the upgrade to CPSC's information infrastructure, manufacturers are also able to enter information about substantial product hazards directly into CPSRMS, allowing the information to go through the coding and screening process more quickly. In addition, CPSC is in the process of developing case-management software for the Office of Compliance and Field Operations that will integrate the various databases to provide efficiency to all staff working on the compliance cases. The case management system is intended to allow staff to track the progress of an investigation throughout the agency and is scheduled to be completed in fiscal year 2013. CPSC Is Taking Actions to Improve Its Ability to Identify Unsafe Imported Products before They Enter the Marketplace As we have previously reported, CPSC has had limited ability to identify unsafe products at the ports. In our 2009 report, we recommended that the Chairman and commissioners of CPSC take several actions to improve the agency's ability to target shipments for further screening and review at U.S. ports of entry as follows: 1. To ensure that it has appropriate data and procedures to prevent entry of unsafe products into the United States, we recommended that CPSC update agreements with CBP to clarify each agency's roles and to resolve issues for obtaining access to advance shipment data. 2. 
To improve its targeting decisions and build its risk-analysis capability, we recommended that CPSC (a) work with CBP, as directed under CPSIA, through the planned targeting center for health and safety issues, to develop the capacity to analyze advance shipment data; and (b) link data CPSC gathers from surveillance activities and from international education and outreach activities to further target incoming shipments. CPSC views its import surveillance activities as a preventive strategy, intended to stop unlawful products before they enter the United States. CPSC considers this strategy more proactive than relying on traditional compliance and recall efforts to remove violative products from the marketplace after harm may have occurred. In response to CPSIA, CPSC has developed and is pilot testing an approach for identifying and targeting unsafe consumer products at U.S. ports. CPSC is designing this approach to evaluate products entering the United States based on a predetermined set of rules (i.e., to target specific hazardous products or importers) intended to identify imports with the highest risks to consumers. CPSC has reported that given its low staffing levels and limited coverage at the ports (as of November 2012, CPSC had 20 port investigators stationed full-time at 15 of the largest U.S. ports), developing an automated process for identifying violative products was essential to increasing its ability to target unsafe products before they enter commerce. As detailed in CPSIA and based on our prior recommendation, CPSC is designing its approach to integrate its information with import data from CBP. CPSC has completed its agreement with CBP and obtained the shipment data as we recommended. CPSC is in the process of moving from its prior process for screening imported products to a computer-based, systematic approach for targeting imports. Under its prior process, established in 2007, CPSC staff manually screened importers' documentation and telephoned CBP staff at the ports to detain shipments for inspection. CPSC is designing the new targeting approach to provide a framework that permits rules to be added and modified easily to accommodate new risk factors and changes in operations. For example, its approach is designed to allow CPSC staff to rank or risk-score incoming shipments in order to prioritize the Commission's responses to product hazards that can be addressed at the ports (a simplified, illustrative sketch of such rule-based scoring appears later in this section). CPSC's initial activities are focused on import compliance, such as screening children's imported products for lead content. CPSC reported that in 2011, it conducted an analysis of children's product importers that have had a history of noncompliance with safety standards and continues to target these importers for safety assessment. In a CPSC staff demonstration of this new targeting approach, we observed the use of their rule sets and the integration of import data used to make determinations about which shipments to target. When this import targeting system is fully implemented, CPSC expects to be able to systematically analyze 100 percent of shipments within CPSC jurisdiction to ensure that adequate resources are dedicated to the highest risk shipments, as indicated by its targeting rules. CPSC reported that it began limited testing of its targeting concept in fall 2011. According to its 2013 Performance Budget, in 2011, CPSC port investigators, working with CBP agents, screened almost 10,000 import samples at U.S. 
ports of entry and collected more than 1,800 import samples for testing at the CPSC laboratory. CPSC projects that the full implementation of this new system will take about 4 to 7 years, depending on resources devoted to this effort. CPSC's detailed proposal on this import-targeting approach reported the need for additional staff to strengthen its coverage at the ports and for additional laboratory staff. In its report to Congress, CPSC also recommended certain legal changes to better align the Commission's authorities with those of CBP and other health and safety agencies for targeting and addressing unsafe products at import. In addition, to complete the technology piece of the import targeting system, CPSC estimated the costs to be $40 million from fiscal years 2013 through 2019. CPSC's planned next step in this effort is to reduce the duplication of effort between cases initiated by the Office of Compliance and Field Operations and those initiated by the Office of Import Surveillance by creating a case management system, as part of upgrading its information system. Timeliness of CPSC's Actions to Assess and Address New Risks Depends on the Specific Product or Hazard CPSC assesses product risks on a case-by-case basis using information it collects from various sources. Once it has assessed the risk and determined the need to address a product hazard, CPSC can take a number of actions to reduce the risks of product-related injuries to consumers. CPSC's Risk Assessment Varies with the Particular Product or Hazard Being Assessed Once CPSC identifies product risks, it assesses those risks on a case-by-case basis. According to CPSC staff, an assessment could pertain to a particular model of a product or to a class of products, such as drop-side cribs, or it may be specific to a type of hazard, such as fire hazards associated with appliances. In addition, according to CPSC officials, the types of information CPSC collects to assess product risk depend on the product and the type of assessment being conducted. In general, CPSC requires information on the severity of an injury, the probability of the hazard occurring, consumers' ability to recognize hazardous conditions, and how the consumer uses the product. In addition, officials stated that manufacturer, model, serial number, number of products sold, life-cycle of the product, and safety incidents involving the products are all useful information. As noted earlier, most of CPSC's information sources provide limited information. Additionally, CPSC officials told us that most information on sales of a particular product is not readily available, and surveys to establish use and exposure information are costly and often take up to a year to get approval (from the commissioners and the Office of Management and Budget) to conduct. As a result, CPSC often tries to estimate consumers' exposure using assumptions based on sales data and product life-cycle information. As part of its assessment, CPSC evaluates consumer products to identify both acute and chronic hazards. Acute hazards are conditions that create the potential for injury or damage to consumers as a result of an accident or short-duration exposure to a defective product. Chronic hazards are presented by substances that can damage health over a relatively long period, after continuous or repeated exposures. Hazards may be either physical or chemical in nature. 
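Returning to the import-targeting approach described earlier in this section, the rule-based risk scoring of incoming shipments can be sketched in a few lines. This is a minimal illustration under assumed rule names, fields, and point values, not CPSC's actual rule sets or scoring logic; it shows only how a framework that lets rules be added or modified easily might rank shipments for port investigators.

from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class Shipment:
    """Simplified advance shipment record; the fields are assumptions for illustration."""
    product_code: str
    importer: str
    country_of_origin: str
    declared_value: float

@dataclass
class Rule:
    """A targeting rule: a predicate over a shipment plus the points it contributes."""
    name: str
    points: int
    applies: Callable[[Shipment], bool]

# Hypothetical rule set; rules can be added or modified without touching the scoring logic.
RULES = [
    Rule("children's product category", 30, lambda s: s.product_code.startswith("TOY")),
    Rule("importer with prior violations", 50, lambda s: s.importer in {"Importer X"}),
    Rule("unusually low declared value", 20, lambda s: s.declared_value < 100.0),
]

def risk_score(shipment: Shipment, rules: List[Rule] = RULES) -> int:
    """Sum the points of every rule that applies to the shipment."""
    return sum(rule.points for rule in rules if rule.applies(shipment))

def prioritize(shipments: List[Shipment]) -> List[Tuple[int, Shipment]]:
    """Rank shipments from highest to lowest risk score for follow-up at the ports."""
    return sorted(((risk_score(s), s) for s in shipments), key=lambda t: t[0], reverse=True)

if __name__ == "__main__":
    queue = [
        Shipment("TOY-123", "Importer X", "CN", 80.0),
        Shipment("APPL-456", "Importer Y", "DE", 5000.0),
    ]
    for score, shipment in prioritize(queue):
        print(score, shipment.product_code, shipment.importer)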
The adverse effects from exposure to a chemical substance can be acute, such as poisonings, or chronic, such as cancer or reproductive or genetic abnormalities. As stated earlier, CPSC relies on its criteria for establishing priorities in assessing risk. More specifically, CPSC staff can assess a product's potential health effects to consumers using well-established chronic hazard guidelines based on the Federal Hazardous Substances Act. CPSC staff with whom we spoke said CPSC relies on the knowledge and judgment of its staff to review and analyze incident reports in order to identify emerging hazards that the agency could address. According to CPSC's documentation, as part of their analysis, Integrated Product Team staff read all the incidents within each product code assigned to them. If a pattern emerges, they are required to review historical records and update those records accordingly. These teams are also responsible for other risk-related activities, such as requesting investigations; recommending new activities to management as needed, depending on the severity and addressability of emerging hazards; and monitoring follow-up status on compliance corrective actions and status of projects for standard development (see fig. 2). According to CPSC staff, the agency plans to develop standard operating procedures tailored to each team and to establish benchmarks for the teams to use in completing their analyses of hazards and identifying a strategy to address the hazards. When one of the Integrated Product Teams identifies a potentially new hazardous product, the team may request an investigation. CPSC staff, one Commissioner, and product safety experts said that assessing the risks posed by new products is challenging because hazards from new products are not readily apparent and historical data are not available for analysis. An investigation provides staff an opportunity to obtain additional information about use of the product that could potentially assist in their assessment. Investigation reports, which are prepared by the Office of Compliance and Field Operations staff, provide details about the sequence of events surrounding the incident, human and environmental factors, and product involvement. The incident reports generally contain the consumer's version of what occurred based on discussion with the incident victim or individual most knowledgeable about the incident. CPSC staff noted that the investigative activity is an ongoing process and the Integrated Product Teams decide whether to continue the investigative process as they evaluate new evidence they receive. Investigations may also include follow-up inspections at retail stores, discussion with fire and police investigators, as well as the inclusion of fire and police reports. CPSC's guidance for staff involved in risk-assessment activities identifies certain factors based upon the Commission's criteria for establishing priorities. As discussed earlier, these factors include the frequency and severity of injuries, consumers' exposure to the risk, causality of injuries, foreseeability of the risk, and the vulnerability of the population at risk. CPSC's guidance specifically states that staff should consider these factors when deciding whether to investigate hazards or initiate corrective actions. According to CPSC officials, staff consider these factors throughout the risk-assessment process and in prioritizing which product hazards require action by the Commission. 
As an example, a CPSC official said that in a hypothetical situation involving an appliance that poses a fire hazard, staff may first determine the number of incidents involving this product, the extent of injuries, the level of exposure, and the likelihood that exposure to this appliance will result in death or serious injury. To evaluate the hazard, CPSC would collect samples of the product in order to determine the source of the defect and gather market data, such as the useful life of the product and the number of products in the marketplace. As part of their assessment, CPSC would also consider whether other types of products may be subject to this type of hazard, potentially extending the time needed for the assessment. CPSC evaluates some products, which it has identified through investigation and market surveillance, at CPSC’s National Product Testing and Evaluation Center. Integrated Product Teams’ evaluation and analysis of products being tested is generally geared toward improving standards or initiating rulemaking. The testing center is staffed with engineers and scientists from the Office of Hazard Identification and Hazard Reduction, some of whom are members of the Integrated Product Teams. According to CPSC laboratory staff, many of the samples at the testing center were imported products that CPSC intercepted at the ports before they were distributed into commerce. During our tour of CPSC’s test facility, we observed, for example, several bunk beds being tested to ensure they did not pose an entrapment hazard for children. We also observed an off-road stroller that was submitted for testing. The staff explained that the Integrated Product Team was testing this stroller for stability. As designed, the stroller had three wheels and posed a tip-over hazard. As noted in table 2, according to CPSC staff, the time needed to complete testing of regulated products varies. These times reflect typical duration to complete the tests once a sample is received by laboratory staff. The Office of Compliance and Field Operations relies on the expertise of the Emerging Hazards Team statisticians and other staff in the Office of Hazard Identification and Hazard Reduction to perform other safety assessments, such as database reviews and engineering file reviews. As part of this process, the Office of Compliance and Field Operations may request that the Emerging Hazards Team conduct a technical evaluation of a specific type of product, such as all gas appliances that showed a pattern of fire or explosion hazard. This assessment entails searching CPSC’s database for all incidents involving certain types of gas appliances with reports of gas leaks or fires using certain selection criteria. The Office of Compliance and Field Operations may also request that engineering staff review the full report from a manufacturer about a product and check the company’s information against CPSC’s database. According to CPSC officials, the timeliness of completing a risk assessment varies. For example, the risk assessment process for a chemical substance may be completed in a matter of days if acceptable and valid toxicity and exposure data are readily available. CPSC is familiar with the hazard posed by lead and has developed a testing method that can be performed quickly. As a result, testing toys for compliance with lead content regulation can be completed within 1 to 4 days, depending on whether the product can be tested using X-ray fluorescent equipment or requires traditional chemical analysis. 
In contrast, the risk assessment process of some chemical substances may take years to complete if CPSC needs to generate toxicity and exposure data through laboratory experiments. For example, in assessing the risk to children from playing on wood playground equipment treated with chromated copper arsenate (CCA), CPSC staff reviewed toxicity data and determined that there were insufficient data available on the exposure to arsenic from CCA-treated wood on which to base a recommendation to the Commission on the risk to children. As a result, CPSC staff designed and performed new laboratory and field studies to obtain exposure data to assess the health risk to children. CPSC began this project in 2001 and presented the results of its study to the Commission in 2003. CPSC’s timeline for conducting other safety assessments varied from 4 hours to perform a consultation by a technical engineer on a hazard classified as a high priority (where the risk of death or grievous injury or illness is likely or very likely or serious risk of illness is very likely) to 8 weeks to test a product sample for a routine case identified as a hazard that is possible but not likely to occur. Furthermore, CPSC faces challenges assessing the risks associated with products manufactured using nanomaterials. In particular, the introduction of consumer products containing nanomaterials into the marketplace may require unique approaches to determine exposure and risk and poses new regulatory challenges for CPSC. According to CPSC’s statement on nanomaterial, the potential safety and health risks of nanomaterials, as well as other compounds that are incorporated into consumer products, can be assessed under existing CPSC statutes, regulations, and guidelines. However, because testing methods are still being developed, conducting its risk assessment of such products will take longer. Neither CPSA nor the Federal Hazardous Substances Act requires the premarket registration or approval of consumer products. Thus, CPSC would usually not evaluate the product’s potential risk to the public until a product containing nanomaterials has been distributed into commerce. CPSC Uses Various Approaches to Address Product Hazards, but Faces Challenges in Addressing New Product Risks To address product-related hazards, CPSC uses various approaches designed to reduce injuries and deaths. CPSC’s enforcement role is based on its statutory authority to address unreasonable risks associated with consumer products. Based on CPSC’s documents, CPSC staff use investigations and assessments of product hazards to determine (1) whether corrective action is appropriate and (2) what type of actions may be appropriate to address potential risks of injury to the public. Before deciding to take action, CPSC must consider whether the risk is one that the Commission can address. For example, the blade of a kitchen knife can harm a consumer, but the sharpness of the knife, by design, is not a defect and the risk it poses cannot be addressed by CPSC’s actions. However, according to CPSC staff, if the handle of the knife breaks while the knife is in use and injures the consumer, CPSC would consider the product to be defective and the risk to be addressable. CPSC’s actions to address and reduce the risks of injury to consumers include the following. 
Compliance—conducting compliance activities, such as voluntary recalls and corrective actions, product bans, and enforcement of existing regulations by seeking civil and criminal penalties and injunctive relief against prohibited acts.

Standards—developing mandatory safety standards or participating in the voluntary standards process.

Public Education—notifying the public of safety hazards and educating them about safe practices.

According to CPSC, its multifaceted approach is intended to address not only immediate problems but also future problems. For instance, CPSC identified fire pots used with gel fuel as an emerging hazard in June 2011, after a severe injury was reported (see fig. 3). (Gel fuel for fireplaces has been available in single-use cans since the mid-1980s.) As of September 2011, CPSC was aware of 76 incidents involving fire pots used with gel fuel that resulted in two deaths and 86 injuries; an incident may include more than one death or injury. According to a CPSC briefing to commissioners, the earliest incident known to staff occurred on April 3, 2010. In some cases, an incident is reported to CPSC days after it occurred; in other cases, reporting has taken more than a year, and several incidents that occurred in 2010 were reported to CPSC in 2011. CPSC reported that preliminary testing and evaluation of fire pots and gel fuels showed that they pose a serious risk of burn injuries to consumers due to certain features of the fire pot design, the burning and physical characteristics of the gel fuel, and the packaging of the gel fuel container. In the short term, CPSC worked with the individual manufacturers to recall the product. To address longer term concerns with the product, the agency is also working to develop mandatory standards to address risks associated with similar and future products. Between June and October 2011, CPSC announced 12 voluntary recalls involving more than 2 million bottles of gel fuel. In December 2011, the Commission issued an Advance Notice of Proposed Rulemaking (ANPR) to address the injuries and deaths associated with this product. As we previously reported (GAO-12-582), according to CPSC, the time required for mandatory rulemaking varies depending on the complexity of the product or legal requirements for enacting the rules, the severity of the hazard, and other agency priorities, among other factors. For example, a legal expert told us that a mandatory rulemaking for cigarette lighters took 10 years from the decision to take action to final rule. CPSC also has been considering a mandatory rule to address the risk of fire associated with ignitions of upholstered furniture since 1994. The purpose of the ANPR was to determine what voluntary or mandatory standards should be implemented; what changes, if any, should be made to labeling; and whether the products should be banned or no regulatory action taken. Interested parties generally have 60 days to comment on an ANPR. According to CPSC, in fiscal year 2013 staff plan to review comments on the ANPR and develop performance criteria and test methods for a potential mandatory rule. In fiscal year 2014, CPSC plans to prepare a Notice of Proposed Rulemaking package for the Commission's consideration.
Reliance on Voluntary Standards CPSC’s statutory authority requires the Commission to rely on voluntary standards to build safety into consumer products if the Commission determines that compliance with a voluntary standard is likely to result in the elimination or adequate reduction of risk of injury identified and that there will be substantial compliance with the voluntary standard. CPSC officials told us that compliance with applicable voluntary standards would be one of many factors in the decision on whether an unregulated product is defective and poses a risk of injury, thus requiring corrective action. In addition to taking steps to ensure compliance, the agency may address the risk presented by unregulated products—that is, products not subject to mandatory standards—by recommending revisions to voluntary standards. However, having a voluntary standard that does not address the particular defect or hazard that is being examined can slow down the process of getting a corrective action. In some instances, the manufacturer may disagree with CPSC’s finding that a product can meet a voluntary standard but has a defect that creates a serious risk of injury or death. If the strategy to address a risk is to develop a voluntary standard, the Office of Hazard Identification and Reduction will work to develop the standard. If CPSC finds that a manufacturer’s product fails to comply with voluntary standards or presents a substantial product hazard, it can take an enforcement action, such as seeking a public notice or recall. When a recall is deemed necessary, the Office of Compliance and Field Operations negotiates with the responsible firm to seek a “voluntary” or a negotiated recall whenever possible. According to CPSC officials, if the firm does not cooperate, CPSC can seek to (1) issue a unilateral press release asking consumers to discontinue use of the product, (2) ask distributors and retailers to stop selling the unsafe products, (3) obtain injunctive relief, (4) file an administrative complaint before an administrative law judge to affirm its position, although this process can take several months or years to complete, or (5) pursue an action against the product and manufacturer under the imminent hazard provision of CPSA. CPSC staff told us that for each recall, the Office of Compliance and Field Operations works with the Office of Hazard Identification on a case-by-case basis to determine whether standards (voluntary or mandatory) need to be developed to address similar or future products. In addition, CPSC can assess civil penalties if a manufacturer, distributor, or retailer knowingly fails to report potential substantial product hazards. CPSC has established the Fast-Track recall program, which provides firms the opportunity to streamline the recall process by removing hazardous products from the marketplace immediately. Under section 15(b) of CPSA, if a company suspects that a product could be hazardous, the company must report it to CPSC. The Fast-Track recall program allows the company to propose a plan for an expedited recall. If CPSC considers the firm’s plan satisfactory—and finds no other cause for concern in its review—it approves the plan and works with the firm to expedite the recall to begin within 20 days of the initial report to CPSC. This program is intended to remove dangerous products from the marketplace faster and save the company and CPSC both time and money. 
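To make the 20-day Fast-Track benchmark concrete, the short sketch below checks, for a handful of invented cases, whether a corrective action began within 20 days of the firm's initial report and computes the share of cases meeting that target. It is an illustration only, not CPSC's tracking system; the case identifiers and dates are assumptions.

```python
# Illustration only: the cases and dates below are invented. The sketch
# checks whether each corrective action began within the 20-day Fast-Track
# window and reports the share of cases that met the target.

from datetime import date

FAST_TRACK_WINDOW_DAYS = 20

# (case id, date the firm first reported, date the corrective action began)
cases = [
    ("FT-001", date(2011, 3, 1), date(2011, 3, 15)),
    ("FT-002", date(2011, 4, 10), date(2011, 5, 9)),
    ("FT-003", date(2011, 6, 2), date(2011, 6, 20)),
]

def met_window(reported: date, initiated: date) -> bool:
    """True if the corrective action began within the Fast-Track window."""
    return (initiated - reported).days <= FAST_TRACK_WINDOW_DAYS

on_time = [case_id for case_id, reported, initiated in cases
           if met_window(reported, initiated)]
share = 100 * len(on_time) / len(cases)

print(f"{len(on_time)} of {len(cases)} cases ({share:.0f} percent) began "
      f"corrective action within {FAST_TRACK_WINDOW_DAYS} days: {', '.join(on_time)}")
```

A percentage of this kind underlies the timeliness figure CPSC reported for 2011, discussed next.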
While some industry representatives have questioned the timeliness of the Fast-Track program, CPSC stated that a number of factors could slow the process, such as delays in receiving information from the firm, delays in completing product safety assessments, or evaluation of the remedy being suggested. CPSC reported that in 2011 staff completed technical reviews of hazardous products and initiated corrective actions within 20 days 95 percent of the time, thereby exceeding the Commission's goals for initiating Fast-Track recalls by 5 percent. CPSC reported that, since August 1997, it has used the Fast-Track recall program to conduct 2,000 recalls involving approximately 200 million products. The timeliness of CPSC's response to new and emerging hazards depends, in part, on the extent to which U.S. companies are motivated to quickly institute and enforce stringent product safety standards because selling products that cause injury or death can have negative impacts on their brands. In addition, the tort system in the United States—by exposing companies selling unsafe products to lawsuits—helps ensure that companies are motivated to comply with product safety standards. CPSC faces a trade-off between consumer protection and industry cooperation when deciding what actions to take, such as developing standards or banning a particular product, and whether industry self-regulation can be used to protect consumers. Balancing the interests of both consumers and industry participants adds complexity and affects the timeliness of CPSC's response. If CPSC does not act quickly enough, a consumer may be harmed by using an unsafe product. However, if CPSC acts too quickly, it can be subject to lawsuits from companies that claim it has not presented sufficient evidence to prove a product hazard, which could result in a reversal of its decision and any action taken against a company. Although CPSC has broad regulatory powers, the agency's efforts to address product hazards are also carried out using other methods, such as through consumer and manufacturer outreach. For example, CPSC can provide information to consumers on safety practices that can help prevent product-related accidents. These outreach efforts are carried out by the Office of Education, Global Outreach, and Small Business Ombudsman. This office's primary responsibility is to coordinate and provide education and outreach activities to various domestic and international stakeholders. The office is also responsible for working with manufacturers to help build safety into their products to prevent dangerous products from ever entering the marketplace. CPSC uses a range of communication strategies to inform the public about safety issues. This information is intended to help consumers make informed choices about the products they purchase, to educate consumers on how to use products safely, and to prompt them to act quickly if they own a recalled product. According to CPSC, the Commission has had success in educating the public through increased use of social media to communicate safety messages and through targeted campaigns that aim to reach the most vulnerable populations affected by certain product hazards. Examples include the "Safe Sleep" and "Pool Safely" campaigns, which addressed risks associated with baby cribs, baby monitor cords and sleep positioners, and swimming pools and spas, respectively.
CPSC posts recalls and press releases to its website in a format that allows television stations and other media to obtain information from CPSC’s website to post on their own websites. Consumers also have the option of accessing www.SaferProducts.gov or calling the CPSC hotline to ask questions about recalls or request safety information. CPSC finds it challenging to address hazards posed by new products because first, the product defect or hazard must be identified; second, the associated risk must be assessed; and as noted earlier, it is harder to identify and assess the risk associated with new products when there is no historical data to assess. Furthermore, according to one agency official, because CPSC does not have authority to require pre-approval of products before they enter the U.S. market, CPSC cannot take action unless a product creates a risk of harm. Generally, new products are unregulated—that is, they are not subject to existing mandatory standards. To illustrate the challenge CPSC faces with addressing risks associated with new products, an agency official cited an instance where the agency collected a handful of incident reports involving a new infant sleep product. They performed a hazard profile on the product but because there had been no injury associated with the product, CPSC could not make a good case to have the manufacturer remedy an identified potential problem. In instances where CPSC may identify a potential hazard before a product is introduced into commerce, the agency’s only action is to alert the manufacturer of the potential hazard or product defect. Moreover, CPSC may not have prior experience with the potential hazard from a new consumer product and may need to take a number of actions to address a specific hazard, which can take years. For example, CPSC has recognized for several years that the ingestion of small magnets can pose a hazard for children. After 34 incidents were reported, 1 resulting in the death of a 20-month old child, and after investigating these incidents, CPSC issued a recall of children’s toys with magnets in March 2006. After further incidents of magnet ingestion were reported, CPSC issued an expanded recall in April 2007. From 2007 to 2008, CPSC worked with the toy industry and other stakeholders to develop a voluntary standard, which the Commission made mandatory in August 2009. However, high- powered magnet sets became available during 2008, with sales increasing in 2009. In February 2010, CPSC received its first report of an ingestion of high-powered magnets by a child. Although there was no injury associated with this magnet ingestion, CPSC noted that the product was inappropriately labeled for children and did not comply with the mandatory toy standards. In response, in May 2010, CPSC worked with one manufacturer to issue a voluntary recall due to the improper labeling. In December 2010, CPSC received another report of high-powered magnet ingestion by a child that required surgery. Because the circumstances differed from those of previous incidents, CPSC continued to track these incidents and conducted a follow-up investigation. In November 2011, CPSC and two manufacturers issued a public service announcement related to ingestion of magnets. CPSC continued to receive reports of incidents involving the ingestion of high-powered magnets. 
In 2012, the majority of manufacturers agreed to stop selling the product, but two manufacturers, one of which sold more than 70 percent of the magnet sets purchased in the United States, did not. To address the hazard associated with the products remaining in the market, CPSC filed administrative actions against the companies in July and August 2012. On September 4, 2012, CPSC took further action and issued a notice of proposed rulemaking to prohibit high-powered magnet sets. The public comment period ended on November 19, 2012. See figure 4 for a timeline of CPSC's actions in response to hazards associated with magnets.

Conclusion

CPSC has broad authority for identifying, assessing, and addressing risks from unsafe consumer products. However, it faces challenges in identifying risks from new and emerging products, largely because, by statute, CPSC was established to respond to risks after products have been introduced into the U.S. market. Neither CPSA nor any other act administered by CPSC requires premarket registration or approval of consumer products. Thus, CPSC does not evaluate a product's potential risk to the public until a product is introduced into commerce. CPSC also faces challenges in identifying product risks in a timely manner because of the large quantity of information it must gather and manage. CPSC has taken steps to improve its responsiveness through better technology for identifying risks, more targeted surveillance of imported products, and a program for manufacturers to streamline the process for conducting recalls. CPSC's efforts to improve its ability to identify unsafe products and target unsafe imported products through IT improvements are still under way, and the agency projects that they will be completed in 3 to 7 years. Because CPSC faces challenges in identifying and targeting unsafe products at import, it has attempted to update information-sharing agreements with its foreign counterparts, as Congress expected when it amended CPSA by including section 29(f). However, restrictive language in CPSA, as amended by CPSIA, has hindered CPSC's ability to share certain information with its counterparts internationally. Therefore, the Commission has been unable to enter into any international agreements pursuant to section 29(f) because CPSC's foreign counterparts will only share information under reciprocal terms that permit those foreign counterparts to make nonpublic information available to their own governing bodies or court systems. Based on our analysis of the statute, section 29(f) has not achieved the results expected by Congress when it enacted this provision, and CPSC may benefit from having more flexibility to exchange information with its counterparts in other countries, which would help CPSC prevent unsafe products from entering the U.S. marketplace.

Matter for Congressional Consideration

To better enable CPSC to target unsafe consumer products, Congress may wish to amend section 29(f) of CPSA to allow CPSC greater ability to enter into information-sharing agreements with its foreign counterparts that permit reciprocal terms on disclosure of nonpublic information.

Agency Comments and Our Evaluation

We provided a draft of this report to CPSC for comment.
In its written comments, reproduced in appendix III, CPSC supported our matter for congressional consideration and believed that it would benefit from having more flexibility to exchange information with its counterparts from other countries through agreements that permit reciprocal terms on disclosure of information. CPSC staff also provided technical comments that we incorporated, as appropriate. We are sending copies of this report to appropriate congressional committees and the Chairman and commissioners of CPSC. The report also is available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix IV.

Appendix I: Objectives, Scope, and Methodology

The Consolidated Appropriations Act requires GAO to analyze the potential safety risks associated with new and emerging consumer products, including chemicals and other materials used in their manufacture, taking into account the Consumer Product Safety Commission's (CPSC) ability and authority to identify, assess, and address the risks of new and emerging consumer products in a timely manner and to keep abreast of the effects of these products on public health and safety. Our objectives were to evaluate the authority and ability of CPSC to (1) stay generally informed about new risks associated with consumer products and use available information to identify product hazards, and (2) assess and address new risks posed by consumer products in a timely manner. To address these objectives, we reviewed the statutes and regulations that provide the basis for CPSC's authorities related to protecting consumers from unreasonable risk of injury. We also examined guidance developed by CPSC that informs its approach to identifying, assessing, and addressing new and emerging risks, such as CPSC's policy on establishing priorities for action by the Commission, guidance on risk-related activities, and information-quality guidelines. In addition, we reviewed CPSC's operating procedural manuals for coding incident reports into its data-management system and for assigning hazard codes to these reports, performance and accountability reports, strategic plans, budget operating plans, 2013 performance budget request, and annual reports. We reviewed existing information about CPSC data systems and interviewed agency officials knowledgeable about the data. Based on our review of documentation, we believe the data are reliable for our purposes. We also reviewed prior GAO reports on CPSC, risk assessment in the federal government, and nanotechnology, and consulted GAO's Standards for Internal Control in the Federal Government and Internal Control Management and Evaluation Tool to assess CPSC's policies and procedures. We also examined the chronic hazard guidelines based on the Federal Hazardous Substances Act that CPSC uses to assess a product's potential health effects. In addition, we reviewed data on CPSC corrective actions. To assess CPSC's timeliness in identifying, assessing, and addressing new and emerging risks, we examined the Office of Management and Budget's (OMB) Memorandum on Principles for Risk Analysis, OMB's 2006 Proposed Risk Assessment Bulletin, and the National Research Council's Review of OMB's Proposed Risk Assessment Bulletin.
We also reviewed CPSC’s performance goals and obtained data on its time frames for performing product safety assessments and testing at the National Product Testing and Evaluation Center. To assess CPSC’s authority to obtain and share information that could help identify new hazards posed by consumer products, we reviewed our prior work on CPSC’s authorities and legislation related to the agency.addition, we reviewed CPSC’s list of its collaborative efforts with other federal agencies to remain informed of new and emerging risks. We reviewed memorandums of understanding between CPSC and some of its foreign counterparts as well as information on risk management practices developed by other countries such as the European Union. In addition to our document review, we interviewed CPSC officials and staff as well as all of CPSC’s current commissioners and the Chairman to understand the organizational structure and the roles and responsibilities of the offices involved in safety operations and data collection, as well as to gain their perspectives on CPSC’s ability and authority to identify, assess, and address new and emerging risks in a timely manner. We also interviewed national consumer and industry organizations and legal professionals and toured CPSC’s National Product Testing and Evaluation Center. At the center, we watched staff conduct flammability testing of a product and learned of other types of testing CPSC conducts such as chemical, combustion, and durability testing. We also observed, through CPSC staff’s illustration, the data-management system CPSC uses to code and screen incident data in order to identify and assess risks. Finally, through a demonstration of CPSC’s import targeting system, we viewed the type of information CPSC is using in piloting its target system to identify unsafe products at the ports. We conducted this performance audit from January 2012 to December 2012 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: CPSC’s and Other Agencies’ Coordinated Responses to Hazards Posed by Defective Chinese Drywall When an emerging risk related to drywall (i.e., sheetrock used in construction) was identified in 2008 that crossed the jurisdiction of several federal agencies, CPSC took the lead in coordinating what the agency reported as the largest investigation in its history. CPSC participated in an intergovernmental task force with the Department of Housing and Urban Development (HUD), Environmental Protection Agency (EPA), Centers for Disease Control and Prevention (CDC), and Department of Homeland Security. In 2008, CPSC was informed of a high level of hydrogen sulfide emissions in drywall made in China that was imported into the United States from 2001 through 2008. The bulk of the almost 4,000 complaints involved homes built in 2006 through 2007. A high level of hydrogen sulfide emissions is associated with metal corrosion, which can damage household appliances and electrical systems. CPSC performed testing and found the level of hydrogen sulfide emissions in Chinese drywall to be 100 times that of non-Chinese drywall. 
Some of the Chinese manufacturers were aware of the issue in 2006 but did not share the information with CPSC, as required. CPSC coordinated with EPA to conduct an elemental analysis of the components contained in Chinese and non-Chinese drywall, as well as to develop a protocol for conducting air-quality testing. CDC's role was to assess health effects and develop a public awareness campaign. HUD's role was to develop guidance for the identification and remediation of problem drywall in homes and to provide grants to help in these efforts. Customs and Border Protection (CBP) worked to identify any imports of Chinese drywall. CPSC also worked closely with the Federal Council on Environmental Quality and the Domestic Policy Council. In addition, the Commission worked with state partners, including state attorneys general and health departments. The timeline in figure 5 illustrates how CPSC addressed the emerging risk.

Appendix III: Comments from the U.S. Consumer Product Safety Commission

Appendix IV: GAO Contact and Staff Acknowledgments

GAO Contact

Staff Acknowledgments

In addition to the contact named above, Debra Johnson (Assistant Director), Tim Bober, Christine Broderick, Marcia Crosse, Philip Curtin, DuEwa Kamara, Yola Lewis, Alexandra Martin-Arseneau, Marc Molino, Nadine Garrick Raidbard, Jessica Sandler, Jennifer Schwartz, Sushil Sharma, Andrew Stavisky, and Henry Wray made key contributions to this report.
Growing numbers of consumer product recalls in 2007 and 2008, particularly of imported toys and children's products, focused increased attention on CPSC. In the 2012 Consolidated Appropriations Act, Congress directed GAO to analyze the potential safety risks associated with new and emerging consumer products. CPSC's approach focuses on new hazards, which could be risks associated with both new and existing products. Therefore, this report evaluates the authority and ability of CPSC to (1) stay generally informed about new risks associated with consumer products and use available information to identify product hazards, and (2) assess and address new risks posed by consumer products in a timely manner. GAO reviewed CPSC's statutory and regulatory authorities to respond to product hazards; reviewed agency documents on risk assessment; reviewed CPSC corrective actions; and met with agency officials and representatives from national consumer, industry, and legal organizations with expertise in consumer product safety and risk assessment. GAO observed CPSC's testing facility and demonstrations of its information system upgrades. The Consumer Product Safety Commission (CPSC) has broad authority to identify, assess, and address product risks, but faces some challenges in identifying and responding to new risks in a timely manner. CPSC uses various means to stay informed about risks that may be associated with new or existing products. These methods include (1) market surveillance activities for imported products, retail stores, and Internet sales; and (2) formal agreements and various activities with other agencies. However, certain legal restrictions may hamper CPSC's ability to stay informed about new product hazards to public health and safety. Specifically, because of certain restrictions in the Consumer Product Safety Act (CPSA), CPSC cannot agree to allow foreign agencies to disclose nonpublic information they receive from CPSC. While the Consumer Product Safety Improvement Act (CPSIA) allows CPSC greater freedom to disclose information to U.S. courts, Congress, and state and local agencies, CPSC has been unable to complete information-sharing agreements with foreign counterparts as envisioned because it cannot offer its counterparts reciprocal terms on disclosure of nonpublic information. Due to the growing number of imported consumer products, this restriction on sharing information may hinder CPSC's ability to identify risks from new products in a timely manner, possibly leading to injury and death if unsafe products enter the U.S. market. CPSC also faces challenges in collecting and analyzing large quantities of data in order to identify potential product risks. Some sources CPSC uses to identify injuries or death are dated--for example, death certificates can be 2 or more years old--or contain limited information about the product involved in the incident. To respond to these challenges, the agency has key efforts under way. First, CPSC is upgrading its data management system. According to CPSC, the upgrades are designed to enhance CPSC's efficiency and effectiveness, enable a more rapid dissemination of information, and allow consumers to search the database through a publicly available Internet portal. CPSC officials expect the upgrades to be completed in fiscal year 2013 and fully operational in fiscal year 2014. Second, in response to a CPSIA requirement, CPSC is working with Customs and Border Protection to test a new approach for identifying unsafe consumer products at the ports. 
CPSC port investigators have found this approach to be effective and have prevented hundreds of consumer products that violated U.S. safety rules or were found to be hazardous from entering commerce. Timeliness of CPSC's actions to assess and address new risks depends on the specific product or hazard. For example, the simplest assessments, such as testing a product for lead content, may take only a few days. More complex assessments, such as tracking potential chronic hazards from certain chemicals and nanotechnology (which involves the ability to control matter at the scale of one billionth of a meter), can take years to complete because no standard method for measuring toxicity associated with nanotechnology currently exists. CPSC uses various approaches to address product hazards, including conducting compliance activities, developing mandatory safety standards, and educating the public about safety hazards and safe practices. CPSC can act more quickly when it is addressing a known hazard. However, addressing a new or emerging risk can take CPSC years because it may need to develop new standards or approaches.
Scope and Methodology

To meet our study's challenges, we used several methods from ethnography, and in certain cases we blended them with survey methods to provide in-depth knowledge of organizational culture from the perspective of VA's frontline staff—its physicians, nurses, and others directly responsible for patient care. We intend this study to complement our earlier reports on organizational culture and changing organizations. We chose ethnography because several of its techniques and perspectives helped us study aspects of patient safety that would otherwise have been overlooked or not observed, such as informal mores, and because the approach helps GAO develop new evaluation methods. Specifically, we drew on ethnography's research traditions of (1) conversational interviews, which enable interviewers to explore a participant's own view of and associations with an issue of interest; (2) researchers' observations of real processes, to further understand the meaning of patient safety in the natural environment of staff; and (3) systems thinking. Our study measures, at the facility level, the extent of familiarity with, participation in, and cultural support for the Program, and it complements a cultural survey VA conducted in 2000. VA expects to resurvey staff in the near future, using its past survey data as a baseline. VA's original, nonrandom survey contained questions regarding shame and staff willingness to report adverse events when the safety of patients was at risk during their care. The VA survey did not establish staff familiarity with key concepts of the Program, participation in VA safety activities, or the facilities' levels of cultural support for the Program.

Conversational Survey Interviews

We recognized that a tradition of fear of being blamed for adverse events and close calls might make staff reluctant to talk about their experience of potential harm to patients. Besides breaking through an emotional barrier, we wanted to understand the private views of staff on what facilitates patient safety. To achieve the informal, open, and honest discussions we needed, we conducted private, nonthreatening, conversational interviews with a random sample of clinicians and a judgmental sample of other staff. At each site, we chose one random and one judgmental (nonrandom) sample of staff to interview in a conversational manner, using similar semistructured questionnaires (see app. III). For the first sample, we interviewed a random selection of 10 physicians and 10 nurses at each of the four facilities. While this provided us with a representative sample of clinicians (physicians and nurses) from each facility, the sample size was too small to provide a statistical basis for generalizing our survey results to an entire facility. To give us a better understanding of the culture and context of patient safety beyond the clinicians involved in direct patient care at each facility, we also interviewed more than a hundred other staff in the four study sites, including medical facility leaders, Patient Safety Managers, and hospital employees from all levels—maintenance workers, security officers, nursing assistants, technicians, and service chiefs. (Appendix I contains more technical detail about our analysis.) Reporting adverse events and close calls is a highly sensitive subject and can successfully be explored with qualitative methods that allow respondents to talk privately and freely.
When staff did not recognize a key element of the Program, our interviewers explained it to them. (We were not giving the respondents a test they could fail.) Selecting clinicians randomly at each of four facilities, and asking some closed-ended questions, such as those expecting "yes" or "no" answers, allowed us to analyze and present some issues as standard survey data. This combined survey and ethnographic approach afforded us most of the advantages of standard surveys while establishing an environment in which the respondents could talk, and did talk, at length about the cultural context of patient safety in their own facilities. Clinicians responded to a standard set of questions, many open ended, such as, "To what extent do you perceive there to be trust or distrust within your unit or team?" These questions allowed the clinicians to discuss issues spontaneously and allowed us to discover what facilitates trust from their point of view. Thus, if clinicians thought leadership was important, we had an opportunity to see this from their viewpoint rather than starting from the premise that leadership would be important to them. An important part of our approach was content analysis, which we used to analyze answers to both the standard and open-ended questions. Content analysis summarizes qualitative information by categorizing it and then systematically sorting and comparing the categories in order to develop themes and summarize them. We determined, by intercoder reliability tests, that our content analysis results were trustworthy across different raters. (See app. I.)

Observation

We added another ethnographic technique in order to more completely understand the culture within each facility. Since responses to surveys are sometimes difficult to understand out of context, our in-depth ethnographic observations of the patient care process gave us a more complete picture of how the elements of the Patient Safety Program interacted. They also gave us a better understanding of VA's medical facility systems. We observed staff in their daily work activities at each medical facility, which helped us understand patient safety in context. For example, we attended staff meetings where the Program was discussed, attended RCA meetings, and followed a nurse on her rounds. We took detailed field notes from our observations, and we analyzed and summarized our notes. We reviewed files to examine data on adverse events, close calls, and RCA reports. We read files from administrative boards, reward programs, and patient safety committee minutes. And we interviewed high-level VA officials.

Systems Thinking

Finally, our ethnographic research approach was systemic. This was to help us appreciate interactions between the elements of the Program and the facilities' existing culture. Ethnography has traditionally been used to provide rich, holistic accounts of the way of life of a people or a community; in recent decades, it has also been used successfully to study groups in modern societies. A systems approach casts a wide net over the subject. In this case, we chose to study the Patient Safety Program in relation to other aspects of culture in VA's medical facilities that might affect its adoption, such as the extent to which staff have mutual trust. We also developed a model, or flow chart, to guide our study of the Program and the culture of the facilities.
The model, in figure 2, helped us conceptualize the important safety activities within the Program and analyze and present our results. We looked not only at the Program's key elements, in the darkly shaded boxes in figure 2, but also at what surrounds them—the context of the medical facilities' culture—and whether the culture supports the adoption of the Program. Our model illustrates that our primary focus was measuring clinicians' supportive culture for reporting close calls and adverse events and their familiarity with and participation in reporting programs and RCAs. The model also depicts the interaction between clinicians' receiving feedback and being rewarded and their desire to continue reporting close calls and adverse events. It also allows us to describe how clinicians' reporting close calls and adverse events, and the subsequent investigation of the root causes of them, developed into system changes that in turn resulted in patients being safer. We conducted the study at three medical facilities that VA had recommended as being well managed. We selected a fourth facility for geographic balance. Thus, the four facilities were in different regions of the country. Using rapid assessment techniques, we conducted fieldwork for approximately a week at each of two facilities, for 3 weeks at a third, and for 25 days at the fourth. We did our work from November 2002 to August 2004 in accordance with generally accepted government auditing standards.

Background

The Patient Safety Goal

The Program's goal reflects the view that "to make health care safe we need to redesign our systems to make error difficult to commit and create a culture in which the existence of risk is acknowledged and injury prevention is recognized as everyone's responsibility. A new understanding of accountability that moves beyond blaming individuals when they make mistakes must be established if progress is to be made." This vision of making patients safe through "redesign . . . to make errors difficult to commit" led to VA's National Center for Patient Safety (NCPS), established to improve patient safety throughout the largest health care system in the United States. To transform the existing culture of patient care in VA's medical facilities, VA's leaders aimed to persuade clinicians and other staff in health care settings to adopt a new practice of reporting, free of fear and with mutual trust, identifying vulnerabilities, and taking necessary actions to mitigate risks. The Under Secretary had recognized the risk to patients during care and recognized that a focus on VA's existing culture could improve patient safety. Related research shows that if complex decision-making organizations are to change, they must modify their organizational culture. Traditionally, clinicians involved in an adverse event could be blamed or sued, but the roots of unintentional errors are now understood as originating often in the institutions and structures of medicine rather than in clinicians' incompetence or negligence. Several contextual factors influence how the Patient Safety Program is experienced at the medical facilities we visited and show the increasingly complex world of patient care. Our study's limitations meant that we could not examine these factors, but health care facilities in general, including VA's, are experiencing difficulty in hiring and retaining nurses and face potential staffing shortages. Patients admitted to VA medical facilities are more likely than in the past to have multiple medical problems that require more extensive care.
VA’s eligibility reform allowed veterans without service- connected conditions to seek VA services, leading to a 70 percent increase in the number of enrolled veterans between 1999 and 2002. The Patient Safety Process VA has provided funding of $37.4 million to NCPS for its Patient Safety Program operations and related grants and contracts for fiscal years 1999–2004. In fiscal year 1999, NCPS defined three major initiatives: (1) a more focused system for mandatory close call and adverse event reporting, including a renewed focus on close calls; (2) reviews of close calls and adverse events, including RCAs, using interdisciplinary teams at each facility to discover system flaws and recommends redesign to prevent harm to patients; and (3) staff feedback on system changes and communication about improvements to patient safety. Close Call and Adverse event Reporting Starting with the NCPS program in 1999, reporting of close calls increased dramatically as their value for patient safety improvement was widely disseminated and increasingly recognized by VA personnel. A close call is an event or situation that could have resulted in harm to a patient but did not, either by chance or by timely intervention. VA encourages reporting close calls and adverse events, since redesigning system flaws depends on staff revealing them. VA’s Patient Safety Managers told us that only adverse events and not close calls were traditionally required to be reported to supervisors and then up the chain of command. Under the Program, staff also have optional routes for reporting—through Patient Safety Managers or a confidential system outside their facilities. Staff can now report close calls and adverse events directly to the facilities’ Patient Safety Managers. They, in turn, evaluate the reports, based on criteria for deciding which adverse events or close calls should be investigated further. NCPS also has a confidential reporting option— the Patient Safety Reporting System (PSRS)—through a contract with the National Aeronautics and Space Administration (NASA). NASA has 27 years of experience with a similar program, the Federal Aviation Administration’s Aviation Safety Reporting System. Under the contract with VA, NASA removes all identifying information and sends selected items of special interest to the NCPS. NASA also publishes a newsletter based on reports that have had their identifying information removed. Root Cause Analysis Teams Working on interdisciplinary teams of usually five to seven participants, staff focus on either one or a group of similar close calls or adverse events to investigate their causes. Then they search for system flaws and redesign patient care so that mistakes are harder to make. Under the Program, NCPS envisioned that these teams would be a key step to improving patient safety through system change and one of its primary mechanisms of introducing clinicians to the Program. In 1999, NCPS began RCA implementation. In this on-the-job training, Patient Safety Managers guide local interdisciplinary teams in studying reports of close calls or adverse events to identify and redesign system weaknesses that threaten patients’ safety. Teams are allowed 45 days to learn as much as possible from a close call or adverse event or a group of similar close calls or adverse events such as falls, missing persons, medication errors, and suicides called aggregated reviews. Within the given time period, teams are to develop action plans for system improvement. 
Personal experience on interdisciplinary RCA teams investigating close calls and adverse events at their home facilities is the clinicians' key training experience. VA expected that the RCA experience would persuade staff that VA was changing its culture by encouraging a different approach to reporting.

Feedback Mechanisms

Staff need proof that the Program is working, which comes through timely feedback on their reporting. A feedback loop fosters and perpetuates close call and adverse event reporting. Without it, staff may feel the effort is not worth their time. NCPS built in feedback loops at several levels of the system. For example, individuals who report a close call or adverse event are supposed to get feedback from the RCA team on actions recommended as a result of their reports. Also, NCPS issues an online bimonthly newsletter that reports safety changes. In chapter 2, we measure clinicians' familiarity with and participation in the Program at the four facilities we visited. Chapter 3 is an examination of whether the culture at the four facilities supports the Patient Safety Program, and chapter 4 provides examples of management practices that promote patient safety. We asked VA to comment on our report; VA's comments are in appendix IV. Our response to VA's comments is in the conclusions in chapter 5. VA also provided some additional comments to emphasize that it believes that it has taken steps to address the issue of mutual trust. VA describes those steps in the report on page 67. In general, we found progress in clinicians' understanding of and participation in the Patient Safety Program. Three facilities had medium or higher familiarity with and participation in the Program's core elements, and one had lower. At that facility, the staff were not following VA's policy of reporting close calls and were not being educated in the benefits of doing so. Examining the data across our total random sample, we found that some clinicians were familiar with several core concepts of the Program and were unfamiliar with others—a picture NCPS officials said did not surprise them. About three-quarters of clinicians were familiar with the concept of RCAs (newly introduced in 2000) and the concept of the close call. About half the clinicians recognized the new confidential reporting process—another equally important program. One-third had participated in an RCA or knew someone who had. NCPS staff told us that participation in RCAs is crucial to culture change at VA, and clinicians who were on RCA teams indicated that they experienced the beginning of a culture shift. Of the staff who had participated in RCAs, many indicated that it was a positive learning experience, but facilities varied in ensuring clinicians' broad participation.

Facilities Shared Safety Hazards but Not Program Familiarity and Participation

VA has made progress in familiarizing clinicians with the Program's key concepts and in involving them in the Program. But while the facilities we studied shared basic safety problems, three had made more progress than the fourth. First, all four experienced similar hazards to patient safety. Second, we report clinicians' familiarity with and participation in the Program in two ways: grouped first by facility and then across the four sites.

Facilities Share Common Safety Reporting Pattern

The four facilities shared an overall pattern in the types of adverse events they reported, reflecting their common safety challenge.
To establish the Program's context, we asked the four facilities for documents related to close calls and adverse events reported over a one-month period (June 2002) and reviewed them. All the facilities reported falls for this period, while two or more facilities recorded patients' violence toward staff, patients' suicides and suicide attempts, missing patients, and medication errors (see fig. 3). Although our data reflect a limited time period, the highly overlapping types of reporting at the facilities parallel those found in the wider VA patient care system, as documented in an earlier review by the VA Medical Inspector.

Facilities' Differences in Participation and Familiarity with the Program

Staff at one facility had less familiarity with and participation in the Program than staff at the three others (see fig. 4). In the interviews with the random sample, we found Facility D had lower familiarity with the Program's concepts than the other facilities and lower participation in RCAs; this pattern was buttressed by additional interviews at Facility D. For example, the quality manager who supervised Patient Safety Managers at that facility did not realize that close call reporting was mandated, and the education officer who trained staff in patient safety told us that staff were generally not acquainted with the concept of reporting close calls. Because knowing that an initiative exists is often the first step to participation, the lower familiarity with the Program at Facility D in the fifth year of implementation was a likely impediment to the adoption of the Program there.

Differences in Facilities' Adherence to Close Call Reporting Policy

The four medical facilities we studied also varied in their adherence to close call reporting policies under the Program. We found that three of the four facilities followed the policy of reporting close calls. One facility, in particular, showed a marked increase in the number of close calls in a short period of time; close call reports were rare in the 6 months before but numbered 240 in the 6 months after its leaders told staff patient safety was an organizational priority and introduced a simple reward system for close call reporting. However, one facility we visited was not reporting close calls in the Program's fifth year.

Familiarity with and Participation in the Program across Four Facilities

We looked at interview responses from randomly selected clinicians across all four facilities. We found that three-quarters of the clinicians knew the meaning of close call—that is, when a potential incident is discovered before any harm has come to a patient—but only half were aware of the option of reporting close calls and adverse events confidentially. (See table 1.) Close calls are presumed to occur more often than adverse events, and reporting them in addition to adverse events is central to the Program's goal of discovering and correcting system flaws. Staff who do not recognize the close call concept cannot bring to light system flaws that could harm patients. Further, because changing from traditional blaming behavior to reporting without fear can take time, staff familiarity with the confidential reporting option is important. However, only half the clinicians surveyed at the four facilities knew that they could report adverse events or close calls confidentially under the NASA reporting contract.
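For readers interested in how results like those in table 1 can be produced from coded interviews, the sketch below uses invented data to tally coded yes/no responses by facility and to check agreement between two coders with Cohen's kappa, the kind of intercoder reliability test described in the scope and methodology discussion. It is an illustration under assumed data, not the analysis code actually used for this study.

```python
# Illustrative only: the coded responses and rater judgments below are
# invented. The sketch tallies "familiar with the close call concept" codes
# by facility and computes Cohen's kappa for agreement between two coders.

from collections import Counter

# (facility, clinician coded as familiar with the close call concept? 1 = yes)
responses = [
    ("A", 1), ("A", 1), ("A", 0), ("B", 1), ("B", 0),
    ("C", 1), ("C", 1), ("D", 0), ("D", 0), ("D", 1),
]

# Share of clinicians coded "familiar" at each facility.
totals, familiar = Counter(), Counter()
for facility, code in responses:
    totals[facility] += 1
    familiar[facility] += code
for facility in sorted(totals):
    print(f"Facility {facility}: {familiar[facility] / totals[facility]:.0%} familiar")

def cohens_kappa(rater1, rater2):
    """Agreement between two coders, corrected for chance: (po - pe) / (1 - pe)."""
    n = len(rater1)
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    # Chance agreement from each rater's marginal proportions of each code.
    expected = sum(
        (rater1.count(c) / n) * (rater2.count(c) / n)
        for c in set(rater1) | set(rater2)
    )
    return (observed - expected) / (1 - expected)

coder1 = [1, 1, 0, 1, 0, 1, 1, 0, 0, 1]
coder2 = [1, 1, 0, 1, 1, 1, 1, 0, 0, 1]
print(f"Cohen's kappa between coders: {cohens_kappa(coder1, coder2):.2f}")
```

A kappa value near 1 indicates that different raters applied the coding categories consistently, which is the sense in which the content analysis results were judged trustworthy across raters.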
Culture Shift through Root Cause Analysis Clinicians who had participated in interdisciplinary RCA teams found that their participation enabled them to understand the benefits of using a systems approach rather than blaming individuals for unintentional adverse events and close calls. To understand the RCA process from close call reporting to RCA team analysis, we provide an example from fieldwork that shows how two misidentifications in a surgery ward led to a reexamination of the preoperative process in an RCA. (See “Developing Patient Safety from Examining Close Calls and an RCA.”) While examining how many RCAs were conducted from 2000 to 2003 at the four facilities, we found that the most active facility we studied had performed twice as many RCAs as the least active. The RCAs have the potential to promote a cultural shift from blaming staff for unintentional close calls and adverse events to a rational search for the root causes, but clinicians at the four facilities had inconsistent opportunities to participate in the Program. Illustrating the Steps from Close Calls to RCAs “Developing Patient Safety from Examining Close Calls and an RCA” illustrates an RCA team’s initial steps by following a series of events involving two close calls of mistaken identity in surgery at one facility. Developing Patient Safety from Examining Close Calls and an RCA The Patient Safety Manager had an unusual visit from the Chief Surgeon. He had come to report two recent instances of patients being mistakenly scheduled for surgery. The identity mix-ups had been discovered before the patients were harmed—a situation the surgeon recognized as fitting the Program’s mandate to report close calls in order to identify hazards in the system. After each close call, he had filled out a form and made a report to NCPS, which had called him back within 24 hours to ask for more information and to offer some reengineering suggestions. At the next weekly surgery preoperation meeting, the Chief Surgeon and his staff discussed their schedule and details of coming surgeries, using a matrix timetable projected for all to see. Then he discussed the two close calls. In both cases, the correct patient had come to the surgery preparation room, but the staff had been expecting someone else. In one case, the scheduling staff had confused two similar names. In the other case, the scheduling staff had, as usual, used the last four digits of the Social Security number to help identify the patient but had had two patients with the same last four digits. In the meeting’s discussion, the staff tried to understand how such mistakes could happen. The Patient Safety Manager convened an expedited RCA team of three other VA staff to get at the root cause of such identification problems. She opened the meeting by saying, “If we don’t learn from this [close call], we’re all fools.” She announced that the RCA would be limited to two or three meetings rather than several weeks. After introductions, the staff members explained their role in scheduling and what happened in such cases. As they spoke, the staff tried to outline the scheduling process: what forms were completed, whether they were electronic or paper, how they moved from person to person, and who touched the forms. Several problems emerged. (1) Some VA patients might not always know their identity or surgical site because of illness or senility or both. 
Also, patients with multiple problems cannot always relay them to staff, because they may focus on one problem while the appointment scheduled is for another problem. (2) Two key VA staff may be absent at the same time and a substitute may make the error. (3) In one case, two patients’ names differed only by m and n. (4) A scheduler noted that scheduling is filled with interruptions and opportunities for confusion. For example, it is not uncommon that scheduled patients have overlapping numerals for the last four digits of their Social Security numbers. The RCA team’s next meeting was scheduled. In future meetings, the RCA team would consider various ways of preventing or minimizing similar events. Clinicians’ Belief in RCAs as a Positive Learning Experience Staff who had participated in RCAs told us that their experience was a valuable and convincing introduction to the Patient Safety Program. In lieu of giving clinicians formal training in the central concepts of the Program, NCPS expected to change the culture of patient care one clinician at a time through their individual experience in RCAs. NCPS intended that experience on multidisciplinary RCA teams investigating the underlying causes of reported close calls and adverse events at their home facilities would be clinicians’ key educational experience and that it would persuade them that VA was taking a different approach to reporting. All facilities are expected to perform RCAs, in which local interdisciplinary teams study reports of close calls and adverse events in order to identify and redesign systems that threaten patients’ safety. Staff also reported that RCA investigations created a learning environment and were an excellent way to introduce staff to redesigning systems to prevent harm to patients. Two doctors at one facility, for example, told us that the RCAs they participated in were a genuine “no blame learning experience” that they felt good about or found valuable. Two nurses at another facility reported being amazed at the change from a blaming culture to an inquiring culture as they experienced the RCA process. However, staff also told us that the RCA process took too much time or took time away from patient care. At another facility, where trust was low and only 5 of 20 clinicians had a positive view of reporting, each of those 5 clinicians had a positive experience with RCAs under the new Program. “How Participating in RCAs Affects Clinicians’ Work” presents some clinicians’ own stories of their participation in RCAs. How Participating in RCAs Affects Clinicians’ Work Physician 1: I participated in an RCA through my work in the blood bank. It taught me to look at errors systematically and not rush to blame individuals. But if an employee were eventually found responsible, then the Lab would hold that person accountable. [This example reflects the decision leaders must make between personal accountability and systemic change.] Physician 2: RCAs are a good thing. It’s fixing a potential disaster before it can coalesce and become a disaster. Nurse 1: I think RCAs are a good thing, because usually the problems are system problems. I think if you fix the system, you fix the problem. It seems to be that way in surgery. You try and concentrate on the things you can fix. Nurse 2: They used to have a process in psychiatry called “post mortem.” That process often led to the conclusion that a suicide could not have been prevented. By contrast, in the new RCA process, we look at how the RCA can promote system changes.
Nurse 3: RCA does a good job of identifying not only the actual adverse event but also the contributing factors. This is very helpful because it allows us to better understand what to do about an adverse event. Nurse 4: RCA is a good system. It’s a good way to share information and avoid recurring error. Nurse 5: My general impression is that RCAs are great. They’re especially important when teams look for results and action items. Variation in Facilities’ RCA Activity Over the 4 years of the RCA implementation, the most active facility we studied (Facility A) had performed twice as many RCAs as the least active facility (Facility D). (See table 2.) The number of RCAs, similar to the number of close calls and adverse events, does not reflect the actual numbers of adverse events or close calls that occurred or how safe the facility is; rather, it reflects whether organizational learning is taking place, through increasing participation in a core Program activity. Similarly, NCPS staff recently reported to a facility leaders’ training session that networks of their facilities varied fourfold in fiscal year 2002 with respect to the number of RCAs conducted. Facility D’s director told us that NCPS had recently identified his facility as having too few RCA reviews. Inconsistent Opportunities to Participate in RCAs One facility was more successful than the three others at providing busy physicians with the opportunity to participate in RCA teams by adopting a mandatory rotation system. RCAs have been required under the Program since 2000. About three-fourths of the respondents were familiar with the RCA concept. Seventy-five percent familiarity represents substantial learning, given how recently the concept was introduced. However, only about a third had participated in an RCA or knew someone who had. At one facility, we found broad participation by physicians because management required it. NCPS envisions RCA experience as central to changing to a culture of safety, but many VA clinicians (approximately 65 percent) at the facilities we studied had yet to participate in the nonblaming process that NCPS’s director told us he viewed as the most effective experience for culture change: “We don’t want professional root cause analysis people doing this stuff. Then you don’t change the culture.” We found a wide spectrum of methods being used to recruit physicians into RCA teams. One facility had broad physician participation in RCAs as its policy, and at another facility one unit had a rotational plan that encouraged its own clinicians to participate, in contrast to the rest of the facility. Administrators at three of the four had no policy across the facility to ensure physician participation on the teams. At two facilities, Patient Safety Managers told us it was difficult to get physicians to participate because of their busy schedules. Understandably, most of the clinicians we surveyed had not served on RCA teams. Varying Cultural Support Clinicians at three of the four facilities had medium or higher cultural support for the Program. One facility had lower support, and many clinicians indicated that they would not report adverse events because they feared punishment. This suggests that the Program will not succeed unless cultural support is bolstered.
We explored cultural support among these four clinician groups in two ways: (1) by graphically comparing the groups’ levels of mutual trust and comfort in reporting close calls and adverse events with their levels of familiarity with the Program, and (2) by graphically demonstrating the barriers clinicians see as blocking their close call and adverse event reporting, in conjunction with some elements of basic familiarity with and cultural support for the Program. Clinicians’ Trust and Comfort in Reporting Vary by Facility In figure 5, we compare our findings on clinicians’ mutual trust and their comfort in reporting close calls and adverse events at the four facilities. The levels of these components of a supportive culture appeared to vary among the clinician groups. For example, staff at Facility A had medium familiarity with the Program but had the lowest levels of comfort in reporting adverse events and close calls and of mutual trust among the four facilities. Knowledge from specific safety training or RCA participation was not sufficient for them to readily adopt safety practices under the Program if levels of comfort in reporting and mutual trust were not high enough. Figure 5 contrasts information on the supportive culture (mutual trust and comfort in reporting) with a measure of staff familiarity with the Program from figure 4. Many staff at Facility A were afraid of being punished, and they mistrusted management and other work units. One staff member explained why staff would not report adverse events: “We have a culture of back-stabbing here. They are always covering themselves.” Many other staff members echoed this characterization of the atmosphere, linking the lack of cultural support to their decision not to perform the most basic of the Program’s activities. Staff at that facility needed a boost in supportive culture to fully implement the Program. In contrast, Facility D, with the least familiarity with the Program, had trust and comfort levels almost as high as any of the others, indicating that if the Program were to be pursued with greater vigor there, cultural support would not be a barrier to reporting close calls and adverse events. Barriers to Reporting In interviewing clinicians, we found that barriers remain to reporting adverse events and close calls. Even for staff familiar with the concepts, reporting required overcoming numerous remaining obstacles. These staff indicated that reporting formally would be a time-consuming diversion from patient care or, worse, “an invitation to a witch hunt.” In figure 6, we display the cumulative effect of the barriers to reporting close calls that staff told us about, in conjunction with familiarity with and cultural support for the Program. Clinicians told us about barriers to their participation in reporting, including (1) limited perceived value, (2) not knowing how to report, (3) not having enough time to report, (4) fearing traditional blame or punishment, (5) lacking trust that coworkers would not shame them, and (6) lacking knowledge of the confidential reporting option. Staff at all four sites reported such barriers in reporting both close calls and adverse events. We present some of their views in “Clinicians’ Barriers to Reporting Close Calls and Adverse Events.” Clinicians’ Barriers to Reporting Close Calls and Adverse Events Nurse 1: Some clinicians feel comfortable reporting adverse events and close calls. I agree with the concept. It depends on the person. Some would feel it would be used against them.
I’ve seen nonreporting, because, before, they got written comments such as “This is not a near miss.” “This is not a close call.” We get shut down instead of worked with. [By “shut down,” she meant that management told her it was not a close call and not to report it.] It happened to me. Management generally discourages and does not empower staff to feel comfortable reporting patient safety conditions. Instead, I reported and it was used against me. Physician 1: I can’t remember if I’ve written a close call. That does not happen here—only very, very rarely. Maybe I wrote one early on in my career, but I’m not sure. Physician 2: I thought I had a close call once and showed it to the chief of staff and he told me that it was not a close call. I’m unclear what the definition of a close call is. Physician 3: I know what a close call is in other settings, but not in the hospital setting. They are not reporting on close calls in this hospital. Physician 4: Yes, I know what a close call is. I’ve not reported a close call, but if I were to, I would go to a nurse supervisor and tell her about it orally and have her report it. I would not use incident reports to report a close call—only actual events. Physician 5: I have not reported a close call. I’m removed from the nursing communications. Physician 6: I’m unsure if it is safe to report close calls without punishment. Nurse 2: If I saw a close call, I would go talk to the nurse who did it. Writing up a close call on someone would be cruel. I would not write up a close call or adverse event report on someone else. If something happened to the patient, I would write it up. Writing up another person would cause conflict. We need to help each other, and writing each other up is not considered helpful. Additional Steps to Stimulate Culture Change The themes clinicians articulated most often for work conditions that promote a supportive culture for patient safety were (1) leadership, (2) communication, (3) professional values, and (4) workflow. Building a Supportive Culture A few strong patterns emerged from the clinicians’ responses to our open-ended interview questions about what affects trust and comfort in reporting close calls and adverse events. First, across the survey, the clinicians said their leaders’ actions were most likely to increase or decrease comfort and trust. Attributes of communication were the second most common aspect of their work that they said influenced their comfort and trust. Third, and somewhat less commonly, clinicians thought that the values and norms that they had developed in their professional training and that had been reinforced on the job influenced their culture, but they also thought that workflow could support or undercut trust generally. In their view, trust could be made or broken, depending on whether tasks shared between individuals or between units went smoothly and cooperation was maintained. Table 3 shows the results of our content analysis, listing the clinicians’ four top themes—leadership, communication, professional values, and workflow—and how many times we found these themes in our analysis. When we asked clinicians what affected a culture that supported comfort in reporting and trust among the different professions, departments, teams, and shifts they worked with, their most frequent answers were effective leadership and good two-way communication. Moreover, the clinicians told us that an unsupportive culture lacks these characteristics.
Clinicians gave us these same answers, whether we asked about comfort in reporting or mutual trust. Further, we found that the culture of blame and punishment traditionally learned in medical training hampers close call and adverse event reporting but that mutual trust is developed more by workplace conditions. Effective Leadership Leadership’s role is important in fostering a supportive cultural environment for the Program. Clinicians reported examples of leaders facilitating comfort in reporting and mutual trust that enabled them to participate in the Program. But at several facilities we also heard about distrust of the Program that resulted from leaders’ actions or lack of action. Clinicians told us that some VA leaders had not focused sufficiently on building the supportive culture that the Program requires. Staff reported that in order to trust, they needed information and needed to take part in decisions about their workplace and policies that affect their work. For example, clinicians told us that they wanted to be part of management’s decisions or, at the very least, to be informed about management’s decisions when a number of changes were being introduced, such as when medical supplies and software were purchased, clinicians were assigned temporary rotations, and performance measures were implemented. Their observations are in line with other studies that show that leaders’ making decisions without consulting frontline workers can cause serious problems of trust. In “Clinicians’ Perspectives on Leaders’ Supporting Trust,” we illustrate staff’s positive attitudes toward patient safety and how leadership is instrumental in developing mutual trust and comfort. Clinicians’ Perspectives on Leaders’ Supporting Trust Nurse 1: I asked my staff what the role of leaders should be so I could serve staff better. Many answered, “communication” and “knowing what is happening at the facility is important.” Physician 1: Leaders often bring up patient safety. They’re “taking a lead in making staff aware of patient safety.” At my facility, they hold staff meetings to review the patient safety goals of the Joint Commission on Accreditation of Healthcare Organizations (JCAHO). The chief of staff constantly brings up patient safety in meetings. The administration takes the lead, not only “talking the talk” but also “walking the talk.” Nurse 2: Trust is sustained, in part, because of weekly meetings with management, where they talk about patient safety. Physician 2: It’s leadership’s responsibility to communicate that staff are accountable for cooperation and coordination of patient care. Conversely, respondents said leaders’ actions can diminish clinicians’ comfort and trust, as summarized in “Clinicians’ Perspectives on Leaders’ Undercutting Trust.” Physicians and nurses at different facilities told us that trust is diminished when staff do not work in stable teams. Some of the policies that clinicians told us were obstacles to building a stable team include assigning floating or nonpermanent supervisory personnel, rotating physicians on and off the ward, and the monthly rotation of student nurses and doctors. Clinicians’ Perspectives on Leaders’ Undercutting Trust Physician 1: For 20 years, there was nothing but “blame and train.” In the past, an adverse event or close call was associated with a person you had to blame, and the “fix” was to train them. Nurse 1: We have a panel of nurse managers who have discouraged adverse event reports for medication errors.
I vow to encourage reporting errors without blame. We still have a way to go to be honest about reporting. Nurse 2: I know of instances when staff reported adverse events, they were transferred, so that does not make staff comfortable reporting them. There is no trust of management. Nurse 3: Decisions that affect our work are made without talking to staff or understanding our work situation. Physician 2: If you don’t know what’s going on, you invent it. Physician 3: The most critical change needed at this facility is in the area of leadership. Leaders are ineffective because they are not good at communication. We hear about reasons why we are blamed. This causes a feeling of distrust. Physician 4: Leadership has little grasp of patient care and, thus, policy directives have little impact. If we’re given a policy to spend a maximum of 20 minutes per patient, including completing records, I do what the patient needs. Management can just yell at me. Communication Staff indicated that communication in the workplace affects trust and comfort in reporting. Further, they told us that communication is challenging, since it involves coordinating tasks with and between leaders and teams and their empowerment, all of which can be problematic in the medical setting. Some VA staff told us that unequal power relationships and hierarchical decision making are often obstacles to patient safety. They also elaborated on the kinds of communication that support patient safety, including empowering staff so that they can be heard. Traditionally, a nurse’s status is lower than a physician’s in hospitals, and some nurses could find it difficult to speak up in disagreement with physicians. For patients to be safe, however, nurses indicated they wanted to be empowered to openly disagree with physicians and other staff when they found an unsafe situation. For example, nurses told us that they had to speak up when they disagreed with the medication or dosage doctors had ordered. They also said that they had problems when physicians telephoned nurses and gave directions orally when policy stated that physicians’ orders must be written. The clinicians spoke to us about empowerment and their involvement or lack of involvement in decision making. “Clinicians’ Perspectives on How Communication Promotes Trust” gives some examples of what they told us about communication that they believed supports patient safety. Clinicians’ Perspectives on How Communication Promotes Trust Nurse 1: We interact with doctors and nurses in clinic. If something happens, we share with one another about how we might have done it differently. This goes on daily. Nurse 2: The director of the medical facility is a good communicator; he keeps us informed. He maintains a personal newsletter. Our nurse manager is well rounded and she listens. Nurse 3: Peers and coworkers communicating with one another supports patient safety. For instance, sometimes we have patients who have a history of violence. This information is reflected in the computer and comes up when they “chart them in,” but sometimes a nurse may still not know of such a history. Therefore, in the nurses’ reports, the history of violence and the need for caution is passed on. Extra information about the patients can also help them deescalate confrontations between patients. Physician 1: VA’s Computerized Order Entry system [a computerized method for ordering medications] promotes patient safety. Before, it was hard to read the physicians’ handwriting. 
The Computerized Order Entry at least eliminated the legibility problem. They do not have Computerized Order Entry at the university where I also work. VA also got rid of using Latin abbreviations. Now everything has to be written out. Physician 2: Open communication promotes team buy-in and therefore better customer service. Physician 3: We have a good department because staff can communicate their complaints. Nurse 4: We do an RCA on our own close call or adverse event or those from other sources, and then we present the results to the staff. I brought a PowerPoint briefing to our staff meeting about another hospital’s wrong site surgery, so we could know what had happened. If JCAHO published an adverse event, I put it in our staff notes and have it discussed at the next staff meeting. Nurse 5: Management is more involved with the workers. It seems that they are listening more. Physician 4: Within the unit, we have good trust. Outside the unit, the administration has more trust and more communication. We’re in the loop more. In the clinic, we have good trust in nurse-to-doctor and doctor-to-doctor relationships and with leadership. Physician 5: I reported a close call recently and feared blame, but it was not that way at all. It was a learning experience for all who heard about it. I think it’s wonderful that VA has created this open atmosphere. Formerly, you might be a scapegoat, have backlash, and get a poorer rating. Today, we don’t feel we’re going to be punished. In “Clinicians’ Perspectives on How Faulty Communication Diminishes Trust,” we give clinicians’ examples of management’s undermining patient safety by deciding policies without consulting them, as when nurses were not included in decision making. Such policies sometimes proved dysfunctional or were ignored. Clinicians’ Perspectives on How Faulty Communication Diminishes Trust Nurse 1: I have to double-check changes in supplies in order to safeguard patients, because Supply often sends ABC instead of XYZ. Since we’re not included in decisions about product changes, we’re forced to continually double-check Supply to keep patients safe. Nurse 2: We have poor communication between other units and the radiology unit. They send incontinent or violent Isolation patients without notifying X-ray staff to be wary. Facility staff also wanted additional and more timely feedback on what happens to their reports of close calls and adverse events and on the results of RCAs. Some Patient Safety Managers felt too busy to provide feedback to staff because their jobs included a number of other duties, such as facilitating RCAs. At one facility, Patient Safety Managers routinely reported system changes back to staff who made the reports, but at the other facilities, they did not have a routine way of doing this. Many staff at the four facilities told us that they did not know the recommendations of the RCA teams or the results of close call or adverse event reports. NCPS agrees that feedback to staff is necessary but currently inadequate, and it plans to focus on the need for feedback at facilities in the near future. NCPS’s Web site publicizes selected results of RCAs and alerts and system changes that result from reporting.
Some of what VA’s leaders and frontline clinicians told us about the need for more feedback is presented in “Facility Staff Concerns about Limited Feedback.” Facility Staff Concerns about Limited Feedback Nurse Manager: We do a good job of following up on close call or adverse event reports in my unit, but not as good a job following up on the recommendations from RCAs. I was able to implement the action items right away in my unit after I participated in an RCA on patients’ falls, but other nurse managers didn’t hear about the results from the RCA for 2 or 3 months. The RCA teams develop really good ideas, but we need follow-through to make sure everyone knows that this is what we’re going to do to change the system. Delays result from organizational routing and financial constraints. Even when the recommendation is signed, sometimes there’s a delay getting the information down to the nurse managers. Physician 1: There should be an annual report of actions taken as a result of reporting adverse events and close calls. For example, if three units have developed a different way of labeling medication that used to be labeled alike, then the rest of the staff should know about it. [This was a reference to medication that looks alike and confuses staff. One solution is for the pharmacy to buy the two medications from different manufacturers so that the labels will be different.] It makes people feel better to know the information they reported helped make things better. I’d make sure that the information on improved medical care gets reported back to the staff. Administrative Official: The distribution of RCAs has been limited to staff responsible for the action or system change, but in the future the results will be distributed more broadly. Physician 2: I haven’t heard any results from the RCAs. A pamphlet on the results would be a good idea. Workflow and Professional Training In addition, staff spoke to us frequently about workflow issues—how safely handing off tasks between shifts and teams required trust but could cause mistrust when the transition was not smooth or efficient. VA clinicians clarified for us that mutual trust could be either gained or lost between workers and units, depending on coordination. And they drew conclusions about the importance of the quality and nature of workflow to patient safety. Clinicians also elaborated on aspects of the values they learned in training that did not facilitate a blame-free workplace. They indicated that shifting patient care between groups was an ongoing challenge to patient safety. For analysis purposes, we found these issues in continuity of care to be part of the larger problem of workflow, because they entailed the coordination of tasks and communication within and across teams. In the views of the clinicians at the facilities we studied, if staff, teams, or units begin to feel they cannot adequately communicate their patients’ needs for care because of workflow problems, then trust may be lost, in turn diminishing patient safety. At one facility, where trust and comfort were lower than at the others, clinicians told us that workflow failures diminished trust and threatened patient safety. In “Clinicians on Workflow Problems and Patients’ Safety,” some physicians and nurses talk about these problems and how they tried to find solutions to promote patients’ safety. 
Clinicians on Workflow Problems and Patients’ Safety Nurse 1: Some units are less particular about paperwork and records than others, so when we transfer patients, their information is sometimes incomplete. Patients don’t come back to my unit as quickly from one unit as from other units, and sometimes their information is not available. Physician: Personnel tends to lose things, and this makes it hard to recruit new staff. Nurse 2: We often have difficulty getting the supplies we need. For example, it’s especially difficult to obtain blood on the night shift. Nurse 3: At the change of a shift, I had to discharge one patient and admit another. Since I couldn’t do both at the same time, I chose to admit but not to discharge. But my relief nurse expressed unhappiness about the situation, suggesting that I had left my work for another crew to do. I spoke with the relief nurse, and the problem of mistrust was resolved when everyone understood the work context better. When people communicate across shifts this way, they have a better understanding of and appreciation for one another. Nurse 4: I go to the ward before my shift starts to make sure the patients’ wounds have been properly dressed. I take dressings to homebound patients when they weren’t sent home with them. I cultivate motivated individuals from the ward staff, letting them see the procedures in the Dialysis Unit, and give them responsibility for those patients when they’re back on the ward and reward them. I stock snacks because feeble elderly patients are sent to Dialysis without breakfast, and then they’re expected to get to breakfast after their dialysis session and pay for their own meal. I see this situation as inherently unsafe, so I supply them with free snacks. The professional values physicians and nurses learned in their formal education or on the job can also be an obstacle to the Program, because these values do not always foster a nonpunitive atmosphere. Some of the values clinicians have been trained in run counter to the Program’s expectations for open reporting, as we show in “Clinicians’ Professional Values and the Patient Safety Program.” Clinicians’ Professional Values and the Patient Safety Program Nurse 1: There is much trust within the nursing profession. We have to trust each other because of the critical nature of passing patients from one shift to another. Nurse 2: The only group I worry about is Clerical. Their work is frontline and high-stress, but it’s entry level, so they may have never worked in a hospital before. We have to double-check their work because there’s no system in the clinic to verify orders, as there is in the hospital. Nurse 3: We trust those we work with. The exception is Housekeeping. We have to continually call to complain about the cleanliness of the clinic. Nurse 4: Nurses have a value system in which we “eat our young,” which undercuts comfort in reporting errors. Traditionally, older nurses taught younger ones their way of doing things, and the younger ones were punished when they failed to do things that way. Now, we must allow nurses to do things a new way without punishment. Nurse 5: I keep hearing that we’re looking to learn and not blame. Nursing culture is a blaming culture, and is helping to stop this. Nurse 6: The model in nursing is “a nun with a ruler.” Physician 1: The culture is changing, but it’s taking a while. 
I’m impressed with administration here that tries to say, “How can we learn from this?” Physician 2: To promote the Program, you have to have a change to a no-blame culture. Physician 3: Clinicians have to stop blaming each other and learn from their mistakes. VA clinicians explained that nurses see themselves as the patients’ first and last guard against harm during care. Nurses are expected to be double-checking physicians’ orders, medicines, and dressings and, for example, preventing falls or suicide attempts. Generally speaking, in their traditional role, nurses feel personally responsible for patients’ welfare and are designated to fulfill that role. They hold fast to protocols as safety devices, follow rules, and double-check work orders. Some spoke favorably of a bygone era when nurses could be counted on to back up one another, while many others thought this described their current work environment. In contrast, VA staff told us that physicians have thought of themselves as taking more original and independent actions but not as part of a multidisciplinary team. Their actions, based on traditional professional values, would thus undercut mutual trust. Physicians told us that patient safety would be improved if they were better trained to work on teams. Both nurses and physicians face many obstacles to improving patients’ safety in the increasingly complex and ever-changing world of medicine. VA clinicians take seriously their mission as caretakers of the nation’s veterans, many of whom are older and have multiple chronic diseases, making these efforts to improve patient safety even more challenging. Many told us that they feel ethically and morally bound as frontline caretakers to keep their patients safe by reducing the number of adverse events and close calls. Improving Assessment of, Familiarity with, Participation in, and Cultural Support for the Program Although VA conducted a cultural assessment survey in 2000 and plans to resurvey VA staff in the near future, it has not measured staff familiarity with, participation in, and cultural support for the Program. For example, it did not ask about staff knowledge and understanding of key concepts (close call reporting, RCAs, and VA’s confidential reporting system to NASA) or RCA participation. Although the 2000 survey did describe some important attitudes about patient safety, such as shame and punishment related to reporting adverse events, it did not explicitly measure mutual trust among staff, a central theme of VA clinicians in describing what affected patient safety and a supportive culture. Finally, while NCPS staff asked each facility to administer the survey to a random sample, many facilities did not follow these directions. The VA survey may serve as a baseline measure of national trends, but it could not be used to identify facility-level improvements or interventions. Using Storytelling to Promote Culture Change VA leaders at some facilities we studied showed staff they support the Program by telling stories. They used the stories, for example, to publicly demonstrate a changed and open atmosphere for learning from adverse events and close calls. While leaders must still distinguish episodes that warrant professional accountability, they must fairly draw the line between system fixes and performance issues.
One way to do this is by repeating stories that demonstrate that VA leaders encourage a culture that supports the Program and an atmosphere of open reporting and learning from past close calls and adverse events. Leaders supported the Program by telling staff stories that demonstrated a systems change to safeguard patients after a medical adverse event was reported. Storytelling has a long tradition in medicine as a way of teaching newcomers about a group’s social norms. One leader shared with us the story he used to kick off VA’s Patient Safety Program. Each time he tells the story, he confirms the importance of changing VA’s culture and helps transform the organization because staff remember it. Instead of dismissing an employee who had reported not giving a patient the drug the patient was supposed to receive, the leader judged the adverse event to be a systems problem. In discussions with NCPS, the leader recognized that this story was an opportunity to show his staff that the facility was following the Program by taking a systems rather than a disciplinary approach and to highlight that reporting close calls and adverse events was critical in changing the patient care practice so that such problems would not recur. “Leaders’ Effective Promotion of Patient Safety in Staff Meetings” contains another example of storytelling to change communication practice. Leaders’ Effective Promotion of Patient Safety in Staff Meetings [The Administrative Official met with a unit leader and about 20 physicians and residents.] Administrative Official: The Patient Safety Program includes close calls as reportable incidents. [That is, VA is accepting staff reports of close calls.] A culture change is needed at VA, brought about by sharing a vision of what is valuable to us. We also want to show that leadership endorses the Program. [He walked the meeting through an aviation example that showed that the first officer should have challenged the captain, raising parallels with failure to question authority—or to “cross-check”—at this facility. He asked the group how they challenged authority effectively. Finally, he introduced RCAs as a new type of system analysis. Physicians continued their discussion.] Physician 1: Cross-checking is more effective if it’s not hostile. Physician 2: There are fewer errors in medical settings where there’s a stable team, but recently VA has been trying to do things more quickly with fewer staff. Physician 3: Communication is a problem on my unit, where we have 28 contract nurses. Physician 4: Could it be bad if one unit reported a lot of close calls? Physician 5: VA has 50 years of being punitive. The Patient Safety Managers will be looking for patterns across a large number of reports, not seeking to blame individuals. Physician 6: Why can’t the reporting simply be open and the names of the reporters known? [Several members of the meeting talked about the fear of punishment that still existed.] Physicians 7 and 8: Are the forms discoverable? Can they be subpoenaed? Can the reports be anonymous? [In a subsequent interview, leaders told us how the Program was progressing.] Leader 1: We must change doing what you’re told without questioning orders. We tell nurses that it’s OK to challenge physicians in an atmosphere of mutual respect. We’re establishing it as a facility goal, keeping it on the front burner and keeping it a priority.
Leader 2: Since leaders began visiting staff meetings to get the word out on close call reporting, we’ve noticed a change—a significant reduction in the fear of reporting close calls. Not all fear is gone, but the close call program is a success. Leader 3: Leadership raised safety consciousness with the close call airplane accident lesson. If it had been handed to us as just another memo, it might have been thrown away, but when leaders are there in person to answer questions, then it raises people’s awareness of patient safety. Physician 1: Leadership here went out and talked about patient safety. Their support and emphasis and bringing their level of importance to it made the Program happen. Deliberate Teaching, Coaching, and Role Modeling Staff at one facility told us that VA’s leadership supported the Program and the patient safety culture by teaching, coaching, and role modeling patient safety concepts to their staff in more than a hundred small meetings. VA’s leaders had a three-part agenda in their initial staff meetings. First, they presented a scenario in which two pilots failed to communicate well enough to avoid a fatal crash. The first officer did not cross-check and challenge an order from his captain to descend in a wind shear, resulting in the plane’s crashing and killing 37 people. Facility leaders depicted the strong parallels—including the communication effects of unequal power relationships and hierarchical decision making discussed earlier—between the pilots’ communication to save the plane and clinicians’ communications to save the patient. Second, they discussed the importance of communications in medical care, coaching lower-level staff to speak up when they saw adverse events and emphasizing the importance of two-way communication. Finally, they introduced a new close call reporting program at the facility and, in doing so, modeled for staff their support for this type of reporting and for the new Program and its elements. “Leaders’ Effective Promotion of Patient Safety in Staff Meetings” presents a portion of one such meeting and also interviews with VA staff in which they discussed how the staff meetings had raised their consciousness about patient safety. “Leaders’ Effective Promotion” represents more than a hundred small meetings conducted at one facility that successfully demonstrated that patient safety was a priority for the organization. When top leaders attended staff meetings, staff listened to their message. It may be no coincidence that this facility had the highest rating for comfort in reporting, according to the findings of our survey. Many staff at this facility told us that because their top leaders spoke to them about the Program, they concluded that the Program and its culture change were a priority for their leaders. Midlevel staff also acknowledged progress but admitted to some remaining fear. Participants heard their leaders say that challenging authority—here called “cross-checking”—was important for patient safety. They were asked to compare their own communication patterns with the aviation crew’s communication in a similarly high-risk setting that depended on teamwork. The administrative official at the medical facility meeting, drawing an analogy between the aviation example and participants’ work, noted that an RCA had found that an adverse event could have been prevented if authority had been challenged. His message to the meeting’s participants was that VA’s leadership saw cross-checking as acceptable and necessary.
Rewarding Close Call Reporting The same facility that held small meetings for staff developed a close call reward system that reinforced the idea that reporting a close call not only did not result in punishment but was actually rewarded. When the close call program was first established, staff feared a negative atmosphere in which staff would tell on one another, but this did not occur. Close call reports at this facility were few before the reward program began. In the first 6 months of the program, 240 close calls were reported. While we were visiting the Patient Safety Managers, many staff called them to report close calls; each staff member was given a $4 cafeteria certificate. Patient Safety Managers at this facility told us that they rewarded reporting, no matter who reported or how trivial the report. The unit with the month’s best close call received a plate of cookies. The Patient Safety Manager reported that a milestone had been reached when a chief of surgery reported a close call—a first for surgery leadership. “Rewarding Close Call Reporting” paraphrases leaders and clinicians on the success of the close call program at their facility. Leader 1: With the close call program, the wards do not feel as secretive. VA leadership thought the new close call program might cause staff to turn on one another and begin to blame one another for reporting close calls, but this has not happened. Nurse 1: People are rewarded for reporting close calls and adverse events—and not punished. Nurse 2: I feel comfortable about reporting close calls and adverse events. When management first introduced the close call program, we thought everyone was going to tell on each other. If everyone starts to find out things about you, you could lose your job, because it could be on your record. You would have to ask yourself, “Is this something I would really want to tell someone about?” We thought it would be like “Big Brother Is Watching You.” But that is not what it’s like. I feel comfortable reporting close calls and adverse events. Administrative Official: To promote patient safety, we did a lot of reward and recognition to let staff know that what they have done is important. Other facilities did not have as extensive a reward system. At one facility, the Patient Safety Manager had recently given a certificate to someone who had done a good job in describing an adverse event. However, at another facility, the quality manager who supervised Patient Safety Managers told us that she thought it improper to reward staff for reporting: She did not want to reward people for almost making a mistake. Clinicians in our interviews, however, pointed to the need to develop reward programs around patient safety. For example, one nurse said that if she were the director, she would call staff to thank them for reporting close calls and adverse events and would develop a reward system. Measuring Clinicians’ Familiarity with and Cultural Support for the Program Clinicians’ familiarity with the Program and opportunities to participate in RCAs could be measured at each facility in order to identify facilities that require specific interventions. Because low familiarity or participation can hinder the success of the Program, VA could attempt to measure and improve basic staff familiarity with the Program’s core concepts and ensure opportunities to participate in RCA teams.
Our study developed measures of familiarity with and participation in the Program by analyzing responses from interviews of a small random sample of clinicians, and these could be further developed into useful measures in a larger study. These measures could also be developed into goals to be achieved nationally and, more importantly, locally for each facility. According to the clinicians we interviewed, the supportive culture of individual facilities plays a critical role in clinicians’ participation in the Program and warrants VA leadership’s priority. In one of the three facilities where staff had above average familiarity with the Program, staff told us that fear prevented them from fully participating in the Program. From the clinicians’ vantage point, their leaders need not accept given levels of mutual trust or comfort in reporting close calls and adverse events; instead, once facilities are identified as having low cultural support for the Program, that can be a starting point for change. In our conversational interviews with clinicians, they consistently pointed to specific workplace conditions that fostered their mutual trust and comfort in reporting. Notably, management can take actions to stimulate culture change by developing a work environment that reinforces patient safety. Drawing from their own experience, clinicians had views that were consistent with many studies of culture change in organizations, indicating that leaders’ actions and open communication are important in the transformation sought under the Program. We were able to directly observe practices that have convinced frontline workers that the Program is a priority for VA, that it is worth their while to participate in it, and that by doing so medical facilities are safer for patients. These practices included leadership’s demonstrating to staff that patient safety is an organizational priority—for example, by coaching and by communicating safety stories in face-to-face meetings with all staff— and that the organization values reporting close calls because it rewards and does not punish staff for reporting them. Recommendations for Executive Action To better assess the adequacy of clinicians’ familiarity with, participation in, and cultural support for the Program, we recommend that the Secretary of Veterans Affairs direct the Under Secretary for Health to take the following three actions: 1. set goals for increasing staff familiarity with the Program’s major concepts (close call reporting, confidential reporting program with NASA, root cause analysis), participation in root cause analysis teams, and cultural support for the Program by measuring the extent to which each facility has mutual trust and comfort in reporting close calls and adverse events; 2. develop tools for measuring goals by facility; and 3. develop interventions when goals have not been met. Agency Comments and Our Evaluation We provided a draft of this report to VA for its review. The Secretary of Veterans Affairs stated in a December 3, 2004, letter that the department concurs with GAO’s recommendations and will provide an action plan to implement them. VA also commented that the report did not address the question of whether VA’s work in patient safety improvement serves as a model for other healthcare organizations. GAO’s study was not designed to evaluate whether VA’s program was a model, compared with other programs, but was limited to how the program had been implemented in four medical facilities. 
VA also provided several technical comments that we incorporated as appropriate. Appendix I: Content Analysis, Statistical Tests, and Intercoder Reliability Content Analysis To analyze the data we collected, we used content analysis, a technique that requires that the data be reduced, classified, and sorted. In content analysis, analysts look for, and sometimes quantify, patterns in the data. We conducted tests on clinicians’ responses to our key variables and found a number of significant differences. We also conducted intercoder reliability tests—that is, we assessed the degree to which coders agreed with one another. The tests showed that the consistency among the coders was satisfactory. Ethnography Ethnography is a social science method, embracing qualitative and quantitative techniques, developed within cultural anthropology for studying a wide variety of communities in natural settings. It allowed us to study the Program in VA’s medical facilities. Ethnography is particularly suited to exploring unknown variables, such as studying what in VA’s culture at the four facilities affected the Program. In our open-ended questions, we did not supply the respondents with any answer choices. We allowed them to talk at length, and therefore the interviews lasted anywhere from a half hour to an hour or more. Ethnography is also useful for giving respondents the confidence to talk about sensitive topics. We anticipated that clinicians would find the study of VA’s medical facility culture, including staff views of close calls and adverse events, a sensitive subject. Therefore, we gave full consideration to the format and context of the interviews. Although ethnography is commonly associated with lengthy research aimed at understanding remote cultures, it can also be used to inform the design, implementation, and evaluation of public programs. Governments have used ethnography to gain a better understanding of the sociocultural life of groups whose beliefs and behavior are important to federal programs. For example, the U.S. Census Bureau used ethnographic techniques to understand impediments to participation in the census among certain urban and rural groups that have long been undercounted. Data Collection We conducted fieldwork for approximately a week at each of two facilities, for 3 weeks at a third, and for 25 days at the fourth. Although ethnographers traditionally conduct fieldwork over a year or more, we used a more recent rapid assessment process (RAP). RAP is an intensive, team-based ethnographic inquiry using triangulation and iterative data analysis and additional data collection to quickly develop a preliminary understanding of a situation from the insider’s perspective. We drew two samples, one judgmental and one random. To understand how the Program was implemented at each medical facility, we conducted approximately a hundred nonrandom interviews with facility leaders, Patient Safety Managers, and a variety of facility employees at all levels, from maintenance workers, security officers, nursing aides, and technicians to department heads. This allowed us a detailed understanding of how the Program was implemented at each facility. To ensure that we represented clinicians’ views at all four facilities, we selected a random sample of 80, using computer-generated random numbers from an employee roster of clinicians, yielding 10 physicians and 10 nurses at each facility. 
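The report does not say what software generated the random numbers for this draw. As a rough sketch only, the stratified selection described above (10 physicians and 10 nurses at each of the four facilities) could be reproduced along the following lines; the roster contents, stratum sizes, and seed are hypothetical.

```python
import random

# Hypothetical roster: (employee_id, facility, profession) for each clinician.
# In the study, the roster came from each facility's employee records.
facilities = ("A", "B", "C", "D")
professions = ("physician", "nurse")
roster = [
    (f"EMP-{fac}-{prof[:3]}-{i:03d}", fac, prof)
    for fac in facilities
    for prof in professions
    for i in range(150)          # assume roughly 150 clinicians per stratum
]

random.seed(2003)                # fixed seed so the illustrative draw repeats

# Draw 10 clinicians at random from every facility-by-profession stratum.
sample = []
for fac in facilities:
    for prof in professions:
        stratum = [r for r in roster if r[1] == fac and r[2] == prof]
        sample.extend(random.sample(stratum, 10))

print(len(sample))               # 4 facilities x 2 professions x 10 = 80
```

Sampling within each facility-by-profession stratum is what guarantees the equal representation of physicians and nurses at every site on which the facility-level comparisons rely.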
While this provided us with a representative sample of clinicians (physicians and nurses) from each facility, the size of this sample was too small to provide a statistical basis for generalizing from our survey results to the entire facility or to all facilities. For both samples, we used a similar semistructured questionnaire (see app. III). It consisted of mostly open-ended questions and a few questions with yes-or-no responses. At every interview, we asked staff for their ideas, and we incorporated a number of their perspectives into this report. A hallmark of ethnography is its observation of behavior, attitudes, and values. Observation is conducted for a number of purposes. One is to allow ethnographers to place the specific issue or program they are studying in the context of the larger culture. Another, in our case, was to allow some facility staff to feel more comfortable with us as we interviewed them. Both purposes worked for us in this study. Because we had observed meetings and RCA teams at work, we could better understand respondents’ answers. Respondents noted how comfortable they were in talking to us and how different our conversational interviews were from other interviews they had experienced in the past. We observed staff in their daily activities. For example, we accompanied a nurse while she administered medication using bar code technology that scans the medication and the patient’s wristband. We also observed staff at numerous meetings, including RCA team meetings, patient safety conferences, patient safety training sessions, staff meetings in which patient safety was discussed, and daily leadership meetings. Our methodology included collecting data from facility records. We examined all close calls and adverse events reported for a 1-month period and all RCA reports completed at each facility, and we reviewed administrative boards and reward programs. We read minutes from patient safety committees and other committees that addressed safety issues. Data Analysis Our interview data were mostly audio-recorded, but some interviews were captured in written notes, depending on respondents’ permission to record. Using AnnoTape, qualitative data analysis software, we coded the interviews for both qualitative and quantitative patterns, and we used the software to capture paraphrases for our analysis. We developed a prescriptive codebook to guide the coders in identifying interviews and classifying text relevant to our variables. After several codebook drafts, we agreed on common definitions and uses for the codes. In the content analysis of our random sample data, we looked for patterns, associations, and trends. AnnoTape allowed us to mark a digital recording or transcribed text with our codes and then sort and display all the marked audio or text bites by these codes. Because all the coders operated from a common set of rules, we achieved a satisfactory intercoder reliability score. AnnoTape also allowed us to record prose summaries of the interviews, some of which paraphrased what the clinicians said; the paraphrases we present in the report reflect the range of views and perceptions of VA staff at the four medical facilities. A rough gauge of the importance of their views is discernible in the extent to which certain opinions or perceptions are repeatedly expressed or endorsed. Using the statistical package SAS, we analyzed the variables with two-choice and three-choice answers and transferred them to an SAS file for quantitative analysis.
Among the quantifiable variables were five yes-or-no questions asking about respondents' familiarity with key elements of the Patient Safety Program. We created a new variable that reflected a composite familiarity score for the Program, using the five questions about familiarity with the key elements (the questions are listed in the note to fig. 4). We also assessed respondents' levels of comfort in reporting close calls and adverse events and mutual trust among staff at each facility, based on each whole interview. We used these two assessments, rated high, medium, or low, to characterize cultural support for the Patient Safety Program. In quantifying verbal answers for display and comparison purposes, we decided that the maximum individual familiarity, trust, and comfort levels should be 10. Thus, in each key elements question, we let "yes" equal 2 and "no" equal 0, ensuring that an individual who knew all of the five elements would achieve a composite score of 10. Finally, we averaged composite scores to get an average score for each facility. In the trust and comfort summary judgments, we let "high" equal 10, "medium" equal 5, and "low" equal 0. Rather than display these numbers, we used a scale of high, medium, and low for 10, 5, and 0 and placed the answers accordingly. Significance Testing We were able to determine statistically significant differences in clinicians' responses by facility and, unless otherwise noted, we report only significant results. First, we conducted a nonparametric statistical test, called Kruskal-Wallis, on all possible comparisons in the subset of variables that we report in our text. Four of these variables were central to the report: comfort summary score, trust summary score, close call score, and root cause score. In the Kruskal-Wallis test, each observation is replaced with its rank relative to all observations in the four samples. Tied observations are assigned the midrank of the ranks of the tied observations. The sample rank mean is calculated for each facility by dividing its rank sum by its sample size. If the four sampled populations were actually identical, we would expect our sample rank means to be about equal—that is, we would not expect to find any large differences among the four medical facilities. The Kruskal-Wallis test allows us to determine whether at least one of the medical facilities differs significantly from at least one other facility. This test showed that—for each of the comfort, trust, and close call variables—at least one of the medical facilities differed significantly from at least one of the other medical facilities. Next, we conducted a follow-up test to determine specifically which pairs of medical facilities were significantly different from one another on key variables. This follow-up test is a nonparametric multiple comparison procedure called Dunn's test. Using Dunn's test meant testing for differences among six pairs of medical facilities: A vs. B, A vs. C, A vs. D, B vs. C, B vs. D, and C vs. D. Table 4 presents the results of Dunn's test, along with each facility's sample rank mean and sample size. The pairs of facilities that are statistically significantly different from one another are in the far right column. Note that for the root cause characteristic, there are no statistically significant findings from the multiple comparison testing, which conforms to the results of the earlier Kruskal-Wallis test on root cause.
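To make the scoring and significance testing described above concrete, the sketch below computes composite familiarity scores ("yes" = 2, "no" = 0, maximum 10) for a few hypothetical clinicians and runs a Kruskal-Wallis test across facilities using SciPy. The study itself used SAS; the answers shown here are invented for illustration, and a pairwise follow-up such as Dunn's test would be run separately (for example, with the third-party scikit-posthocs package).

```python
import numpy as np
from scipy.stats import kruskal  # SciPy's Kruskal-Wallis H-test

# Hypothetical yes/no answers (1 = yes, 0 = no) to the five key-element
# questions for a few clinicians at each facility; illustrative data only.
answers = {
    "A": [[1, 1, 1, 1, 0], [1, 1, 1, 0, 0], [1, 1, 1, 1, 1]],
    "B": [[1, 0, 0, 0, 0], [0, 1, 0, 0, 0], [1, 1, 0, 0, 0]],
    "C": [[1, 1, 1, 0, 0], [1, 1, 0, 0, 0], [1, 1, 1, 1, 0]],
    "D": [[1, 1, 1, 1, 0], [1, 1, 1, 0, 0], [0, 1, 1, 1, 0]],
}

# Composite familiarity score: "yes" = 2, "no" = 0, so a clinician familiar
# with all five key elements receives the maximum score of 10.
def composite(scores):
    return sum(2 * s for s in scores)

facility_scores = {f: [composite(a) for a in ans] for f, ans in answers.items()}
for f, scores in facility_scores.items():
    print(f"Facility {f}: scores {scores}, average {np.mean(scores):.1f}")

# Kruskal-Wallis test: does at least one facility differ from at least one other?
h_stat, p_value = kruskal(*facility_scores.values())
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.3f}")
```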
Intercoder Reliability Consistency among the three coders was satisfactory. We assessed agreement among the coders for selected variables for interviews with seven clinicians—that is, we assessed the extent to which they consistently agreed that a response should be coded the same. To measure their agreement, we used Krippendorff's alpha reliability coefficient, which equals 1 when coders agree perfectly and 0 when coders agree no better than chance, indicating a lack of reliability. Our Krippendorff's alpha values ranged from 0.636 to 1.000 for nine of the selected variables (see table 5). Compared with Krippendorff's guidelines that alpha should be at least 0.8 for an acceptable level of agreement and that values from 0.667 to 0.8 allow tentative acceptance, we believe our overall results are satisfactory.
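The sketch below shows one way a nominal-level Krippendorff's alpha like the one used above could be computed. It is a simplified implementation that assumes complete data (every coder rated every interview), and the ratings are invented for illustration; it is not the software or data used in the study.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Nominal-level Krippendorff's alpha for complete data.

    `units` is a list of units (here, interviews); each unit is the list of
    values the coders assigned to it. Returns 1.0 for perfect agreement and
    values near 0.0 when agreement is no better than chance.
    """
    coincidences = Counter()
    for values in units:
        m = len(values)
        for i, j in permutations(range(m), 2):      # ordered pairs of coders
            coincidences[(values[i], values[j])] += 1.0 / (m - 1)

    totals = Counter()
    for (value, _), count in coincidences.items():
        totals[value] += count
    n = sum(totals.values())

    observed = sum(c for (v, w), c in coincidences.items() if v != w)
    expected = sum(totals[v] * totals[w] for v in totals for w in totals if v != w) / (n - 1)
    return 1.0 if expected == 0 else 1.0 - observed / expected

# Illustrative ratings by three coders on one variable for seven interviews.
ratings = [
    ["high", "high", "high"],
    ["low", "low", "low"],
    ["medium", "medium", "high"],
    ["high", "high", "high"],
    ["low", "medium", "low"],
    ["high", "high", "high"],
    ["medium", "medium", "medium"],
]
print(f"Krippendorff's alpha = {krippendorff_alpha_nominal(ratings):.3f}")
```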
Appendix II: A Timeline of the Implementation of VA's Patient Safety Program This timeline highlights the training programs and other events NCPS completed between 1997 and 2004:
VA announces a special focus on patient safety
VA drafts patient safety handbook
VA develops Patient Safety Event Registry
Patient Safety Awards Program begins
Expert Advisory Panel is convened to look at reporting systems
Four Patient Safety Centers of Inquiry are funded
NCPS is established and funded
VA informs the Joint Commission on Accreditation of Healthcare Organizations that it will go beyond JCAHO's sentinel event reporting system to include close calls
VA pilots RCAs at six facilities
Institute of Medicine issues To Err Is Human
VA and NASA sign interagency agreement on the confidential Patient Safety Reporting System
NCPS adverse event and close call reporting system is established throughout VA
NCPS trains clinical and quality improvement staff in patient safety topics, including the RCA process
VA establishes Patient Safety Manager (hospital level) and Officer (network level) positions
Online and print newsletter Topics in Patient Safety begins publication
RCA software is rolled out
Facilities and networks are given the performance measure of completing RCAs in 45 days
Healthcare Failure Mode and Effect Analysis (HFMEA), a proactive risk assessment tool, is developed by VA and rolled out through multiple videoconferences
Aggregate RCA implementation is phased in over the year
New hires are trained in RCAs, and Patient Safety Officers and Managers are given refresher training
The Veterans Health Administration's Patient Safety Improvement Handbook, 3rd rev. ed. (VHA 1050.1), is officially adopted
Facilities are given a new performance measure requiring them to conduct proactive risk assessment, using HFMEA to review contingency plans for failure of the electronic bar code medication administration system
The American Hospital Association (AHA) sends Program tools developed by VA to 7,000 hospitals
Rollout of confidential reporting to NASA is largely complete
Facility directors receive a day of training to reinforce what they could do to improve the success of their patient safety programs
Facilities are given a performance measure for timely installation of software patches to critical programs
VA begins to provide training, funded by the Department of Health and Human Services, for state health departments and non-VA hospitals as the "Patient Safety Improvement Corps, an AHRQ/VA Partnership"
From 1999 through 2004, NCPS conducted training in the Patient Safety Program, attended primarily by quality managers and Patient Safety Officers and Managers. Typically, the training lasted 3 days and included an introduction to the new Patient Safety Improvement Handbook and small group training in the RCA process. Trainees, especially Patient Safety Managers, were expected to take the Program back to their medical facilities, collect and transmit reported adverse events and close calls to NCPS, and guide clinicians in the RCA teams. We also observed health fairs at some of the four facilities. Beginning in 2003, NCPS convened medical facility directors and other managers in 1-day sessions that introduced them to the systemic approach to improving patient safety, including a blame-free approach to adverse events in health care.
Appendix III: Semistructured Interview Questionnaire
Appendix IV: Comments from the Department of Veterans Affairs
Appendix V: GAO Contacts and Staff Acknowledgments GAO Contacts Staff Acknowledgments Additional staff who made major contributions to this report were Barbara Chapman, Bradley Trainor, Penny Pickett, Neil Doherty, Jay Smale, George Quinn, and Kristine Braaten. Donna Heivilin, recently retired from GAO, also played an important role in preparing this report.
Glossary
Patient Safety Centers of Inquiry: A research and development arm of NCPS's Patient Safety Program. The centers concentrate on identifying and preventing avoidable adverse events, and each has a different focus.
Close Call: An event or situation that could have resulted in harm to a patient but, by chance or timely intervention, did not. It is also referred to as a "near miss."
Frontline Staff: Staff directly involved with patient care.
Adverse Event: An incident directly associated with care or services provided within the jurisdiction of a medical facility, outpatient clinic, or other Veterans Health Administration facility. Adverse events may result from acts of commission or omission.
Joint Commission on Accreditation of Healthcare Organizations: JCAHO is an accrediting organization for hospitals and other health care organizations.
Medical Facility: A VA hospital and its related nursing homes and outpatient clinics.
National Center for Patient Safety: NCPS is the hub of VA's Patient Safety Program, where approximately 30 employees work, in Ann Arbor, Michigan. Other employees work in the Center of Inquiry in White River Junction, Vermont, and in Washington, D.C.
Patient Safety Reporting System: PSRS is a confidential and voluntary reporting system in which VA staff may report close calls and adverse events to a database at the National Aeronautics and Space Administration.
Root Cause Analysis Team: An interdisciplinary group that identifies the basic or contributing causes of close calls and adverse events.
The Department of Veterans Affairs (VA) introduced its Patient Safety Program in 1999 in order to discover and fix system flaws that could harm patients. The Program process relies on staff reports of close calls and adverse events. GAO found that achieving success requires a cultural shift from fear of punishment for reporting close calls and adverse events to mutual trust and comfort in reporting them. GAO used ethnographic techniques to study the Patient Safety Program from the perspective of direct care clinicians at four VA medical facilities. This approach recognizes that what people say, do, and believe reflects a shared culture. The focus included (1) the status of VA's efforts to implement the Program, (2) the extent to which a culture exists that supports the Program, and (3) practices that promote patient safety. GAO combined more traditional survey methods with those from ethnography, including in-depth interviews and observation. GAO found progress in staff familiarity with and participation in the VA Patient Safety Program's key initiatives, but these achievements varied substantially in the four facilities we visited. In our study conducted from November 2002 through August 2004, three-fourths of the clinicians across the facilities were familiar with the concepts of teams investigating root causes of unintentional adverse events and close calls. One-third of the staff had participated in such teams, and most who participated in these teams found it a positive learning experience. The cultural support clinicians expressed for the Program also differed. At three of four facilities, GAO found a supportive culture, but at one facility the culture blocked participation for many clinicians. Clinicians articulated two themes that could stimulate culture change: leadership actions and open communication. For example, nurses need the confidence to disagree with physicians when they find an unsafe situation. Although VA has conducted a cultural survey, it has not set goals or explicitly measured, for example, staff familiarity and mutual trust. Clinicians reported management practices at one facility that had helped them adopt the Program, including (1) story-telling techniques such as leaders telling about a case in which reporting an adverse event resulted in system change, (2) management efforts to coach staff, and (3) reward systems. The Patient Safety Program Process shows how ideally (1) clinicians have cultural support for reporting adverse events and close calls, (2) teams investigate root causes, (3) systems are changed, (4) feedback and reward systems encourage reporting, and (5) patients are safer.
Background Medicare consists of four parts—A, B, C, and D. Medicare Part A provides payment for inpatient hospital, skilled nursing facility, some home health, and hospice services, while Part B pays for hospital outpatient, physician, some home health, durable medical equipment, and preventive services. In addition, Medicare beneficiaries have an option to participate in Medicare Advantage, also known as Part C, which pays private health plans to provide the services covered by Medicare Parts A and B. Further, all Medicare beneficiaries may purchase coverage for outpatient prescription drugs under Medicare Part D, and some Medicare Advantage plans also include Part D coverage. The fee-for-service portion of the Medicare program (Parts A and B) processes approximately a billion claims each year from about 1.5 million providers who deliver and bill Medicare for health care services and supplies. In delivering patient care, providers need to not only ensure that claims for services covered by Medicare and other health care insurers are submitted correctly, but to also ensure that beneficiaries receive benefits to which they are entitled. To do this, these providers need access to accurate and timely eligibility information to help them determine whether and how to properly submit claims for payment to Medicare and other insurers on behalf of their patients. Many health care insurers have implemented information technology systems to help providers make this determination at the time services are being delivered—that is, at the point of care—by providing electronic data on a real-time basis regarding patients’ benefits covered by their insurance plans. CMS’s Implementation of HETS to Assist Providers To assist providers with verifying beneficiaries’ eligibility for services under Medicare, and in response to HIPAA requirements, CMS provided an electronic mechanism that allowed providers to access real-time data at the point care is scheduled or delivered. To meet this requirement, CMS officials stated that they implemented the initial version of HETS in May 2005. CMS’s Business Applications Management Group and the Provider Communications Group are the system and business owners of HETS. As such, these groups are responsible for the development, implementation, maintenance, and support of the system, as well as establishing business rules regarding the use of the system application, such as agreements regarding the use and protection of the data provided by HETS. CMS awarded cost-plus-award-fee contracts to two contractors to assist the agency with developing and maintaining HETS, performing independent testing, production support, help desk, and project integration services. HETS operates from CMS’s data center in Baltimore, Maryland, and is accessed by users via the CMS extranet. The system is comprised of software that processes query and response transactions, along with hardware, such as servers that support connections with users’ facilities and the internet, and devices that store the data provided by the system. The system software is designed to process transactions according to standards and formats defined by HIPAA. It was designed to allow the release of patients’ data to Medicare providers, or their authorized billing agents, to support their efforts to complete accurate Medicare claims when determining beneficiaries’ liability and eligibility for specific services. 
CMS officials stated that the agency does not receive any payments for the use of HETS, nor does the agency require Medicare providers to use HETS to verify eligibility prior to filing claims. CMS intended for HETS to be used by health care providers; health care clearinghouses, which are entities that provide electronic data exchange services for their customers; and Medicare Administrative Contractors (MACs) that assist CMS in processing claims. Health care providers may request beneficiary eligibility data from HETS directly via CMS's extranet or by utilizing the services of clearinghouses. According to clearinghouse officials with whom we spoke, many providers use clearinghouses to conduct transactions with HETS because they may not have the technical capability to connect directly to CMS's extranet, or they may choose to employ the services of clearinghouses for financial or other reasons. For example, these providers may use clearinghouses to conduct electronic transactions with CMS and other payers' systems, and avoid expenses associated with establishing and maintaining the in-house technology and expertise needed to connect with multiple systems. Instead, they can conduct these transactions by establishing one connection with a clearinghouse. The MACs, in contrast, access HETS directly via CMS's extranet. In all cases, users gain access to the extranet through a vendor-supplied network service. According to documented system descriptions, when requesting information from HETS, a user initiates a transaction by entering data into its workstation using software systems installed within its facility. The end-users' systems may be developed in-house by individual providers, clearinghouses, or MACs, or by commercial software vendors. The data entered into the workstation identify the provider, beneficiary, and services for which eligibility is to be verified. The data are translated by the end-user software into the standard HIPAA transaction format, then transmitted from the user's workstation to the HETS system via either the agency's extranet or the vendor-supplied network service that connects to the CMS extranet. The system validates the incoming data and, if the request is valid, returns response data back to the user's workstation. If the request data are not valid, the system responds with error codes that indicate the type of error detected in the request data. Responses are transmitted from HETS in the HIPAA format and translated by the users' software before being presented.
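To illustrate the validate-and-respond flow just described, the sketch below simulates in Python how an eligibility request might be checked and answered. The field names, identifiers, and error codes are invented for illustration; they do not represent the actual HETS interface or the X12 270/271 transaction formats.

```python
from dataclasses import dataclass

# Hypothetical, simplified stand-in for an eligibility request after the
# end-user software has translated the provider's entries. All values below
# are illustrative only.
@dataclass
class EligibilityRequest:
    submitter_id: str      # ID issued when the trading partner was set up
    provider_npi: str      # National Provider Identifier
    beneficiary_id: str    # Medicare beneficiary identifier
    service_type: str      # service for which eligibility is being checked

VALID_NPIS = {"1234567890"}                                   # stand-in NPI database
BENEFICIARIES = {"1EG4-TE5-MK72": {"part_a": True, "part_b": True}}

def process_request(req: EligibilityRequest) -> dict:
    """Validate the request and return either eligibility data or error codes."""
    errors = []
    if not req.submitter_id:
        errors.append("E01_MISSING_SUBMITTER")
    if req.provider_npi not in VALID_NPIS:
        errors.append("E02_INVALID_PROVIDER")
    benefits = BENEFICIARIES.get(req.beneficiary_id)
    if benefits is None:
        errors.append("E03_BENEFICIARY_NOT_FOUND")
    if errors:
        return {"status": "rejected", "error_codes": errors}
    return {"status": "accepted", "coverage": benefits, "service_type": req.service_type}

# A valid request returns coverage data; an invalid one returns error codes.
print(process_request(EligibilityRequest("SUBM001", "1234567890", "1EG4-TE5-MK72", "30")))
print(process_request(EligibilityRequest("SUBM001", "0000000000", "UNKNOWN-ID", "30")))
```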
According to reports provided by program officials, the number of HETS transactions has grown each year since its initial implementation in May 2005. The business and system owners with whom we spoke attributed the growth primarily to increases in the number of new users of HETS, particularly during the first 2 years of implementation, and the growth in the number of Medicare beneficiaries. Nonetheless, while the number of transactions has continued to increase, the annual rate of increase in transaction volume has declined since the system's initial implementation. Table 1 shows HETS utilization, measured by the number of incoming transactions processed each fiscal year, from its initial implementation in May 2005 through fiscal year 2011. CMS's internal operational requirements for HETS established a goal for the system to respond to query transactions in 5 seconds or less. According to program officials, from 2005 to 2010, HETS responded to transaction inquiries well within this goal. However, reports of the system's performance showed that beginning in January 2010, response times began to exceed 5 seconds and progressively worsened throughout most of the year. CMS attributed this performance degradation to outdated software and to increases in the number of eligibility verification transactions to the point that the volume exceeded the hardware capacity. The business and system owners with whom we spoke stated that in July 2010 they began to implement a series of major improvements to the HETS operating environment and system, including hardware and software upgrades. However, users continued to experience lengthy response times and system downtime. Program officials stated that in January 2011 they took additional steps to address the slow response and system availability problems. In this case, they doubled the hardware capacity, replaced the operating system, and upgraded the system's software. According to these officials, the revisions, upgrades, and replacements were more complex than expected and were not fully implemented until April 2011. Subsequently, from mid-April 2011 to May 2011, CMS conducted a phased migration of HETS users to the upgraded system. Federal Requirements for Protecting Individually Identifiable Health Information Because HETS processes and transmits personal information related to individuals' Medicare program eligibility, the system is subject to federal requirements for protecting personally identifiable health information. In this regard, the Privacy Act of 1974 regulates the collection, maintenance, use, and dissemination of personal information by federal government agencies. It also prohibits disclosure of records held by a federal agency or its contractors in a system of records without the consent or request of the individual to whom the information pertains unless the disclosure is permitted by the Privacy Act. The Privacy Act includes medical history in its definition of a record. Other federal laws and regulations further define acceptable use and disclosure activities that can be performed with individually identifiable health information, known as protected health information. These activities include—provided certain conditions are met—treatment, payment, health care operations, and public health or research purposes. For example, HIPAA and its implementing regulations allow the entities they cover to use or disclose protected health information for providing clinical care to a patient. These covered entities and their business associates, such as medical professionals, pharmacies, health information networks, and pharmacy benefit managers, work together to gather and confirm patients' electronic health information that is needed to provide treatment, such as a beneficiary's eligibility, benefits, and medical history. Key privacy and security protections associated with individually identifiable health information, including information needed by providers to verify patients' eligibility for coverage by Medicare or private health plans, are established under HIPAA. HIPAA's Administrative Simplification Provisions provided for the establishment of national privacy and security standards, as well as the establishment of civil money and criminal penalties for HIPAA violations.
HHS promulgated regulations implementing the act's provisions through its issuance of the HIPAA rules. Specifically, the HIPAA Privacy Rule regulates covered entities' use and disclosure of protected health information. Under the Privacy Rule, a covered entity may not use or disclose an individual's protected health information without the individual's written authorization, except in certain circumstances expressly permitted by the Privacy Rule. These circumstances include certain treatment, payment, and other health care operations. As such, the disclosure of beneficiary eligibility information by HETS is permitted in accordance with the rule since it is used in making treatment and payment decisions. The HIPAA Privacy Rule reflects basic privacy principles for ensuring the protection of personal health information, such as limiting uses and disclosures to intended purposes, notification of privacy practices, allowing individuals to access their protected health information, securing information from improper use or disclosure, and allowing individuals to request changes to inaccurate or incomplete information. The Privacy Rule generally requires that a covered entity make reasonable efforts to use, disclose, or request only the minimum necessary protected health information to accomplish the intended purpose. In addition to the Privacy Act and the HIPAA Privacy Rule, the E-Government Act of 2002 (Pub. L. No. 107-347, Dec. 17, 2002, codified at 44 U.S.C. § 3501 note) includes provisions to enhance the protection of personal information in government information systems. The act requires federal agencies to conduct privacy impact assessments to determine the impact of their information systems on individuals' privacy. The act also states that the assessment should be completed to analyze how information is to be handled and to evaluate needed protections and alternative processes for handling information in order to mitigate potential privacy risks. HETS Is Operational and Provides Responses to Users' Requests in Real Time After experiencing performance problems throughout 2010, HETS is currently operating on a real-time basis, with few user concerns noted. As of June 2012, CMS reported that 244 entities were using the system; these included 130 providers, 10 Medicare Administrative Contractors, and 104 clearinghouses that conduct query and response transactions for about 400,000 providers. The agency further reported that, during the first 6 months of 2012, the system processed more than 380 million transactions from these users. System performance data showed that, since May 2011, HETS has been consistently providing service to its users 24 hours a day, 7 days a week, except during regularly scheduled maintenance periods, which occur on Monday mornings from midnight until 5:00 a.m. (CMS sometimes schedules additional outages for system maintenance and upgrades, usually during one or two weekends each month.) The performance data also showed that most transactions were submitted during normal business hours, occurring between 8:00 a.m. and 4:00 p.m. eastern time, Monday through Friday. About 90 percent of these transactions were initiated by the clearinghouses. Daily reports of system performance that were generated by the system showed that the average response time for 99 percent of the transactions was less than 3 seconds during the first 6 months of 2012. Appendix II provides our detailed analysis of the system's transaction volumes and response times from January 2010 through June 2012.
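A minimal sketch of the kind of response-time check reflected in those daily performance reports follows. The response times and thresholds below are illustrative values, not actual HETS log data or CMS's reporting logic.

```python
import statistics

# Hypothetical per-transaction response times in seconds, as might be pulled
# from a daily performance log; values are illustrative.
response_times = [0.8, 1.2, 0.9, 2.4, 1.1, 0.7, 3.6, 1.4, 0.6, 1.0, 2.1, 0.9]

goal_seconds = 5.0          # CMS's internal response-time goal
reporting_threshold = 3.0   # threshold used in the daily performance reporting

share_under_3s = sum(t < reporting_threshold for t in response_times) / len(response_times)
share_under_goal = sum(t < goal_seconds for t in response_times) / len(response_times)

print(f"Mean response time: {statistics.mean(response_times):.2f} s")
print(f"Share of transactions under {reporting_threshold:.0f} s: {share_under_3s:.1%}")
print(f"Share of transactions meeting the {goal_seconds:.0f}-second goal: {share_under_goal:.1%}")
```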
Users of the system told us that since CMS completed hardware and software improvements in spring 2011, they have been satisfied with its operational status. They stated that they are not currently experiencing operational or communication issues. Records of contacts with CMS’s help desk regarding the operational status of HETS show that the number of calls by users declined from an average of 133 calls per week during the first quarter of 2011 to an average of 64 per week during the second quarter of 2012. The users also stated that health care insurers in the commercial sector conduct electronic eligibility verifications in a manner similar to that of CMS. They told us that, based on their experiences with using those insurers’ systems, HETS provides faster response times as well as more complete information and reliable service than the other beneficiary eligibility verification systems they use. CMS Has Taken Steps to Ensure Users’ Satisfaction and Is Making Plans to Implement Improvements to Meet Future Requirements CMS’s efforts to correct operational problems experienced with HETS in 2010 and early 2011 led to improved performance and overall user satisfaction with the system. To ensure that the agency is able to maintain performance that satisfies users and meets goals for response and system availability times, HETS program officials have taken steps to provide ongoing support for users through help desk procedures, system status notifications, and management of contractors based on incentive awards for performance that exceeds contractual requirements. Additionally, these officials have begun to plan for improvements and enhancements to the system in efforts to position themselves to meet future demands on the system as they projected transaction volume to increase at a rate of about 40 percent a year. Among other improvements, the officials described plans to redesign the system and upgrade hardware, and to establish service level agreements with HETS users. CMS Has Taken Steps to Ensure Users Remain Satisfied CMS has taken various steps to improve the operational status of HETS and to ensure user satisfaction with its performance. With regard to ensuring the availability of the system, CMS notifies users of the status of operations on a daily basis and whenever a change in status occurs. For example, CMS contractors perform daily health checks each morning to determine the status of HETS. If system performance or availability issues are identified, help desk contractors post messages to that effect on the system website and a trouble ticket is opened. The appropriate staff is assigned to troubleshoot and resolve the issues. Additionally, when users have complaints or issues related to the system’s operations, they are instructed to contact the help desk. Upon receipt of the problem, the help desk staff are to triage the problem and generate a ticket if the problem cannot be resolved at the help desk level. For example, if a user is unable to access the system and contacts the help desk, staff are to determine if the problem is an operational issue or is an issue with the user or another component of the system, such as the network services provided by a vendor. They are to then track the issue until the problem is resolved. According to HETS program officials, problems are generally reported when the system response time begins to slow down. 
CMS’s help desk contractors who support HETS post announcements on the agency’s website and send e-mails to notify users when the system is to be brought down to allow corrections to system operation problems, or to perform upgrades or maintenance. The contractors post a second announcement and send e-mails to notify users when the system becomes available after an outage. The past 6 months’ help desk announcements on the HETS website showed that additional maintenance or system upgrades were performed outside the scheduled maintenance period. Specifically, during this time CMS notified users that maintenance would be performed one to two times per month on weekends, with the system down from as few as 6 hours to as many as 3 days. In most cases, CMS sent a notice to its HETS users 2 weeks in advance of the outages. In discussions with provider, clearinghouse, and MAC users, two of the users expressed concerns with the frequency that CMS conducts maintenance outside the scheduled maintenance time. These users stated that they do not have access to the system for 1 day three to four weekends per month. However, one of these users, a provider, told us that during these times the system was accessible via an alternate portal, which indicated that HETS was operational and likely not a cause of the problem. A clearinghouse user stated that, while these outages are inconvenient, CMS notifies users well in advance of the outages and that there are some times during the announced outages when transactions can be processed. All the users with whom we spoke told us that the CMS help desk notified them in advance of any unscheduled system outages that were planned in addition to the regularly scheduled maintenance downtime. CMS has also taken steps to ensure that its contractors meet quality and service requirements related to the development, maintenance, and support of HETS. Program officials told us that the contractors’ performance is reviewed and evaluated every 6 months in addition to annual evaluations, based on measures for overall technical performance and management. The evaluations identify strengths and weaknesses noted during the evaluation periods. The contractors may be awarded financial incentives for exceeding performance expectations in certain categories, such as software maintenance and support for the system’s operations. For example, a May 2012 report on the results of the most recent 6-month evaluation of the help desk contractor’s performance documented its strengths and weaknesses. The report showed that program officials were satisfied with the contractor’s efforts to meet measures in technical performance and, therefore, provided the full financial incentive. However, they noted weaknesses in one category for which the contractor did not receive the full incentive amount. In this case, the contractor failed to deliver required reports and identify infrastructure changes that impacted the implementation of HETS. Additionally, a November 2011 report on the development contractor’s performance showed similar results. In both reports, program officials stated overall satisfaction with the contractors’ performance and noted areas of needed improvements. 
CMS Is Making Plans to Ensure the System Supports Future Requirements To help ensure the current level of service is sustained during projected increases in transaction volumes, the system owners have initiated various activities aimed at helping to prevent operational problems similar to those experienced with the system in 2010 and early 2011. In this regard, CMS projected the increase in transaction volume to continue at a rate of about 40 percent for the next several years. This increase is expected in part because of the discontinuance of some providers’ use of other means to obtain eligibility information from CMS and the migration of that user population to HETS by the end of March 2013. Program officials also anticipate that more Medicare Administrative Contractors will begin to offer beneficiary eligibility verification services to the providers they support and will use HETS to conduct these verifications. The system and business owners described steps they took in 2011 and 2012 that were intended to help plan for future increases in the number of transactions. In March 2011, CMS tasked its HETS development contractor to prepare a plan and process for long-term improvements to the system and its operating environment. The agency tasked an additional contractor to evaluate the existing architecture, monitoring tools, and the extent to which the existing system platform could be scaled to meet future requirements. This contractor was also tasked to propose and analyze alternatives for future system implementation and recommend future service levels, monitoring tools, and practices for managing the application. In July 2011, CMS released a Request for Information to obtain knowledge and information of current marketplace solutions that may meet future needs. As stated in the request, this action was intended to compile information that would assist CMS in the identification of potential options for creating an enterprise-level health care eligibility inquiry system that would support both real-time and batch transaction exchanges. In August 2011, 12 companies responded to the request and provided information on how their existing products could address CMS requirements. CMS analyzed the responses to the Request for Information and concluded that while 3 of the companies provided information that was not useful, others offered a range of products that CMS could consider when they begin to survey the marketplace for viable products and solutions for a future implementation of HETS. In January 2012, the two contractors completed the evaluations that were initiated in March 2011 and submitted reports that included recommendations regarding steps needed to accommodate projected eligibility transaction volumes while maintaining appropriate availability, security, and costs of HETS operations. The first report stated the existing architecture is sufficient to handle current transaction volumes and, with minor changes, should be able to handle transaction volumes anticipated for the next 2 years. The report also included recommendations to address the increases in transaction volume projected beyond the next 2 years. For example, the contractor who conducted the evaluation recommended that CMS reassess and change the architecture as transaction volumes grow, and automate routine processes, including troubleshooting practices and application start-up and shutdown procedures. 
This contractor also recommended that CMS establish service level agreements with its users to define and agree upon service parameters for HETS, including system availability and performance. The second contractor's report provided technical evaluations of six commercial-off-the-shelf products that were capable of meeting future estimated transaction volumes and presented recommendations for three alternate solutions, spelling out the strengths and weaknesses of each. Program officials stated that they agree with the recommendations identified in the contractors' reports and are making plans to address many of them in the near term. Specifically, they are planning to automate some processes, such as the application start-up and shutdown procedures. Additionally, HETS business owners stated that they are currently working to establish and document service level agreements with users, as recommended by one of the evaluation contractors. They plan to complete this activity and have agreements in place by January 2013. The officials we spoke with also described several technical improvements they intend to make to increase the system's capacity to handle growing numbers of transactions, including some consistent with the contractors' evaluations. For example, according to CMS's plans for modifying and improving the system through 2015, in fiscal year 2011 CMS began to plan for development of a redesigned system to be completed by the end of June 2014. The agency awarded a contract for defining and writing requirements for the redesigned system in June 2012. Among other capabilities, as part of the system redesign CMS plans to implement batch processing of transactions in addition to the current real-time process. According to HETS business owners, this capability is needed because some clearinghouses receive batch files from providers and have to convert them for real-time submission. The implementation of batch processing capabilities within the system will remove the need for clearinghouses to take this extra step. Among several other initiatives to be conducted are plans to procure a contract for maintenance of the current system until the redesign is complete. This activity is necessary because the terms of the current contract expire at the end of September 2013 and the system redesign is not planned to be complete until the end of June 2014. CMS's plans also identified a step to migrate the current HETS database, by the end of August 2012, to a new operating platform that is scalable to accommodate the expected increase in transaction volume. Further, agency officials stated that while they plan to make these improvements to the system over the next 3 years, their ability to conduct the activities they have planned is dependent on the agency's budget. These officials stated that, to mitigate risks associated with the level of funding the program receives in the future, they prioritized improvements planned for the existing system and began to implement those that they determined to be the most cost-effective during this and early next fiscal year. Among other things, these include activities to support the current system until the redesigned system is implemented, including development of tools that enable the HETS contractors to proactively monitor system components, additional services to enhance production capacity, and automated processes for starting up and shutting down the application.
Program officials stated that they will review and prioritize other activities for improving the system as part of the HETS redesign project. CMS Established Policies and Procedures Intended to Address Privacy Principles and Assessed Impact and Risks of Sharing Data The Privacy Act of 1974 and the HIPAA Privacy Rule protect personally identifiable health information, such as Medicare beneficiary information, to ensure that it is disclosed only under specified conditions and used only for its intended purpose. In accordance with these privacy protections, the information provided by HETS is to be used only for confirming eligibility of patients to receive benefits for services provided under the Medicare fee-for-service program. CMS is governed by the Privacy Act and all covered entities that use HETS—health care providers, clearinghouses, and Medicare contractors—are required to comply with the HIPAA Privacy Rule. In accordance with provisions of the Privacy Rule, the protected health information provided by HETS is to be disclosed and used only for certain activities. Among other activities, these include treatment of patients and payment for services—the activities supported by the use of HETS. CMS has taken actions intended to ensure that the personal health information sent to and from the system is protected from misuse and improper disclosure. For example, CMS documented in the HETS Rules of Behavior that users must adhere to the authorized purposes for requesting Medicare beneficiary eligibility data. Specifically, the rules state that users are authorized to request information to determine whether patients who were determined to be Medicare eligible are covered for specific services that are to be provided at the point of care. However, users are not authorized to request information for the sole purpose of determining whether patients are eligible to receive Medicare benefits. According to program officials, CMS enforces its rules of behavior by monitoring inquiries to identify behaviors that may indicate intentional misuse of the data. For example, inquiries from one user that result in high rates of errors or a high ratio of inquiries compared to the number of claims submitted may indicate that a user is searching the system to identify Medicare beneficiaries rather than using HETS for its intended purpose. Users engaging in these types of behavior may be contacted or, when appropriate, referred for investigation for inappropriate use of the data, such as health care identity theft or fraudulent billing practices. Additionally, system documentation described mechanisms that were implemented to prevent access by requesters with invalid provider identifications or certain providers who have been excluded or suspended from participating in the Medicare program. For example, CMS maintains databases of National Provider Identifiers, another HIPAA standard. The eligibility request transactions submitted by HETS users include these identifiers, and, before providing beneficiary data in response to requests, the system validates the identifiers against data stored in an agency database. 
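The screening described above, flagging submitters with high error rates or unusually high ratios of inquiries to claims, could in principle be approximated with simple rules like the sketch below. The submitter totals and thresholds are hypothetical and are not CMS's actual screening criteria or data.

```python
# Hypothetical per-submitter monthly totals; thresholds are illustrative and
# not CMS's actual criteria for referral or investigation.
submitters = {
    "SUBM001": {"inquiries": 12_000, "errors": 300,    "claims": 10_500},
    "SUBM002": {"inquiries": 45_000, "errors": 21_000, "claims": 9_000},
    "SUBM003": {"inquiries": 8_000,  "errors": 90,     "claims": 7_600},
}

MAX_ERROR_RATE = 0.20            # flag if more than 20% of inquiries error out
MAX_INQUIRY_TO_CLAIM_RATIO = 3   # flag if inquiries far exceed claims submitted

for submitter_id, stats in submitters.items():
    error_rate = stats["errors"] / stats["inquiries"]
    ratio = stats["inquiries"] / max(stats["claims"], 1)
    flags = []
    if error_rate > MAX_ERROR_RATE:
        flags.append(f"high error rate ({error_rate:.0%})")
    if ratio > MAX_INQUIRY_TO_CLAIM_RATIO:
        flags.append(f"high inquiry-to-claim ratio ({ratio:.1f})")
    status = "; ".join(flags) if flags else "no indicators of misuse"
    print(f"{submitter_id}: {status}")
```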
Additionally, according to the HETS business owners, providers who have been identified by HHS's Office of Inspector General and the General Services Administration as ones conducting activities intended to defraud Medicare may be included on a "do not pay" list. In this case, providers excluded from the program would not "need to know" information about patients' personal health, including whether or not they are eligible for Medicare benefits. According to HETS officials, these data are also incorporated into the National Provider Identifier database that is used to validate identifiers submitted to HETS and, as a result, these excluded providers are also not allowed to receive information from the system. HETS system documentation also described mechanisms for securing the data transmitted to and from HETS. For example, access to the system is only allowed through CMS's secured extranet. To gain access, the providers and clearinghouses must first submit a Trading Partner Agreement. In addition to including information needed to enable CMS and its trading partners, or users, to establish connectivity and define data exchange requirements, the agreement defines the data security responsibilities of the entities receiving beneficiary eligibility information from CMS. After users submit the agreement, CMS contacts them to authenticate their identity and, once authentication has been determined, CMS help desk staff provide the requester with a submitter ID that is required to be included on all transactions. Users then may request access to the CMS extranet from one of four network service vendors that establish a secure software connection to the system. The table below summarizes these and other actions CMS described that address key HIPAA privacy principles relevant to the implementation of HETS. Further, the E-Government Act of 2002 requires federal agencies to conduct privacy impact assessments, and the Office of Management and Budget (OMB) provides guidance to agencies conducting these assessments. The act and OMB's implementing guidance require that these assessments address: (1) what information is to be collected; (2) why the information is being collected; (3) the intended use of the information; (4) with whom the information will be shared; (5) what opportunities individuals have to decline to provide the information or to consent to particular uses of the information, and how individuals can grant consent; (6) how the information will be secured; and (7) whether a system of records is being created under the Privacy Act. According to the OMB guidance, agencies should conduct a privacy impact assessment before developing or procuring IT systems or projects that collect, maintain, or disseminate information in identifiable form from or about members of the public. Agencies are required to perform an update as necessary when a system change creates new privacy risks. Additionally, in a previous report, we identified the assessment of privacy risks as an important element of the privacy impact assessment process to help officials determine appropriate privacy protection policies and techniques to implement those policies. We noted that a privacy risk analysis should be performed to determine the nature of privacy risks and the resulting impact if corrective actions are not in place to mitigate those risks. CMS conducted a privacy impact assessment of HETS as called for by the E-Government Act, and updated the assessment in April 2011.
The assessment addressed the seven OMB requirements for implementing privacy provisions. For example, in addressing how HETS information would be secured, it stated that the system is accessible only via the CMS private network to authorized users. The assessment also stated that the intended use of the system is to allow providers to confirm patients’ enrollment in the Medicare program and provide information that is needed to correctly bill for payment of claims. Additionally, as part of a security risk assessment, program officials also completed a privacy risk analysis of the system that addressed several privacy risks. For example, CMS assessed privacy risks related to improper disclosure of the protected health information processed by HETS and determined that the risk level was low to moderate. By establishing practices and procedures intended to protect the privacy of Medicare beneficiaries’ personal health information, and assessing the impact and risks associated with the use of HETS, CMS took required steps to address privacy principles reflected by HIPAA, the HIPAA rules, and the Privacy Act and has acted in accordance with OMB’s guidance for protecting personally identifiable information. According to officials in HHS’s Office for Civil Rights, no violations of the HIPAA Privacy Rule resulting from the use and disclosure of data provided by HETS have been reported since the system was implemented. Agency Comments and Our Evaluation In written comments on a draft of this report, signed by HHS’s Assistant Secretary for Legislation (and reprinted in appendix III), the department stated that it appreciated the opportunity to review the report prior to its publication. The department added that it regretted the poor service that resulted from operational problems in 2010 and early 2011 and that it is continuing to take steps to maintain and improve the performance of the system. The department also provided technical comments, which we incorporated as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies to interested congressional committees, the Secretary of HHS, the Administrator of CMS, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions about this report, please contact me at (202) 512-6304 or by e-mail at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix IV. Appendix I: Objectives, Scope, and Methodology Our objectives were to (1) identify the operational status of HETS, (2) identify any steps CMS has taken to ensure users’ satisfaction and plans to take to ensure the performance of the system supports future requirements, and (3) describe CMS’s policies, processes, and procedures for protecting the privacy of beneficiary eligibility data provided by the system. To identify the operational status of HETS, we collected and analyzed documentation from program officials that described the use and daily operations of the system, such as reports on incoming transaction volume, response time, and downtime, along with documents that describe outcomes of the system, such as reported problems. 
To determine whether CMS provided the level of service agreed upon with HETS users, we compared the information we collected to business requirements defined in program and system plans, and to any agreements with users. Additionally, we obtained users’ views of the extent to which the current implementation of HETS satisfied their needs for timely information by holding structured interviews with selected representatives of providers; clearinghouses, which provide services for about 90 percent of Medicare providers; and a Medicare Administrative Contractor who used the system. The selected HETS users included three clearinghouses; two fee-for- service providers, including a visiting nurse agency and a medical equipment supplier; and one Medicare Administrative Contractor. Based on data provided by system performance reports for the week of March 12th through the 18th 2012, we selected the highest volume users among each user type throughout the United States. The selected users submitted about 44 percent of the 14.5 million total transactions processed during the selected period of time. Specifically, the clearinghouses submitted a total of about 40 percent of the transactions, the Medicare contractor submitted about 2 percent, and the provider and supplier submitted less than 1 percent of the transactions, respectively. We discussed with the users their experiences and satisfaction with the level of service the system has provided over the last 2 years, and the results of CMS’s efforts to resolve any problems or system-related issues. In addition, we interviewed program officials knowledgeable of the management of the program to gain additional understanding of the agency’s practices for defining performance requirements for HETS contractors, and for managing and assessing their performance relevant to ensuring efficient operations of HETS. We also discussed with the users their experiences with other automated eligibility verification systems provided by commercial health insurers. We held these discussions to determine whether these officials could share any lessons that could be beneficial to CMS in operating HETS. To identify the steps that CMS has taken to ensure that HETS users remain satisfied with the performance of the system and that the agency plans to take to ensure the system provides the level of service needed to support future requirements, we reviewed agency documents, such as project timelines and system release notes, and reports of users’ calls to the help desk. These documents described steps taken to address problems reported by users, identified systems modifications to correct problems, and showed patterns in the numbers of help desk calls over the past 2 years. We also identified steps the agency initiated to help alleviate problems introduced by increasing transaction volume as the number of Medicare beneficiaries has increased over the past 2 years. Further, through our review of relevant agency documents, contractors’ performance reports, and discussions with program officials, we identified steps CMS took to assess contractors’ performance toward providing efficient and quality service to users of HETS, and any necessary corrective actions. 
Additionally, we identified steps the agency plans to take toward defining and addressing future requirements of the system that may be introduced by increasing numbers of verification inquiries, and collected and reviewed documentation that provided information about projected growth in transaction volume as providers were faced with the need to conduct HETS queries of more patients filing Medicare claims. We also collected available program planning documentation that described long-term plans for the system and assessed these plans against projections of future requirements and recommendations from independent studies of CMS’s implementation of HETS. Finally, to describe the policies, processes, and procedures established by CMS to ensure that the privacy of beneficiary eligibility data is protected, we evaluated agency documentation such as HETS privacy impact and risk assessments, and agreements with users that describe CMS’s and users’ responsibilities and requirements for protecting the data processed and provided by the system. We compared the information from these documents to requirements and privacy practices derived from provisions of the Privacy Act and the HIPAA Privacy Rule. We also held a discussion with an official with HHS’s Office for Civil Rights to determine whether any complaints related to the use of HETS had been noted. In conducting our work, we did not review or test controls implemented by the agency to secure the data processed by HETS. We supplemented data collection for all objectives with interviews of agency officials, including system and business owners, who were knowledgeable of the system’s operations and improvements, contract management and oversight, and requirements and practices for protecting the privacy of personal health information. Among these officials, we held discussions with directors in CMS’s Provider Communications Group and the Business Applications Management Group, Office of Information Services. We used computer-maintained data provided by CMS program officials when addressing our first objective, and we determined the reliability of these data by obtaining corroborating evidence through interviews with agency officials who are knowledgeable of the operations of the system and its user population. We also conducted a reliability assessment of the data provided by CMS. We found the data sufficiently reliable for the purposes of this review. Appendix II: HETS Transaction Volumes and Response Times HETS program officials provided system-generated data that reflected the performance of the system in terms of the numbers of transactions processed each month and the response time in four categories. The data were provided for the time period beginning in January 2010, when the operational problems began to occur, through June 2012. Table 1 shows the percentage of transactions that received responses from HETS in less than 3 seconds increased from 60.8 percent to 99.9 percent during this time period. Appendix III: Comments from the Department of Health & Human Services Appendix IV: GAO Contacts and Staff Acknowledgments GAO Contacts Staff Acknowledgments In addition to the contacts named above, Teresa F. Tucker, Assistant Director; Tonia D. Brown; LaSherri Bush; Sharhonda Deloach; Rebecca Eyler; and Monica Perez-Nelson made key contributions to this report.
Medicare is a federal program that pays for health care services for individuals 65 years and older and certain individuals with disabilities. In 2011, Medicare covered about 48.4 million of these individuals, and total expenditures for this coverage were approximately $565 billion. CMS, the agency within the Department of Health and Human Services that administers Medicare, is responsible for ensuring that proper payments are made on behalf of the program's beneficiaries. In response to HIPAA requirements, CMS developed and implemented an information technology system to help providers determine beneficiaries' eligibility for Medicare coverage. In May 2005 CMS began offering automated services through HETS, a query and response system that provides data to users about Medicare beneficiaries and their eligibility to receive payment for health care services and supplies. Because of the important role that HETS plays in providers having access to timely and accurate data to determine eligibility, GAO was asked to (1) identify the operational status of HETS, (2) identify any steps CMS has taken to ensure users' satisfaction and plans to take to ensure the system supports future requirements, and (3) describe CMS's policies, processes, and procedures for protecting the privacy of data provided by HETS. To do so, GAO collected and analyzed documentation from program officials, such as reports on transaction volume and response times, agreements with users, and CMS's privacy impact and risk assessments of HETS. GAO also interviewed program officials and system users. The Centers for Medicare and Medicaid Services (CMS) currently offers Medicare providers and Medicare Administrative Contractors the use of the Health Insurance Portability and Accountability Act of 1996 (HIPAA) Eligibility Transaction System (HETS) in a real-time data processing environment. HETS is operational 24 hours a day, 7 days a week, except during regularly scheduled maintenance Monday mornings, from midnight until 5:00 a.m., and when CMS announces other maintenance periods during one or two weekends each month. According to program officials, 244 entities were using HETS in 2012, including about 130 providers, 104 clearinghouses that provide data exchange services to about 400,000 health care providers, and 10 Medicare contractors that help CMS process claims for services. From January through June 2012, HETS processed an average of 1.7 million to 2.2 million queries per day each month, with most of the queries submitted between the hours of 8:00 a.m. and 4:00 p.m. eastern time. The users GAO spoke with confirmed that operational problems they experienced with the system in 2010 and the first few months of 2011 were resolved in spring 2011 after CMS implemented several hardware and software replacements and upgrades. System performance reports for the first 6 months of 2012 showed that the average response time per transaction was less than 3 seconds. Users described experiences with the system that were consistent with these data. They told GAO that they are currently satisfied with the operational status of HETS and that the system provides more complete information and reliable service than other systems that they use to verify eligibility with commercial health insurers. CMS took steps to ensure users remain satisfied with the system's performance, including notifying users in advance of system downtime, providing help desk support, and monitoring contractors' performance.
The agency has also planned several technical improvements intended to increase HETS' capacity to process a growing number of transactions, which the agency projected to increase at a rate of about 40 percent each year. These plans include a redesign of the system and migration to a new database environment that is scalable to accommodate the projected increase in transaction volume. According to HETS program officials, near-term plans also include the implementation of tools to enable proactive monitoring of system components and additional services intended to enhance production capacity until the planned redesign of the system is complete. To help protect the privacy of beneficiary eligibility data provided by HETS, CMS established policies, processes, and procedures that are intended to address principles reflected in the HIPAA Privacy Rule. For example, in its efforts to ensure proper uses and disclosures of the data, CMS documented in user agreements the authorized and unauthorized purposes for requesting Medicare beneficiary eligibility data. Additionally, the agency conducted privacy impact and risk assessments of HETS as required by the E-Government Act of 2002. Officials from the Department of Health and Human Services' Office for Civil Rights stated that no privacy violations had been reported regarding the use of the protected health data provided by HETS since its implementation in 2005.
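As a rough illustration of the scale implied by the projected growth, the following sketch compounds a daily query volume at the roughly 40 percent annual rate CMS projected. The 2.2 million starting volume is taken from the daily volumes reported above for early 2012; the five-year horizon is an arbitrary choice made only for the example.

```python
# A minimal sketch, assuming compound growth of about 40 percent per year as
# projected by CMS. The 2.2 million queries per day starting point is drawn
# from the daily volumes reported for early 2012; the horizon is illustrative.
ANNUAL_GROWTH_RATE = 0.40
daily_queries = 2_200_000  # approximate peak daily volume reported for 2012

for year in range(1, 6):
    daily_queries *= 1 + ANNUAL_GROWTH_RATE
    print(f"Year {year}: about {daily_queries:,.0f} queries per day")
```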
GAO_GAO-14-584
Background FAR Part 15 allows the use of several competitive source selection processes to meet agency needs. Within the best value continuum, DOD may choose a process that it considers the most advantageous to the government, either the LPTA or the tradeoff process (see figure 1). DOD may elect to use the LPTA process where the requirement is clearly defined and the risk of unsuccessful contract performance is minimal. In such cases, DOD may determine that cost or price should play a dominant role in the source selection. When using the LPTA process, DOD specifies its requirements in the solicitation. Contractors submit their proposals, and DOD determines which of the contractors meet or exceed those requirements; no tradeoffs between cost or price and non-cost factors are permitted, and the award is made based on the lowest price technically acceptable proposal submitted to the government. By contrast, DOD may elect to use a tradeoff process in acquisitions where the requirement is less definitive, more development work is required, or the acquisition has a greater performance risk. In these instances, non-cost evaluation factors, such as technical capabilities or past performance, may play a dominant role in the source selection process. Tradeoffs among price and non-cost factors allow DOD to accept other than the lowest priced proposal. The FAR requires DOD to state in the solicitation whether all evaluation factors other than cost or price, when combined, are significantly more important than, approximately equal to, or significantly less important than cost or price. In October 2010, we reported that DOD used best value processes for approximately 95 percent of its new, competitively awarded contracts in which $25 million or more was obligated in fiscal year 2009. DOD awarded approximately 26 percent using the LPTA process and 69 percent using the tradeoff process. DOD awarded the remaining 5 percent using sealed bidding, which is a competitive process in which award is made to the responsible bidder whose bid conforms to the invitation for bids and is most advantageous to the government, considering only price and price-related factors included in the solicitation. At that time, we found that the majority of the contracts were awarded using a tradeoff process in which all evaluation factors other than cost or price, when combined, were significantly more important than cost or price. Our analysis showed that DOD considered past performance and technical capability evaluation factors as the most important among the non-cost factors. Further, we found that using a tradeoff process can be more complex and take more time than other source selection methods and requires that acquisition staff have proper guidance, needed skills, and sound business judgment. While DOD and the military departments had taken steps to improve source selection procedures, acquisition personnel noted a lack of training to assist them in deciding whether or not a price differential is warranted when making tradeoff decisions. We recommended that, to help DOD effectively employ best value tradeoff processes, DOD develop training elements, such as case studies, that focus on reaching tradeoff decisions as it updates its training curriculum. DOD concurred and implemented the recommendation in August 2012. Since our analysis of fiscal year 2009 contracts, DOD has issued new guidance that emphasizes affordability and standardization of best value processes.
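The difference between the two processes can be shown with a minimal sketch. The proposals, acceptability determinations, and the single weighted score used for the tradeoff are hypothetical simplifications; actual DOD evaluations follow the factors and relative weights stated in each solicitation rather than one fixed formula.

```python
# Simplified sketch of the two best value approaches described above.
# The proposal data, acceptability flags, and weighting scheme are
# hypothetical illustrations, not a prescribed FAR or DOD scoring method.
from dataclasses import dataclass

@dataclass
class Proposal:
    vendor: str
    price: float                  # offered price in dollars
    technically_acceptable: bool  # meets the solicitation's minimum requirements
    non_cost_score: float         # notional 0-100 rating of non-cost factors

def lpta_award(proposals):
    """Lowest price technically acceptable: no tradeoffs between price and
    non-cost factors; award goes to the cheapest acceptable proposal."""
    acceptable = [p for p in proposals if p.technically_acceptable]
    return min(acceptable, key=lambda p: p.price) if acceptable else None

def tradeoff_award(proposals, non_cost_weight=0.6):
    """Notional tradeoff: combine a normalized price score with non-cost
    ratings, allowing award to other than the lowest priced proposal."""
    lowest_price = min(p.price for p in proposals)
    def combined(p):
        price_score = 100.0 * lowest_price / p.price  # cheaper -> closer to 100
        return non_cost_weight * p.non_cost_score + (1 - non_cost_weight) * price_score
    return max(proposals, key=combined)

offers = [
    Proposal("Vendor A", 9_500_000, True, 70.0),
    Proposal("Vendor B", 10_800_000, True, 92.0),
    Proposal("Vendor C", 8_900_000, False, 55.0),
]
print("LPTA award:", lpta_award(offers).vendor)         # Vendor A
print("Tradeoff award:", tradeoff_award(offers).vendor)  # Vendor B
```

In this example, Vendor A wins under LPTA, while Vendor B's stronger non-cost ratings prevail under the tradeoff, illustrating how the tradeoff process can result in award to other than the lowest priced proposal.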
In September 2010, the Under Secretary of Defense for Acquisition, Technology, and Logistics (USD(AT&L)) issued a memorandum that established its Better Buying Power Initiative to obtain greater efficiency and productivity in defense spending. In its memorandum, USD(AT&L) emphasized that DOD must prepare to continue supporting the warfighter through the acquisition of products and services in potentially fiscally constrained times. USD(AT&L) noted that DOD must “do more without more.” In April 2013, USD(AT&L) issued another memorandum to update the Better Buying Power Initiative. This memorandum identifies seven areas USD(AT&L) is pursuing to increase efficiency and productivity in defense spending. One area is incentivizing productivity and innovation in industry and government. As part of this guidance, USD(AT&L) states that “best value” in a competitive source selection should generally indicate that the government is open to paying more (up to some amount) than the minimum price bid in return for a product that provides more than the minimum needed performance. In addition, USD(AT&L) states that LPTA should be used in situations where DOD would not realize any value from a proposal exceeding its minimum technical or performance requirements and that another process should be used when standards of performance and quality are subjective. A second area of this guidance includes improving the professionalism of the total acquisition workforce. DOD has previously reported that training is a critical element of improving and sustaining a high quality workforce with the right skills and capabilities. USD(AT&L) also issued source selection procedures in March 2011 to standardize the methodology and process that DOD uses to conduct competitively negotiated source selections. For example, USD(AT&L) outlined a common set of principles and procedures for conducting acquisitions using the best value processes including the use of standardized rating criteria and descriptions for technical capability and past performance factors. Further, similar to information presented in the Better Buying Power Initiative, USD(AT&L) stated in the procedures that the LPTA process may be used in situations where the government would not realize any value from a proposal exceeding minimum technical or performance requirements, often for acquisitions of commercial or non-complex services or supplies which are clearly defined and expected to be low risk. In its April 2013 memorandum updating the Better Buying Power Initiative, USD(AT&L) directed the director of Defense Procurement and Acquisition Policy to update the guidance to describe the characteristics of a technically acceptable solution by July 1, 2013. As of July 2014, DOD officials are coordinating comments on a draft revision of the guidance. The Defense Procurement and Acquisition Policy official in charge of the revision told us the original due date of July 1, 2013 was established before they decided to do a more comprehensive update of the guidance, which has contributed to the date slipping for its completion. 
During the time that USD(AT&L) issued these initiatives and guidance—specifically, between fiscal years 2009 and 2013—DOD experienced a number of changes in its contracting activity, including: total obligations for products and services decreased from $380 billion in fiscal year 2009 to $310 billion in fiscal year 2013; obligations on new, competed contracts decreased from $70 billion in fiscal year 2009 to $43 billion in fiscal year 2013; and obligations on new, competed contracts of $25 million or more decreased from $39 billion in fiscal year 2009 to $24 billion in fiscal year 2013. See figure 2 for our analysis of DOD’s contract obligations from FPDS-NG for fiscal year 2013. Even though DOD’s obligations decreased between fiscal year 2009 and 2013, it did acquire a similar mix of products and services in both years. In addition, the percentage of commercial items purchased in those 2 fiscal years was approximately the same. DOD Predominately Used Best Value Processes in Fiscal Year 2013, but Increased Its Use of LPTA for Higher Dollar Contracts since Fiscal Year 2009 DOD predominately used best value processes—tradeoff and LPTA—to evaluate offers from potential vendors in fiscal year 2013. DOD used best value processes for approximately 93 percent of the 2,851 new, competed contracts for which it had obligated over $1 million in fiscal year 2013 and used sealed bid for approximately 7 percent. For contracts with obligations of $25 million or more, DOD used the tradeoff process for approximately 58 percent of the contracts and the LPTA process for approximately 36 percent of the contracts. For contracts with obligations over $1 million and less than $25 million, DOD used tradeoff and LPTA at about the same overall rate—47 percent and 45 percent, respectively. In our sample of 171 contracts that used best value processes, DOD used tradeoff for 96 contracts and LPTA for 75 contracts. We found some variation in terms of what process was used to acquire products and services at the different thresholds we reviewed (see figure 3). As seen in the above figure, DOD used the tradeoff process most often in our sample to acquire services, including those related to construction projects, aircraft maintenance, and other support services, regardless of obligation amount. For contracts with obligations of $25 million or more, DOD used the LPTA process primarily to acquire commercial products such as fuel. In contrast, for contracts with obligations over $1 million and less than $25 million, DOD used the LPTA process to acquire a mix of products and services, including fuel, aircraft parts, computer equipment, construction-related services, engineering support services, and ship maintenance and repairs. The desire to weigh non-cost factors such as technical approach and past performance was a key factor cited in the majority of the solicitations issued for the 96 contracts in our sample that DOD awarded using the tradeoff process, regardless of obligation value (see table 1). For the 76 contracts for which non-cost factors were more important than price, DOD acquired both products and services, such as computer equipment, aircraft maintenance services, and communication network support services. In addition, our analysis found that technical approach and past performance were the factors most often identified as more important than price among the non-cost factors. For example, 48 out of the 76 contracts in our sample identified technical approach as the most important factor. 
Additionally, 23 out of the 76 contracts in our sample identified past performance as the most important factor. Other non-cost factors considered in some of the solicitations with much less frequency than technical approach and past performance include small business participation and delivery schedule. While data on DOD’s use of source selection processes were not readily available, our analysis found that DOD increased its use of LPTA from fiscal year 2009 to fiscal year 2013 for contracts with obligations of $25 million or more (see table 2). We cannot make a comparison between fiscal year 2009 and fiscal year 2013 for the lower dollar range, because our prior report only focused on contracts with obligations of $25 million or more in fiscal year 2009. Several contracting and program officials said that their commands gave more attention to whether LPTA is an alternative option in light of declining budgets and Better Buying Power Initiatives. Further, declining budgets encouraged contracting and program officials to streamline requirements. For example: The Executive Director of Army Contracting Command—Aberdeen Proving Ground, one of five Army Contracting Command centers—said that overall there is an increased cost consciousness regarding acquisitions, resulting from the Better Buying Initiatives and declining budgets. As a part of that increased cost consciousness, there is an increased willingness and necessity to re-examine tools that could present better prices. For example, the Executive Director referred to LPTA as “a tool that has been at the bottom of the source selection tool box collecting dust for some time.” As it became necessary to take a look at what is really needed, they have “dusted off” the LPTA tool and had more discussions about how to set the technical acceptability at an appropriate level where there is no additional benefit from paying for more than that level. Contracting officials from Naval Facilities Engineering Command stated that in the current fiscal environment of “doing more with less,” they are educating their contracting personnel to use LPTA when appropriate. For example, on March 28, 2013, the Command sent an email communication to its contracting staff that provided guidance on the use of LPTA for task orders on multiple award contracts that are less than $10 million. The guidance stated that the contracting officer may choose to consider only price or cost for award purposes when the requirement is valued at less than $10 million, considered to be non-complex, and where non-cost factors are deemed unnecessary. These officials stated LPTA is less complex and less time consuming than tradeoff, and as a result, they can save personnel resources. In addition to internal guidance, Navy officials told us that the Better Buying Power Initiative also directs acquisition personnel to look for efficiencies and streamlining in acquisitions. Contracting officials from Naval Supply Systems Command stated they increased their scrutiny on tradeoff acquisitions, which has contributed to a cultural shift to increase the consideration of LPTA as an alternative source selection process. The command issued an October 9, 2012 memorandum to contracting activities that states if non-cost factors are more important than price, the acquisition must be reviewed by a senior level acquisition executive. 
Similarly, Air Force Materiel Command contracting and program officials stated that given the budget environment, it is increasingly difficult to justify higher dollar solutions from a technical standpoint when solutions may exist that meet the minimum requirement. DLA contracting officials stated that in light of resource constraints, it is increasingly common to purchase products that meet the program’s needs without overstating the requirement. These officials told us LPTA is a good choice for mature, commercial requirements where there is no added value in conducting a tradeoff given the need to stretch budgets. Knowledge of Requirements and Potential Vendors Underpin Decisions about Source Selection Process Our review of contract documents and interviews with program and contracting officials from our 16 case studies found that for these specific acquisitions, DOD’s ability to clearly define its requirements and its knowledge of potential vendors were the key factors that underpinned the decisions about whether to use tradeoff or LPTA. For example, in the eight case studies in which DOD used LPTA, DOD contracting and program officials generally stated they had sufficient knowledge of the requirements or vendors to feel confident that the lowest priced vendor, after meeting technical acceptability requirements, could deliver the product or service. In contrast, in our eight tradeoff case studies, contracting and program officials were less certain about requirements, were looking for innovative solutions, or wanted to use non-cost factors, such as past performance, as a differentiator when selecting the vendor. We found that for these 16 case studies DOD’s reasons for choosing LPTA or tradeoff were generally consistent with guidance in the FAR and DOD’s source selection procedures. Table 3 provides several highlights from the case studies that illustrate where DOD’s ability to clearly define its requirements and its knowledge of potential vendors affected the source selection decision making process. Policy officials from some military departments noted that setting technical acceptability levels is important for contracts awarded through LPTA to be successful. Defense Procurement and Acquisition Policy officials told us the ongoing efforts to revise DOD’s 2011 source selection procedures are intended, in part, to further define how to conduct best value processes. According to these officials, the revised guidance will emphasize that for LPTA, the solicitation must clearly describe the minimum evaluation standards. In addition, they expect the guide will provide additional information on how to determine when to pay a price premium. DOD Provides Online and Classroom Training on Source Selection Processes, but On-the-Job Training Considered Essential for Making Sound Source Selection Decisions DOD, through courses offered by DAU and the military departments, provides both classroom and online training related to source selection processes to its acquisition personnel. Both DAU and military department officials stressed, however, the importance of on-the-job training in preparing personnel to make informed source selection determinations. Congress passed the Defense Acquisition Workforce Improvement Act (DAWIA) in 1990 both to ensure effective and uniform education, training, and career development of members of the acquisition workforce, including contracting and other career fields, and to establish DAU to provide training. 
The act also required DOD to establish career paths, referred to by DOD as certification requirements, for the acquisition workforce. DOD military departments must track acquisition workforce personnel to ensure that they meet mandatory standards established for level I (basic or entry), level II (intermediate or journeyman), or level III (advanced or senior) in a career field, such as contracting, life cycle logistics, and program management. Similar requirements and levels are established for each of the acquisition career fields identified by DOD. DOD identified a need to increase the capacity and size of the acquisition workforce over the past several years. For example, in a DOD assessment of the contracting workforce completed in September 2008, senior DOD contracting leaders identified the importance, among DOD’s entry-level and mid-career personnel, of not only mastering the “what” but also using critical thinking and sound judgment to apply the knowledge—thus mastering the “how” of contracting. To help address concerns that DOD had become too reliant on contractors to support core functions and to rebuild the capacity and skill sets that eroded in the years that followed the downsizing of the workforce in the 1990s, DOD increased its number of acquisition workforce positions from 133,103 in fiscal year 2009 to 151,355 in fiscal year 2013, including an additional 2,616 positions (a 9.5 percent increase) in the contracting career field. DAU officials identified five training courses that are taken either online or in the classroom to provide acquisition personnel, including contracting and program officials, the knowledge and skills necessary to make source selection decisions. Contracting personnel are required or recommended to complete all five of the identified training courses at some point in their career to obtain specific DAWIA certifications. Additionally, DAU makes these classes available to personnel outside the DAWIA acquisition workforce. Based on our analysis of student self-reported exit data in fiscal year 2013 and our discussion with DAU officials, we found that many graduates of these courses did not indicate their career field when completing the course registration or exit survey, particularly for online courses, which makes it difficult to know how many personnel outside of the DAWIA workforce with acquisition-related responsibilities took these courses. In September 2011, we reported on personnel working on service acquisitions who are outside the DAWIA acquisition workforce with acquisition-related responsibilities and found the number of these individuals to be substantial. As such, we recommended that the Secretary of Defense establish criteria and a time frame for identifying personnel outside the DAWIA acquisition workforce with acquisition-related responsibilities. DOD concurred with the recommendation and, as of June 2014, is developing a way to identify all of the non-DAWIA personnel with acquisition-related responsibilities and the appropriate training curriculum they should receive. Table 4 outlines each of these five courses. We also found that military departments provided source selection training—offering both overview and refresher courses—to contracting staff and others involved in the source selection process. Table 5 identifies examples of the training courses offered by various military departments. 
DAU and military department officials we spoke with pointed to their training as providing educational resources from which the acquisition workforce can understand the basics of appropriate source selection processes. These officials also stressed the role on-the-job training plays when making such determinations. For example, policy officials within the office of the Assistant Secretary of the Army for Acquisition, Technology, and Logistics told us that on-the-job training provides important exposure for less experienced acquisition staff to the source selection decision making processes. As a result, contracting officials have a better understanding of situations where a particular source selection process may be more appropriate than others. Many officials told us that contracting officials can best understand the acquisition process and apply their in-classroom training through making real world source selection decisions. As such, several military department officials, including contracting officials from our case studies, provided examples of why they consider on-the-job training to be important, including the following: Air Force Installation Contracting Agency contracting officials from one of our case studies and a command official told us that on-the-job training and experience are important factors that affect the source selection process determination. They stated that on-the-job training provides experience and opportunities for contracting officers to make critical decisions that can only occur in a source selection environment. To that end, these officials told us that informal mentoring relationships are established wherein newer, less experienced staff is assigned to work with more senior staff. Naval Facilities Engineering Command officials and contracting officials from one of our case studies stated that the task of identifying when requirements would better suit a particular source selection process is learned through gaining experience from on-the-job training. Naval Sea Systems Command officials from one of our case studies stated that the best training they received is on-the-job training. These officials explained that more senior contracting officers help newer contracting staff with their acquisitions. They consider mentor type training invaluable in learning how to conduct an acquisition. Concluding Observations Best value processes continued to underlie the vast majority of DOD’s new, competitively awarded contracts. DOD has increased its use of the LPTA process in recent years for higher value contracts, and its decision making regarding which source selection process to use did not appear to be ill-advised. Its decision making was generally rooted in knowledge about the requirements and vendors. In our sample of 16 cases, we identified instances in which DOD used LPTA for what appeared to be complex acquisitions, such as the system to mimic an anti-aircraft missile, but the acquisition team had considerable knowledge about the requirements or vendors. In other cases, DOD used the tradeoff process for what appeared to be relatively simple acquisitions, such as fabric dyeing, yet the acquisition team identified complexities about the proposed acquisition. Amid the climate of rapidly building fiscal pressures and cost consciousness, selecting the right source selection approach continues to be essential to ensure the department acquires what it needs without paying more than necessary. Agency Comments We are not making recommendations in this report. 
We provided a draft of this report to DOD for comment. DOD did not provide written comments on this report but did provide technical comments, which we incorporated as appropriate. We are sending copies of this report to appropriate congressional committees and the Secretary of Defense. The report will be available at no charge on GAO’s website at http://www.gao.gov. If you or your staff have questions about this report, please contact me at (202) 512-4841 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix III. Appendix I: Objectives, Scope, and Methodology Committee reports from the Senate and House Armed Services committees and the Joint Explanatory Statement accompanying the National Defense Authorization Act for Fiscal Year 2014 mandated GAO to report on the Department of Defense’s (DOD) use of best value processes. We determined (1) the extent to which DOD used best value processes in fiscal year 2013; (2) the factors DOD considers when choosing a source selection process; and (3) the training DOD provides to its acquisition personnel on source selection processes. In addition, in response to a matter identified in a 2013 report from the House Armed Services Committee, appendix II includes information on the military departments’ acquisitions of body armor vests in fiscal year 2013. To determine the extent to which DOD used best value processes in fiscal year 2013, we used data from the Federal Procurement Data System-Next Generation (FPDS-NG) as of October 2013 to identify a population of contracts based on the following criteria: (1) newly awarded by DOD in fiscal year 2013, (2) competitively awarded, and (3) had obligations of over $1 million in fiscal year 2013. This analysis identified a population of 2,851 contracts, and from this population we selected a stratified random sample of 227 contracts, with the strata defined by whether the contract had obligations of $25 million or more, or whether its obligations totaled over $1 million and less than $25 million. We divided the data into two groups: contracts with higher obligations of $25 million or more and contracts with lower obligations of over $1 million and less than $25 million. We used the $25 million threshold to divide our data set based on a Defense Federal Acquisition Regulation Supplement (DFARS) requirement that contracts for products or services with $25 million or more in estimated total costs for any fiscal year have written acquisition plans, which contain information on the anticipated source selection process (DFARS § 207.103(d)(i)(B)). For contracts with obligations of $25 million or more, we compared the percentage of contracts solicited using best value processes to fiscal year 2009 data we reported in October 2010. Our prior report did not include contracts with lower obligations of less than $25 million. We obtained and analyzed the solicitation documents for all of the contracts in our sample to identify the source selection process DOD used. We verified the contract award fields in FPDS-NG with contract and solicitation data to ensure that the contracts within our sample were in-scope. Based on that analysis, we determined that a total of 44 contracts were out of scope for our review. 
These 44 contracts were excluded from our analysis, because they were either incorrectly coded in our key parameters, or were awarded using processes outside of the Federal Acquisition Regulation (FAR) Part 14 on sealed bidding or Part 15 on contracting by negotiation (which includes best value processes) and consequently should not have been in our sample, resulting in a total of 183 contracts in our review (see table 6). After accounting for these errors, we assessed the reliability of FPDS-NG data by electronically testing the data to identify problems with consistency, completeness, or accuracy and reviewed relevant documentation. We determined that the FPDS-NG data were sufficiently reliable for the purposes of our review. Because we followed a probability procedure based on random selection, our sample is only one of a large number of samples that we might have drawn. Since each sample could have provided different estimates, we express our confidence in the precision of our particular sample’s results as a 95 percent confidence interval (e.g., plus or minus 8 percentage points). This is the interval that would contain the actual population value for 95 percent of the samples we could have drawn. Unless otherwise noted, percentage estimates of contracts with obligations of $25 million or more have 95 percent confidence intervals within +/- 8 percentage points of the estimate itself. Similarly, for contracts with obligations over $1 million and less than $25 million, percentage estimates have confidence intervals within +/- 10 percentage points of the estimate itself. In addition, to compare characteristics of contracts in our sample that used best value processes for both strata, we determined contract type, the type of procurement (product versus service), and if commercial item procedures were used for our sample using FPDS-NG data and conducted data reliability analysis on these fields, by verifying this information with the contract and solicitation documents. For the contracts identified as tradeoff, we analyzed the contract and solicitation documentation to identify the most frequently used non-cost evaluation factors and their relative importance to price. To identify what factors DOD considers when choosing a source selection process, we analyzed the FAR, DFARS, and DOD and military departments’ regulation, policy, and guidance on source selection. We interviewed senior DOD policy officials at Defense Procurement and Acquisition Policy, and at the Army, Navy, and Air Force headquarters. We also interviewed officials from at least two buying commands—based upon such factors as the number of contract actions and obligation amounts—at each military department (Army, Navy, and Air Force), as well as the Defense Logistics Agency (DLA) to discuss factors affecting their decision process on which source selection process to use. In addition, we analyzed our sample of 183 contracts and selected 16 new, competitively awarded contracts with obligations ranging from $1.1 million to $150.7 million to further our understanding of why acquisition officials chose the source selection process. Our 16 case studies—8 tradeoff and 8 LPTA—included at least 1 from each military department and DLA, different product and service types, and amount of dollars obligated in fiscal year 2013. 
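The 95 percent confidence intervals described above express the uncertainty that comes from estimating a percentage based on a random sample rather than the full population. The sketch below uses a normal approximation with a finite population correction; the counts are invented for illustration and do not reproduce GAO's stratified estimates.

```python
# A minimal sketch of a 95 percent confidence interval for an estimated
# percentage, using a normal approximation with a finite population
# correction. The numbers below (60 of 103 sampled contracts from a notional
# stratum of 513) are illustrative only, not GAO's data.
import math

def proportion_ci(successes: int, sample_size: int, population_size: int, z: float = 1.96):
    """Return (estimate, lower, upper) for a proportion, expressed in percent."""
    p = successes / sample_size
    fpc = math.sqrt((population_size - sample_size) / (population_size - 1))
    margin = z * math.sqrt(p * (1 - p) / sample_size) * fpc
    return 100 * p, 100 * (p - margin), 100 * (p + margin)

est, low, high = proportion_ci(successes=60, sample_size=103, population_size=513)
print(f"Estimate {est:.0f} percent, 95 percent CI roughly {low:.0f} to {high:.0f} percent")
```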
For the case studies, we interviewed DOD contracting and program officials and reviewed contract documentation, including the acquisition plan, solicitation, and source selection decision memorandum to further understand the source selection decision making process. The results from our review of these selected contracts cannot be generalized beyond the specific contracts selected. During the course of our review, we also interviewed officials from the following commands: Department of the Army, Army Contracting Command, Aberdeen Proving Ground, Maryland; Medical Command, Fort Detrick, Maryland; and Intelligence and Security Command, Fort Belvoir, Virginia Department of the Army, United States Army Corps of Engineers, Washington, D.C., and Huntsville Center, Alabama Department of the Navy, Naval Air Systems Command, Patuxent River, Maryland; Naval Facilities Command, Navy Yard, Washington, D.C.; and Naval Supply Systems Command, Mechanicsburg, Pennsylvania Department of the Navy, United States Marine Corps Installations and Logistics Command, Navy Annex, Virginia; and Marine Corps Systems Command, Quantico, Virginia Department of the Air Force, Installation Contracting Agency and Air Force Materiel Command, both located at Wright-Patterson Air Force Base, Ohio Defense Logistics Agency-Energy, Ft. Belvoir, Virginia; and Defense Logistics Agency-Troop Support, Philadelphia, Pennsylvania Joint Theater Support Contracting Command, Kabul, Afghanistan. To determine what training DOD provides to its acquisition personnel on source selection processes, we met with Defense Acquisition University (DAU) officials and instructors and reviewed training materials. We also obtained attendance and workforce data from the DOD Office of the Under Secretary of Defense (Acquisition, Technology, and Logistics), Human Capital Initiatives. Further, we collected and reviewed military department and command specific training documents to identify whether additional source selection training is given in addition to DAU provided training. We also interviewed DOD policy officials at Defense Procurement and Acquisition Policy and several commands at the military departments, as well as contracting and program personnel at the contracting offices of the selected military departments from the 16 case studies, about the training provided related to source selection processes. We supplemented these case studies with interviews with industry associations to identify their perspectives about DOD’s source selection processes. We conducted this performance audit from September 2013 through July 2014 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Body Armor Vest Acquisitions in Fiscal Year 2013 The Marine Corps, Defense Logistics Agency (DLA), and Army bought similar soft body armor vests made of ballistic material in fiscal year 2013 using different source selection processes. Knowledge of requirements or vendors was a key consideration in each acquisition, but distinct needs led to different decisions about which source selection process to use even when acquiring a similar product. 
The Marine Corps issued one delivery order to purchase soft body armor vests for $2.3 million in fiscal year 2013 using the lowest price technically acceptable (LPTA) process. It issued this order from a multiple award, indefinite delivery indefinite quantity (IDIQ) contract awarded to two vendors in fiscal year 2009 using the LPTA process. The contracting officer told us they chose to use LPTA for the base contract because they consider soft body armor vests to be a commodity product with clearly defined technical performance specifications. Further, the contracting officer, in consultation with the program office, saw no opportunity for tradeoff above industry standard because the industry standard met their current needs. Ongoing research and development showed that any tradeoff for enhanced performance would lead to the armor being heavier, an unacceptable outcome. For the base contracts, the Marine Corps made awards to the second and third lowest priced vendors because the lowest priced vendor was deemed non-responsible. DLA issued 23 delivery orders to purchase soft body armor vests for $288.1 million in fiscal year 2013. It issued these orders from three separate IDIQ contracts awarded to three vendors in fiscal years 2011 and 2012 using the tradeoff process. DLA contracting officials told us that they chose to use the tradeoff process for these contracts because they wanted to use past performance as a key discriminator, which is generally not allowed using the LPTA process. Further, because DLA buys for sustainment purposes and its quantity needs fluctuate, officials told us that past performance was a critical determination factor requiring the use of the tradeoff process, in addition to the vendor’s historic production capacity, delivery schedule, and other performance capabilities. The Army issued one delivery order to purchase soft body armor vests for $10,201 in fiscal year 2013 using the LPTA process. It issued this order from one of the multiple award, IDIQ contracts awarded to eight vendors in fiscal years 2009 and 2010 using the tradeoff process. Army contracting officials told us they chose to use the tradeoff process for the base contract because it provided the Army more discretion in evaluating past performance as well as leaving open the possibility that industry vendors might offer a more innovative solution. Once the Army had a group of qualified vendors on contract, they could then use the LPTA process for subsequent buys. Appendix III: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, the following staff members made key contributions to this report: Molly Traci, Assistant Director; James Kim; Anh Nguyen; Erin Stockdale; Jina Yu; Claire Li; Jessica Drucker; Danielle Greene; Roxanna Sun; John Krump; Mark Ramage; Julia Kennon; Virginia Chanley; and Carol Petersen.
DOD obligated about $310 billion in fiscal year 2013 for products and services needed to support its mission. To competitively acquire what is needed, DOD may use best value processes—including tradeoff and LPTA—to evaluate vendors' proposals. When using the tradeoff process, DOD weighs the relative importance of price against non-cost factors. By contrast, DOD may use the LPTA process and award the contract based on lowest price once technical requirements are met. Congress mandated GAO to review DOD's use of best value processes. GAO identified, among other things, (1) the extent to which DOD used best value processes in fiscal year 2013, (2) the factors DOD considers when choosing a source selection process, and (3) training DOD provides to its acquisition personnel on source selection processes. GAO identified and reviewed solicitations for a projectable sample of 183 contracts out of 2,851 new, competitively awarded contracts that DOD awarded in fiscal year 2013 with obligations over $1 million. GAO also reviewed DOD and military departments' guidance regarding their use of the best value process. GAO selected 16 contracts for case studies based on military department, best value process used, and other factors. GAO reviewed contract documents and interviewed program and contracting officials for these case studies. GAO also reviewed DAU and military departments' training on source selection procedures. DOD provided technical comments that GAO incorporated as appropriate. The Department of Defense (DOD) used two best value processes—tradeoff and lowest price technically acceptable (LPTA)—for approximately 93 percent of the 2,851 new, competitively awarded contracts awarded in fiscal year 2013 with obligations greater than $1 million. DOD used the tradeoff process most often in GAO's sample of contracts to acquire services, regardless of obligation value. For contracts with higher obligations, DOD used the LPTA process primarily to acquire commercial products, such as fuel. In contrast, for contracts in GAO's sample with lower obligations, DOD used the LPTA process to acquire both products and services. Several contracting and program officials said that their commands gave more attention to whether LPTA is an alternative option in light of declining budgets and efficiency initiatives. For contracts with obligations of $25 million or more, GAO found that DOD increased its use of LPTA since GAO last reported on this issue in October 2010 using fiscal year 2009 data. GAO's prior report did not include contracts with lower obligations. (A table comparing these estimates is omitted here. Source: GAO analysis of DOD contract and solicitation documents. The 95 percent confidence intervals for estimates in the table are within +/- 8 percentage points of the estimates themselves.) DOD's ability to clearly define its requirements and its knowledge of potential vendors were key factors that underpinned decisions about whether to use tradeoff or LPTA in GAO's 16 case studies. In the eight case studies in which DOD used LPTA, contracting and program officials generally stated that they had sufficient knowledge of the requirements or vendors to feel confident that the lowest priced vendor, meeting DOD's technical requirements, could deliver the product or service. In contrast, in the eight tradeoff case studies, contracting and program officials were less certain about requirements, were looking for innovative solutions, or wanted to use non-cost factors to differentiate vendors. 
For example, the United States Army Corps of Engineers used technical non-cost factors to evaluate vendors' abilities to use robotics for explosives disposal. These factors are generally consistent with guidance in the Federal Acquisition Regulation and DOD's March 2011 source selection procedures. DOD, through courses offered by the Defense Acquisition University (DAU) and the military departments, provides both classroom and online training related to source selection processes to its acquisition personnel. Both DAU and military department officials stressed, however, the importance of on-the-job training in preparing personnel to make informed source selection determinations. For example, Naval Facilities Engineering Command officials told GAO that determining when requirements are better suited for tradeoff or LPTA is learned through gaining experience from on-the-job training.
GAO_GAO-07-933T
Background Drinking water can come from either groundwater sources, via wells, or from surface water sources, such as rivers, lakes, and streams. All sources of drinking water contain some naturally occurring contaminants. As water flows in streams, sits in lakes, and filters through layers of soil and rock in the ground, it dissolves or absorbs the substances that it touches. Some of these contaminants are harmless, but others, such as improperly disposed-of chemicals, pesticides, and certain naturally occurring substances, can pose a threat to drinking water. Likewise, drinking water that is not properly treated or disinfected, or that travels through an improperly maintained water system, may pose a health risk. However, the presence of contaminants does not necessarily indicate that water poses a health risk—all drinking water may reasonably be expected to contain at least small amounts of some contaminants. As of July 2006, EPA had set standards for approximately 90 contaminants in drinking water that may pose a risk to human health. According to EPA, water that contains small amounts of these contaminants, as long as they are below EPA’s standards, is safe to drink. However, EPA notes that people with severely compromised immune systems and children may be more vulnerable to contaminants in drinking water than the general population. General Information about Camp Lejeune and Its Water Systems Camp Lejeune covers approximately 233 square miles in Onslow County, North Carolina, and includes training schools for infantry, engineers, service support, and medical support, as well as a Naval Hospital and Naval Dental Center. The base has nine family housing areas, and families live in base housing for an average of 2 years. Base housing at Camp Lejeune consists of enlisted family housing, officer family housing, and bachelor housing (barracks for unmarried service personnel). Additionally, schools, day care centers, and administrative offices are located on the base. Approximately 54,000 people currently live and work at Camp Lejeune, including about 43,000 active duty personnel and 11,000 military dependents and civilian employees. In the 1980s, Camp Lejeune obtained its drinking water from as many as eight water systems, which were fed by more than 100 individual wells that pumped water from a freshwater aquifer located approximately 180 feet below the ground. Each of Camp Lejeune’s water systems included wells, a water treatment plant, reservoirs, elevated storage tanks, and distribution lines to provide the treated water to the systems’ respective service areas. Drinking water at Camp Lejeune is produced by combining and treating groundwater from multiple individual wells that are rotated on and off, so that not all wells are providing water to the system at any given time. Water is treated in order to remove minerals and particles and to protect against microbial contamination. (See fig. 1 for a description of how a Camp Lejeune water system operates.) From the 1970s through 1987, Hadnot Point, Tarawa Terrace, and Holcomb Boulevard water systems provided drinking water to most of Camp Lejeune’s housing areas. The water treatment plants for the Hadnot Point and Tarawa Terrace water systems were constructed during the 1940s and 1950s. The water treatment plant for the Holcomb Boulevard water system began operating at Camp Lejeune in 1972; prior to this time, the Hadnot Point water system provided water to the Holcomb Boulevard service area. 
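Because finished water combines flows from several wells that rotate on and off, the concentration of any contaminant at the tap depends on which wells happen to be pumping. The simplified sketch below uses a flow-weighted average with hypothetical pumping rates and concentrations; it is not the water model used for the base or by ATSDR.

```python
# Simplified illustration of how combining water from several wells dilutes or
# concentrates a contaminant in the finished water. The flows and
# concentrations are hypothetical and do not represent ATSDR's water model.
wells = [
    # (well name, pumping rate in gallons per minute, contaminant in micrograms per liter)
    ("Well 1", 200.0, 0.0),
    ("Well 2", 150.0, 40.0),   # hypothetical contaminated well
    ("Well 3", 250.0, 0.0),
]

def blended_concentration(active_wells):
    """Flow-weighted average concentration of the water entering the plant."""
    total_flow = sum(rate for _, rate, _ in active_wells)
    return sum(rate * conc for _, rate, conc in active_wells) / total_flow

print(f"All wells on: {blended_concentration(wells):.1f} micrograms per liter")
print(f"Well 2 off:   {blended_concentration([w for w in wells if w[0] != 'Well 2']):.1f}")
```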
In the 1980s, each of these three systems had between 8 and 35 wells that could provide water to their respective service areas. In 1987 the Tarawa Terrace water treatment plant was shut down and the Holcomb Boulevard water distribution system was expanded to include the Tarawa Terrace water service area. Generally, housing units served by the Tarawa Terrace and Holcomb Boulevard water systems consisted of family housing, which included single- and multifamily homes and housing in trailer parks. Housing units served by the Hadnot Point water system included mainly bachelor housing with limited family housing. Based on available housing data for the late 1970s and the 1980s, the estimated annual averages of the number of people living in family housing units served by these water systems at that time were: 5,814 people in units served by the Tarawa Terrace water system, 6,347 people in units served by the Holcomb Boulevard water system, and 71 people in units served by the Hadnot Point water system. In addition to serving housing units, all three water systems provided water to base administrative offices. The Tarawa Terrace, Holcomb Boulevard, and Hadnot Point water systems also served schools and other recreational areas. Additionally, the Hadnot Point water system also served an industrial area and the base hospital. Department of the Navy Environmental Functions Certain Navy entities provide support functions for Marine Corps bases such as Camp Lejeune. Two entities provide support for environmental issues: The Naval Facilities Engineering Command began providing environmental support for bases in the 1970s. The Naval Facilities Engineering Command, Atlantic Division (LANTDIV) provides environmental support for Navy and Marine Corps bases in the Atlantic and mid-Atlantic regions of the United States. For example, LANTDIV officials work with Camp Lejeune officials to establish environmental cleanup priorities and cost estimates and to allocate funding to ensure compliance with state and federal environmental regulations. The Navy Environmental Health Center (NEHC) has provided environmental and public health consultation services for Navy and Marine Corps environmental cleanup sites since 1991. NEHC is also designated as the technical liaison between Navy and Marine Corps installations and ATSDR and, as a part of this responsibility, reviews and comments on all ATSDR reports written for Navy and Marine Corps sites prior to publication. Prior to 1991, no agency was designated to provide public health consultation services for Navy and Marine Corps sites. In 1980, the Department of the Navy established the Navy Assessment and Control of Installation Pollutants (NACIP) program to identify, assess, and control environmental contamination from past hazardous material storage, transfer, processing, and disposal operations. Under the NACIP program, initial assessment studies were conducted to determine the potential for environmental contamination at Navy and Marines Corps bases. If, as a result of the study, contamination was suspected, a follow- up confirmation study and corrective measures were initiated. In 1986 the Navy replaced its NACIP program with the Installation Restoration Program. The purpose of the Installation Restoration Program is to reduce, in a cost-effective manner, the risk to human health and the environment from past waste disposal operations and hazardous material spills at Navy and Marine Corps bases. 
Cleanup is done in partnership with EPA, state regulatory agencies, and members of the community. Environmental Laws and Regulations Related to Drinking Water Contamination and Hazardous Waste Contamination at Camp Lejeune Congress passed the Safe Drinking Water Act in 1974 to protect the public’s health by regulating the nation’s public drinking water supply. The Safe Drinking Water Act, as amended, is the key federal law protecting public water supplies from harmful contaminants. For example, the act requires that all public water systems conduct routine tests of treated water to ensure that the water is safe to drink. Required water testing frequencies vary and range from weekly testing for some contaminants to testing every 3 years for other contaminants. The act also established a federal-state arrangement in which states may be delegated primary implementation and enforcement authority for the drinking water program. For contaminants that are known or anticipated to occur in public water systems and that EPA determines may have an adverse impact on health, the act requires EPA to set a nonenforceable maximum contaminant level goal, at which no known or anticipated adverse health effects occur and that allows an adequate margin of safety. Once the maximum contaminant level goal is established, EPA sets an enforceable standard for water as it leaves the treatment plant, the maximum contaminant level. A maximum contaminant level is the maximum permissible level of a contaminant in water delivered to any user of a public water system. The maximum contaminant level must be set as close to the goal as is feasible using the best technology or other means available, taking costs into consideration. The North Carolina Department of Environment and Natural Resources and its predecessors have had primary responsibility for implementation of the Safe Drinking Water Act in North Carolina since 1980. In 1979, EPA promulgated final regulations applicable to certain community water systems establishing the maximum contaminant levels for the control of TTHMs, which are a type of VOC that are formed when disinfectants—used to control disease-causing contaminants in drinking water—react with naturally occurring organic matter in water. The regulations required that water systems that served more than 10,000 people and that added a disinfectant as part of the drinking water treatment process begin mandatory water testing for TTHMs by November 1982 and comply with the maximum contaminant level by November 1983. TCE and PCE were not among the contaminants included in these regulations. In 1979 and 1980, EPA issued nonenforceable guidance establishing “suggested no adverse response levels” for TCE and PCE in drinking water and in 1980 issued “suggested action guidance” for PCE in drinking water. Suggested no adverse response levels provided EPA’s estimate of the short- and long-term exposure to TCE and PCE in drinking water for which no adverse response would be observed and described the known information about possible health risks for these chemicals. Suggested action guidance recommended remedial actions within certain time periods when concentrations of contaminants exceeded specific levels. Suggested action guidance was issued for PCE related to drinking water contamination from coated asbestos-cement pipes, which were used in water distribution lines. 
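Once an enforceable maximum contaminant level is in place, judging a sample result is a direct comparison of the measured concentration against that limit. The sketch below uses the federal maximum contaminant levels eventually established for TCE and PCE, 5 micrograms per liter each; the sample results shown are invented for illustration.

```python
# Minimal sketch of comparing measured drinking water results to enforceable
# maximum contaminant levels (MCLs). The MCLs shown are the federal limits
# eventually set for TCE and PCE (5 micrograms per liter each); the sample
# results are invented for illustration.
MCL_MICROGRAMS_PER_LITER = {"TCE": 5.0, "PCE": 5.0}

samples = [
    {"location": "Treatment plant tap", "TCE": 1.2, "PCE": 0.4},   # hypothetical
    {"location": "Distribution sample", "TCE": 18.0, "PCE": 7.5},  # hypothetical
]

for sample in samples:
    for chemical, limit in MCL_MICROGRAMS_PER_LITER.items():
        measured = sample[chemical]
        status = "exceeds" if measured > limit else "meets"
        print(f"{sample['location']}: {chemical} {measured} ug/L {status} the {limit} ug/L MCL")
```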
The initial regulation of TCE and PCE under the Safe Drinking Water Act began in 1989 and 1992, respectively, when maximum contaminant levels became effective for these contaminants. (See table 1 for the suggested no adverse response levels, suggested action guidance, and maximum contaminant level regulations for TCE and PCE.) The Comprehensive Environmental Response, Compensation, and Liability Act (CERCLA) of 1980 established what is known as the Superfund program to clean up highly contaminated waste sites and address the threats that these sites pose to human health and the environment, and assigned responsibility to EPA for administering the program. CERCLA was amended by the Superfund Amendments and Reauthorization Act (SARA) of 1986. Among other things, SARA requires that federal agencies, including DOD, that own or operate facilities on EPA’s CERCLA list of seriously contaminated sites, known as the National Priorities List, enter into an interagency agreement with EPA. The agreement is to specify what cleanup activities, if any, are required and to set priorities for carrying out those activities. SARA also established the Defense Environmental Restoration Program, through which DOD conducts environmental cleanup activities at military installations. Under the environmental restoration program, DOD’s activities addressing hazardous substances, pollutants, or contaminants are required to be carried out consistent with the provisions of CERCLA governing environmental cleanups at federal facilities. Based on environmental contamination at various areas on the base, Camp Lejeune was designated as a National Priorities List site in 1989. EPA, the Department of the Navy, and the state of North Carolina entered into a Federal Facilities Agreement concerning cleanup of Camp Lejeune with an effective date of March 1, 1991. ATSDR’s Assessment of the Adverse Health Effects of Hazardous Substances at DOD Superfund Sites ATSDR was created by CERCLA and established within the Public Health Service of HHS in April 1983 to carry out Superfund’s health-related activities. These activities include conducting health studies, laboratory projects, and chemical testing to determine relationships between exposure to toxic substances and illness. In 1986, SARA expanded ATSDR’s responsibilities to include, among other things, conducting public health assessments, toxicological databases, information dissemination, and medical education. SARA requires that ATSDR conduct a public health assessment at each site proposed for or on the National Priorities List, and that ATSDR conduct additional follow-up health studies if needed. Potentially responsible parties, including federal agencies, are liable for the costs of any health assessment or health effects study carried out by ATSDR. SARA requires that ATSDR and DOD enter into a memorandum of understanding to set forth the authorities, responsibilities, and procedures between DOD and ATSDR for conducting public health activities at DOD Superfund sites. Based on the memorandum of understanding signed between ATSDR and DOD, ATSDR is required to submit an annual plan of work to DOD, in which it must describe the public health activities it plans to conduct at DOD sites in the following fiscal year, as well as the amount of funding required to conduct these activities. After the annual plan of work has been submitted, DOD has 45 days to respond and negotiate the scope of work to be conducted by ATSDR. 
The memorandum of understanding states that DOD must seek sufficient funding through the DOD budgetary process to carry out the work agreed upon. From 1991 to 1997, ATSDR conducted a public health assessment at Camp Lejeune that was required by law because of the base’s listing on the National Priorities List. The health assessment evaluated several ways in which people on base had been exposed to hazardous substances, including exposure to the VOC-contaminated drinking water. In its 1997 report, ATSDR recommended that a study be carried out to evaluate the risks of childhood cancer in those who were exposed in utero to the contaminated drinking water and also noted that adverse pregnancy outcomes were of concern. In 1995, while the health assessment was being conducted, ATSDR initiated a study to determine whether there was an association between exposure to VOCs in drinking water and specific adverse pregnancy outcomes among women who had lived at Camp Lejeune from 1968 through 1985. The study, released in 1998, originally concluded that there was a statistically significant elevated risk for several poor pregnancy outcomes, including (1) small for gestational age among male infants born to mothers living at Hadnot Point, (2) small for gestational age for infants born to mothers over 35 years old living at Tarawa Terrace, and (3) small for gestational age for infants born to mothers with two or more prior fetal losses living at Tarawa Terrace. However, ATSDR officials said they are reanalyzing the findings of this study because of an error in the original assessment of exposure to VOCs in drinking water. While the study originally assessed births from 1968 to 1972 in the Holcomb Boulevard service area as being unexposed to VOCs, these births were exposed to contaminants from the Hadnot Point water system. An ATSDR official said the reanalysis may alter the study’s results. In 1999, ATSDR initiated its current study examining whether certain birth defects and childhood cancers are associated with exposure to TCE or PCE at Camp Lejeune. The study examines whether individuals born during 1968 through 1985 to mothers who were exposed to the contaminated drinking water at any time while they were pregnant and living at Camp Lejeune were more likely than those who were not exposed to have neural tube defects, oral cleft defects, or childhood hematopoietic cancers. The current study began with a survey to identify potential cases of the selected birth defects and childhood cancers. The study is also using water modeling to help ATSDR determine the potential sources of past contamination and estimate when the water became contaminated and which housing units received the contaminated water. The water modeling data will help ATSDR identify which pregnant women may have been exposed to the contaminated water, and will also help ATSDR estimate the amount of TCE and PCE that may have been in the drinking water. ATSDR officials said that the study is expected to be completed by December 2007. Possible Adverse Health Effects of TCE and PCE According to ATSDR’s Toxicological Profile, inhaling small amounts of TCE may cause headaches, lung irritation, poor coordination, and difficulty concentrating, and inhaling or drinking liquids containing high levels of TCE may cause nervous system effects, liver and lung damage, abnormal heartbeat, coma, or possibly death. 
ATSDR also notes that some animal studies suggest that high levels of TCE may cause liver, kidney, or lung cancer, and some studies of people exposed over long periods to high levels of TCE in drinking water or workplace air have shown an increased risk of cancer. ATSDR's Toxicological Profile notes that the National Toxicology Program has determined that TCE is reasonably anticipated to be a human carcinogen and the International Agency for Research on Cancer has determined that TCE is probably carcinogenic to humans. In contrast, according to ATSDR, the health effects of inhaling or drinking liquids containing low levels of PCE are unknown. However, ATSDR reports that exposure to very high concentrations of PCE may cause dizziness, headaches, sleepiness, confusion, nausea, difficulty in speaking and walking, unconsciousness, or death. HHS has determined that PCE may reasonably be anticipated to be a carcinogen. Efforts to Identify and Address Past Drinking Water Contamination at Camp Lejeune Began in the 1980s and Continue with Long-term Cleanup and Monitoring Efforts to identify and address past drinking water contamination at Camp Lejeune began in the 1980s, when the Navy initiated water testing at Camp Lejeune. In 1980, one water test identified the presence of VOCs and a separate test indicated contamination by unidentified chemicals. In 1982 and 1983, water monitoring for TTHMs by a laboratory contracted by Camp Lejeune led to the identification of TCE and PCE as the contaminants in two water systems at Camp Lejeune. Sampling results indicated that the levels of TCE and PCE varied. Former Camp Lejeune environmental officials said they did not take additional steps to address the contamination after TCE and PCE were identified. The former officials recalled that they did not take additional steps because at that time they had little knowledge of TCE and PCE, there were no regulations establishing enforceable limits for these chemicals in drinking water, and variations in water testing results raised questions about the tests' validity. In 1984 and 1985, the NACIP program identified VOCs, including TCE and PCE, in 12 of the wells serving the Hadnot Point and Tarawa Terrace water systems. Camp Lejeune officials removed 10 wells from service in 1984 and 1985. Additionally, information about the contamination was provided to residents. Upon investigating the contamination, DOD and North Carolina officials concluded that both on- and off-base sources were likely to have caused the contamination in the Hadnot Point and Tarawa Terrace water systems. Since 1989, federal, state, and Camp Lejeune officials have partnered to take actions to clean up the sources of contamination and to monitor and protect the base's drinking water. Navy Water Testing Beginning in 1980 Identified VOCs in Camp Lejeune Water Systems The presence of VOCs in Camp Lejeune water systems was first detected in October 1980. On October 1, 1980, samples of water were collected from all eight water systems at Camp Lejeune by an official from LANTDIV, a Navy entity that provided environmental support to Camp Lejeune. The water samples were combined into a single sample, and a "priority pollutant scan" was conducted in order to detect possible contaminants in the water systems. 
The results of this analysis, conducted by a Navy-contracted private laboratory and sent to LANTDIV, identified 11 VOCs, including TCE, at their detection limits, that is, the lowest level at which the chemicals could be reliably identified by the instruments being used. Separately, in 1980 the Navy began monitoring programs for TTHMs at various Navy and Marine Corps bases, including Camp Lejeune, in preparation for meeting a future EPA drinking water regulation. LANTDIV arranged for an Army laboratory to begin testing the treated water from two Camp Lejeune water systems, Hadnot Point and New River, in October 1980. At that time, these two water systems were the only ones that served more than 10,000 people and therefore would be required to meet the future TTHM regulation. From October 1980 to September 1981, eight samples were collected from the Hadnot Point water system and analyzed for TTHMs. Results from four of the eight samples indicated the presence of unidentified chemicals that were interfering with the TTHM analyses. Reports for each of the four analyses contained an Army laboratory official's handwritten notes about the unidentified chemicals: two of the notes classified the water as "highly contaminated" and notes for the other two analyses recommended analyzing the water for organic compounds. The exact date when LANTDIV officials began receiving results from TTHM testing is not known, and LANTDIV officials told us that they had no recollection of how or when the results were communicated from the Army laboratory. Available Marine Corps documents indicate that Camp Lejeune environmental officials learned in July 1981 that LANTDIV had been receiving the results of TTHM testing and was holding the results until all planned testing was complete. Subsequently, Camp Lejeune environmental officials requested copies of the TTHM results that LANTDIV had received to date, and LANTDIV provided these results in August 1981. The next documented correspondence from LANTDIV to Camp Lejeune regarding TTHM monitoring occurred in a February 1982 memorandum in which LANTDIV recommended that TTHM monitoring be expanded to all of Camp Lejeune's water systems and noted that Camp Lejeune should contract with a North Carolina state-certified laboratory for the testing. Current and former LANTDIV officials recalled that their agency played a limited role in providing information or guidance regarding environmental issues at Camp Lejeune, and that this assistance generally would have been at the request of Camp Lejeune officials. However, former Camp Lejeune environmental officials recalled that at that time they had little experience in water quality issues and relied on LANTDIV to serve as their environmental experts. Further Tests Identified TCE and PCE in Two Camp Lejeune Water Systems in 1982 and 1983; Camp Lejeune Officials Do Not Recall Taking Action to Address the Contamination at That Time Following LANTDIV's recommendation to expand TTHM monitoring to all base water systems, Camp Lejeune officials contracted with a private state-certified laboratory to test samples of treated water from all eight of their water systems. According to an August 1982 memorandum, in May 1982 a Camp Lejeune official was informed during a telephone conversation with a private laboratory official that organic cleaning solvents, including TCE, were present in the water samples for TTHM monitoring from the Hadnot Point and Tarawa Terrace water systems. 
In July 1982, additional water samples from the two systems were collected in an effort to investigate the presence of these chemicals. In August 1982 the contracted laboratory sent a letter to base officials informing them that TCE and PCE were identified as the contaminants in the May and July samples. According to the letter, the testing determined that the Hadnot Point water system was contaminated with both TCE and PCE and the Tarawa Terrace water system was contaminated with PCE. The letter also noted that TCE and PCE “appeared to be at high levels” and were “more important from a health standpoint” than the TTHM monitoring. Sampling results indicated that the levels of TCE and PCE varied. The letter noted that one sample taken in May 1982 from the Hadnot Point water system contained TCE at 1,400 parts per billion and two samples taken in July 1982 contained TCE at 19 and 21 parts per billion. Four samples taken in May 1982 and July 1982 from the Tarawa Terrace water system contained levels of PCE that ranged from 76 to 104 parts per billion. (See table 2 for the May and July 1982 sampling results.) Former Camp Lejeune environmental officials recalled that after the private laboratory identified the TCE and PCE in the two water systems, they did not take additional steps to address the contamination for three reasons. First, they had limited knowledge of these chemicals; second, there were no regulations establishing enforceable limits for these chemicals in drinking water; and third, they made assumptions about why the levels of TCE and PCE varied and about the possible sources of the TCE and PCE. The former Camp Lejeune environmental officials told us that they were aware of EPA guidance, referred to as “suggested no adverse response levels,” for TCE and PCE when these contaminants were identified at Camp Lejeune. However, they noted that the levels of these contaminants detected at Camp Lejeune generally were below those outlined in the guidance. One Camp Lejeune environmental official also recalled that at the time they were unsure what the health effects would be for the lower amounts detected at the base. Additionally, in an August 1982 document and during our interviews with current Camp Lejeune environmental officials, it was noted that EPA had not issued regulations under the Safe Drinking Water Act for TCE and PCE when the private laboratory identified these chemicals in the drinking water. The former Camp Lejeune environmental officials also said that they made assumptions about why the levels of TCE and PCE varied. For example, they attributed the higher levels to short-term environmental exposures, such as spilled paint inside a water treatment plant, or to laboratory or sampling errors. Additionally, in an August 1982 memorandum, a Camp Lejeune environmental official suggested that based on the sampling results provided by the private laboratory, the levels of PCE detected could be the result of using coated pipes in the untreated water lines at Tarawa Terrace. The former Camp Lejeune environmental officials told us that in retrospect, it was likely that well rotation in these water systems contributed to the varying sampling results because the contaminated wells may not have been providing water to the Hadnot Point and Tarawa Terrace systems at any given time. However, both they and current Camp Lejeune environmental officials said that at that time the base environmental staff did not know that the wells serving both systems were rotated. 
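For a sense of scale, the short calculation below restates the May and July 1982 sampling results cited above and expresses each as a multiple of 5 parts per billion, the maximum contaminant level that EPA later established for PCE (and, under EPA's drinking water regulations, for TCE as well). This is an illustrative comparison only; as the former officials noted, no enforceable federal limits for these chemicals existed when the samples were taken, and the comparison is not part of the original analysis.

```python
# Illustrative restatement of the May and July 1982 sampling results cited above,
# expressed as multiples of the 5 parts per billion (ppb) maximum contaminant
# level (MCL) that EPA later established. No enforceable limit existed in 1982.

LATER_MCL_PPB = 5.0  # EPA maximum contaminant level for TCE and PCE, in ppb

samples_1982 = {
    "Hadnot Point, TCE, May 1982": 1400.0,
    "Hadnot Point, TCE, July 1982 (first sample)": 19.0,
    "Hadnot Point, TCE, July 1982 (second sample)": 21.0,
    "Tarawa Terrace, PCE, May-July 1982 (lowest)": 76.0,
    "Tarawa Terrace, PCE, May-July 1982 (highest)": 104.0,
}

for label, level_ppb in samples_1982.items():
    multiple = level_ppb / LATER_MCL_PPB
    print(f"{label}: {level_ppb:g} ppb, about {multiple:.0f} times the later 5 ppb MCL")
```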
After August 1982, the private laboratory continued to communicate with Camp Lejeune officials about the contamination of treated water from the Hadnot Point and Tarawa Terrace water systems. All eight of Camp Lejeune's water systems were sampled again for TTHMs in November 1982. In a December 1982 memorandum, a Camp Lejeune environmental official noted that during a phone conversation, a chemist from the private laboratory expressed concern that TCE and PCE were interfering with Tarawa Terrace and Hadnot Point TTHM samples. The chemist said the levels of TCE and PCE were "relatively high" in the November 1982 samples, though the specific levels of TCE and PCE were not provided to Camp Lejeune officials. The private laboratory report providing the November 1982 results said that the samples from Tarawa Terrace "show contamination" from PCE and the samples from Hadnot Point "show contamination" from both TCE and PCE. All eight of Camp Lejeune's water systems were sampled again for TTHMs in August 1983, and the private laboratory report providing these results said that the samples from Tarawa Terrace "show contamination" from PCE and the samples from Hadnot Point "show contamination" from both TCE and PCE. Former Camp Lejeune environmental officials recalled that they did not take any actions related to these findings. Discovery of Contamination in Individual Wells in 1984 and 1985 Prompted Their Removal from Service, and Information Was Provided to Residents and the Media In 1982, Navy officials initiated the NACIP program at Camp Lejeune with an initial assessment study, which was designed to collect and evaluate evidence that indicated the existence of pollutants that may have contaminated a site or that posed a potential health hazard for people located on or off a military installation. The initial assessment study determined that further investigation was warranted at 22 priority sites and a confirmation study to investigate these sites was initiated in July 1984. As a part of the confirmation study, a Navy contractor took water samples from water supply wells located near priority sites where groundwater contamination was suspected. Current and former Camp Lejeune officials told us that previous water samples usually had been collected from treated water at sites such as reservoirs or buildings within the water systems rather than being collected directly from individual wells at Camp Lejeune. In November 1984, Camp Lejeune officials received sampling results for one Hadnot Point well located near a priority site, which showed that TCE and PCE, among other VOCs, were detected in the well. This well was removed from service, and in December 1984, water samples from six Hadnot Point wells that were located in the same general area and treated water samples from the Hadnot Point water plant were also tested. Results of the analysis of the well samples indicated that both TCE and PCE were detected in one well, TCE was detected in two additional wells, and other VOCs were detected in all six wells. Results for the treated water samples also detected TCE and PCE. Four of these six wells were removed from service, in addition to the well that had been removed in November 1984. For the two wells that remained in service, initial results indicated levels of VOCs, including TCE, but other test results showed no detectable levels of VOCs. Documents we reviewed show that continued monitoring of those two wells indicated no detectable levels of TCE. 
During December 1984, seven additional samples were taken from the treated water at Hadnot Point water plant and revealed no detectable levels of TCE and PCE. According to two former Camp Lejeune environmental officials, once the wells had been taken out of service and the samples from the water plant no longer showed detectable levels of TCE or PCE, they believed the water from the Hadnot Point water system was no longer contaminated. Although the December 1984 testing of water from the Hadnot Point water system showed no detectable levels of TCE or PCE, in mid-January 1985 Camp Lejeune environmental staff began collecting water samples from all wells on the base. Sampling results were received in February 1985 and detected VOCs, including TCE and PCE, in 3 wells serving the Hadnot Point water system and 2 wells serving the Tarawa Terrace water system. As a result, those 5 wells were removed from service. According to current Camp Lejeune officials, all 10 wells had been removed from service by February 8, 1985. According to memoranda dated March 1985 and May 1985, 1 of the 2 wells removed from service at Tarawa Terrace was used on 1 day in March 1985 and on 3 days in April 1985 for short periods of time to meet water needs at the base. See table 3 for the dates that wells were removed from service and for the levels of TCE and PCE that were detected in the wells prior to their removal from service in 1984 and 1985. See app. I for the levels of all VOCs that were detected in the wells prior to their removal from service in 1984 and 1985. In addition, while base officials were waiting for the results of the January 1985 sampling of wells serving Hadnot Point, water from this system was provided to a third water system for about 2 weeks. In late January 1985, a fuel line break caused gasoline to leak into the Holcomb Boulevard water treatment plant. During the approximately 2-week period the treatment plant was shut down, water from the Hadnot Point system was pumped into the Holcomb Boulevard water lines. Former Camp Lejeune environmental officials said that they used water from the Hadnot Point water system because it was the only water system interconnected with the Holcomb Boulevard water system, and because they believed the water from the Hadnot Point water system was no longer contaminated. Prior to restarting the Holcomb Boulevard water system, samples of treated water were tested and no gasoline was detected in any of these samples. However, the samples were found to contain various levels of TCE; these results were attributed to the use of water from the Hadnot Point water system. About 5 days after these samples were taken, the Holcomb Boulevard water system was restarted because the fuel line had been repaired. Residents were also notified about the well closures. A notice to residents served by the Tarawa Terrace water system stated: "Two of the wells that supply Tarawa Terrace have had to be taken off line because minute (trace) amounts of several organic chemicals have been detected in the water. There are no definitive State or Federal regulations regarding a safe level of these compounds, but as a precaution, I have ordered the closure of these wells for all but emergency situations when fire protection or domestic supply would be threatened." The notice asked residents to reduce water use until early June, when the construction of a new water line was to be completed. 
In May 1985, another article in the base newspaper stated the number of wells that had been removed from service, explained why they were removed, and noted the potential for a water shortage at Tarawa Terrace as a result. In addition, the Marine Corps provided us with copies of three North Carolina newspaper articles published from May 1985 to September 1985 discussing contamination at Camp Lejeune. All three articles included information about the drinking water contamination and noted that 10 wells serving two water treatment systems at Camp Lejeune had been removed from service. Past Contamination Was Estimated to Have Originated from Both On-base and Off-base Sources, and Cleanup Activities at These Sources Are Under Way The sources of past contamination for the Hadnot Point water system have not been conclusively determined. However, DOD officials have estimated that eight contaminated on-base sites in the proximity of the Hadnot Point water system may be the sources of contamination for that water system. These eight sites were contaminated by leaking underground storage tanks containing fuel, by degreasing solvents, by hazardous chemical spills, and by other waste disposal practices. Efforts by ATSDR are ongoing to conclusively determine the sources of past contamination in the Hadnot Point water system, as well as when the contamination began. For the Tarawa Terrace water system, North Carolina officials determined that the contamination likely came from a dry cleaning solvent that had been released into a leaking septic tank at an off-base dry cleaning facility, ABC One Hour Cleaners, which built its septic system and began operation in 1954. Both the dry cleaning facility and its septic tank were located off base but adjacent to a supply well for the Tarawa Terrace water system. Based on the environmental contamination at this site, ABC One Hour Cleaners was designated as a National Priorities List site in 1989. As part of its current health study, ATSDR has estimated that, beginning as early as 1957, individuals were exposed to PCE in treated drinking water at levels equal to or greater than 5 parts per billion, the maximum contaminant level that EPA later established effective in 1992. Since 1989, officials from Camp Lejeune, North Carolina, and federal agencies, including EPA, have taken actions to clean up the suspected sources of the contamination in the Hadnot Point and Tarawa Terrace water systems. Because the contamination is thought to have come from both on- and off-base sources, and because those sources are part of two separate National Priorities List sites, Camp Lejeune and ABC One Hour Cleaners, cleanup activities for the suspected sources of contamination are being managed separately. Cleanup activities have included the removal of contaminated soils and gasoline storage tanks and the treatment of contaminated groundwater and soils. Although ATSDR Did Not Always Receive Requested Funding and Experienced Delays in Receiving Information from DOD, Officials Said Their Work Has Not Been Significantly Delayed Since ATSDR began its Camp Lejeune-related work in 1991, the agency did not always receive requested funding and experienced delays in receiving information from DOD entities. Although concerns have been raised by former Camp Lejeune residents, ATSDR officials said these issues have not significantly delayed its work and that such situations are normal during the course of a study. 
Funding of ATSDR's Camp Lejeune Work ATSDR received funding from DOD for 13 of the 16 fiscal years during which it has conducted its Camp Lejeune-related work, and ATSDR provided its own funding for Camp Lejeune-related work during the other 3 years. Under federal law and in accordance with a memorandum of understanding between DOD and ATSDR, DOD is responsible for funding public health assessments and any follow-up public health activities, such as health studies or toxicological profiles related to DOD sites as agreed to in an annual plan of work. For fiscal year 1997, funding for ATSDR's Camp Lejeune-related work came from the Navy. From fiscal year 1998 through fiscal year 2000, no funding was provided to ATSDR by the Navy or any DOD entity for its Camp Lejeune-related work because the agencies could not reach agreement about the funding for Camp Lejeune. In June 1997, ATSDR proposed conducting a study of childhood leukemia and birth defects associated with TCE and PCE exposure at Camp Lejeune during fiscal years 1998 and 1999 at an estimated cost of almost $1.8 million. In a July 1997 letter to the Navy, an ATSDR official noted that during a June meeting the Navy appeared to be reluctant to fund the proposed study; however, the official noted that DOD was liable for the costs of the study under federal law. In an October 1997 letter responding to ATSDR, a senior Navy official stated that the Navy did not believe it should be required to fund ATSDR's proposed study because the cause of the contamination was an off-base source, ABC One Hour Cleaners. The Navy official said that it was more appropriate for ATSDR to seek funding for the study from the responsible party that caused the contamination. However, ATSDR officials told us that while they expected that the study would focus primarily on contamination from the dry cleaner, the study was also expected to include people who were exposed to on-base sources of contamination. An ATSDR official reported that the agency submitted its funding proposals for the Camp Lejeune study to DOD in each of the annual plans of work from fiscal year 1998 to fiscal year 2000, but that during that time period the agency received no DOD funding and funded its Camp Lejeune-related work from general ATSDR funding. In fiscal year 2001 the Navy resumed funding of ATSDR's Camp Lejeune-related work. We could not determine why the Navy decided to resume funding of ATSDR's work at that time. Since fiscal year 2003, funding for ATSDR's Camp Lejeune-related work has been provided by the Marine Corps. According to a DOD official, the Marine Corps has committed to funding the current ATSDR study. The DOD official also noted that per a supplemental budget request from ATSDR for fiscal year 2006, the Marine Corps agreed to fund community assistance panel meetings and portions of a feasibility assessment for future studies that will include computerization of Camp Lejeune housing records. Provision of Information to ATSDR by DOD ATSDR has experienced some difficulties obtaining information from Camp Lejeune and DOD officials. For example, while conducting its public health assessment in September 1994, ATSDR sent a letter to the Department of the Navy noting that ATSDR had had difficulties getting documents needed for the public health assessment from Camp Lejeune, such as Remedial Investigation documents for Camp Lejeune. 
The letter also noted that ATSDR had sent several requests for information and that Camp Lejeune's responses had in most cases been inadequate, with no supporting documentation forwarded. ATSDR also had difficulty in obtaining access to DOD records while preparing to conduct its survey, the first phase of the current ATSDR health study. In October 1998, ATSDR requested assistance from the Defense Manpower Data Center, which maintains archives of DOD data, in locating residents of Camp Lejeune who gave birth between 1968 and 1985 on or off base. An official at the Defense Manpower Data Center initially did not provide the requested information because he believed that doing so could constitute a violation of the Privacy Act. Between February and April 1999, Headquarters Marine Corps facilitated discussion between ATSDR and relevant DOD entities about these Privacy Act concerns, and some information was subsequently provided to ATSDR by DOD. In April 2001, Headquarters Marine Corps sent a letter to the Defense Privacy Office suggesting that the Defense Manpower Data Center had only provided a limited amount of information to ATSDR. However, in a July 2001 reply to Headquarters Marine Corps, the Defense Privacy Office noted that it believed that relevant data had been provided to ATSDR by the Defense Manpower Data Center in 1999 and 2001. In December 2005, ATSDR officials told us that they had recently learned of a substantial number of additional documents that had not been previously provided to them by Camp Lejeune officials. ATSDR then sent a letter to Headquarters Marine Corps seeking assistance in resolving outstanding issues related to delays in the provision of information and data to ATSDR. In an attachment to the letter, ATSDR provided a list of data and information needed from the Marine Corps in order to complete water modeling activities for its current study. In a January 2006 response, a Headquarters Marine Corps official noted that a comprehensive review was conducted of responses to ATSDR's requests for information and that the Marine Corps believed it had made a full and timely disclosure of all known and available requested documents. The official also noted that while ATSDR had requested that the Marine Corps identify and provide documents that were relevant or useful to ATSDR's study, the Marine Corps did not always have the subject matter expertise to determine the relevance of documents. The official noted that the Marine Corps would attempt to comply with this request; however, the official also noted that ATSDR was the agency with the expertise necessary to determine the relevance of documents. Effect on ATSDR's Work Despite difficulties, ATSDR officials said the agency's Camp Lejeune-related work had not been significantly delayed or hindered by DOD. Officials said that while funding and access to records were probably slowed down and made more expensive by DOD officials' actions, their actions did not significantly impede ATSDR's health study efforts. The ATSDR officials also stated that while issues such as limitations in access to DOD data had to be addressed, such situations are normal during the course of a study. The officials stated that ATSDR's progress on the study has been reasonable in light of the complexity of the project. Nonetheless, as some former residents have learned that ATSDR has not always received requested funding and information from DOD entities, they have raised questions about DOD's commitment to supporting ATSDR's work. 
For example, when some former residents learned during a community assistance panel meeting that it took about 4 months for DOD to respond to a supplemental budget request from ATSDR for fiscal year 2006, they questioned DOD entities' commitment to ATSDR's Camp Lejeune-related work. However, DOD and ATSDR officials described this delay in responding as typical during the funding process. Experts Convened by NAS Generally Agreed That Many Parameters of ATSDR's Current Study Were Appropriate The seven members of an expert panel convened by NAS at our request generally agreed that specific parameters of ATSDR's current study were appropriate, including the study population, the exposure time frame, and the selected health effects. The expert panel members had mixed opinions on ATSDR's projected completion date. Study Population The seven panel experts concurred that ATSDR logically limited its study population to those individuals who were in utero while their mothers lived at Camp Lejeune during the 1968 through 1985 time frame and who may have been exposed to the contaminated drinking water. The current study follows recommendations from the agency's 1997 public health assessment of Camp Lejeune, which noted that studies of cancer among those who were exposed in utero should be conducted to further the understanding of the health effects in this susceptible population. Panel experts said that ideally a study would attempt to include all individuals who were potentially exposed, but that limited resources and data availability were practical reasons for limiting the study population. Additionally, panel experts agreed that those exposed while in utero were an appropriate study population because they could be considered at higher risk of adverse health outcomes than others, such as those exposed as children or adults. In addition, two panel experts said that studying only those who lived on base was reasonable because they likely had a higher risk of inhalation exposure to VOCs such as TCE and PCE, and inhalation exposure may be more potent than ingestion exposure. Thus, pregnant women who lived in areas of base housing with contaminated water and conducted activities during which they could inhale water vapor, such as bathing, showering, or washing dishes or clothing, likely faced greater exposure than those who did not live on base but worked on base in areas served by the contaminated drinking water. Study Time Frame The seven panel experts agreed that the 1968 through 1985 study time frame was reasonable, based on limitations in data availability. This time frame was adopted from ATSDR's 1998 study of adverse pregnancy outcomes, which limited the study population to include those potentially exposed between 1968 and 1985. According to ATSDR's study protocol, these years were chosen because 1968 was the first year that birth certificates were computerized in North Carolina and 1985 was when the affected water wells were removed from service. Four of the panel experts said they did not see any benefit in using an earlier start date than 1968 because collecting birth records from before 1968 could require a significant amount of resources. In addition, while the initial exposure to contaminated drinking water may have occurred as early as the 1950s, at the time the ATSDR study time frame was selected, officials were unable to determine precisely when the contamination began. 
Four of the panel experts commented that exposure was likely highest in the latter part of the study time frame, presumably, they said, as a result of a higher accumulated level of contamination over time, thus making the uncertainty of when the contamination began less significant and supporting ATSDR's decision to study the later time frame. Study Health Effects The five panel experts who discussed health effects said that those selected for the study were valid for individuals who were potentially exposed in utero at Camp Lejeune. Based on previous ATSDR work and existing literature, the health effects chosen for the study were neural tube defects, oral cleft defects, and childhood hematopoietic cancers, including leukemia and non-Hodgkin's lymphoma. Two panel experts said that ATSDR had limited its study to health effects that are rare and that generally occur at higher levels of exposure to VOCs such as TCE and PCE than are expected to have occurred at Camp Lejeune. They said that this may result in ATSDR not identifying enough individuals with these health effects to determine meaningful results in the study. Study Completion Date ATSDR has projected a December 2007 completion date for the study, which would include activities such as identifying and enrolling study participants, conducting a parental interview, confirming each reported diagnosis, modeling the water system to quantify the amount and extent of each individual's exposure, analyzing the data, and drafting a final report. Panel experts had mixed opinions regarding ATSDR's completion date. Of the five panel experts who commented on the proposed completion date, three said that the date appeared reasonable, and two others said that based on the complexity of the water modeling the projected completion date might be optimistic. Mr. Chairman, this completes my prepared statement. I would be happy to respond to any question you or other Members of the Subcommittee may have at this time. Contacts and Acknowledgments For further information about this testimony, please contact Marcia Crosse at (202) 512-7119 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this testimony. Bonnie Anderson, Assistant Director; Karen Doran, Assistant Director; Danielle Organek; and Christina Ritchie made key contributions to this testimony. Appendix I: Volatile Organic Compounds Detected in Wells at Hadnot Point and Tarawa Terrace Water Systems Concentrations of chemicals are reported in parts per billion. The levels shown were detected in 1984 and 1985 and are one-time sampling results. We did not find documentation that tied the decision to remove the wells from service to any particular level of contamination included in related Environmental Protection Agency (EPA) guidance or enforceable regulation. Notes accompanying the table describe several of the compounds detected in the wells. Benzene is formed from both natural processes and human activities. Some industries use benzene to make other chemicals which are used to make plastics, resins, and nylon and synthetic fibers. Benzene is also a natural part of crude oil, gasoline, and cigarette smoke. Breathing benzene can cause drowsiness, dizziness, and unconsciousness; long-term benzene exposure causes effects on the bone marrow and can cause anemia and leukemia. The Department of Health and Human Services (HHS) has determined that benzene is a known carcinogen. 
Trans-1,2-DCE is a liquid used as a solvent for waxes and resins; in the extraction of rubber; as a refrigerant; in the manufacture of pharmaceuticals and artificial pearls; in the extraction of oils and fats from fish and meat; and in making other organics. EPA has found trans-1,2-DCE to potentially cause central nervous system depression when people are exposed to it at levels above 100 parts per billion for relatively short periods of time. Trans-1,2-DCE has the potential to cause liver, circulatory, and nervous system damage from long-term exposure at levels above 100 parts per billion. 1,1-DCE is a liquid with a mild, sweet, chloroform-like odor. Virtually all of it is used in making adhesives, synthetic fibers, refrigerants, food packaging, and coating resins. EPA has found 1,1-DCE to potentially cause liver damage when people are exposed to it at levels above 7 parts per billion for relatively short periods of time. In addition, 1,1-DCE has the potential to cause liver and kidney damage as well as toxicity to the developing fetus and cancer from a lifetime exposure at levels above 7 parts per billion. Skin contact with methylene chloride can cause burns. HHS has determined that methylene chloride can be reasonably anticipated to be a cancer-causing chemical. Toluene is a liquid which occurs naturally in crude oil and in the tolu tree. It is also produced in the process of making gasoline and other fuels from crude oil and making coke from coal. Toluene may affect the nervous system. Low to moderate levels can cause tiredness, confusion, weakness, drunken-type actions, memory loss, nausea, loss of appetite, and hearing and color vision loss. Inhaling high levels of toluene in a short time can result in feelings of light-headedness, dizziness, or sleepiness. It can also cause unconsciousness, and even death. High levels of toluene may affect the kidneys. Studies in humans and animals generally indicate that toluene does not cause cancer. Vinyl chloride is a substance that does not occur naturally. It can be formed when other substances such as trichloroethane, TCE, and PCE are broken down. Breathing high levels of vinyl chloride for short periods of time can cause dizziness, sleepiness, and unconsciousness, and at extremely high levels can cause death. Breathing vinyl chloride for long periods of time can result in permanent liver damage, immune reactions, nerve damage, and liver cancer. HHS has determined that vinyl chloride is a known carcinogen. Well TT-23 is also referred to as "TT-new well" in Marine Corps documents. This is a work of the U.S. government and is not subject to copyright protection in the United States. It may be reproduced and distributed in its entirety without further permission from GAO. However, because this work may contain copyrighted images or other material, permission from the copyright holder may be necessary if you wish to reproduce this material separately.
In the early 1980s, volatile organic compounds (VOC) were discovered in some of the water systems serving housing areas on Marine Corps Base Camp Lejeune. Exposure to certain VOCs may cause adverse health effects, including cancer. Since 1991, the Department of Health and Human Services' Agency for Toxic Substances and Disease Registry (ATSDR) has been examining whether individuals who were exposed to the contaminated drinking water are likely to have adverse health effects. ATSDR's current study is examining whether individuals who were exposed in utero are more likely to have developed certain childhood cancers or birth defects. GAO was asked to testify on its May 11, 2007 report: Defense Health Care: Activities Related to Past Drinking Water Contamination at Marine Corps Base Camp Lejeune (GAO-07-276). This testimony summarizes findings from the report about (1) efforts to identify and address the past drinking water contamination, (2) the provision of funding and information from the Department of Defense (DOD) to ATSDR, and (3) an assessment of the design of the current ATSDR study. GAO reviewed documents, interviewed officials and former residents, and contracted with the National Academy of Sciences to convene an expert panel to assess the current ATSDR study. Efforts to identify and address the past drinking water contamination at Camp Lejeune began in the 1980s, when Navy water testing at Camp Lejeune detected VOCs in some base water systems. In 1982 and 1983, continued testing identified two VOCs--trichloroethylene (TCE), a metal degreaser, and tetrachloroethylene (PCE), a dry cleaning solvent--in two water systems that served base housing areas, Hadnot Point and Tarawa Terrace. In 1984 and 1985 a Navy environmental program identified VOCs, such as TCE and PCE, in some of the individual wells serving the Hadnot Point and Tarawa Terrace water systems. Ten wells were subsequently removed from service. DOD and North Carolina officials concluded that on- and off-base sources were likely to have caused the contamination. It has not been determined when contamination at Hadnot Point began. ATSDR has estimated that well contamination at Tarawa Terrace from an off-base dry cleaner began as early as 1957. Since ATSDR began its Camp Lejeune-related work in 1991, the agency has not always received requested funding and has experienced delays in receiving information from DOD. However, ATSDR officials said that while funding and access to records were probably slowed down and made more expensive by DOD officials' actions, their actions did not significantly impede ATSDR's Camp Lejeune-related health study efforts. The ATSDR officials also stated that while issues such as limitations in access to DOD data had to be addressed, such situations are normal during the course of a study. Members of the expert panel that the National Academy of Sciences convened for GAO generally agreed that many parameters of ATSDR's current study are appropriate, including the study population, the exposure time frame, and the selected health effects. Regarding the study's proposed completion date of December 2007, the panel experts had mixed opinions: three of the five panel experts who commented said that the projected date appeared reasonable, while two said that the date might be optimistic. DOD, the Environmental Protection Agency, and the Department of Health and Human Services provided technical comments on a draft of the May 11, 2007 report, which GAO incorporated where appropriate. 
Three members of an ATSDR community assistance panel for Camp Lejeune provided oral comments on issues such as other VOCs that have been detected at Camp Lejeune, and compensation, health benefits, and additional notification for former residents. GAO focused its review on TCE and PCE because they were identified by ATSDR as the chemicals of primary concern. GAO's report notes that other VOCs were detected. GAO incorporated the panel members' comments where appropriate, but some issues were beyond the scope of the report.
GAO_GAO-12-592
Background The Recovery Act of 2009 was enacted in response to significant weakness in the economy to, among other things, help promote economic recovery and assist those most affected by the recession. The Congressional Budget Office (CBO) estimated the Recovery Act's cost at $825 billion as of August 2011. The Recovery Act included provisions to help stimulate the housing market, including increasing loan limits for FHA-insured mortgages in 669 high-cost counties in calendar year 2009. The provision allowed FHA to insure mortgages at a higher amount than would have been authorized without the Recovery Act. Under this provision, FHA insured over $20 billion in mortgages for 87,000 homeowners who were approved for FHA mortgage insurance in 2009. The Recovery Act also adapted and extended the first-time homebuyer credit (FTHBC) through November 2009. Through July 3, 2010, IRS reported that about 1.7 million individuals claimed more than $12 billion in FTHBCs under the Recovery Act for homes purchased in 2009. Office of Management and Budget (OMB) Circular A-129 states that delinquent tax debtors are ineligible for federal loan insurance, such as FHA mortgage insurance, unless they repaid the debt or were in a valid repayment agreement with IRS, but the FTHBC was available to those who qualified regardless of their tax debt. FHA Mortgage Insurance FHA's single-family programs insure private lenders against 100 percent of the value of the loan for foreclosures on mortgages that meet FHA criteria, including mortgages for initial home purchases, construction rehabilitation, and refinancing. As of September 2011, almost 3,700 lenders were approved to participate in these programs. The insurance covers the principal, interest, and associated foreclosure costs, among other things. Lenders usually require mortgage insurance when a home buyer makes a down payment of less than 20 percent of the value of the home. FHA mortgage insurance allows a home buyer to make a modest down payment, as low as 3.5 percent, and obtain a mortgage for the balance of the purchase price. As the recent housing and economic recession set in, FHA's share of the market for home purchase mortgages grew sharply due to the contraction of other mortgage market segments, rising from about 5 percent in 2006 to nearly 30 percent in 2009. FHA insured almost 2 million single-family mortgages valued at more than $300 billion in mortgage insurance in 2009. FHA generally is thought to promote stability in the market by ensuring the availability of mortgage credit in areas that may be underserved by the private sector or that are experiencing economic downturns. It has played a particularly large role among minority, lower-income, and first-time home buyers; almost 80 percent of FHA-insured home purchase loans in 2010 went to first-time home buyers. The FHA home mortgage insurance programs are funded by the FHA Mutual Mortgage Insurance Fund (MMIF), which is supported by insurance premiums charged to borrowers. The MMIF is used to cover claims on foreclosed mortgages, among other things. The Omnibus Budget Reconciliation Act of 1990 required the Secretary of HUD to take steps to ensure that the MMIF attained a capital ratio (i.e., economic value divided by the unamortized insurance-in-force) of at least 2 percent by November 2000 and maintain a 2 percent ratio at all times thereafter. The act also required an annual independent actuarial review of the economic net worth and soundness of the MMIF. 
The actuarial review estimates the economic value of the MMIF as well as the capital ratio to determine whether the MMIF has met the capital standards in the act. The capital ratio has dropped sharply in recent years due to declines in home prices and increases in seriously delinquent loans and foreclosures. The most recent actuarial study shows that the capital ratio is currently below the statutorily mandated level, at 0.24 percent, representing $2.6 billion in estimated capital resources against an active portfolio of $1.08 trillion. The MMIF has historically been sufficient to fund the FHA home mortgage insurance programs without additional funding from the federal government, but if the reserve were to be depleted, FHA would need to draw on permanent and indefinite budget authority to cover additional increases in estimated losses. A weakening in the performance of FHA-insured loans could increase the possibility that FHA will require additional federal funds. Our work has previously shown that the increased reliance on FHA mortgage insurance highlights the need for FHA to ensure that it has the proper controls in place to minimize financial risks to the federal government while meeting the housing needs of borrowers. Lenders are responsible for underwriting the loans to determine an applicant's eligibility for FHA mortgage insurance in accordance with FHA policies. Underwriting is a risk analysis that uses information collected during the loan origination process to decide whether to approve a loan for FHA insurance. Lenders employ automated underwriting, the process by which lenders enter information on potential borrowers into electronic systems that contain an evaluative formula, or algorithm, known as a scorecard. The scorecard attempts to quickly and objectively measure the borrower's risk of default by examining data such as application information and credit score. Since 2004, FHA has used its own scorecard called Technology Open to Approved Lenders (TOTAL). FHA lenders now use TOTAL in conjunction with automated underwriting systems to determine the likelihood of default. Although TOTAL can assess the credit risk of a borrower, it does not reject a loan outright. Rather, TOTAL will assign a risk assessment of either "accept" or "refer" for each borrower. FHA requires lenders to manually underwrite loans that are assessed as "refer" by TOTAL to make a final determination of whether the loan should be accepted or rejected. According to FHA policy, a lender remains accountable for compliance with FHA eligibility requirements, regardless of the risk assessment provided by TOTAL. Virtually all of the lenders that participate in FHA's mortgage insurance programs for single-family homes have direct endorsement authority. These lenders can underwrite and close mortgage loans without FHA's prior review or approval. FHA insures lenders against nearly all losses resulting from foreclosed loans and covers 100 percent of the value of the loan. In general, foreclosure may be initiated when three monthly installments are due and unpaid, and it must be initiated when six monthly installments are due and unpaid, except when prohibited by law. To minimize the number of FHA loans entering foreclosure, servicers are responsible for pursuing various loss mitigation strategies, including suspended payments, loan modification, reduced mortgage payments, and sale of the property by the borrower. 
If, despite these loss mitigation strategies, the lender forecloses on the loan, the lender can file an insurance claim with FHA for the unpaid balance of the loan and other costs. However, FHA reviews a selection of insured loans, including early payment defaults (loans at least 60 days delinquent in the first six payments), in part to minimize potential FHA losses and ensure the underwriting for these mortgages met FHA guidelines. Reviews revealing serious deficiencies may result in FHA requiring the lenders to compensate the department for financial losses, known as indemnification, which requires the lender to repay FHA for any losses that it incurs after a loan has gone into default and the property has been sold. Congress, through legislation, sets limits on the size of loans that may be insured by FHA. These loan limits vary by county and can change from year to year. To mitigate the effects from the economic downturn and the sharp reduction of mortgage credit availability from private sources, Congress increased FHA loan limits. The Economic Stimulus Act (ESA) enacted in February 2008 stipulated that FHA loan limits be set temporarily at 125 percent of the median house price in each area, with a maximum loan limit of $729,750 for a one-unit home. Immediately prior to ESA’s enactment, the limits had been set at 95 percent of area median house prices. In July 2008, 5 months after passing ESA, Congress passed the Housing and Economic Recovery Act (HERA), which established new statutory limits of 115 percent of area median home prices. Then, in February 2009, Congress passed the Recovery Act, which stipulated that FHA loan limits for 2009 be set in each county at the higher dollar amount when comparing loan limits established under 2008 ESA requirements and limits for 2009 under HERA. First-Time Homebuyer Credit Congress passed the FTHBC to assist the struggling real estate market and encourage individuals to purchase their first home. The credit was initially enacted by HERA and later revised by the Recovery Act. The 2008 HERA FTHBC provided taxpayers a credit of up to $7,500 to be paid back over 15 years, essentially serving as an interest-free loan. In 2009, the Recovery Act was enacted and increased the maximum credit for the 2009 FTHBC to $8,000, with no payback required unless the home is sold or ceases to be the taxpayer’s principal residence within 3 years of the purchase. The credit of up to $8,000 was a refundable tax credit paid out to the claimant if there was no tax liability or the credit exceeded the amount of any federal tax due. In July 2010, the Homebuyer Assistance and Improvement Act (HAIA) of 2010 extended the date to close on a home purchase to September 30, 2010. Federal Policies on Tax Debtors Receiving Federal Loan Insurance To protect federal government assets and minimize unintended costs to the government, OMB Circular A-129 states that individuals with delinquent federal debts are ineligible for loan insurance and prohibits federal agencies from issuing loans to such applicants; however, OMB’s policy allows individuals with delinquent federal taxes or other federal debt to attain eligibility by repaying their debt in full or entering into a valid repayment plan with the agency they owe. The policy states that agencies should determine if the applicant is eligible by including a question on loan applications asking applicants if they have such delinquencies. 
The policy also (1) requires agencies and lenders to use credit bureaus as screening tools, because tax liens resulting from delinquent tax debt typically appear on credit reports, and (2) encourages agencies to use HUD's Credit Alert Interactive Voice Response System (CAIVRS), a database of delinquent federal debtors. CAIVRS contains delinquent debt information for six federal agencies; however, it does not contain any tax debts from IRS. According to OMB policy, if delinquent federal debts are discovered, processing of applications must be suspended until the applicant attains eligibility. FHA's policies for lenders dictate that an FHA mortgage insurance applicant must be rejected if he or she is delinquent on any federal debt, including tax debt, or has a lien placed against his or her property for a debt owed to the federal government. Like OMB's policy, FHA policy states that an applicant with federal debt may become eligible for mortgage insurance by repaying the debt in full or by entering into a valid repayment agreement with the federal agency owed, which must be verified in writing. Such repayment plans include IRS-accepted installment agreements and offers in compromise. To identify individuals with tax debt, FHA requires mortgage insurance applicants to declare whether they are delinquent or in default on any federal debt on their insurance application, the Uniform Residential Loan Application (URLA). As printed on the application, knowingly making any false statement on the URLA is a federal crime punishable by fine or imprisonment. FHA also requires that lenders review credit reports for all applicants to identify tax liens and other potential derogatory credit information. FHA Insured over $1.44 Billion in Mortgages for Thousands of Recovery Act Beneficiaries with Federal Tax Debt In 2009, FHA insured over $1.44 billion in mortgages for 6,327 borrowers who at the same time had delinquent tax debt and benefited from the Recovery Act. According to IRS records, these borrowers had an estimated $77.6 million in unpaid federal taxes as of June 30, 2010. As figure 1 illustrates, our analysis included tax debtors who either benefited from FHA's increased loan limits or who claimed the FTHBC and received FHA mortgage insurance of any value. Although federal policies did not prohibit tax debtors from claiming the FTHBC, they were ineligible for FHA mortgage insurance unless their delinquent federal taxes and other federal debt had been fully repaid or otherwise addressed through a repayment agreement. We could not determine the proportion of borrowers who were ineligible because we could not systematically identify which of the 6,327 borrowers had valid repayment agreements at the time of the mortgage approval using IRS's data; however, we found that five of our eight selected borrowers were not in valid repayment agreements at the time they obtained FHA mortgage insurance. In addition, FHA records indicate that borrowers with tax debt had serious delinquency (in default for 90 days or more) and foreclosure rates two to three times greater than borrowers without tax debt, which potentially represents an increased risk to FHA. In 2009, FHA insured $759.3 million in mortgages for 2,646 individuals who owed $35.5 million in unpaid federal taxes as of June 30, 2010, under the Recovery Act's provision for increased loan limits. These borrowers and coborrowers obtained 1,913 insured mortgages with a median value of $352,309 and had a median tax debt of $6,290 per person. 
Their mortgages accounted for 3.7 percent of the 52,006 mortgages FHA insured under Recovery Act provisions for increased limits in 2009, which in turn represented 2.5 percent of all mortgages insured by FHA in 2009. Our analysis likely understates the amount of unpaid federal taxes because IRS data do not cover individuals who fail to file tax returns or who understate their income. Of the 18 selected individuals who benefited from increased loan limits for FHA mortgage insurance or received the FTHBC under the Recovery Act, we found that 11 had not filed all of their federal tax returns. Using IRS data, we cannot systematically determine which of these individuals was in a valid repayment agreement at the time of the mortgage, and therefore cannot determine whether insuring each of these 1,913 mortgages was improper, but it is possible that borrowers with tax debt represent a greater financial risk to the federal government. As illustrated in figure 2, serious delinquency and foreclosure rates among Recovery Act borrowers with unpaid federal taxes were at least twice as high as the rates for other borrowers. As of September 2011, 32 percent of the 1,913 mortgages made to borrowers with tax debt were seriously delinquent on their payments, compared with 15.4 percent of other FHA-insured mortgages. About 6.3 percent of the mortgages for borrowers with tax debt had gone into foreclosure since the homes were purchased in 2009, compared with 2.4 percent for others. The foreclosed homes that had been purchased by tax debtors were insured for $44.9 million, potentially leaving FHA responsible for paying claims for the remaining loan balance and certain interest and foreclosure costs. FHA recovers some of these costs when it sells the property. Finally, FHA's increased exposure to risk from insuring tax debtors is unlikely to be limited to Recovery Act beneficiaries. Because FHA uses identical methods to insure non-Recovery Act mortgages, it is reasonable to assume that some portion of FHA borrowers for the remaining 97.5 percent of mortgages we did not analyze as part of this review are tax debtors. $717.2 Million in FHA Mortgage Insurance Was Provided to Tax Debtors Who Claimed $27.4 Million in Recovery Act FTHBCs In 2009, $717.2 million in FHA mortgage insurance and $27.4 million in Recovery Act FTHBCs were provided to 3,815 individuals who owed an estimated $43.5 million in unpaid federal taxes. These borrowers obtained 3,812 insured mortgages with a median value of $167,887 and had a median unpaid tax amount of $5,044 per person. Their mortgages represented 0.5 percent of the 700,003 mortgages insured by FHA for borrowers who claimed the FTHBC. As discussed above, we were unable to determine the proportion of the mortgage insurance that was provided to borrowers who were, in fact, eligible as a result of entering into a valid repayment agreement with IRS. We found that three of our eight selected borrowers were in valid repayment agreements at the time they obtained FHA mortgage insurance. As illustrated in figure 3, we found that serious delinquency and foreclosure rates for mortgages obtained by FHA borrowers with federal tax debts who received the FTHBC were two to three times higher than the rates for other borrowers. As of September 2011, 26.9 percent of the 3,812 mortgages made to borrowers with unpaid tax debts were seriously delinquent on their payments, compared with 11.9 percent of borrowers without tax debt who received the FTHBC and FHA mortgage insurance. 
About 4.7 percent of the mortgages of borrowers with tax debt were foreclosed, compared with 1.4 percent for other borrowers. The 181 foreclosed homes purchased by tax debtors had a total mortgage insurance value of $36.5 million, potentially resulting in a loss to the MMIF. The FTHBC is a refundable credit, meaning taxpayers could receive payments in excess of their tax liability. Federal law typically requires that any federal tax refund be offset to pay down an individual’s unpaid taxes. Of the 3,815 borrowers we identified with tax debt, 233 received a federal tax refund after claiming the FTHBC. We selected 9 of these borrowers for a detailed review and found that all 9 were issued refunds in accordance with federal law. For example, three of these cases had filed for bankruptcy prior to receiving the refund. Federal bankruptcy law prevents IRS from taking collection actions, such as offsetting postpetition refunds, against individuals undergoing bankruptcy proceedings. The amounts of unpaid federal taxes, mortgage insurance, and FTHBCs we identified are likely understated for the following reasons: Certain individuals did not file tax returns or underreported their income, and therefore are not included in our analysis. Data limitations in the FTHBC data prevented us from isolating all individuals who benefitted from the FTHBC under the Recovery Act. Any Recovery Act FTHBC recipient whose FTHBC was greater than their outstanding tax liability would not be included in our analysis because the refundable credit would have offset their outstanding tax liability. Federal law generally requires that IRS offset any refund against an individual’s tax liability. Shortcomings in the Capacity of FHA-Required Documentation to Identify Tax Debts and in Certain Policies Allow Tax Debtors to Obtain Mortgage Insurance Some ineligible tax debtors received FHA mortgage insurance, in part, due to shortcomings in the capacity of FHA-required documentation to identify tax debts and shortcomings in other policies that lenders may misinterpret. Lenders are required by FHA policy to perform steps to identify an applicant’s federal debt status, but the information provided by these steps does not reliably indicate an applicant’s tax debt. Statutory restrictions limit the disclosure of taxpayer information without the taxpayer’s consent. Lenders are already required to obtain such consent through an IRS form they use to validate the income of some applicants. This same form could also be used to obtain permission from applicants to access reliable tax-debt information directly from IRS, but doing so is not addressed in FHA’s policies. Requiring lenders to collect more reliable information on tax debts could better prevent ineligible tax debtors from obtaining FHA mortgage insurance. Further, FHA’s policies requiring lenders to investigate whether tax liens indicate unresolved tax debt are unclear and may be misinterpreted. The lenders we spoke with believed they were in compliance with FHA policies when they provided FHA-insured loans to applicants with tax liens, but FHA officials indicated otherwise. As a result of these shortcomings, lenders may approve federally insured mortgages for ineligible applicants with delinquent tax debt in violation of federal policies. 
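To make the screening sequence discussed in this section concrete, the following is a minimal illustrative sketch in Python of how a lender-side check could combine the URLA declaration, a CAIVRS query, credit-report tax liens, and, with borrower consent, IRS account-transcript information. The data structure, field names, and helper function are hypothetical; this is not FHA's, IRS's, or any lender's actual system, only a sketch of the policy logic described in the report.

```python
# Illustrative only: a minimal sketch of the screening sequence discussed above.
# All field names and the helper itself are hypothetical assumptions.

from dataclasses import dataclass, field
from typing import List

@dataclass
class Applicant:
    declared_delinquent_federal_debt: bool          # URLA declaration
    caivrs_hit: bool                                # delinquent non-tax federal debt
    credit_report_tax_liens: List[str] = field(default_factory=list)
    irs_unpaid_tax_balance: float = 0.0             # from a consented account transcript
    valid_repayment_agreement: bool = False         # verified in writing

def screen_federal_debt(app: Applicant) -> str:
    """Return 'ineligible', 'investigate', or 'clear' under the policy logic
    described in the report: delinquent federal debt, including tax debt, must be
    repaid or covered by a valid repayment agreement before approval."""
    if app.caivrs_hit and not app.valid_repayment_agreement:
        return "ineligible"
    if app.declared_delinquent_federal_debt and not app.valid_repayment_agreement:
        return "ineligible"
    if app.irs_unpaid_tax_balance > 0 and not app.valid_repayment_agreement:
        return "ineligible"
    if app.credit_report_tax_liens:
        # A lien alone is not disqualifying, but the underlying debt must be
        # investigated; subordination by itself does not establish eligibility.
        return "investigate"
    return "clear"

# Example: no lien on the credit report, but an unpaid balance on the consented
# IRS account transcript, so the applicant is flagged as ineligible.
print(screen_federal_debt(Applicant(False, False, [], 8200.0, False)))  # ineligible
```

The sketch mirrors the report's central point: only the consent-based IRS data (the last check before the lien branch) reveals tax debts that never generate a lien and therefore never appear on a credit report or in CAIVRS.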
Information That FHA Requires Lenders to Collect on Mortgage Applicants Does Not Reliably Indicate the Existence of Federal Tax Debt Consistent with OMB policies, FHA has lender policies intended to prevent ineligible tax debtors from obtaining FHA mortgage insurance; however, the information the agency requires lenders to collect does not reliably indicate the existence of federal tax debt. The three sources of information FHA requires lenders to obtain each have shortcomings in their capacity to identify borrowers’ tax debts: Uniform Residential Loan Application (URLA). The URLA requires that applicants declare any federal debt that is delinquent or in default. The URLA also requires applicants to disclose any liabilities, including tax debt, so a lender can assess the applicant’s ability to repay the proposed mortgage. While knowingly making false statements on a URLA is a federal crime and may deter some from lying about their tax debt, much of our work has focused on the inadequacies of self-reported information without independent verification. In fact, our comparison of the URLAs in eight mortgage files with IRS tax data revealed that five borrowers wrongly declared they were not, by FHA’s definition, delinquent or in default on federal tax debt (e.g., not in a valid IRS repayment agreement). In addition, six of the borrowers did not properly disclose the tax debts on the liabilities section of the URLA. Because of the federal statute that prohibits the disclosure of taxpayer information, we are unable to refer these cases to FHA for further investigation. Excerpts of the URLA where applicants are required to disclose any debts that may affect their eligibility for FHA mortgage insurance or their ability to repay the proposed mortgage are illustrated in appendix II. CAIVRS. FHA requires that lenders check all applicants against CAIVRS, HUD’s database of delinquent federal debtors, to identify federal debts. While it contains delinquent debt information from six agencies, such as the Department of Education and the Small Business Administration, CAIVRS does not contain federal tax information from IRS because statutory restrictions generally prohibit IRS from disclosing taxpayer information without the taxpayer’s consent. Two of the three lenders we spoke with mistakenly believed CAIVRS could be used to identify federal tax debt. Credit reports. Lenders told us that credit reports, which contain public records such as federal tax liens, were a primary method of identifying liens to indicate certain tax debts. However, delinquent federal taxes do not always appear on credit reports because IRS does not file liens on all tax debtors with property. In addition, many FHA borrowers are first-time home buyers and may not have real property on which IRS can place a lien. IRS records indicated that only two of our eight selected borrowers had tax liens filed against them at the time they obtained FHA mortgage insurance. Lenders using only these FHA-required methods for identifying tax debt are missing an opportunity to more accurately determine whether applicants are eligible for FHA-insured mortgages, in part, because they do not have access to certain information. Access to the federal tax information needed to obtain the tax payment status of applicants is restricted under section 6103 of the Internal Revenue Code, which prohibits disclosure of taxpayer data to lenders in most instances. 
However, lenders may request information on federal tax debts directly from IRS if the applicant provides consent. To verify the income of self-employed and commission-income applicants, FHA requires that lenders obtain an applicant-signed consent form allowing the lender to verify the applicant’s income directly with IRS. The three lenders we spoke with indicated they use IRS form 4506-T, Request for Transcript of Tax Return, to satisfy this requirement. FHA could also compel lenders to use this form or otherwise obtain borrower consent to identify tax debts. Files for four of our eight selected borrowers had a copy of the IRS Form 4506-T in their FHA mortgage files. The lenders for these borrowers used the 4506-T only to validate income by requesting federal tax return transcripts and did not use the form to request account transcripts that would have disclosed tax debt information. None of the eight mortgage files contained IRS tax account transcripts. Officials from each of the lenders we interviewed said it is their policy to use the 4506-T only to validate the income of these applicants, as this is the requirement under FHA policies. Officials from two of the lenders used the form to verify income for all borrowers. In contrast, officials from the third lender stated that they executed this form for a random sample of additional applicants for income verification, but noted that doing so for every applicant would be too burdensome. As shown in figure 4, checking box 6a on the form allows a lender to obtain tax return transcripts for applicants, which do not disclose tax debt information. Checking box 6b would allow a lender to request and receive account transcripts. Account transcripts contain information on the financial status of the account, including information on any existing tax debts. These transcripts would allow a lender to identify federal taxes owed by any applicant, including debts not found on credit reports because a federal tax lien does not exist. Checking box 6c would allow a lender to obtain both tax return transcripts and account transcripts, which the lender could use to verify the income of an applicant as well as identify whether the applicant has federal tax debt. The lender may request account transcripts only for the current year and up to 3 prior years and must state the requested years on the form; transcripts beyond this are generally unavailable. Despite this limitation, the IRS form 4506-T could serve as a method for lenders to identify loan applicants with unpaid debt. Without such a method, lenders may approve federally insured mortgages for ineligible applicants with delinquent tax debt in violation of OMB and FHA policies. IRS returns the information requested on IRS form 4506-T within 10 business days at no expense to the requester, or within 48 hours through the IRS Income Verification Express Service (IVES) at an expense of $2.00, according to IRS officials. Shortcomings in FHA Policies May Have Led to Ineligible Tax Debtors Obtaining Mortgage Insurance All three lenders we spoke with unknowingly violated FHA policies on requirements to investigate tax liens. Federal tax liens remain on a property until the associated tax debt has been paid in full or otherwise satisfied. The presence of a lien does not prevent an applicant from receiving FHA mortgage insurance because, per OMB and FHA policies, applicants are eligible for mortgage insurance if they are in a valid repayment agreement. 
However, according to FHA officials, FHA requires lenders to investigate whether the tax debt that caused the lien has been resolved or brought current under a repayment plan. If it has not, insurance must be denied. Lenders understood these policies to have exemptions for some applicants. FHA officials told us that endorsing a mortgage without determining applicant eligibility by investigating the status of tax debts related to federal tax liens for any applicant is improper due diligence. Specifically, officials from two of the three lenders said they would approve FHA insurance for applicants with a federal tax lien on their credit report if IRS agreed to subordinate the lien to FHA. The lenders believed this was in accordance with FHA policy that indicates that tax liens may remain unpaid if the lien holder subordinates the lien to FHA. One of the lenders told us that this policy could potentially allow ineligible applicants with delinquent federal tax debt to obtain FHA mortgage insurance. However, FHA officials told us that this policy is only applicable if the lender has previously determined the applicant is eligible by investigating the lien (i.e., requesting verification from IRS that the applicant has repaid the debt or is in a repayment agreement). See figure 5 for FHA policy excerpts. Officials from the third lender said they would approve any applicant rated as “accept” by TOTAL without additional review or manual underwriting, even if the applicant’s credit report showed a tax lien. The officials believed this was consistent with FHA policy because TOTAL would not have granted an “accept” unless the application met FHA requirements. However, FHA officials told us that while TOTAL considers an applicant’s credit score in its risk evaluation, it does not consider other factors such as tax liens. FHA guidance states that the lender remains accountable for compliance with FHA eligibility requirements, regardless of the risk assessment provided by TOTAL. Because of these potential shortcomings in FHA policies, lenders may misinterpret them and approve federally insured mortgages for ineligible applicants with delinquent tax debt in violation of OMB and FHA policies. Our review was limited to mortgages obtained under the Recovery Act provisions; however, these policies are the same for all FHA mortgages. Our review included only a small percentage of all mortgages insured by FHA in 2009, and FHA’s unclear policies may negatively affect some of the other mortgages as well. Conclusion FHA has helped millions of families purchase homes through its single-family mortgage insurance programs. As more and more Americans turn to FHA to finance their homes, it is critical for FHA to ensure that it has policies in place to minimize financial risks to the federal government while meeting the housing needs of borrowers. Tax debtors who were ineligible for FHA mortgage insurance were still able to obtain insurance, despite FHA policies intended to prohibit this. Our review focused exclusively on individuals who benefitted from the Recovery Act, which only accounted for a small percentage of FHA borrowers in 2009; nevertheless, we were able to identify thousands of tax debtors who obtained insurance. These debtors became seriously delinquent in their payments and lost their homes to foreclosures at a higher rate than those without tax debt. 
Because FHA’s underwriting policies apply equally to all mortgage insurance applicants, it is likely that loans we did not review also included tax debtors. The shortcomings we found in the capacity of available information sources to identify applicants’ tax debts could be addressed by improved access to federal tax information. To ensure compliance with the confidentiality requirements associated with the disclosure of taxpayer information, FHA would need to consult with IRS to take action to identify tax debtors who are ineligible for FHA mortgage insurance, as has been done to verify the income of certain applicants. This would include developing appropriate criteria and safeguards to ensure taxpayer privacy and minimize undue approval delays. In addition, strengthening FHA policies and their interpretation by lenders can help prevent ineligible tax debtors from continuing to receive the benefit of FHA insurance. To the extent that borrowers with tax debt represent additional risk, FHA could minimize the potential for this risk by taking steps to address the issues identified in this report. Recommendations for Executive Action The Secretary of HUD should direct the Assistant Secretary for Housing (Federal Housing Commissioner) to implement the following two recommendations: Consult with IRS to develop written policies requiring lenders to collect and evaluate IRS documentation appropriate for identifying ineligible applicants with unpaid federal taxes, while fully complying with the statutory restriction on disclosure of taxpayer information. For example, FHA could require lenders to obtain consent from borrowers to allow FHA and its lenders to verify with IRS whether recipients of FHA insurance have unpaid federal taxes. Provide FHA lenders with revised policies or additional guidance on borrower ineligibility due to delinquent federal debts and tax liens to more clearly distinguish requirements for lenders to investigate any indication that an applicant has federal tax debt (such as a federal tax lien) to provide reasonable assurance that ineligible borrowers do not receive FHA mortgage insurance. Agency Comments and Our Evaluation We provided a draft of this report to IRS and HUD for review and comment. IRS did not have any comments in response to the draft report. The Acting Assistant Secretary for Housing (Federal Housing Commissioner) provided a written response, which is reprinted in appendix III. In HUD’s response, the agency agreed with our recommendations and acknowledged that current policies and procedures may fail to identify all potential borrowers with delinquent tax debt. To address our recommendations, FHA stated that it would contact IRS in an effort to establish executable policy that may identify delinquent tax debtors. Further, the agency affirmed that it would execute changes to current FHA requirements for lenders in order to address the concerns discovered through the audit. In its written response, HUD also provided technical comments, which were incorporated into this report. Specifically, HUD recommended that we change the terminology used to characterize federal tax debts. According to HUD, this suggested change would provide clarity and avoid the appearance that FHA knew of delinquent tax debts. We agreed to make the recommended change. However, for certain cases included in our review, evidence indicates that FHA-approved lenders were aware of tax debts. 
As agreed with your offices, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from its issue date. At that time, we will send copies of this report to interested congressional committees, the Secretary of the Treasury, the Secretary of Housing and Urban Development, the Commissioner of Internal Revenue, the Acting Assistant Secretary for Housing (Federal Housing Commissioner), and other interested parties. The report is also available at no charge on the GAO Web site at http://www.gao.gov. If you have any questions concerning this report, please contact Gregory D. Kutz at (202) 512-6722 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Major contributors to this report are acknowledged in appendix IV. Appendix I: Objectives, Scope, and Methodology Our objectives were to determine: (1) the extent to which tax debtors benefited from the Recovery Act’s provisions for increased Federal Housing Administration (FHA) loan limits and the First-Time Homebuyer Credit (FTHBC); and (2) what challenges, if any, FHA faces in preventing ineligible tax debtors from receiving mortgage insurance. To determine the extent to which individuals with unpaid tax debt benefited from the Recovery Act’s provision for increased loan limits on FHA mortgage insurance, we obtained and analyzed electronic data from FHA’s Single Family Data Warehouse (SFDW) as of September 2011. We also obtained and analyzed tax debt data from the Internal Revenue Service (IRS) as of June 30, 2010. Using the taxpayer identification numbers (TIN) present in these data, we electronically matched IRS’s tax debt data to the population of Recovery Act borrower Social Security numbers (SSN) we identified from the SFDW. The Recovery Act stipulated that revised FHA loan limits for 2009 be set in each county at the higher of the loan limits established under the Economic Stimulus Act of 2008 (ESA) or those established under the Housing and Economic Recovery Act of 2008 (HERA). Since loan limits would have reverted to HERA-established rates if the Recovery Act had not been enacted, we considered an FHA borrower to be part of our Recovery Act population if he or she obtained mortgage insurance in 2009 at a value greater than would have been authorized under HERA. FHA officials agreed with this methodology. To determine the extent to which individuals with unpaid taxes received the FTHBC under the Recovery Act, we obtained and analyzed FTHBC transaction data from IRS as of July 10, 2010, and then electronically matched IRS’s tax debt data to the population of individuals who claimed the FTHBC under the Recovery Act. Since IRS’s FTHBC data do not contain home purchase dates, we were unable to isolate all individuals who benefitted from the FTHBC under the Recovery Act. As a result, we used the SFDW to obtain home purchase dates to determine which FTHBCs were awarded under the Recovery Act. We electronically matched the FTHBC transaction data TINs with the SSNs in the SFDW and extracted mortgages with closing dates from January 1, 2009, through November 30, 2009, to identify a population of Recovery Act FTHBC recipients with FHA mortgage insurance. 
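To make the matching step just described concrete, the following is a minimal sketch in Python (pandas) of how such an electronic match could be performed. The column names (tin, ssn, closing_date, unpaid_balance) and file layouts are hypothetical assumptions for illustration; the actual SFDW and IRS extracts differ, and this is not the analysis code used for the report.

```python
# Illustrative only: a simplified sketch of matching FTHBC claims to FHA-insured
# mortgages and attaching unpaid federal tax balances. Column names are hypothetical.

import pandas as pd

def identify_recovery_act_fthbc_borrowers(fthbc: pd.DataFrame,
                                           sfdw: pd.DataFrame,
                                           tax_debt: pd.DataFrame) -> pd.DataFrame:
    # Join FTHBC claimants (keyed by TIN) to FHA mortgage records (keyed by SSN).
    matched = fthbc.merge(sfdw, left_on="tin", right_on="ssn", how="inner")

    # Keep mortgages with closing dates in the Recovery Act window used in the report.
    matched["closing_date"] = pd.to_datetime(matched["closing_date"])
    window = matched["closing_date"].between(pd.Timestamp("2009-01-01"),
                                             pd.Timestamp("2009-11-30"))
    matched = matched[window]

    # Attach unpaid tax balances; borrowers with no tax-debt record carry a zero balance.
    result = matched.merge(tax_debt[["tin", "unpaid_balance"]], on="tin", how="left")
    return result.fillna({"unpaid_balance": 0.0})
```

In practice the exclusion criteria described below (compliance assessments, recent tax periods, debts assessed after the insurance date, and balances under $100) would be applied to the tax-debt file before this join.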
We identified 722,003 FTHBC claims associated with the Recovery Act for individuals who financed their home using FHA mortgage insurance; in our prior work we found that there were 1,669,081 FTHBC claims filed under the Recovery Act, so these claims represent approximately 43 percent of all FTHBCs claimed under the Recovery Act. Therefore, our analysis includes two groups: 1. individuals who received FHA mortgage insurance under the higher limits authorized under the Recovery Act, and 2. individuals who received the FTHBC under the Recovery Act and obtained FHA mortgage insurance of any value. Further, to determine the extent to which these Recovery Act FTHBC recipients with unpaid tax debt received federal tax refunds in the same year they claimed the FTHBC, we obtained and analyzed federal tax refund data from IRS from fiscal years 2009 and 2010. We electronically matched the refund data TINs with the TINs we identified to be FHA mortgage insurance borrowers who claimed the FTHBC under the Recovery Act while having unpaid federal tax debt. To avoid overestimating the amount owed by borrowers who benefitted from the increased loan limits for FHA mortgage insurance under the Recovery Act and FTHBC recipients with unpaid federal tax debts, and to capture only significant tax debts, we excluded from our analysis tax debts meeting specific criteria to establish a minimum threshold in the amount of tax debt to be considered when determining whether a tax debt is significant. The criteria we used to exclude tax debts are as follows: tax debts IRS classified as compliance assessments or memo accounts for financial reporting, tax debts from calendar years 2009 and 2010 tax periods, tax debts that were assessed by IRS after the mortgage insurance was issued, and tax debts from individuals with total unpaid taxes of less than $100. The criteria above were used to exclude tax debts that might be under dispute or generally duplicative or invalid, and tax debts that were recently incurred. Specifically, compliance assessments or memo accounts were excluded because these taxes have neither been agreed to by the taxpayers nor affirmed by the court, or these taxes could be invalid or duplicative of other taxes already reported. We also excluded tax debts from calendar years 2009 and 2010 tax periods to eliminate tax debt that may involve matters that are routinely resolved between the taxpayers and IRS, with the taxes paid or abated within a short time. We excluded any debts that were assessed by IRS after the mortgage insurance was received because those debts would not have been included in IRS records at the time the mortgage insurance was issued. We also excluded tax debts of less than $100 because we considered them insignificant for the purpose of determining the extent of taxes owed by Recovery Act recipients. Using these criteria, we identified at least 6,327 Recovery Act recipients with federal tax debt. To provide examples of Recovery Act recipients who have unpaid federal taxes, we selected a non-probability sample of Recovery Act beneficiaries for a detailed review. We used the selection criteria below to provide examples that illustrate the sizeable amounts of taxes owed by some individuals who benefitted from the Recovery Act: We selected nine individuals who benefitted from increased FHA mortgage limits who had (1) large amounts of unpaid federal tax debt (at least $100,000), (2) at least three delinquent tax periods, and (3) indications of IRS penalties or home foreclosures. 
We also selected nine individuals who benefitted from the FTHBC and obtained FHA mortgage insurance of any value who had (1) large amounts of unpaid federal tax debt (at least $50,000), (2) at least five delinquent tax periods, (3) FHA mortgage insurance of $200,000 or more, and (4) indications of IRS penalties or home foreclosures. We requested IRS notes, detailed account transcripts, and other records from IRS as well as mortgage files from FHA for these 18 individuals. Of the 18 total requested cases, FHA provided us information that only allowed us to fully analyze 8 of them. Although we did not receive complete information necessary to fully analyze the remaining cases, we were able to assess all 18 for limited purposes (e.g., nonfiling of tax returns). We also selected nine additional cases of FTHBC recipients who received tax refunds to determine how they were able to receive federal tax refunds while having unpaid federal taxes. For these nine, we selected individuals who had (1) at least $5,000 in unpaid federal tax debt, (2) at least three delinquent tax periods, and (3) a federal tax refund value of at least $5,000. All of our cases were selected to illustrate the sizeable amounts of taxes owed by some individuals who benefitted from the Recovery Act. None of our case selections provide information that can be generalized beyond the specific cases presented. To analyze the controls FHA has in place to prevent ineligible individuals with unpaid federal tax debt from receiving mortgage insurance, we reviewed FHA’s lender credit analysis and underwriting handbook, mortgagee letters, and reports from GAO and HUD’s Office of Inspector General. We also interviewed officials from FHA’s Office of Single Family Housing and Office of the Chief Information Officer. To understand how private lenders interpret and implement FHA’s guidelines for preventing individuals with delinquent federal tax debt from receiving mortgage insurance, we interviewed senior-level officials from three large FHA-approved lenders. We selected four lenders based on the following criteria: (1) we selected the two largest lenders in terms of the number of FHA loans approved in 2009, and (2) we selected two of the top 10 largest FHA lenders that approved a comparable number of FHA loans in 2009 but varied in proportion of loans awarded to individuals with federal tax debt. However, the lender chosen for having a high proportion of loans awarded to individuals with federal tax debt declined to speak with GAO officials. In total, the three lenders we interviewed endorsed about 15 percent of all FHA mortgages for homes purchased in 2009. Data Reliability Assessment To assess the reliability of record-level IRS unpaid assessments and FTHBC data, we relied on the work we performed during our annual audit of IRS’s financial statements and interviewed knowledgeable IRS officials about any data reliability issues. We also performed electronic testing of required FTHBC elements. While our financial statement audits have identified some data reliability problems associated with tracing IRS’s tax records to source records, including errors and delays in recording taxpayer information and payments, we determined that the data were sufficiently reliable to address this report’s objectives. 
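The electronic testing of required data elements referred to above can be illustrated with a short Python (pandas) sketch. The column names and specific checks are hypothetical assumptions; the testing performed for the report was more extensive, and this is not its actual code.

```python
# Illustrative only: the sort of electronic testing of required elements mentioned
# above, with hypothetical column names. Records that fail any check would warrant
# follow-up with agency officials rather than automatic exclusion.

import pandas as pd

def flag_reliability_issues(df: pd.DataFrame) -> pd.DataFrame:
    issues = pd.DataFrame(index=df.index)
    # Identifiers should be present and nine digits long.
    tin_digits = df["tin"].astype(str).str.replace("-", "")
    issues["bad_tin"] = df["tin"].isna() | (tin_digits.str.len() != 9)
    # Dates should parse and fall within the period analyzed.
    closing = pd.to_datetime(df["closing_date"], errors="coerce")
    issues["bad_date"] = closing.isna() | (closing.dt.year != 2009)
    # Dollar amounts should be present and plausible.
    issues["bad_mortgage_amount"] = df["mortgage_amount"].fillna(0) <= 0
    issues["negative_tax_balance"] = df["unpaid_balance"].fillna(0) < 0
    # Return the records needing manual review.
    return df[issues.any(axis=1)]
```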
To assess the reliability of record-level FHA mortgage insurance data, we reviewed documentation from FHA, interviewed FHA officials who administer these information systems and officials who routinely use these systems for mortgage insurance management, verified selected data across multiple sources, and performed electronic testing of required elements. We determined that the data were sufficiently reliable for our purposes. We conducted this performance audit and related investigations from April 2011 through May 2012. We performed this performance audit in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our audit findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Appendix II: Excerpts of the Uniform Residential Loan Application Two of the three figures below represent sections of the Uniform Residential Loan Application (URLA) where applicants are required to disclose any debts that may affect their eligibility for FHA mortgage insurance or their ability to repay the proposed mortgage. The third excerpt lists the consequences of making a false statement on the URLA. Knowingly making any false statement on the URLA is a federal crime punishable by fine or imprisonment. Appendix III: Comments from the Federal Housing Administration Appendix IV: GAO Contact and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Matthew Valenta, Assistant Director; Emily C.B. Wold, Analyst-in-Charge; Jamie L. Berryhill; Jeff McDermott; Maria McMullen; Wayne Turowski; Susan B. Wallace; and Timothy Walker made significant contributions to this report.
Under a Recovery Act provision that increased mortgage insurance loan limits, FHA insured $20 billion in mortgages for 87,000 homeowners. The Recovery Act also provided for the awarding of an estimated $12 billion of FTHBCs to 1.7 million individuals. GAO was asked to determine the (1) extent to which tax debtors benefited from the Recovery Act’s provisions for increased FHA loan limits and the FTHBC, and (2) challenges, if any, FHA faces in preventing ineligible tax debtors from receiving mortgage insurance. Using IRS and FHA data, GAO identified Recovery Act recipients and compared them to federal tax debtors as of June 30, 2010. GAO reviewed relevant policies and interviewed agency officials and lenders. GAO also reviewed detailed IRS and FHA documents for a nonrepresentative selection of 18 individuals who received FHA mortgage insurance. These were selected based on a combination of factors, such as amount of taxes owed and number of delinquent tax periods. Due to data availability and other factors, GAO was able to completely evaluate only 8 of 18 individuals on their eligibility for FHA mortgage insurance. These cases cannot be generalized beyond those presented. The Federal Housing Administration (FHA) insured over $1.44 billion in mortgages for 6,327 borrowers with $77.6 million in federal tax debt who benefited from the 2009 American Recovery and Reinvestment Act. Of these borrowers, 3,815 individuals claimed and received $27.4 million in Recovery Act First-Time Homebuyer Credits (FTHBC). This analysis includes tax debtors who (1) benefited from FHA’s increased loan limits, or (2) claimed the FTHBCs and received FHA mortgage insurance of any value. Federal policy makes delinquent tax debtors ineligible for FHA mortgage insurance unless they repay their debt or are in a valid repayment agreement with the Internal Revenue Service (IRS), but the FTHBC, like all tax credits, was available to those who qualified, regardless of their tax debt. GAO could not determine the proportion of borrowers who were ineligible for FHA insurance because GAO could not systematically identify which of the 6,327 borrowers were in valid repayment agreements using the data GAO received from IRS. However, GAO did find that 5 of the 8 borrowers completely evaluated were ineligible because they were not in valid repayment agreements at the time they obtained FHA mortgage insurance. In addition, GAO found that Recovery Act borrowers with unpaid taxes had foreclosure rates two to three times greater than borrowers without unpaid taxes, which potentially represents an increased risk to FHA. Some ineligible tax debtors received FHA mortgage insurance, in part, due to shortcomings in the capacity of FHA-required documentation to identify tax debts, and shortcomings in other policies that lenders may misinterpret. Lenders must perform steps to identify an applicant’s federal debt status, but sources commonly used, such as the loan application and credit report, do not reliably indicate an applicant’s tax debt. Statutory restrictions generally prohibit the disclosure of taxpayer information, such as tax debt, without the taxpayer’s consent. Lenders are already required to obtain such consent through an IRS form they use to validate the income of some applicants. This same form could also be used to obtain permission from applicants to obtain reliable tax-debt information directly from IRS, but doing so is not addressed in FHA policies. 
Requiring lenders to collect more reliable information on tax debts could better prevent ineligible tax debtors from obtaining FHA mortgage insurance. Further, FHA’s policies requiring lenders to investigate whether tax liens indicate unresolved tax debt are unclear and may be misinterpreted. The lenders GAO spoke with believed they were in compliance with FHA’s policies when they provided FHA-insured loans to applicants with tax liens and no repayment agreements, but FHA officials indicated otherwise. As a result of these shortcomings, lenders may approve federally insured mortgages for ineligible applicants with delinquent tax debt in violation of federal policies.
Background Biofuels, such as ethanol and biodiesel, are an alternative to petroleum-based transportation fuels and are produced in the United States from a variety of renewable sources such as corn, sugar cane, and soybeans. Ethanol, the most common U.S. biofuel, is mainly used as a gasoline additive in blends of about 10 percent ethanol and 90 percent gasoline, known as E10, which is available in most states. A relatively small volume is also blended at a higher level called E85, a blend of 85 percent ethanol and 15 percent gasoline, which can only be used in specially designed vehicles, known as flexible fuel vehicles. Biodiesel is a renewable alternative fuel produced from a range of plant oils, animal fats, and recycled cooking oils. Pure biodiesel or biodiesel blended with petroleum diesel (generally in a blend of 20 percent biodiesel and 80 percent diesel) can be used to fuel diesel vehicles. The federal government has promoted biofuels as an alternative to petroleum-based fuels since the 1970s, and production of ethanol from corn starch reached 9 billion gallons in 2008. The Energy Policy Act of 2005 originally created a renewable fuel standard (RFS) that generally required U.S. transportation fuel to contain 4 billion gallons of renewable fuels in 2006 and 7.5 billion gallons in 2012. The Energy Independence and Security Act of 2007 (EISA) expanded the RFS by requiring that U.S. transportation fuel contain 9 billion gallons of renewable fuels in 2008 and increasing this amount annually to 36 billion gallons in 2022. Moreover, the 36-billion-gallon total must include at least 21 billion gallons of advanced biofuels, defined as renewable fuels other than ethanol derived from corn starch that meet certain criteria; only 15 billion of the 36 billion gallons of renewable fuels can come from conventional biofuels. In addition, at least 16 billion gallons of the 21-billion-gallon advanced biofuels requirement must be made from cellulosic feedstocks, such as perennial grasses, crop residue, and woody biomass. Unlike corn starch, most of the energy in plant and tree biomass is locked away in complex cellulose and hemicellulose molecules, and technologies to produce biofuels economically from this type of feedstock are still being developed. Some cellulosic biorefineries are piloting the use of biochemical processes, in which microbes and enzymes break down these complex plant molecules to produce ethanol, while others are piloting the use of thermochemical processes, which use heat and chemical catalysts to convert plant material into a liquid that more closely resembles petroleum. There are a number of steps in the biofuels life cycle, from cultivation of the feedstock through distribution to the end user at the fuel pump (see fig. 1). Water plays a critical role in many aspects of this life cycle. On the cultivation side, water is needed to grow the feedstock. Crops can be either rainfed, with all water requirements provided by natural precipitation and soil moisture, or irrigated, with at least some portion of water requirements met through applied water from surface or groundwater sources. Figure 2 shows the various water inputs (sources of water) and outputs (water losses) that are part of the agricultural water cycle. Water is also important for conversion of feedstocks into biofuels. In particular, water is used for heating and cooling as well as for processing. For example, during the processing of corn-based ethanol, corn is converted to ethanol through fermentation using one of two standard processes, dry milling or wet milling. 
The main difference is the initial treatment of the corn kernel. In the dry-mill process, the kernel is first ground into flour meal and processed without separating the components of the corn kernel. The meal is then slurried with water to form a mash, and enzymes are added to convert the starch in the mash to a fermentable sugar. The sugar is then fermented and distilled to produce ethanol. In the wet-mill process, the corn kernel is steeped in a mixture of water and sulfurous acid that helps separate the kernel into starch, germ, and fiber components. The starch that remains after this separation can then be fermented and distilled into fuel ethanol. Traditional dry-mill ethanol plants cost less to construct and operate than wet-mill plants, but yield fewer marketable co-products. Dry-mill plants produce distiller’s grains (that can be used as cattle feed) and carbon dioxide (that can be used to carbonate soft drinks) as co-products, while wet-mill plants produce many more co-products, including corn oil, carbon dioxide, corn gluten meal, and corn gluten feed. The majority of ethanol biorefineries in the United States are dry-mill facilities. Figure 3 depicts the conversion process for a typical dry-mill biorefinery. Each Stage of Biofuel Production Affects Water Resources, but the Extent Depends on the Feedstock and Region The extent to which increased biofuel production will affect the nation’s water resources will depend on which feedstocks are selected for production and which areas of the country they are produced in. Specifically, increases in corn cultivation in areas that are highly dependent on irrigated water could have greater impacts on water availability than if the corn is cultivated in areas that primarily produce rainfed crops. In addition, most experts believe that greater corn production, regardless of where it is produced, may cause greater impairments to water quality than other feedstocks, because corn production generally relies on greater chemical inputs and the related chemical runoff will impact water bodies. In contrast, many experts expect next generation feedstocks to require less water and provide some water quality benefits, but even with these feedstocks the effects on water resources will largely depend on which feedstock is selected, and where and how these feedstocks are grown. Similarly, the conversion of feedstocks into biofuels may also affect water supply and water quality, but these effects also vary by feedstock chosen and type of biofuel produced. Many experts agree that as the agriculture and biofuel production industries make decisions about which feedstocks to grow and where to locate or expand conversion facilities, it will be important for them to consider regional differences and potential impacts on water resources. Water Supply and Water Quality Effects of Increased Corn Cultivation Many experts and officials told us that corn cultivation requires substantial quantities of water, although the amount used depends on where the crop is grown and how much irrigation water is used. The primary corn production regions are in the upper and lower Midwest and include 12 states classified as USDA farm production Regions 5, 6, and 7. Together, these regions accounted for 89 percent of corn production in 2007 and 2008, and 95 percent of ethanol production in the United States in 2007. Corn cultivation in these three regions averages anywhere from 7 to 321 gallons of irrigation water for every gallon of ethanol produced, as shown in table 1. 
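To put the table 1 range in perspective, and to relate it to the per-bushel irrigation figures reported for the USDA regions later in this section, the following is a rough arithmetic sketch in Python. The assumed yield of about 2.8 gallons of ethanol per bushel of corn and the 1-billion-gallon example volume are illustrative assumptions of ours, not figures from this report.

```python
# Illustrative arithmetic only. The 2.8 gal-ethanol-per-bushel yield is an assumed
# round number for a conventional dry-mill plant, not a figure from this report.

ASSUMED_GAL_ETHANOL_PER_BUSHEL = 2.8

def irrigation_per_gallon_ethanol(gal_irrigation_per_bushel: float) -> float:
    """Convert irrigation water applied per bushel of corn into irrigation water
    per gallon of ethanol produced from that corn."""
    return gal_irrigation_per_bushel / ASSUMED_GAL_ETHANOL_PER_BUSHEL

def irrigation_for_ethanol_volume(gal_ethanol: float, gal_water_per_gal_ethanol: float) -> float:
    """Scale a per-gallon irrigation figure to a total ethanol volume."""
    return gal_ethanol * gal_water_per_gal_ethanol

# At the low and high ends of the table 1 range (7 and 321 gallons of irrigation
# water per gallon of ethanol), 1 billion gallons of ethanol would draw roughly
# 7 billion to 321 billion gallons of irrigation water.
for per_gal in (7, 321):
    total = irrigation_for_ethanol_volume(1e9, per_gal)
    print(f"{per_gal} gal/gal -> about {total / 1e9:.0f} billion gallons of irrigation water")

# The per-bushel figures reported for the USDA regions below can be converted to
# the same per-gallon basis with irrigation_per_gallon_ethanol().
```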
However, the impact of corn cultivation on water supplies in these regions varies considerably. For example, in USDA Region 7, which comprises North Dakota, South Dakota, Kansas, and Nebraska, the production of one bushel of corn consumes an average of 865 gallons of freshwater from irrigation. In contrast, in USDA Regions 5 and 6, which comprise Iowa, Illinois, Indiana, Ohio, Missouri, Minnesota, Wisconsin, and Michigan, corn is mostly rainfed and only requires on average 19 to 38 gallons of supplemental irrigation water per bushel. The effects of increased corn production for ethanol on water supplies are likely to be greatest in water-constrained regions of the United States where corn is grown using irrigation. For example, some of the largest increases in corn acres (1.1 million acres) are projected to occur in the Northern Plains region, which is already a water constrained region. Parts of this region draw heavily from the Ogallala Aquifer, where water withdrawals are already greater than the natural recharge rate from precipitation. A 2009 USGS report found water levels in the aquifer had dropped more than 150 feet in parts of southwest Kansas and the Texas Panhandle, where crop irrigation is intense and recharge to the aquifer is minimal. In 2000, about 97 percent of the water withdrawn from the aquifer was used for irrigation, according to USGS. Many officials told us that an increase in corn cultivation using current agricultural practices will also impair water quality as a result of the runoff of fertilizer into lakes and streams. This will happen because corn requires high applications of fertilizers relative to soybeans and other potential biofuel feedstocks, such as perennial grasses. For example, in Iowa, the expansion of biofuel production has already led to an increasing amount of land dedicated to corn and other row crops, resulting in surface water impacts, including nutrient runoff and increased bacteria counts as well as leaching of nitrogen and phosphorus into groundwater, according to a state official. Fertilizer runoff containing nitrogen and phosphorus can lead to overenrichment and excessive growth of algae in surface waters. In some waters, such enrichment has resulted in harmful algal blooms, decreased water clarity, and reduced oxygen in the water, which impair aquatic life. In marine waters, this excessive algal growth has created “dead zones,” which cannot support fish or any other organism that needs oxygen to survive. The number of reported dead zones around the world has increased since the 1960s to more than 400. Many of them are along the Gulf of Mexico and the Atlantic Coast, areas that receive drainage from agricultural and urban landscapes, including a large portion of the Corn Belt, where many of the existing and planned ethanol production facilities are located. A 2007 USGS model estimated that 52 percent of the nitrogen and 25 percent of the phosphorus entering the Gulf system are from corn and soybean cultivation in the Mississippi River basin. Increased corn production will also increase the use of pesticides— including insecticides and herbicides—which also have the potential to affect surface water and groundwater quality. For example, a 10-year nationwide study by USGS detected pesticides in 97 percent of streams in agricultural and urban watersheds. As would be expected, the highest concentrations of pesticides have been found in those areas that have the highest use. 
For instance, application rates of atrazine, a commonly used pesticide for corn production, are highest in the Corn Belt, and atrazine was also the most widely detected pesticide in watersheds in this area, according to the USGS nationwide study. USGS determined that the concentrations of atrazine and other pesticides detected had the potential to adversely affect aquatic plants and invertebrates in some of the streams, since organisms are vulnerable to short-term exposure to relatively small amounts of certain pesticides. Similarly, increased pesticide use for the cultivation of corn could impair groundwater supplies. USGS found pesticides in 61 percent of shallow wells sampled in agricultural areas. Once groundwater is contaminated, it is difficult to clean up, according to the experts we contacted. According to some of the experts and officials we spoke with, increased demand for biofuel feedstocks may also create incentives for farmers to place marginal lands back into production. Marginal lands generally have lower productivity soils, so cultivating them may require more nutrient and pesticide inputs than more productive lands, potentially leading to further water quality impairments. Furthermore, delivery of sediments, nutrients, and pesticides to surrounding water bodies may increase if these lands are placed back into production because these lands are often highly susceptible to erosion due to wind and water. Of particular concern to many of the experts with whom we spoke are the millions of acres of land currently enrolled in the Conservation Reserve Program (CRP). This federal program provides annual rental payments and cost share assistance to landowners who contractually agree to retire highly erodible or other environmentally-sensitive cropland from agricultural purposes. As part of the contract, farmers are generally required to plant or maintain vegetative covers (such as native grasses) on the land, which provide a range of environmental benefits, including improved water quality, reduced erosion, enhanced wildlife habitat, and preserved soil productivity. However, many experts and officials we spoke with from the five selected states are concerned that higher corn prices and increased demand for biofuel feedstocks may encourage farmers to return CRP land to crop production. If such conversion does occur, these officials noted that water quality may further decline in the future. Little Is Yet Known about the Water Resource Implications of Next Generation Feedstocks Next generation feedstocks for biofuels have the potential for fewer negative effects on water resources, although several of the experts and officials that we spoke with said that the magnitude of these effects remains largely unknown because these feedstocks have not yet been grown on a commercial scale. These experts suggested that certain water resource impacts were likely for the following potential feedstocks: Agricultural residues, such as corn stover, collected from fields that have already been harvested, can provide feedstock for cellulosic ethanol production. The primary advantage of using agricultural residues is that they are a byproduct of crop cultivation and thus do not require additional water or nutrient inputs. However, removal of these residues has consequences for both soil and water quality, so there may be limits on how much agricultural residues can be removed for cellulosic ethanol production. 
According to the experts we spoke with, leaving crop residues unharvested on the field benefits soil quality by providing nutrients that help maintain long-term soil productivity, enhancing soil moisture retention, increasing net soil carbon, and reducing the need for nutrient inputs for future crops. In addition, leaving crop residues on the field helps prevent soil erosion due to wind and water and nutrient runoff into the water supply. Farmers could reduce the negative effects of residue removal by harvesting only corn cobs or part of the stover, but the optimal removal rate is not yet fully known, and is currently being studied by several federal agencies and academic institutions. Perennial grasses may require less water and provide some water quality benefits. Perennial grasses such as mixed prairie and switchgrass can grow with less water than corn. But some experts cautioned that any water supply benefits from these grasses will only occur if they are rainfed. For instance, officials in Minnesota told us that because the state’s crops are primarily rainfed, shifting to the cultivation of cellulosic feedstocks, like perennial grasses, without irrigation would have a minimal impact on the state’s water supply. However, other experts and local officials pointed out that if farmers choose to irrigate perennial grasses in order to achieve maximum yields and profits as they do for other crops, then producing these feedstocks could have the same detrimental effects on water supplies as do other crops. This concern was reiterated by the National Research Council, which stated that while irrigation of native grasses is unusual now, it could easily become more common as cellulosic biofuel production gets under way. Perennial grasses can also help preserve water quality by reducing soil, nutrient, and pesticide runoff. Research indicates that perennial grasses cycle nitrogen more efficiently than some row crops and protect soil from erosion due to wind and water. As a result, they can reduce the need for most fertilizers after crops are established, and the land on which these crops are grown does not need to be tilled every year, which reduces soil erosion and sedimentation. According to experts, farmers could also plant a mix of perennial grasses, which could minimize the need for pesticides by promoting greater diversity and an abundance of natural enemies for agricultural pests. In addition, perennial grasses cultivated across an agricultural landscape may help reduce nutrient and chemical runoff from farm lands. Grasses can also be planted next to water bodies to help filter out nutrients and secure soil and can serve as a windbreak to help minimize erosion. However, the type of land and cultivation methods used to grow perennial grasses will influence the extent to which they improve water quality. For instance, if perennial grasses were harvested down to the soil, they would not reduce soil erosion as compared to conventional feedstocks in the long run, according to some experts. In addition, according to some experts, if farmers choose to use fertilizers to maximize yields from these crops as they do for other crops or if these crops are grown on lands with decreased soil quality that require increased nutrient application, then cultivation of perennial grasses could also lead to water quality impairments. 
Woody biomass, such as biomass from the thinning of forests and cultivation of certain fast-growing tree varieties, could serve as feedstock for cellulosic ethanol production, according to some experts. Use of thinnings is not expected to impact water supply, as they are residuals from forest management. Thinning of forests can have the added benefit of reducing the intensity of wildfires, the aftermath of which facilitates runoff of nutrients and sediment into surface waters. Waste from urban areas or lumber mills may also provide another source of biomass that would not require additional water resources. This waste would include the woody portions of commercial, industrial, and municipal solid waste, as well as byproducts generated from processing lumber, engineered wood products, or wood particles; however, almost all of the commercial wood waste is currently used as fuel or raw material for existing products. In addition, some experts said that fast-growing tree species, such as poplar, willow, and cottonwood, are potential cellulosic feedstocks. However, these experts also cautioned that some of these varieties may require irrigation to cultivate and may have relatively high consumptive water requirements. Algae are also being explored as a possible feedstock for advanced biofuels. According to several experts, one advantage of algae is that they can be cultivated in brackish or degraded water and do not need freshwater supplies. However, currently algae cultivation is expected to consume a great deal of water, although consumption estimates vary widely, from 40 to 1,600 gallons of water per gallon of biofuel produced, according to experts, depending on what cultivation method is used. With open-air, outdoor pond cultivation, water loss is expected to be greater due to evaporation, and additional freshwater will be needed to replenish the water lost and maintain the water quality necessary for new algal growth. In contrast, when algae are cultivated in a closed environment, as much as 90 percent less water is lost to evaporation, according to one expert. The Extent to Which Biofuel Conversion May Affect Water Resources also Depends on the Feedstock Used and Biofuel Produced During the process of converting feedstocks into biofuels, biorefineries not only need a supply of high-quality water, but also discharge certain contaminants that could impact water quality. The amount of water needed and the contaminant discharge vary by type of biofuel produced and type of feedstock used in the conversion process. For example, ethanol production requires greater amounts of high-quality water than does biodiesel. Conversion of corn to ethanol requires approximately 3 gallons of water per gallon of ethanol produced, which represents a decrease from an estimated 5.8 gallons of water per gallon of ethanol in 1998. According to some experts, these gains in efficiency are, for the most part, the result of ethanol plants improving their water recycling efforts and cooling systems. According to some experts we spoke with, the biofuel conversion process generally requires high-quality water because the primary use of water in ethanol production is for cooling towers and boilers, and cleaner water transfers heat more efficiently and does less damage to this equipment. As a result, ethanol biorefineries prefer to use groundwater because it is generally cleaner, of more consistent quality, and its supply is less variable than surface water. 
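To give a sense of scale for the per-gallon conversion figures above, the following short Python sketch multiplies them by a hypothetical annual plant capacity. The 100-million-gallon-per-year capacity is an illustrative assumption of ours, not a figure from this report; the per-gallon figures for cellulosic ethanol and biodiesel discussed below can be substituted in the same way.

```python
# Illustrative arithmetic only: scaling per-gallon process-water figures to an
# annual withdrawal for a hypothetical plant. The capacity is an assumption.

HYPOTHETICAL_CAPACITY_GAL_PER_YEAR = 100_000_000  # assumed ethanol output

water_per_gallon_of_ethanol = {
    "corn ethanol, current (~3 gal water/gal)": 3.0,
    "corn ethanol, 1998 estimate (~5.8 gal water/gal)": 5.8,
}

for label, ratio in water_per_gallon_of_ethanol.items():
    annual = ratio * HYPOTHETICAL_CAPACITY_GAL_PER_YEAR
    print(f"{label}: about {annual / 1e6:.0f} million gallons of process water per year")
# At 3 gal/gal, this hypothetical plant would withdraw roughly 300 million gallons
# of water a year; at the 1998 estimate of 5.8 gal/gal, roughly 580 million gallons.
```

Withdrawals of this magnitude help explain the community concerns about drinking water and municipal supplies discussed next.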
Furthermore, the use of lesser-quality water leaves deposits on biorefinery equipment that require additional water to remove. However, despite water efficiency gains, some communities have become concerned about the potential impacts of withdrawals for biofuel production on their drinking water and municipal supplies and are pressuring states to limit ethanol facilities’ use of the water. For example, at least one Minnesota local water district denied a permit for a proposed biorefinery due to concerns about limited water supply in the area. Current estimates of the water needed to convert cellulosic feedstocks to ethanol range from 1.9 to 6.0 gallons of water per gallon of ethanol, depending on the technology used. Conversion of these next generation feedstocks is expected to use less water when compared to conventional feedstocks in the long run, according to some experts. For example, officials from a company in the process of establishing a biorefinery expect the conversion of pine and other cellulosic feedstocks to consume less water than the conversion of corn to ethanol once the plant is operating at a commercial scale. However, some researchers cautioned that the processes for converting cellulosic feedstocks currently require greater quantities of water than needed for corn ethanol. They said the technology has not been optimized and commercial-scale production has not yet been demonstrated; therefore, any estimates on water use by cellulosic biorefineries are simply projections at this time. In contrast, biodiesel conversion requires less water than ethanol conversion: approximately 1 gallon of freshwater per gallon of biodiesel. Similar to ethanol conversion, much of this water is lost during the cooling and feedstock drying processes. Biodiesel facilities can use a variety of plant and animal-based feedstocks, providing more options when choosing a location. This flexibility in type of feedstock that can be converted allows such facilities to be built in locations with plentiful water supplies, lessening their potential impact. In addition to the water supply effects, biorefineries can have water quality effects because of the contaminants they discharge. However, the type of contaminant discharged varies by the type of biofuel produced. For example, ethanol biorefineries generally discharge chemicals or salts that build up in cooling towers and boilers or are produced as waste by reverse osmosis, a process used to remove salts and other contaminants from water prior to discharge from the biorefinery. EPA officials told us that the concentrated salts discharged from reverse osmosis are a concern due to their effects on water quality and potential toxicity to aquatic organisms. In contrast, biodiesel refineries discharge other pollutants such as glycerin that may be harmful to water quality. EPA officials told us that glycerin from small biodiesel refineries can be a problem if it is released into local municipal wastewater facilities because it may disrupt the microbial processes used in wastewater treatment. Glycerin is less of a concern with larger biodiesel refineries because, according to EPA officials, it is often extracted from the waste stream prior to discharge and refined for use in other products. Several state officials we spoke with told us these discharges are generally well-regulated under the Clean Water Act. 
Under the act, refineries that discharge pollutants into federally regulated waters are required to obtain a federal National Pollutant Discharge Elimination System (NPDES) permit, either from EPA or from a state agency authorized by EPA to implement the NPDES program. These permits generally allow a point source, such as a biorefinery, to discharge specified pollutants into federally regulated waters under specific limits and conditions. State officials we spoke with reported that they closely monitor the quality of water being discharged from biofuel conversion facilities and that the facilities are required to treat their water discharges to a high level of quality, sometimes superior to the quality of the water in the receiving water body. Storage and Distribution of Biofuels Can Have Some Water Quality Consequences The storage and distribution of ethanol-blended fuels could result in water quality impacts if these fuels leak from storage tanks or the pipes used to transport them. Ethanol is highly corrosive, and there is the potential for releases into the environment that could contaminate groundwater and surface water, among other problems. When ethanol-blended fuels leak from underground storage tank (UST) and aboveground tank systems, the contamination may pose greater risks than petroleum alone. This is because the ethanol in these blended fuels causes benzene, a soluble and carcinogenic chemical in gasoline, to travel longer distances and persist longer in soil and groundwater than it would in the absence of ethanol, increasing the likelihood that it could reach some drinking water supplies. Federal officials told us that, because it is illegal to store ethanol-blended fuels in tanks not designed for the purpose, they had not encountered any concerns specific to ethanol storage. However, officials from two states did express concern about the possibility of leaks and told us that ethanol-blended fuels are still sometimes stored in tanks not designed for the fuel. For instance, one of these states reported a 700-gallon spill of ethanol-blended fuels due to the scouring of rust plugs in a UST. According to EPA officials, a large number of the 617,000 federally regulated UST systems currently in use at approximately 233,000 sites across the country are not certified to handle fuel blends that contain more than 10 percent ethanol. Moreover, according to EPA officials, most tank owners do not have records of all of the UST systems' components, such as the seals and gaskets. Glues and adhesives used in UST piping systems were not required to be tested for compatibility with ethanol-blended fuel until recently. Thus, there may be many compatible tanks used for storing ethanol-blended fuels that have incompatible system components, increasing the potential for equipment failure and fuel leakage, according to EPA officials. EPA told us that it is continuing to work with government and industry partners to study the compatibility of these components with various ethanol blends. EPA officials also stressed the importance of understanding the fate and transport of biofuels in surface water because biofuels are transported mainly by barge, rail, and truck. The officials noted that spills of biofuels or their byproducts into surface waters have already occurred.
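Before turning to regional differences, it may help to put the conversion-water figures discussed earlier in this section into rough perspective. The short sketch below estimates a biorefinery's daily process-water demand from its fuel output and the per-gallon water-use ranges cited in this report (about 3 gallons of water per gallon of corn ethanol, 1.9 to 6.0 gallons for cellulosic ethanol, and roughly 1 gallon for biodiesel). The 50-million-gallon-per-year capacity and 350 operating days are hypothetical assumptions used only to illustrate the arithmetic; they are not figures from this report.

```python
# Illustrative sketch: rough daily process-water demand for a biorefinery.
# The per-gallon water-use factors come from the ranges cited in this report;
# the 50-million-gallon-per-year capacity and 350 operating days are
# hypothetical assumptions used only to show the arithmetic.

WATER_PER_GALLON_FUEL = {          # gallons of water per gallon of fuel
    "corn ethanol": (3.0, 3.0),
    "cellulosic ethanol": (1.9, 6.0),
    "biodiesel": (1.0, 1.0),
}

def daily_water_demand(annual_fuel_gal, water_per_gal, operating_days=350):
    """Return gallons of water needed per operating day."""
    return annual_fuel_gal * water_per_gal / operating_days

if __name__ == "__main__":
    capacity = 50_000_000  # hypothetical 50-million-gallon-per-year plant
    for fuel, (low, high) in WATER_PER_GALLON_FUEL.items():
        lo = daily_water_demand(capacity, low)
        hi = daily_water_demand(capacity, high)
        print(f"{fuel}: {lo:,.0f} to {hi:,.0f} gallons of water per day")
```

Under these assumptions, a corn ethanol plant of this size would need on the order of 430,000 gallons of water per operating day, which is broadly consistent with the 100,000 to 1 million gallons per day cited for existing facilities in the discussion that follows.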
The Effect of Increased Biofuel Production Will Vary by Region, Due to Differences in Water Resources and State Laws According to many experts and officials that we contacted, as biofuel production increases, farmers and the biofuel production industry will need to consider regional differences in water supply and quality when choosing which feedstocks to grow and how and where to expand their biofuel production capacity. Specifically, they noted that in the case of cultivation, certain states may be better suited to cultivate particular feedstocks because of the amount and type of water available. Some examples they provided include the following: Certain cellulosic feedstocks, such as switchgrass, would be well-suited for areas with limited rainfall, such as Texas, because these feedstocks generally require less water and are drought tolerant. In the Midwest, switchgrass and other native perennial grasses could be grown as stream buffer strips or as cover crops, which are crops planted to keep the soil in place between primary plantings. In Georgia, some experts said pine was likely to be cultivated as a next generation biofuel feedstock because the state has relatively limited land available for cultivation and increased cultivation of pine or other woody biomass without irrigation would not cause a strain on water supplies. In the Southeast and Pacific Northwest, waste from logging operations and paper production was identified as a potential feedstock for cellulosic ethanol production. Areas with limited freshwater supplies and a ready supply of lower-quality water, such as brackish water or water from wastewater treatment plants, would be better suited to the cultivation of algae. For example, Texas was identified as a state suitable for algae cultivation because of the large amounts of brackish water in many of its aquifers, as well as its abundant sunlight and supplies of carbon dioxide from industrial facilities. Research indicates that in making decisions about feedstock production for biofuels it will be important to consider the effects that additional cultivation will have on the quality of individual water bodies and regional watersheds. Farmers need to consider local water quality effects when making decisions regarding the suitability of a particular feedstock or where to employ agricultural management practices that minimize nutrient application. In addition, state officials should consider these effects when deciding where programs such as the CRP may be the most effective. For example, experts and officials told us it will be important to identify watersheds in the Midwest that are delivering the largest nutrient loads into the Mississippi River basin and, consequently, contributing to the Gulf of Mexico dead zone, in order to minimize additional degradation that could result from increased crop cultivation in these watersheds. In addition, research has shown it is important that management practices be tailored to local landscape conditions, such as topography and soil quality, and landowner objectives, so that efforts to reduce nutrient and sediment runoff can be maximized. In the case of biofuel conversion, some experts and officials said that state regulators and industry will need to consider the availability of freshwater supplies and the quality of those supplies when identifying and approving sites for biorefineries. Currently, many biorefineries are located in areas with limited water resources. 
For instance, as figure 4 shows, many existing and planned ethanol facilities are located on stressed aquifers, such as the Ogallala, or High Plains, Aquifer. These facilities require 100,000 to 1 million gallons of water per day, and as mentioned earlier, the rate of water withdrawal from the aquifer is already much greater than its recharge rate, allowing water withdrawals in Nebraska or South Dakota to affect water supplies in other states that draw from that aquifer. Experts noted that states with enough rainfall to replenish underlying aquifers may be more appropriate locations for biorefineries. Finally, relevant water laws in certain states may influence the location of future biorefineries. Specifically, several states have enacted laws that require permits for groundwater or surface water withdrawals and this requirement could impact where biorefineries will be sited. These laws specify what types of withdrawals must be permitted by the responsible regulatory authority and the requirements for receiving a permit. For instance, Georgia’s Environmental Protection Division grants permits for certain withdrawals of groundwater and surface water, including for use by a biorefinery, when the use will not have unreasonable adverse effects on other water uses. According to state officials, there has not yet been a case where a permit for a biorefinery was denied because the amount of projected withdrawal was seen as unreasonable. In contrast, groundwater decisions are made at the local level in Texas, where more than half of the counties have groundwater conservation districts, and Nebraska. In deciding whether to issue a permit, the Texas groundwater conservation districts consider whether the proposed water use unreasonably affects either existing groundwater and surface water resources or existing permit holders, among other factors. In Nebraska, permits are only required for withdrawals and transfers of groundwater for industrial purposes. In addition, in Nebraska, where water supplies are already fully allocated in many parts of the state, natural resource districts can require biofuel conversion facilities to offset the water they will consume by reducing water use in other areas of the region. The volume of withdrawals can also factor into the need for a permit. While Texas conservation district permits are required for almost all types of groundwater wells, Georgia state withdrawal permits are only required for water users who withdraw more than an average of 100,000 gallons per day. Agricultural Practices, Technological Innovations, and Alternative Water Sources Can Mitigate Some Water Resource Effects of Biofuels Production, but There Are Barriers to Adoption Agricultural conservation practices can reduce the effects of increased biofuel feedstock cultivation on water supply and water quality, but there are several barriers to widespread adoption of these practices. Similarly, the process of converting feedstocks to biofuels, technological innovations, and the use of alternative water sources can help reduce water supply and water quality impacts, but these options can be cost prohibitive and certain noneconomic barriers to their widespread use remain. Certain Agricultural Practices Can Benefit Water Supply and Water Quality, but Barriers May Limit Widespread Adoption Many experts and officials we spoke with highlighted the importance of using agricultural conservation practices to reduce the potential effects of increased biofuel feedstock cultivation on water resources. 
These practices can reduce nutrient and pesticide runoff as well as soil erosion by retaining additional moisture and nutrients in the soil and disturbing the land less. For example, several experts and officials we spoke with said that installing and maintaining permanent vegetation areas adjacent to lakes and streams, known as riparian zones, could significantly reduce the impacts of agricultural runoff. More specifically, several experts and officials said that planting buffer strips of permanent vegetation, such as perennial grasses, or constructing or restoring wetlands in riparian areas would reduce the effects that crop cultivation can have on water quality, as shown in figure 5. Experts also identified conservation tillage practices—such as “no-till” systems or reduced tillage systems, where the previous year’s crop residues are left on the fields and new crops are planted directly into these residues—as an important way to reduce soil erosion (see fig. 6). Research conducted by USDA has shown a substantial reduction in cropland erosion since 1985, when incentives were put in place to encourage the adoption of conservation tillage practices. Another practice, crop rotation, also reduces erosion and helps replenish nutrients in the soil. This contrasts with practices such as continuous corn cultivation—in which farmers plant corn on the same land year after year instead of rotating to other crops—which often leads to decreased soil quality. Furthermore, experts identified cover crops, a practice related to crop rotation, as a way to mitigate some of the impacts of agricultural runoff. Cover crops are planted prior to or following a harvested crop, primarily for seasonal soil protection and nutrient recovery before planting the next year’s crops. These crops, which include grains or perennial grasses, absorb nutrients and protect the soil surface from erosion caused by wind and rain, especially when combined with conservation tillage practices. Experts also identified “precision agriculture” as an important tool that can reduce fertilizer runoff and water demand by closely matching nitrogen fertilizer application and irrigation to a crop’s nutrient and water needs. Precision agriculture uses technologies such as geographic information systems and global positioning systems to track crop yield, soil moisture content, and soil quality to optimize water and nutrient application rates. Farmers can use this information to tailor water, fertilizer, and pesticide application to specific plots within a field, thus potentially reducing fertilizer and pesticide costs, increasing yields, and reducing environmental impacts. Other precision agriculture tools, like low-energy precision-application irrigation and subsurface drip irrigation systems, operate at lower pressures and have higher irrigation water application and distribution efficiencies than conventional irrigation systems, as shown in figure 7. Several experts and officials said that in order to promote such practices, it is important to continue funding and enrollment in federal programs, such as USDA’s Environmental Quality Incentives Program, which pay farmers or provide education and technical support. See appendix II for an expanded discussion of agricultural conservation practices. Several experts and officials we spoke with also said that genetic engineering has the potential to decrease the water, nutrient, and pesticide requirements of biofuel feedstocks. 
According to an industry trade group, biotechnology firms are currently developing varieties of drought-resistant corn that may be available to farmers within the next several years. These varieties could significantly increase yields in arid regions of the country that traditionally require irrigation for corn production. Companies are also working to develop crops that absorb additional nutrients or use nutrients more efficiently, giving them the potential to reduce nutrient inputs and the resulting runoff. However, industry officials believe it may be up to a decade before these varieties become available commercially. Furthermore, according to EPA, planting drought-resistant crops, such as corn, may lead to increased cultivation in areas where it has not previously occurred and may result in problems including increased nutrient runoff. Experts and officials told us there are both economic and noneconomic barriers to the adoption of agricultural conservation practices. Economic barriers. According to several experts, as with any business, farming decisions are made in an attempt to maximize profits. As a result, experts told us that some farmers may be reluctant to adopt certain conservation practices that may reduce yields and profits, especially in the short term. Furthermore, experts and officials also said that some of these agricultural conservation practices can be costly, especially precision agriculture. For example, the installation of low-energy precision irrigation and subsurface drip irrigation systems is significantly more expensive than conventional irrigation systems because of the equipment needed, among other reasons. Farmers may also hesitate to switch from traditional row crops to next generation cellulosic crops because of potential problems with cash flow and lack of established markets. Specifically, it can take up to 3 years to establish a mature, economically productive crop of perennial grasses, and farmers would be hard-pressed to forgo income during this period. Moreover, farmers may not be willing to cultivate perennial grasses unless they are assured that a market exists for the crop and that they could earn a profit from its cultivation. Furthermore, efficient cultivation and harvest could require farmers to buy new equipment, which would be costly and would add to the price they would have to receive for perennial grasses in order to make a profit. Noneconomic barriers. Experts and officials we contacted said that many farmers do not have the expertise or training to implement certain practices, and some agricultural practices may be less suited for some places. For example, state officials told us that farmers usually need a year or more of experience with reduced tillage before they can achieve the same crop yields they had with conventional tillage. In addition, precision agriculture relies on technologies and equipment that require training and support. Officials told us that to help address this training need, USDA and states have programs in place that help educate farmers on how to incorporate these practices and, in some cases, provide funding to help do so. In addition, some experts and officials cited regional challenges associated with some agricultural practices and the cultivation of biofuel feedstocks. For example, these experts and officials said that the amount of agricultural residue that can be removed would vary by region and even by farm. 
Similarly, cultivation of certain cover crops as biofuel feedstocks may not be suitable in the relatively short growing seasons of northern regions. Use of Innovative Technologies and Alternative Water Sources Could Reduce the Water Resource Effects of Biorefineries, but Costs and Logistics Impede Adoption Technological improvements have already increased water use efficiency in the ethanol conversion process. Newly built biorefineries with improved processes have reduced water use dramatically over the past 10 years, and some plants have reduced their wastewater discharge to zero. Of the water that is still consumed, loss from cooling towers accounts for approximately 50 to 70 percent of water consumption in modern dry-mill ethanol plants. Some industry experts we spoke with said that further improvements in water efficiency at corn ethanol plants are likely to come from minimizing water loss from cooling towers or from using alternative water sources, such as effluent from sewage treatment plants. One alternative technology that can substantially reduce water lost through cooling towers is a dry cooling system, which relies primarily on air rather than water to transfer heat from industrial processes. In addition, some ethanol plants are beginning to replace freshwater with alternative sources of water, such as effluent from sewage treatment plants, water from retention ponds at power plants, or excess water from adjacent rock quarries. For example, a corn ethanol conversion plant in Iowa gets a third of its water from a local wastewater treatment plant. By using these alternative water sources, biorefineries can lower their use of freshwater during the conversion process. While these strategies for improving water efficiency at biorefineries show considerable promise, there are barriers to their adoption. For example, technologies such as dry cooling systems are often prohibitively expensive and can increase energy consumption. Furthermore, according to industry experts, alternative water sources can create a need for expensive wastewater treatment equipment. Some industry experts also told us that the physical layout of a conversion facility may need to be changed to make room for these improvements. Because of the considerable costs of such improvements, several experts told us, it is difficult for biorefineries to integrate these water-conserving technologies while remaining competitive in the economically strained ethanol industry. Many experts and officials stated that technological innovations for next generation biofuel conversion also have the potential to reduce the water supply and water quality impacts of increased biofuel production. For example, thermochemical production of cellulosic ethanol could require less than 2 gallons of water per gallon of ethanol produced. In addition, some next generation biofuels, known as "drop-in" fuels, are being developed that are compatible with the existing fuel infrastructure, which could reduce the risk that leaks and spills will contaminate local water bodies. For example, biobutanol is produced using fermentation processes similar to those used to make conventional ethanol, but it does not have the same corrosive properties as ethanol and could be distributed through the existing gasoline infrastructure. In addition, liquid hydrocarbons derived from algae have the potential to be converted to gasoline, diesel, and jet fuel, which also can be readily used in the existing fuel infrastructure.
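As a rough illustration of why cooling systems dominate these efficiency discussions, the sketch below combines two figures cited above, roughly 3 gallons of water consumed per gallon of corn ethanol and a 50 to 70 percent cooling-tower share of that consumption, to bound the savings a plant might see from dry cooling. Treating dry cooling as eliminating about 90 percent of cooling-tower losses is a simplifying assumption for illustration only, not a figure from this report.

```python
# Illustrative sketch: potential water savings from replacing wet cooling
# towers with dry (air) cooling at a corn ethanol plant. The 3 gal/gal
# consumption figure and the 50-70 percent cooling-tower share come from
# this report; treating dry cooling as avoiding about 90 percent of that
# share is a simplifying assumption.

TOTAL_WATER_PER_GAL_ETHANOL = 3.0      # gallons of water per gallon of ethanol
COOLING_SHARE_RANGE = (0.50, 0.70)     # share of consumption lost in cooling towers

def water_after_dry_cooling(total=TOTAL_WATER_PER_GAL_ETHANOL,
                            cooling_share=0.6,
                            fraction_eliminated=0.9):
    """Gallons of water per gallon of ethanol if a given fraction of
    cooling-tower losses is avoided."""
    return total - total * cooling_share * fraction_eliminated

if __name__ == "__main__":
    for share in COOLING_SHARE_RANGE:
        remaining = water_after_dry_cooling(cooling_share=share)
        print(f"cooling share {share:.0%}: about {remaining:.2f} gal water "
              f"per gal ethanol remains")
```

Under these assumptions, per-gallon water consumption could fall from about 3 gallons to roughly 1.1 to 1.7 gallons, which helps explain why cooling-system changes and alternative water sources feature so prominently among the options discussed above, even though dry cooling remains costly and energy intensive.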
However, while these proposed technological innovations can reduce the water resource impacts of increased biofuel production, most have not yet been demonstrated to be effective at a commercial scale, and some have not yet been demonstrated even at a pilot scale. Experts Identified a Variety of Key Research and Data Needs Related to Increased Biofuels Production and Local and Regional Water Resources Many of the experts and officials we spoke with identified areas where additional research is needed to evaluate and understand the effects of increased biofuel production on water resources. These needs fall into two broad areas: (1) research on the water effects of feedstock cultivation and conversion and (2) better data on local and regional water resources. Experts and officials identified the following research needs on the water resource effects of feedstock cultivation and conversion processes: Genetically engineered biofuel feedstocks. Many experts and officials cited the need for more research into the development of drought-tolerant and water- and nutrient-efficient crop varieties to decrease the amount of water needed for irrigation and the amount of fertilizer that needs to be applied to biofuel feedstocks. According to the National Research Council, this research should also address the current lack of knowledge on the general water requirements and evapotranspiration rates of genetically engineered crops, including next generation crops. Regarding nutrient efficiency, some experts and officials noted that research into the development of feedstocks that more efficiently take up and store nitrogen from the soil would help reduce nitrogen runoff. USDA officials added that research to determine the water requirements of conventional biofuel feedstocks and of new feedstock varieties developed specifically for biofuel production is also needed. Effects of cellulosic crops on hydrology. Many experts and officials also told us there is a need to better understand the water requirements of cellulosic crops and the impact of commercial-scale cellulosic feedstock cultivation on hydrology, which is the movement of water through land and the atmosphere into receiving water bodies. According to one expert, these feedstocks differ from corn in their life cycles, root systems, harvest times, and evapotranspiration levels, all of which may influence hydrology. In addition, some research suggests that farmers may cultivate cellulosic feedstocks on marginal or degraded lands because these lands are not currently being farmed and may be suitable for these feedstocks. However, according to the National Research Council, the evapotranspiration rates of crops grown on such lands are not well known. Effects of cellulosic crops on water quality. Many experts and officials we spoke with said research is needed to better understand the nutrient needs of cellulosic crops grown on a commercial scale. Specifically, field research is needed on the movement of fertilizer in the soil, air, and water after it is applied to these crops. One expert explained that there are water quality models that can describe what happens to fertilizer when it is applied to corn, soy, and other traditional row crops. However, such models are less precise for perennial grasses due to the lack of data from field trials. Similarly, several experts and officials told us that additional research is also needed on the potential water quality impacts from the harvesting of corn stover.
In particular, research is needed on the erosion and sediment delivery rates of different cropping systems in order to determine the acceptable rates of residue removal for different crops, soils, and locations and to develop the technology to harvest residue at these rates. Cultivation of algae. Although algae can be cultivated using lower-quality water, the impact on water supply and water quality will ultimately depend on which cultivation methods are determined to be the most viable once this nascent technology reaches commercial scale. Many experts we spoke with noted the need for research on how to more efficiently cultivate algae to minimize freshwater consumption and water quality impacts. For example, research on how to maximize the quantity of water that can be recycled during harvest will be essential to making algae a more viable feedstock option. Further research is also needed to determine whether the pathogens and predators in the lower-quality water are harmful to the algae. Research is also needed on how to manage water discharges during the cultivation and harvest of algae. Although it is expected that most water will be recycled, a certain amount must be removed to prevent the buildup of salt. This water may contain pollutants—such as nutrients, heavy metals, and accumulated toxics—that need to be removed to meet federal and state water quality standards. Data on land use. Better data are needed on what lands are currently being used to cultivate feedstocks, what lands may be most suitable for future cultivation, and how land is actually being managed, according to experts and officials. For example, some experts and officials told us there is a need for improved data on the status and trends in the CRP. According to a CRP official, USDA does not track what happens to land after it is withdrawn from the CRP. Such data would be useful because they would help officials gain a better understanding of the extent to which marginal lands are being put back into production. In addition, improved data on land use would help better target and remove the least productive lands from agricultural production, resulting in water supply and water quality benefits because these lands generally require greater amounts of inputs, according to these experts and officials. Research is also needed to determine the optimal placement of feedstocks and use of agricultural conservation practices to get the best yields and minimize adverse environmental impacts. Farmer decision making. Several experts and officials told us that a better understanding of how farmers make cultivation decisions, such as which crops to plant or how to manage their lands, is needed in the context of the water resource effects of biofuel feedstocks. Specifically, several experts and officials said that research is needed to better understand how farmers decide whether to adopt agricultural conservation practices. In particular, some experts and officials said research should explore how absentee ownership of land affects the choice of farming practices. These experts and officials told us it is common for landowners to live elsewhere and rent their farmland to someone else. For example, in Iowa, 50 percent of agricultural land is rented, according to one expert, and renters may be making cultivation decisions that maximize short-term gains rather than focusing on the long-term health of the land.
In addition, several experts and officials said that research is needed to understand the cultural pressures that may make farmers slow to adopt agricultural conservation practices. For example, some experts and officials we spoke with said that some farmers may be hesitant to move away from traditional farming approaches. Conversion. Existing and emerging technology innovations, such as those discussed earlier in the report, may be able to address some effects of conversion on water resources, but more research into optimizing current technologies is also needed, according to experts. For example, research into new technologies that further reduce water needs for biorefinery cooling systems would have a significant impact on the overall water use at a biorefinery, according to several experts. Congress is considering legislation—the Energy and Water Research Integration Act—that would require DOE’s research, development, and demonstration programs to seek to advance energy and energy efficiency technologies that minimize freshwater use, increase water use efficiency, and utilize nontraditional water sources with efforts to improve the quality of that water. It would also require the Secretary of Energy to create a council to promote and enable, in part, improved energy and water resource data collection. Similarly, with regard to conversion facilities for the next generation feedstocks, further research is needed to ensure that the next generation of biorefineries is as water efficient as possible. For example, for the conversion of algae into biofuels, research is needed on how to extract oil from algal cells so as to preserve the water contained in the cell, which would allow some of that water to be recycled. Storage and distribution. EPA officials noted that additional research related to storage and distribution of biofuels is also needed to help reduce the effects of leaks that can result from the storage of biofuel blends in incompatible tank systems. Although EPA has some research under way, more is needed into the compatibility of fuel blends containing more than 10 percent ethanol with the existing fueling infrastructure. In addition, research should evaluate advanced conversion technologies that can be used to produce a variety of renewable fuels that can be used in the existing infrastructure. Similarly, research is needed into biodiesel distribution and storage, such as assessing the compatibility of blends greater than 5 percent with the existing storage and distribution infrastructure. In addition, experts and officials identified the following needs for better data on local and regional water resources: Water availability data. Because some local aquifers and surface water bodies are already stressed, many experts called for more and better data on water resources. Although USGS reports data on water use every 5 years, the agency acknowledges that it does not have good estimates of water use for biofuel production for irrigation or fuel production, so it is unclear how much water has been or will be actually consumed with increases in cultivation and conversion of biofuel feedstocks. Furthermore, some experts and officials told us that even when local water data are available, the data sources are often inconsistent or out of date. For example, the data may capture different information or lack the information necessary for making decisions regarding biofuel production. 
According to several experts and officials, better data on water supplies would also help ensure that new biorefineries are built in areas with enough water for current and future conversion processes. Although biorefineries account for only a small percentage of water used during the biofuel production process, the additional withdrawals from aquifers can affect other users that share these water sources. Improving water supply data would help determine whether the existing water supplies can support the addition of a biorefinery in a particular area. Some experts also noted the need for research on the availability of lower-quality water sources such as brackish groundwater, which could be used for cultivation of some next generation feedstocks, especially algae. Better information is necessary to better define the spatial distribution, depth, quantity, physical and chemical characteristics, and sustainable withdrawal rates for these lower-quality water sources, and to predict the long-term effects of water extraction. Linkages between datasets. Some experts also cited a need for better linkages between existing datasets. For example, datasets on current land use could be combined with aquifer data to help determine what land is available for biofuel feedstock cultivation that would have minimal effects on water resources. In addition, some experts said that while there are data that state agencies and private engineering companies have collected on small local aquifers, a significant effort would be required to identify, coordinate, and analyze this information because linkages do not currently exist. Geological process data. Several experts and officials also said that research into geological processes is needed to understand the rate at which aquifers are replenished and the impact of increased biofuel production on those aquifers. Although research suggests there should be sufficient water resources to meet future biofuel feedstock production demands at a national level, increased production may lead to significant water shortages in certain regions. For example, additional withdrawals in states relying heavily on irrigation for agriculture may place new demands on already stressed aquifers in the Midwest. Even in water-rich states, such as Iowa, concerns have arisen over the effects of increased biofuel production, and research is needed to assess the hydrology and quality of a state’s aquifers to help ensure it is on a path to sustainable production, according to one state official. Agency Comments and Our Evaluation We provided a draft of this report to USDA, DOE, DOI, and EPA for review and comment. USDA generally agreed with the findings of our report and provided several comments for our consideration. Specifically, USDA suggested that we consider condensing our discussion of agricultural practices, equipment, and grower decisions, as these items may or may not be relevant depending on the feedstock or regulatory control. However, we made no revisions to the report because we believe that cultivation is a significant part of the biofuels life cycle, and these items are relevant and necessary to consider when discussing the potential effect of biofuel production on water resources. USDA also noted that the report is more focused on corn ethanol production than next generation biofuels and that we had not adequately recognized industry efforts to be more sustainable through a movement toward advanced biofuels. 
Given the maturity of the corn ethanol industry, the extent of knowledge about the effects on water supply and quality from cultivation of corn and its conversion into ethanol, and the uncertainty related to the effects of next generation biofuel production, we believe the balance in the report is appropriate. Moreover, although the shift toward next generation biofuels is a positive step in terms of sustainability, this industry is still developing and the full extent of the environmental benefits from this shift is still unknown. USDA also provided technical comments, which we incorporated as appropriate. See appendix III for USDA’s letter. DOE generally agreed with our findings and approved of the overall content of the report and provided several comments for our consideration. Specifically, DOE noted that it may be too early to make projections on the amount of CRP land that will be converted and the amount of additional inputs that will be needed for cultivation of biofuel feedstocks. In addition, DOE suggested we expand our discussion of efforts to address risks of ethanol transport and note the water use associated with the production of biomass-to-liquid fuels. We adjusted the text as appropriate to reflect these suggestions. DOE also suggested that the report should discuss water pricing; however, this was outside the scope of our review. See appendix IV for DOE’s letter. In its general comments, DOI stated that the report is useful and agreed with the finding on the need for better data on water resources to aid the decision about where to cultivate feedstocks and locate biorefineries. DOI also suggested that the report should include a discussion of the other environmental impacts of biofuel production, such as effects on wildlife habitat or effects on soil. In response, we note that this report was specifically focused on the impacts of biofuel production on water resources; however, for a broader discussion of biofuel production, including other environmental effects, see our August 2009 report. DOI also provided additional technical comments that we incorporated into the report as appropriate. See appendix V for DOI’s letter. EPA did not submit formal comments, but did provide technical comments that we incorporated into the final report as appropriate. We are sending copies of this report to interested congressional committees; the Secretaries of Agriculture, Energy, and the Interior; the Administrator of the Environmental Protection Agency; and other interested parties. In addition, the report will be available at no charge on the GAO Web site at http://www.gao.gov. If you or your staff have questions about this report, please contact us at (202) 512-3841 or [email protected] or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix VI. Appendix I: Objectives, Scope, and Methodology Our objectives for this review were to describe (1) the known water resource effects of biofuel production in the United States; (2) the agricultural conservation practices and technological innovations that exist or are being developed to address these effects and any barriers that may prevent the adoption of these practices and technologies; and (3) key research needs regarding the effects of biofuel production on water resources. 
To address each of these objectives, we conducted a systematic analysis of relevant scientific articles, U.S. multidisciplinary studies, and key federal and state government reports addressing the production of biofuels and its impact on water supply and quality, including impacts from the cultivation of biofuel feedstock and water use and effluent release from biofuel conversion processes. In conducting this review, we searched databases such as SciSearch, Biosis Previews, and ProQuest and used a snowball technique to identify additional studies, asking experts to identify relevant studies and reviewing studies from article bibliographies. We reviewed studies that fit the following criteria for selection: (1) the research was of sufficient breadth and depth to provide observations or conclusions directly related to our objectives; (2) the research was targeted specifically toward projecting or demonstrating effects of increased biofuel feedstock cultivation, conversion, and use on U.S. water supply and water quality; and (3) the research was typically published from 2004 to 2009. We examined key assumptions, methods, and relevant findings of major scientific articles, primarily on water supply and water quality. We believe we have included the key scientific studies and have qualified our findings where appropriate. However, it is important to note that, given our methodology, we may not have identified all of the studies with findings relevant to these three objectives. Where applicable, we assessed the reliability of the data we obtained and found them to be sufficiently reliable for our purposes. In collaboration with the National Academy of Sciences, we identified and interviewed recognized experts affiliated with U.S.-based institutions, including academic institutions, the federal government, and research-oriented entities. These experts have (1) published research analyzing the water resource requirements of one or more biofuel feedstocks and the implications of increased biofuels production on lands with limited water resources, (2) analyzed the possible effects of increased biofuel production on water, or (3) analyzed the water impacts of biofuels production and use. Together with the National Academy of Sciences' lists of experts, we identified authors of key agricultural and environmental studies as a basis for conducting semistructured interviews to assess what is known about the effects of the increasing production of biofuels and important areas that need additional research. The experts we interviewed included research scientists in such fields as environmental science, agronomy, soil science, hydrogeology, ecology, and engineering. Furthermore, to gain an understanding of the programs and plans that states have in place or are developing to address increased biofuel production, we conducted in-depth reviews of the following five states: Georgia, Iowa, Minnesota, Nebraska, and Texas. We selected these states based on a number of criteria: ethanol and biodiesel production levels, feedstock cultivation type, reliance on irrigation, geographic diversity among states currently producing biofuels, and approaches to water resource management and law. For each of the states, we analyzed documentation from and conducted interviews with a wide range of stakeholders to gain the views of diverse organizations covering all stages of biofuel production.
These stakeholders included relevant state agencies, including those responsible for oversight of agriculture, environmental quality, and water and soil resources; federal agency officials with responsibility for a particular state or region, such as officials from the Department of the Interior’s U.S. Geological Survey (USGS), the U.S. Department of Agriculture’s (USDA) Natural Resources Conservation Service, and the Environmental Protection Agency (EPA); university researchers; industry representatives; feedstock producers; and relevant nongovernmental organizations, such as state-level corn associations, ethanol producer associations, and environmental organizations. We also conducted site visits to Iowa and Texas to observe agricultural practices and the operation of selected biofuels production plants. We also interviewed senior officials, scientists, economists, researchers, and other federal officials from USDA, the Departments of Defense and Energy, EPA, the National Aeronautics and Space Administration, the Department of Commerce’s National Oceanic and Atmospheric Administration, the National Science Foundation, and USGS about effects on the water supply and water quality during the cultivation of biofuel feedstocks and the conversion and storage of the finished biofuels. In addition, we interviewed state officials from Georgia, Iowa, Minnesota, Nebraska, and Texas as well as agricultural producers and representatives of biofuel conversion facilities to determine the impact of biofuels production in each state. We also interviewed representatives of nongovernmental organizations, such as the Renewable Fuels Association, the Biotechnology Industry Organization, the Pacific Institute, and the Fertilizer Institute. To conduct the interview content analysis, we reviewed interviews, selected relevant statements from the interviews, and identified and labeled trends using a coding system. Codes were based on trends identified by previous GAO biofuel-related work, background information collected for the review, and the interviews for this review. The methodology for each objective varied slightly, because the first objective focused on regional differences and therefore relied on case study interviews, while analysis performed for the remaining two objectives used expert interviews in addition to case study interviews. Once relevant data were extracted and coded, we used the coded data to identify and analyze trends. For the purposes of reporting our results, we used the following categories to quantify responses of experts and officials: “some” refers to responses from 2 to 3 individuals, “several” refers to responses from 4 to 6 individuals, and “many” refers to responses from 7 or more individuals. We conducted our work from January 2009 to November 2009 in accordance with all sections of GAO’s Quality Assurance Framework that are relevant to our objectives. The framework requires that we plan and perform the engagement to obtain sufficient and appropriate evidence to meet our stated objectives and to discuss any limitations in our work. We believe that the information and data obtained, and the analysis conducted, provide a reasonable basis for any findings and conclusions in this product. Appendix II: Examples of Agricultural Practices Available to Reduce the Water Quality and Water Supply Effects of Feedstock Cultivation for Biofuels Any tillage method that leaves a portion of the previous crop residues (unharvested portions of the crop) on the soil surface. 
Benefits: Reduces soil erosion caused by tillage and exposure of bare soil to wind and water; reduces water lost to evaporation; improves soil quality; reduces sediment and fertilizer runoff.

Practice: Method that leaves soil and crop residue undisturbed except for the crop row where the seed is placed in the ground. Benefits: Reduces soil erosion caused by tillage and exposure of bare soil to wind and water; reduces water lost to evaporation; improves soil quality by improving soil organic matter; reduces sediment and fertilizer runoff.

Practice: A close-growing crop that temporarily protects the soil during the interim period before the next crop is established. Benefits: Reduces erosion; reduces nitrate leaching; integrates crops that store nitrogen from the atmosphere (such as soy), replacing the nitrogen that corn and other grains remove from the soil; reduces pesticide use by naturally breaking the cycle of weeds, insects, and diseases; improves soil quality by improving soil organic matter.

Practice: Change in the crops grown in a field, usually in a planned sequence. For example, crops could be grown in the sequence corn-soy-corn rather than in continuous corn. Benefits: Integrates crops that obtain nitrogen from the atmosphere (such as soy), replacing the nitrogen that corn and other grains remove from the soil; reduces pesticide use by naturally breaking the cycle of weeds, insects, and diseases.

Practice: Use of nutrients to match the rate, timing, form, and application method of fertilizer to crop needs. Benefits: Reduces nutrient runoff and leaching.

Practice: Injection of fertilizer below the soil surface. Benefits: Reduces runoff and gaseous emission from nutrients.

Practice: Use of fertilizers with water-insoluble coatings that can prevent water-soluble nitrogen from dissolving. Benefits: Reduces nutrient runoff and leaching; increases the efficiency of the way nutrients are supplied to and taken up by the plant, regardless of the crop.

Practice: Water control structures, such as a flashboard riser, installed in the drainage outlet that allow the water level to be raised or lowered as needed. Benefits: Minimizes transport of nutrients to surface waters.

Practice: Irrigation systems buried directly beneath the crop that apply water directly to the root zone. Benefits: Minimizes water lost to evaporation and runoff.

Practice: Irrigation systems that operate at lower pressures and have higher irrigation-water application and distribution efficiencies. Benefits: Minimizes net water loss and energy use.

Practice: Water recovered from domestic, municipal, and industrial wastewater treatment plants that has been treated to standards that allow safe reuse for irrigation. Benefits: Reduces demand on surface and ground waters.

Practice: Restoration of a previously drained wetland by filling ditches or removing or breaking tile drains. Benefits: Reduces flooding downstream; filters sediment, nutrients, and chemicals; provides habitat for wetland plants, amphibians, and birds.

Practice: Strips or small areas of land planted along waterways in permanent vegetation that help control pollutants and promote other environmental benefits. Benefits: Traps sediment; filters nutrients; provides habitat and corridors for fish and wildlife.

Practice: A system of management of site-specific inputs (e.g., fertilizer, pesticides), such as land preparation for planting, seed, fertilizers and nutrients, and pest control. Precision agriculture may be able to maximize farm production efficiency while minimizing environmental effects. Key technological tools used in this approach include global positioning systems, geographic information systems, real-time soil testing, and real-time weather information. Benefits: Reduces nutrient runoff and leaching; reduces erosion; reduces pesticide use.

Appendix III: Comments from the U.S. Department of Agriculture

Appendix IV: Comments from the Department of Energy

Appendix V: Comments from the Department of the Interior

Appendix VI: GAO Contacts and Staff Acknowledgments

GAO Contacts

Staff Acknowledgments

In addition to the contact named above, Elizabeth Erdmann, Assistant Director; JoAnna Berry; Mark Braza; Dave Brown; Muriel Brown; Colleen Candrl; Miriam Hill; Carol Kolarik; Micah McMillan; Chuck Orthman; Tim Persons; Nicole Rishel; Ellery Scott; Ben Shouse; Jeanette Soares; Swati Thomas; Lisa Vojta; and Rebecca Wilson made significant contributions to this report.
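For readers who want the response-quantification rule from appendix I in compact form, the sketch below restates it as a small function: "some" for 2 to 3 individuals, "several" for 4 to 6, and "many" for 7 or more. This is a reader's illustration of the stated rule only, not GAO's actual analysis tooling.

```python
# Illustrative sketch of the response-quantification rule described in
# appendix I: "some" = 2-3 individuals, "several" = 4-6, "many" = 7 or more.
# This restates the rule for convenience; it is not GAO's analysis code.

def quantifier(count: int) -> str:
    """Map the number of experts or officials giving a response to the
    report's descriptive category."""
    if count >= 7:
        return "many"
    if count >= 4:
        return "several"
    if count >= 2:
        return "some"
    return "one or none (not categorized in the report)"

assert quantifier(3) == "some"
assert quantifier(5) == "several"
assert quantifier(12) == "many"
```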
In response to concerns about the nation's energy dependence on imported oil, climate change, and other issues, the federal government has encouraged the use of biofuels. Water plays a crucial role in all stages of biofuel production--from cultivation of feedstock through its conversion into biofuel. As demand for water from various sectors increases and places additional stress on already constrained supplies, the effects of expanded biofuel production may need to be considered. To understand these potential effects, GAO was asked to examine (1) the known water resource effects of biofuel production in the United States; (2) agricultural conservation practices and technological innovations that could address these effects and any barriers to their adoption; and (3) key research needs regarding the effects of biofuel production on water resources. To address these issues, GAO reviewed scientific studies, interviewed experts and federal and state officials, and selected five states to study their programs and plans related to biofuel production. GAO is not making any recommendations in this report. A draft of this report was provided to the Departments of Agriculture (USDA), Energy (DOE), and the Interior (DOI); and the Environmental Protection Agency (EPA). USDA, DOE, and DOI concurred with the report and, in addition to EPA, provided technical comments, which were incorporated as appropriate. The extent to which increased biofuels production will affect the nation's water resources depends on the type of feedstock selected and how and where it is grown. For example, to the extent that this increase is met from the cultivation of conventional feedstocks, such as corn, it could have greater water resource impacts than if the increase is met by next generation feedstocks, such as perennial grasses and woody biomass, according to experts and officials. This is because corn is a relatively resource-intensive crop that, in certain parts of the country, requires considerable irrigation water as well as fertilizer and pesticide application. However, experts and officials noted that next generation feedstocks have not yet been grown on a commercial scale, and therefore their actual effects on water resources are not fully known at this time. Water is also used in the process of converting feedstocks to biofuels, and while the efficiency of biorefineries producing corn ethanol has increased over time, the amount of water required for converting next generation feedstocks into biofuels is still not well known. Finally, experts generally agree that it will be important to take into account the regional variability of water resources when choosing which feedstocks to grow and how and where to expand their production in the United States. The use of certain agricultural practices, alternative water sources, and technological innovations can mitigate the effects of biofuels production on water resources, but there are some barriers to their widespread adoption. According to experts and officials, agricultural conservation practices can reduce water use and nutrient runoff, but they are often costly to implement. Similarly, alternative water sources, such as brackish water, may be viable for some aspects of the biofuel conversion process and can help reduce biorefineries' reliance on freshwater. However, the high cost of retrofitting plants to use these water sources may be a barrier, according to experts and officials.
Finally, innovations--such as dry cooling systems and thermochemical processes--have the potential to reduce the amount of water used by biorefineries, but many of these innovations are currently not economically feasible or remain untested at the commercial scale. Many of the experts GAO spoke with identified several areas where additional research is needed. These needs fall into two broad areas: (1) feedstock cultivation and biofuel conversion and (2) data on water resources. For example, some experts noted the need for further research into improved crop varieties, which could help reduce water and fertilizer needs. In addition, several experts identified research that would aid in developing next generation feedstocks. For example, several experts said research is needed on how to increase cultivation of algae for biofuel to a commercial scale and how to control for potential water quality problems. In addition, several experts said research is needed on how to optimize conversion technologies to help ensure water efficiency. Finally, some experts said that better data on water resources in local aquifers and surface water bodies would aid in decisions about where to cultivate feedstocks and locate biorefineries.
Introduction Since World War I, the United States has maintained a stockpile of chemical agents and munitions to deter the use of chemical weapons against its troops. From 1917 through the 1960s, obsolete or unserviceable chemical agents and munitions were disposed of by open-pit burning, burial, and ocean dumping. However, because of public concern about the potential effects of these disposal methods on public health and the environment, they were discontinued during the 1970s. In 1985, the Congress required the Department of Defense (DOD) to carry out the destruction of the U.S. stockpile of chemical agents and munitions and establish an organization within the Army to be responsible for the disposal program. Over time, the Congress also directed DOD to dispose of chemical warfare materiel not included in the stockpile and to research and develop technological alternatives for disposing of chemical agents and munitions. In April 1997, the U.S. Senate ratified the U.N.-sponsored Chemical Weapons Convention, effectively agreeing to dispose of the chemical stockpile weapons and chemical warfare materiel by April 29, 2007. If a country is unable to maintain the convention's disposal schedule, the convention's management organization may grant an extension of up to 5 years. Elements of the Chemical Demilitarization Program Since the 1980s, DOD's chemical weapons disposal activities have evolved into the current program, known as the Chemical Demilitarization Program. It now consists of the Chemical Stockpile Disposal Project, the Chemical Stockpile Emergency Preparedness Project, the Nonstockpile Chemical Materiel Product, the Alternative Technologies and Approaches Project, and the Assembled Chemical Weapons Assessment Program. Chemical Stockpile Disposal Project In section 1412 of the Fiscal Year 1986 Department of Defense Authorization Act (P.L. 99-145), the Congress directed DOD to destroy the U.S. stockpile of lethal chemical agents and munitions that existed on the date of the legislation's enactment. The original stockpile consisted of 31,496 tons of nerve and mustard agents contained in rockets, bombs, projectiles, spray tanks, and bulk containers. Some munitions contained nerve agents, which can disrupt the nervous system and lead to loss of muscular control and death. Others contained mustard agents, which blister the skin and can be lethal in large amounts. The stockpile is stored at eight sites in the continental United States and on Johnston Atoll in the Pacific Ocean, as shown in figure 1. In 1988, the Army formally announced its Chemical Stockpile Disposal Project and stated that incineration on site at each of the existing stockpile locations was the preferred disposal method. The objectives of the program are to (1) destroy the stockpile of chemical weapons and (2) provide maximum protection to the environment, the public, and personnel involved in the storage, handling, and disposal of the stockpile. To destroy the weapons, the Army uses a "reverse-assembly" procedure that drains the chemical agent from the weapons and takes them apart in the reverse order of assembly. Once disassembled, the chemical agents and weapons are incinerated in separate furnaces. As of January 30, 2000, the Army had incineration-based disposal operations under way at two sites, had destroyed approximately 17.7 percent of the original chemical stockpile, and had started construction of facilities for future disposal operations at five other sites.
The two operational disposal facilities are at the following locations: Johnston Atoll is located in the Pacific Ocean about 825 miles southwest of Hawaii. The Army completed construction of its baseline incineration disposal facility in July 1988 and started incineration operations in June 1990. It is the world’s first full-scale facility designed specifically for the disposal of chemical weapons and is the prototype plant for the destruction program. The stockpile at Johnston Atoll originally contained 2,031 tons of nerve and mustard agents and represented 6.4 percent of the original stockpile. Deseret Chemical Depot is located about 22 miles south of Tooele, Utah. The Army completed construction of the disposal facility in August 1993 and started incineration operations in August 1996. The Army considers this disposal facility to be a first generation incineration facility, and as at the Johnston Atoll facility, it expects to apply lessons learned from its operations to other disposal sites. The Deseret stockpile originally contained 13,616 tons of nerve and mustard agents, representing 43.2 percent of the original stockpile. Three other baseline incineration disposal facilities are under construction. In June 1997, the Army started construction of both the Umatilla facility, located about 7 miles from Hermiston, Oregon, and the Anniston facility, located about 50 miles east of Birmingham, Alabama. The Umatilla stockpile contains 3,717 tons of nerve and mustard agents, or 11.8 percent of the original stockpile. The Anniston stockpile contains 2,254 tons of nerve and mustard agents, or 7.2 percent of the original stockpile. These facilities are scheduled to begin destroying agents in late 2001 or early 2002. Construction of the Pine Bluff facility, located about 35 miles southeast of Little Rock, Arkansas, started in 1999. The Pine Bluff stockpile contains 3,850 tons of agents, or 12.2 percent of the original stockpile. The Army plans to begin destroying chemical agents at the Pine Bluff facility in late 2003. Chemical Stockpile Emergency Preparedness Project In 1988, DOD established the Chemical Stockpile Emergency Preparedness Project to help communities in 10 states near the stockpile storage sites enhance their emergency management and response capabilities in the unlikely event of a chemical stockpile accident. The project, a companion to the Chemical Stockpile Disposal Project, is necessary to help protect the civilian population, workers, and the environment until disposal of the chemical stockpile is complete. Since 1988, the Army and the Federal Emergency Management Agency (FEMA) have assisted the civilian communities in the vicinity of the eight chemical stockpile storage locations and the storage installations in enhancing their emergency response capabilities. In 1997, the Army and FEMA implemented a management structure under which FEMA assumed responsibility for off-post (civilian community) program activities, while the Army continued to manage on-post chemical emergency preparedness and to provide technical support for both on- and off-post activities. FEMA, with its long-standing knowledge and experience in preparing for and dealing with emergencies of all kinds, provides its expertise, guidance, training, and other support to the civilian community. The Agency also administers the grant funds provided to the states and counties where stockpile facilities are located in order to carry out the program’s off-post activities. 
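As a minimal illustration, the percentages cited for each site follow directly from the 31,496-ton original stockpile noted earlier; the tonnages below are those given above, and rounding to one decimal place is an assumption about how the published percentages were derived.

# Recompute each site's share of the original 31,496-ton stockpile.
ORIGINAL_STOCKPILE_TONS = 31_496

site_tons = {
    "Johnston Atoll": 2_031,
    "Deseret Chemical Depot (Tooele), Utah": 13_616,
    "Umatilla, Oregon": 3_717,
    "Anniston, Alabama": 2_254,
    "Pine Bluff, Arkansas": 3_850,
}

for site, tons in site_tons.items():
    share = 100 * tons / ORIGINAL_STOCKPILE_TONS
    print(f"{site}: {tons:,} tons = {share:.1f} percent of the original stockpile")
# Prints 6.4, 43.2, 11.8, 7.2, and 12.2 percent, matching the figures in the text.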
Nonstockpile Chemical Materiel Product Recognizing that the stockpile program did not include all chemical warfare materiel, the Congress directed DOD to plan for the disposal of materiel not included in the stockpile. Consequently, DOD implemented the Nonstockpile Chemical Materiel Product to identify the locations, types, and quantities of chemical materiel not included in the stockpile; develop and implement disposal and transportation methods and procedures; and develop plans, schedules, and cost estimates to implement the program. This materiel, some of which dates as far back as World War I, consists of binary chemical warfare materiel, miscellaneous chemical warfare materiel, recovered chemical warfare materiel, former production facilities, and buried chemical warfare materiel. These items are described in table 1. The locations of the nonstockpile chemical materiel, as of January 31, 2000, are shown in figure 2. Since the early 1990s, the Army has undertaken research and development on several transportable systems that are designed to identify, access, and treat chemical agents in nonstockpile munitions and decontaminate the containers and munitions. All disposal methods are to comply with federal and the affected state’s environmental and safety regulations. Until recently, the Army emphasized the use of transportable treatment systems because of the relatively small quantities and the characteristics of nonstockpile chemical weapon materiel located at a potentially large number of sites throughout the country. Alternative Technologies and Approaches Project In November 1991, because of public concern about the safety of incineration, the Army requested the National Research Council to evaluate potential technological alternatives to the baseline incineration process. In the 1993 Defense Authorization Act (sec. 173), the Congress directed the Army to use the Council’s evaluation and report on potential technological alternatives to incineration. The Congress also directed the Army to consider safety, environmental protection, and cost-effectiveness when evaluating alternative technologies. Consequently, in August 1994, the Army initiated the Alternative Technologies and Approaches Project, a more aggressive research and development program, to investigate, develop, and support the testing of two technologies based on chemical neutralization of chemical agents at the bulk-only stockpile sites: Aberdeen, Maryland, and Newport, Indiana. The project focused on these two sites because they have only one type of chemical agent stored in large steel bulk containers. The Army is conducting this project in conjunction with the baseline incineration program. In 1997, the project proceeded with full-scale pilot testing of the neutralization technologies at the following two stockpile sites: Edgewood Chemical Activity is located at the Edgewood Area of the Aberdeen Proving Ground, north of Baltimore, Maryland. The Aberdeen stockpile consists of 1,625 tons (or 5.2 percent of the original stockpile) of mustard agent stored in 1,818 ton containers. These containers are designed for safe storage of bulk chemical agents and do not have fuzes, warheads, or other explosive devices. The disposal technology being tested at Aberdeen is neutralization followed by the biodegradation process. The environmental permit was obtained from the state in February 1999, enabling the start of site preparation activities and construction. 
Newport Chemical Depot is located 2 miles south of Newport and 32 miles north of Terre Haute in western Indiana. The Newport stockpile consists of 1,269 tons (or 4 percent of the original stockpile) of nerve agent stored in 1,690 ton containers. The technology being tested at Newport was neutralization followed by a supercritical water oxidation process. The environmental permit was obtained from the state in December 1999, enabling the start of construction activities. Assembled Chemical Weapons Assessment Program In the National Defense Authorization Act for Fiscal Year 1997, the Congress directed DOD to assess alternative technologies to the baseline incineration process for the disposal of assembled chemical munitions. In addition, section 8065 of the Department of Defense Appropriations Act, 1997, provided $40 million to conduct the Assembled Chemical Weapons Assessment Program, a pilot program to identify and demonstrate two or more alternatives to the incineration process for the destruction of assembled chemical munitions. The appropriations act required the Under Secretary of Defense for Acquisition and Technology to designate a program manager who was not, nor had been, in direct or immediate control of the baseline incineration program to carry out the pilot program. The act also prohibited DOD from obligating any funds for constructing incineration facilities at Blue Grass, Kentucky, and Pueblo, Colorado, until 180 days after the Secretary of Defense reports on alternative disposal methods for assembled chemical weapons. The Strom Thurmond National Defense Authorization Act for Fiscal Year 1999 requires that if a technology other than incineration is selected for these sites, the Under Secretary of Defense must certify in writing to the Congress that the alternative is (1) as safe and cost-effective for disposing of assembled chemical munitions as is incineration, (2) capable of completing the destruction of such munitions on or before the later date of either when the destruction would be completed if incineration were used or the convention’s deadline, and (3) capable of satisfying federal and state environmental and safety laws. Because of the legislative prohibition on obligating any funds for constructing incineration facilities and the states’ unwillingness to accept incineration as a disposal method, the Army tentatively chose the Blue Grass and Pueblo depots to test alternative technologies for destroying the assembled chemical weapons. The Blue Grass Army Depot, located in central Kentucky, has a stockpile of weapons containing 523 tons of nerve and mustard agents, or 1.7 percent of the original stockpile. The Pueblo Chemical Depot, located about 14 miles east of Pueblo, Colorado, has a stockpile of weapons containing about 2,611 tons of mustard agent, or 8.3 percent of the original stockpile. The program involves a three-phased approach that includes the development of technology evaluation criteria, technology assessment, and demonstration of not less than two technologies. The public has thus far participated in all phases of the program. During criteria development in mid-1997, the program office developed three sets of criteria to select proposals of technologies worthy of further evaluation, demonstration, and implementation. From September 1997 through June 1998, the program office selected six technologies as worthy of demonstration from 12 proposals. 
However, because of funding constraints, the program office selected only three of the six technologies for further testing. This selection was based on the program office’s evaluation of the demonstration plans and determination of each technology’s value to the government. In May 1998, the program office determined that neutralization followed by supercritical water oxidation was a viable solution for destroying assembled chemical weapons containing either nerve or mustard agents. It determined that neutralization followed by biodegradation was a viable solution for destroying assembled weapons containing mustard agents. The conference report for the National Defense Authorization Act for Fiscal Year 2000 stated that the conferees had been advised that DOD intended to conduct evaluations of the three technologies previously selected for the demonstration program, but which had not been tested because of funding constraints. In addition, the conference report noted that DOD had decided to spend $40 million for this purpose. In the conference report for the Department of Defense Appropriations Act for Fiscal Year 2000, the conferees directed DOD to make available $40 million to conduct demonstration testing of the three additional alternative technologies. Program officials are now using these funds to demonstrate these technologies. As of February 28, 2000, no decision had been made on which of the alternative technologies or the baseline incineration process would be used to destroy the chemical stockpile in Kentucky or Colorado. To increase public awareness and acceptance, the program office established, with the assistance of the Keystone Center, the Dialogue on Assembled Chemical Weapons Assessment. The Dialogue includes representatives of the affected communities, national citizens’ groups, state regulatory agencies, Native American tribes, the Environmental Protection Agency, and the Departments of Defense and the Army and participates in the Army’s decision-making process for the program. Management Structure of the Chemical Demilitarization Program DOD and Army managers at several different levels share management roles and responsibilities for elements of the Chemical Demilitarization Program. The Assistant Secretary of the Army (Acquisition, Logistics and Technology) oversees the chemical stockpile, nonstockpile, and alternative technologies and approaches projects. The Program Manager for Chemical Demilitarization manages the daily operations of these projects. The office of the program manager is organized into distinct project areas: the Project Manager for Chemical Stockpile Disposal, the Product Manager for Nonstockpile Chemical Materiel, and the Project Manager for Alternative Technologies and Approaches. The Assistant Secretary of the Army (Installations and Environment) and FEMA share management responsibilities for the Chemical Stockpile Emergency Preparedness Project. The U.S. Army Soldier and Biological Chemical Command manages on-post Army activities for the Assistant Secretary of the Army, while FEMA manages the off-post portion of the program in the civilian communities. The Under Secretary of Defense (Acquisition and Technology) oversees the Assembled Chemical Weapons Assessment Program. The Program Manager for Assembled Chemical Weapons Assessment manages daily operations. The management structure for the program is discussed in more detail in chapter 4. 
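As a quick reference, the management relationships described above can be summarized as a simple mapping; the program-element names and titles are taken from the text, and the tabular form is only an illustrative restatement.

# Summarize which official oversees and which office manages each program element.
management = {
    "Chemical Stockpile Disposal Project":
        ("Assistant Secretary of the Army (Acquisition, Logistics and Technology)",
         "Program Manager for Chemical Demilitarization"),
    "Nonstockpile Chemical Materiel Product":
        ("Assistant Secretary of the Army (Acquisition, Logistics and Technology)",
         "Program Manager for Chemical Demilitarization"),
    "Alternative Technologies and Approaches Project":
        ("Assistant Secretary of the Army (Acquisition, Logistics and Technology)",
         "Program Manager for Chemical Demilitarization"),
    "Chemical Stockpile Emergency Preparedness Project":
        ("Assistant Secretary of the Army (Installations and Environment) with FEMA",
         "U.S. Army Soldier and Biological Chemical Command (on post); FEMA (off post)"),
    "Assembled Chemical Weapons Assessment Program":
        ("Under Secretary of Defense (Acquisition and Technology)",
         "Program Manager for Assembled Chemical Weapons Assessment"),
}

for element, (oversight, manager) in management.items():
    print(f"{element}\n  oversight: {oversight}\n  day-to-day management: {manager}")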
International Efforts to Eliminate Chemical Agents and Weapons In 1993, the United States, Russia, and more than 150 nations signed the U.N.-sponsored Convention on the Prohibition of the Development, Production, Stockpiling and Use of Chemical Weapons and on Their Destruction, commonly referred to as the Chemical Weapons Convention. In October 1996, the 65th nation ratified the Chemical Weapons Convention, making the convention effective on April 29, 1997. On April 24, 1997, the Senate ratified the convention, committing the United States to dispose of its unitary chemical weapons, binary chemical warfare materiel, recovered chemical warfare materiel, and former chemical weapons production facilities by April 29, 2007. The Army classifies unitary chemical weapons as chemical stockpile weapons and classifies binary chemical warfare materiel, recovered chemical warfare materiel, former chemical weapons production facilities, and miscellaneous chemical warfare materiel as nonstockpile chemical materiel. If a country is unable to meet the convention’s disposal schedule, the convention’s Organization for the Prohibition of Chemical Weapons may grant an extension, although in no case may the deadline be extended past April 29, 2012. Our Prior Concerns With the Chemical Demilitarization Program Since its beginning, the Chemical Demilitarization Program has been beset by controversy over disposal methods, delays of 2 to 3 years beyond what the Army anticipated in obtaining needed federal and state environmental permits and other approvals, and increasing costs. In prior reports, we expressed concern about the Army’s lack of progress and the rising cost of the program. For example, in 1991 we reported that continued problems in the program indicated that increased costs and additional time to destroy the chemical stockpile should be expected and recommended that the Army determine whether faster and less costly technologies were available to destroy the stockpile. In a 1994 report on the nonstockpile program, we concluded that the Army’s plans for disposing of nonstockpile chemical warfare materiel were not final and that its costs were likely to change. In 1997, we reported that the program cost and schedule were largely driven by the degree to which DOD and the affected states and communities agreed with the proposed method to dispose of the chemical weapons and materiel. In July 1999, we reported that, although sizable unliquidated obligations were reported for the program from prior years, program funds did not appear to be available for other uses. See related GAO products at the end of this report. Our objectives, scope, and methodology are described in appendix I. Most Chemical Weapons and Materiel Could Be Destroyed Before the Convention’s 2007 Deadline The Army has destroyed approximately 17.7 percent of the original chemical weapons stockpile and could destroy 90 percent of its stockpile of chemical agents and munitions and most of its nonstockpile chemical warfare materiel before the Chemical Weapons Convention’s 2007 deadline, given its recent progress and projected plans. The Army has disposal operations under way at two stockpile sites and has started construction of disposal facilities for future destruction operations at five other sites—these seven sites store 90 percent of the chemical stockpile. 
However, because of the additional time required to develop and select disposal methods that are acceptable to the state regulatory agencies and local communities in Kentucky and Colorado, which store the remaining 10 percent of the original stockpile, the Army will not meet the 2007 deadline at these sites. In addition, the disposal of some nonstockpile items may exceed the 2007 deadline because of delays in the testing of and obtaining permits for key disposal systems for recovered chemical warfare materiel and because of possible delays in the demolition of a former chemical weapons production facility. Given past program experience, these types of delays are likely to occur and will add to program costs. The Army estimates that the program will cost $14.9 billion; it has spent approximately $6.2 billion and estimates that the program will cost another $8.7 billion. To identify opportunities to reduce the cost of the program, officials have developed and implemented several cost-reduction initiatives associated with contracting for the stockpile disposal facilities and increasing the public awareness and acceptance of the program. Ninety Percent of the Chemical Stockpile Could Be Destroyed Before the Convention’s 2007 Deadline In prior reports, we have expressed concern about the Army’s lack of progress in destroying the stockpile of chemical agents and munitions. Despite these early delays, the Army is now making progress toward establishing the capabilities needed to destroy the stockpile. Absent unanticipated delays, the Army could destroy about 90 percent of the stockpile before the convention’s 2007 deadline. However, because of the additional time required to research, develop, test, and verify new disposal methods that may be environmentally acceptable to the state regulatory agencies and local communities in Kentucky and Colorado, destruction activities at these locations, which store the remaining 10 percent of the stockpile, are not likely to start before the 2007 deadline. The Army Has Made Progress in the Destruction of the Chemical Stockpile Since the Army formally announced its stockpile disposal project in 1988, it has disposal operations under way at two sites and has started construction of disposal facilities for future operations at five other sites. It is now operating stockpile disposal facilities at Johnston Atoll and Tooele, Utah, which together stored 49.7 percent of the total original stockpile. Of this amount, 5,572 tons (17.7 percent of the original stockpile) of chemical agents have been destroyed, and another 10,075 tons (32 percent of the original stockpile) are scheduled for disposal at the two sites. In addition, the Army has started to build chemical weapons disposal facilities at Aberdeen, Maryland; Anniston, Alabama; Newport, Indiana; Pine Bluff, Arkansas; and Umatilla, Oregon. As shown in figure 3, these sites stored 40.4 percent of the total original stockpile. Significant actions in the implementation of disposal operations for 90 percent of the stockpile have been completed. Such actions include the selection of a disposal method and the granting of environmental permits by state and local governments. The disposal method for the remaining 10 percent of the stockpile stored in Kentucky and Colorado has not yet been selected because the 1997 Defense Appropriations Act requires an examination of alternative disposal methods under the Assembled Chemical Weapons Assessment Program. 
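As a minimal illustration, the progress percentages cited above can be cross-checked against the 31,496-ton original stockpile; the tonnages are those reported in this chapter, and summing the five construction-site tonnages into a single figure is a derived step, not a number the report states directly.

# Cross-check the progress percentages against the 31,496-ton original stockpile.
ORIGINAL_TONS = 31_496
destroyed = 5_572            # destroyed at Johnston Atoll and Tooele
scheduled = 10_075           # still awaiting disposal at those two sites
under_construction = 12_715  # stored at the five sites under construction
                             # (1,625 + 1,269 + 2,254 + 3,850 + 3,717)

print(f"Destroyed to date: {100 * destroyed / ORIGINAL_TONS:.1f} percent")                      # ~17.7
print(f"Two operating sites together: {100 * (destroyed + scheduled) / ORIGINAL_TONS:.1f} percent")  # ~49.7
print(f"Five sites under construction: {100 * under_construction / ORIGINAL_TONS:.1f} percent")      # ~40.4
print(f"Covered by current facilities: "
      f"{100 * (destroyed + scheduled + under_construction) / ORIGINAL_TONS:.0f} percent")           # ~90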
Disposal Operations at Seven of Nine Stockpile Sites Could Be Completed Before the 2007 Deadline The Army’s most recent schedule shows that disposal operations are expected to be completed at seven of the nine stockpile sites with at least 5 months to spare (most sites have at least 18 months to spare) before the 2007 deadline. However, the schedules for completion at the Maryland and Indiana sites, which are pilot testing alternative disposal technologies, are more uncertain because of the need to further test the alternatives proposed for these locations. For the remaining two sites in Kentucky and Colorado, disposal methods have not yet been selected because the 1997 Defense Appropriations Act requires an examination of alternative disposal methods under the Assembled Chemical Weapons Assessment Program. Figure 4 depicts the Army’s current disposal schedules for these sites. As part of the Alternative Technologies and Approaches Project, Aberdeen, Maryland, and Newport, Indiana, have pilot projects to investigate, develop, and support the testing of disposal technologies based on chemical neutralization processes. The Army has started site preparation and construction activities for the full-scale pilot facilities at both locations, but the schedules for testing the technologies at these sites are based primarily on research, modeling, and input from engineers and scientists and not on full-scale operations. It is premature to assume that operations at these sites will successfully demonstrate the technologies without some initial delays associated with the design and operation of the pilot plant. Until the pilot tests are completed, the schedules for these sites remain uncertain, and the sites may not have as long a period before the 2007 deadline as shown in figure 4. On February 2, 2000, officials from the Assembled Chemical Weapons Assessment Program provided schedules showing that disposal operations at Blue Grass, Kentucky, and Pueblo, Colorado, will not start until after the 2007 deadline because of the time required to validate and certify the alternative technologies and obtain environmental permits. However, program officials are optimistic that operations could be completed within the possible 5-year extension to the deadline. Two other reviews of the program also concluded that the two sites would not meet the 2007 deadline. In August 1999, a National Research Council brief by the Chairman of the Assembled Chemical Weapons Committee concluded that disposal operations using an alternative technology in Kentucky and Colorado would not be completed by December 2007. In September 1999, an Arthur Andersen, Limited Liability Partnership, consulting report prepared for the office of the Deputy Assistant Secretary of the Army for Chemical Demilitarization concluded that the estimated completion dates for these sites ranged between May 2011 and December 2015, well beyond the 2007 deadline. The protocol for selecting an alternative technology for the destruction of assembled chemical munitions stored in Kentucky or Colorado has not yet been determined and remains under study. 
If a technology other than incineration is selected for these sites, the 1999 Defense Authorization Act requires the Under Secretary of Defense for Acquisition and Technology to certify in writing to the Congress that the alternative is (1) as safe and cost-effective for disposing of assembled chemical munitions as incineration, (2) capable of completing the destruction of such munitions on or before the later of the date destruction would be completed if incineration were used or the convention’s deadline, and (3) capable of satisfying federal and state environmental and safety laws. DOD and Army officials were assessing these three conditions and identifying the criteria for making the certification. At the same time, the Congress directed that DOD make available, and DOD actually committed, funds in fiscal year 2000 to award contracts to evaluate and demonstrate three additional technologies for the Assembled Chemical Weapons Assessment Program. Program officials were also determining whether the Under Secretary of Defense could issue the required certification before demonstrating the three additional technologies. It is unlikely that an alternative technology can be validated, certified, and implemented in Colorado and Kentucky in time to meet the convention’s 2007 deadline. In addition, insufficient time remains for the Departments of Defense and the Army to meet the 2007 deadline at these two sites using the baseline incineration process. According to the Army’s 1998 annual report on the program, to meet the destruction schedule required by the convention, authority to proceed with the baseline incineration process in Colorado and Kentucky was required before June 30, 1999. Even so, DOD and Army officials were discussing whether to grant such authority for both Colorado and Kentucky. These officials were preparing two notices of intent announcing the preparation of separate environmental impact statements for the disposal of the stockpile in Colorado. One environmental impact statement will focus on whether to pilot test alternative technologies in Colorado or at two other sites. The other environmental impact statement is to be specific to Colorado and will focus on which disposal method—the baseline incineration process, a modified incineration process, or an alternative technology—should be used in Colorado. Some program officials believe it may still be possible to meet the 2007 deadline by using a modified incineration process to destroy the stockpile in Colorado. However, according to state officials, the Army would have great difficulty in obtaining environmental permits for any type of chemical agent incineration in Colorado or Kentucky. Each state has requirements for obtaining environmental permits that could prevent or slow the implementation of incineration in the two states. Significant Obstacles Could Prevent the Nonstockpile Product From Meeting the Convention’s 2007 Deadline The Army has made progress in destroying most categories of its nonstockpile chemical materiel as required by the Chemical Weapons Convention. However, the disposal operations for some recovered chemical warfare materiel may exceed the convention’s 2007 deadline because the Army needs more time to develop and prove that the proposed disposal methods will be safe and effective and will be accepted by state and local communities. 
Further, the demolition of a section of a former chemical weapons production facility in Indiana may exceed the 2007 deadline because the chemical stockpile stored there must be destroyed before demolition of the facility can begin. Any slippage in the stockpile disposal schedule, which is considered optimistic by some involved in the program, will cause demolition operations of the facility to extend past the deadline. The Army Has Made Progress in the Disposal of Nonstockpile Chemical Materiel The Army has destroyed a large portion of its nonstockpile chemical warfare materiel. Table 2 summarizes the status of the Army’s efforts to destroy binary chemical warfare materiel, miscellaneous chemical warfare materiel, recovered chemical warfare materiel, and former chemical weapons production facilities. Change in the Proposed Disposal Method Could Delay Disposal of Some Recovered Chemical Warfare Materiel The disposal of some recovered chemical warfare materiel could exceed the convention’s 2007 deadline because of technical issues and cost increases associated with key disposal methods. In addition, the Army has experienced delays in obtaining state permits and approvals to test and implement these methods. Because of these factors, program officials are considering alternative disposal methods to replace the problematic systems. Until recently, the Army was developing four types of integrated transportable destruction systems for nonstockpile materiel. These systems and their status are briefly described in table 3. Each system was expected to use a neutralization process, through which the chemical agent would be mixed with chemicals that would convert the agent into waste compounds. This waste would be much less hazardous than the chemical agent and would be sent to commercial treatment, storage, and disposal facilities that specialize in the treatment of hazardous industrial waste. Nonstockpile officials stated that the research and development of the four treatment systems described in table 3 have reached the point where the Army must decide whether it wants to complete development and make the systems available for deployment in the field. In the case of the two munitions management devices, the Army has experienced technical problems and cost overruns. In addition, it experienced delays in obtaining state permits and approvals to test the prototype for the munitions management device (version 1). The required permits to test the system had not been approved as of February 2, 2000. Because of the problems and delays, nonstockpile product officials were considering alternative disposal methods to replace the two munitions management devices. For example, they were deliberating over the possibility of destroying recovered chemical warfare materiel stored in Oregon and Colorado in stockpile disposal facilities and disposing of the recovered materiel stored in Maryland and Arkansas in neutralization-based disposal facilities specially designed for nonstockpile materiel. However, officials still need more time to prove that the alternatives will safely and effectively destroy recovered chemical materiel. Environmental issues similar to those affecting the testing of the prototype munitions management devices are also likely to affect the Army’s ability to obtain the environmental approvals and permits for the alternatives. 
Consequently, until the alternatives for disposing of the recovered chemical warfare materiel are proven and accepted by the state and local communities, this portion of the nonstockpile product is at risk of exceeding the 2007 deadline. Destruction of Former Production Facility in Indiana May Not Meet the Convention’s 2007 Deadline A portion of a former production facility at Newport, Indiana, classified as a nonstockpile requirement, may not be destroyed before the convention’s 2007 deadline because the chemical stockpile agent stored there must be destroyed before demolition of the facility can begin. The weapons were scheduled to be destroyed by December 2004. Although nonstockpile program officials were confident that their schedule provides sufficient time to complete demolition of the facility before the 2007 deadline, slippages in the disposal of the chemical stockpile could extend nonstockpile operations past the deadline. As previously discussed, the Newport, Indiana, stockpile schedule is at risk because the Army needs more time to demonstrate whether the proposed alternative technology, developed by the Alternative Technologies and Approaches Project but not yet proven in full-scale operations, will safely and effectively destroy the stockpile. In addition, state officials believe the disposal schedule is too ambitious because it is based on processing 600 containers filled with nerve agent during pilot testing, more than they believe is realistic in the time allowed. State officials said the Army’s schedule for the pilot test phase might not allow sufficient time for program participants to fully evaluate the new technology before full-scale operations are scheduled to start. Program Costs Will Likely Exceed $14.9 Billion Estimate The Chemical Demilitarization Program has a long-standing history of experiencing significant cost growth. The Army estimates that the program will cost $14.9 billion; it has spent approximately $6.2 billion. However, the $14.9 billion cost estimate does not include the costs associated with the schedule slippages likely in Kentucky and Colorado and in the Nonstockpile Chemical Materiel Product. Army officials said that they were revising their cost estimates. However, the Army will likely need more time to develop a reliable baseline to estimate the cost for the closure and remediation of the chemical stockpile disposal facilities, adjacent areas, and miscellaneous materiel contaminated during disposal operations. At the same time, program officials have initiated some actions to contain costs. The Army Estimates the Program Will Cost $14.9 Billion As shown in table 4, the Army estimates that the program will cost $14.9 billion; the Congress has appropriated nearly $6.2 billion through fiscal year 1999. Since 1985, the Army’s cost estimate for the Chemical Stockpile Disposal Project, the largest portion of the program, has increased significantly from the initial $1.7 billion estimate to nearly $10 billion. The major reasons for the cost increases in the stockpile project include (1) overly optimistic program assumptions and estimates by program officials, (2) enhancements to respond to concerns for maximizing the safety of the public and environment, (3) technical problems resulting in lower than expected disposal rates, and (4) additional legislative and program requirements. 
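The headline cost figures above reconcile with simple arithmetic. In the minimal sketch below, the dollar amounts are those cited in this chapter, and the growth multiple is a derived ratio that the report itself does not state.

# Reconcile the headline cost figures for the Chemical Demilitarization Program.
total_estimate_b = 14.9    # current working estimate ($ billions)
spent_b = 6.2              # appropriated/spent through fiscal year 1999
stockpile_1985_b = 1.7     # initial 1985 estimate for the stockpile disposal project
stockpile_now_b = 10.0     # "nearly $10 billion" current estimate for that project

print(f"Remaining cost to complete: ${total_estimate_b - spent_b:.1f} billion")     # ~8.7
print(f"Stockpile project cost growth: {stockpile_now_b / stockpile_1985_b:.1f}x")  # ~5.9x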
In 1997, we reported that until the disposal methods for nonstockpile materiel were developed, proven, and accepted by state and local communities, the Army would not be able to predict the cost of the nonstockpile product with any degree of accuracy. The working life cycle cost estimate for the Chemical Demilitarization Program shown in table 4 does not include the costs associated with schedule slippages likely in the disposal of the chemical stockpile stored in Kentucky and Colorado and the nonstockpile materiel. The cost estimates are based on the assumption that the disposal of the chemical stockpile and nonstockpile materiel would be completed before the 2007 deadline, which we believe is unlikely. Historically, schedule delays increase direct costs such as labor, emergency preparedness, and management of the program. In addition, until disposal methods for the stockpile stored in Kentucky and Colorado have been selected, proven to be safe and cost-effective, and accepted by the affected states and localities, the Army will be unable to accurately estimate disposal costs for these sites. Similarly, the cost of destroying recovered chemical warfare materiel will be uncertain until the product manager has demonstrated disposal methods for nonstockpile items and the methods have received permits and have been accepted by the affected states and localities. The cost estimate shown in table 4 for the Nonstockpile Chemical Materiel Product does not include possible costs after the 2007 deadline. The Army also needs additional time to develop a reliable baseline to estimate the costs for the closure and remediation of the chemical stockpile disposal facilities, adjacent areas, and miscellaneous materiel contaminated during disposal operations. According to program officials, these costs may increase because of uncertainties regarding remediation requirements and standards for these facilities and other materiel, such as personal protection suits worn by the workers and miscellaneous equipment, contaminated during disposal operations. Individual states will establish the environmental requirements for remediating these facilities and nearby areas. Consequently, the environmental requirements and standards to use in estimating the cost to remediate these facilities have not yet been fully determined. Furthermore, because no stockpile disposal facility has yet been remediated, the Army lacks actual experience on which to base these cost estimates. The Army Has Management Initiatives Under Way to Control Cost Growth In response to the National Defense Authorization Act for Fiscal Year 1996, the program office for chemical demilitarization has developed and implemented several cost-reduction initiatives. Because the majority of Chemical Stockpile Disposal Project costs are in the contracts for the construction and operation of the disposal facilities, the program office implemented a management approach that includes in-depth reviews of the contracts. According to program officials, these reviews may provide the office with a better understanding of the contractor’s approach to planning, enhance performance analyses and forecasting, and produce cost savings during negotiations with contractors. Additionally, the chemical demilitarization office expects cost savings to accrue through implementation of programmatic lessons learned, where opportunities to reduce costs are routinely investigated and applied as the program moves forward. 
To increase public awareness and trust, the program office for chemical demilitarization has hosted periodic environmental forums on the Chemical Demilitarization Program. These forums have allowed the public to exchange information with officials from various organizations associated with the program and were intended to increase public awareness and gain acceptance of the program and thereby reduce costs associated with extended environmental permit schedules and litigation actions. Similarly, to increase public awareness and acceptance, the program office for assembled chemical weapons assessment has convened, with the assistance of the Keystone Center, the Dialogue on Assembled Chemical Weapons Assessment. The Dialogue includes representatives of the affected communities, national citizens’ groups, state regulatory agencies, Native American tribes, the Environmental Protection Agency, and the Departments of Defense and the Army and has participated in the Army’s decision-making process for the program. Conclusions The Army could destroy 90 percent of its stockpile of chemical agents and munitions and most of its nonstockpile chemical warfare materiel before the Chemical Weapons Convention’s 2007 deadline, given its recent progress and projected plans. However, because of the additional time required to develop and select disposal methods that are acceptable to the state regulatory agencies and local communities in Kentucky and Colorado, the Army will not meet the 2007 deadline at these sites. These sites store 10 percent of the original stockpile. In addition, the disposal of some nonstockpile items may exceed the 2007 deadline because of technical problems with key disposal systems for recovered chemical warfare materiel and because of possible delays in demolition of a former chemical weapons production facility in Indiana. Given past program experience, these types of delays are likely to occur and will add to program costs. The Army Has Not Adequately Managed the Liquidation of Program Funds The Army has experienced problems in recent years in managing its liquidations of program funds. Concerns over the financial management of the Chemical Demilitarization Program surfaced following a February 1999 review by the Office of the Under Secretary of Defense (Comptroller), which suggested that significant portions of prior years’ obligations remained unliquidated and could be used for other purposes. In July 1999, we reported that sizable unliquidated obligations existed for the program from prior years. During this review, we examined the transactions through which most of these obligations were recorded, a type of interagency order known as a military interdepartmental purchase request. We found that the program had more than $3.1 billion in budget authority from fiscal years 1993-98 appropriations as of September 30, 1999, of which a reported $498 million (16.1 percent) was unliquidated. Some unliquidated obligations exist because of the lack of management attention and fragmented structure for tracking and managing liquidations; procedural delays in reporting liquidation transactions in the defense financial system, auditing and liquidating obligated funds on completed contracts, and deobligating excess funds; and delays in executing the program schedule. Several recent factors have affected and will continue to affect the reduction of the unliquidated obligations. 
The Army can liquidate some of its obligations as soon as construction and procurement are under way at the chemical stockpile sites that recently obtained environmental permits. In addition, congressional reductions in the administration’s budget requests for fiscal years 1999 and 2000 will likely reduce the future buildup of unliquidated obligations. At the time of our review, the Army had begun to improve its management of appropriated funds and liquidations of obligations for the Chemical Demilitarization Program, but these improvements have not been consistently and systematically implemented across all program elements, and it is too early to tell their effect. Prior Reviews Report Weaknesses in the Management of Program Funds Concerns over the financial management of the program surfaced following a review by the Office of the Under Secretary of Defense (Comptroller), which suggested that significant portions of prior years’ obligations remained unliquidated and could be used for other purposes. In July 1999, we reported that sizable unliquidated obligations existed for the program from prior years. Concerns about the program’s financial management first surfaced in February 1999, following a quick program review summarized in internal memorandums prepared by an official in the Office of the Under Secretary of Defense (Comptroller). The memorandums suggested that significant portions of prior years’ obligations remained unliquidated and could be reprogrammed to other uses. On July 26, 1999, the office issued a more comprehensive report, stating that 26 percent of the Chemical Demilitarization Program’s appropriations were unexpended. The report also noted that delays in executing the program left funds out of phase with the time when the contracted work was actually performed, resulting in the accumulation of unliquidated obligations. Consequently, the funds were not available for other immediate defense priorities and programs. The report also identified procedural delays in reporting financial transactions in the defense financial system and programmatic delays in executing the program schedule because of permit, technical, and contractual issues that contributed to the program’s unliquidated obligations. In July 1999, we reported that sizable unliquidated obligations existed for the Chemical Demilitarization Program from prior years, but the unused funds did not appear to be available for other uses. Our review of $382.1 million (62.6 percent) of the reported $610.5 million in unliquidated obligations for fiscal years 1992-98 showed that $150.6 million (39.4 percent of our sample) had already been spent but was not recorded in accounting records or included in financial reports prepared by the Defense Finance and Accounting Service (DFAS). Further, the remaining $231.5 million in unliquidated obligations in our sample was scheduled to be liquidated by November 2000. In addition, we reported that these obligations were unliquidated because of several factors, such as delays in obtaining environmental permits and technical delays. At the same time, we identified a number of factors, including states’ approvals of environmental permits to start construction of chemical stockpile disposal facilities and congressional deferments in the administration’s budget request for the program, that have affected or will affect the reduction of unliquidated obligations. 
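As a minimal illustration, the July 1999 sample figures quoted above can be reconciled with simple arithmetic; the dollar amounts are the report’s, and the percentages are recomputed here only to show that they are straightforward shares of the sample.

# Reconcile GAO's July 1999 sample of unliquidated obligations (fiscal years 1992-98).
reported_unliquidated_m = 610.5  # total reported unliquidated obligations ($ millions)
sample_m = 382.1                 # portion reviewed
already_spent_m = 150.6          # spent but not recorded by DFAS

print(f"Sample coverage: {100 * sample_m / reported_unliquidated_m:.1f} percent")  # ~62.6
print(f"Spent but unrecorded: {100 * already_spent_m / sample_m:.1f} percent")     # ~39.4
print(f"Remaining in sample: ${sample_m - already_spent_m:.1f} million")           # ~231.5, due by Nov. 2000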
About 16.1 Percent of Prior Years’ Obligations Were Unliquidated As of September 30, 1999, the Chemical Demilitarization Program had more than $3.1 billion in budget authority from fiscal years 1993-98, of which $38.9 million was no longer obligated for specific program areas. (See table 5.) Nearly this entire amount was obligated previously for program requirements that were completed at less cost than initially estimated and was deobligated and reclassified as program reserve. Most of these unobligated funds are no longer available because their authorized periods for obligation expired. At the same time, the program office had a reported $498 million (16.1 percent) in unliquidated obligations from fiscal years 1993-98. In addition, the program office reported that it had $804 million in budget authority in fiscal year 1999 funds. Of this amount, $70.7 million was unobligated and $381.3 million in obligations was unliquidated. However, it is important to note that the budget authority for fiscal year 1999 is relatively recent and that some of the funds are still available for obligation and liquidation and may continue to be available for several years depending on the type of fund. The budget authority for 1999 research and development funds under the chemical demilitarization appropriation is available for obligation during fiscal year 2000, 1999 procurement funds are available for obligation for fiscal years 2000 and 2001, and 1999 military construction funds are available for obligation for fiscal years 2000 through 2003. The obligations incurred under each of these chemical demilitarization appropriation subdivisions may be liquidated up to 5 years following the end of the funds’ periods of availability for obligation before the fund account is closed. Assessment Indicates That Most Unliquidated Obligations Were Accounted For During this review, we focused our analysis on the unliquidated obligations for fiscal years 1993-98. On the basis of our analysis of 428 military interdepartmental purchase requests with $495.1 million in unliquidated obligations (or 99.4 percent of the total reported unliquidated obligations), we determined that $63.1 million (12.7 percent) in payments had been made but was not recorded in the accounting records or financial reports prepared by DFAS. (See table 6.) Of the remaining $432 million in unliquidated obligations, most was for work completed but not yet billed by the contractor or verified by the Defense Contract Audit Agency (DCAA), or for work being done but not yet completed. Included in this amount is $10.4 million in unliquidated balances that program officials could not explain. As shown in table 6, 240 purchase requests included a reported $62.6 million in unliquidated operations and maintenance obligations. Of this amount, $5.3 million had been liquidated, according to documents provided by the program office and its contractors, but not yet recorded as liquidated in DFAS accounting data and financial reports. Of the remaining $57.2 million in unliquidated obligations, program officials identified $32.3 million for work that had been completed, but they were awaiting other actions such as final billing by the contractor or audit by DCAA. Another $18.6 million of the $57.2 million is obligated for ongoing purchase requests, for which most of the obligations are scheduled to be liquidated between now and January 2001. Program officials were unable to explain the reasons for $6.3 million of the unliquidated obligations. 
In addition, 97 purchase requests included a reported $339.5 million in unliquidated procurement obligations. Of this amount, $52.6 million had been liquidated, according to documents provided by the program office and its contractors, but not recorded as liquidated in DFAS financial data. Of the remaining $286.9 million in unliquidated obligations, program officials identified $2.7 million for work that had been completed, but they were awaiting other actions such as final billing by the contractor or audit by the DCAA. Another $283.5 million is obligated for ongoing purchase requests, for which most of the obligations are scheduled to be liquidated between now and the end of 2001. Program officials were unable to explain the reasons for almost $700,000 in unliquidated obligations. Further, 76 purchase requests included a reported $19.7 million in unliquidated research and development obligations. Of this amount, $5.2 million had been liquidated, according to documents provided by the program office and its contractors, but not recorded as liquidated in DFAS financial data. Of the remaining $14.5 million in unliquidated obligations, program officials identified $1.3 million for work that had been completed, but they were awaiting other actions such as final billing or audit. Another $9.7 million of the $14.5 million was obligated for ongoing purchase requests, for which most of the obligations are scheduled to be liquidated before June 2000. Program officials were unable to explain the reasons for $3.5 million of the unliquidated obligations. Last, we found $73.3 million in unliquidated military construction obligations managed by the U.S. Army Corps of Engineers. The Corps of Engineers uses integrated financial systems to manage and account for obligations and liquidations. Unlike the separately located, nonintegrated financial systems used by the program offices for chemical demilitarization and assembled chemical weapons assessment, the Corps of Engineers system contains real-time obligation and liquidation data. The $73.3 million in obligations will be liquidated as specialized government-furnished equipment is delivered and installed and the ongoing construction efforts are completed. The construction of the disposal facilities in Alabama and Oregon is scheduled to be completed in the first quarter of fiscal year 2001, and the disposal facility in Arkansas is expected to be completed in the first quarter of fiscal year 2002. Reasons for Some Unliquidated Obligations Some unliquidated obligations are due to the lack of management attention and the decentralized organizational structure for managing program activities and tracking liquidations; procedural delays in reporting liquidation transactions in DFAS accounting data and financial reports; procedural delays in auditing and liquidating obligated balances on completed contracts; and delays in executing the program schedule because of permit, technical, and contractual issues. Management Delays Contributing to unliquidated balances have been delays due to the lack of management attention and decentralized organizational structure for managing program activities and tracking liquidations. Lack of Management Attention The lack of attention to tracking and managing liquidations has contributed to the accumulation of unliquidated obligations. 
According to DOD and Army officials, the program office for chemical demilitarization has historically prioritized the management and timely obligation of appropriations and given much less attention to tracking and managing liquidations and deobligating excess funds. Despite beginning to track and liquidate obligations more aggressively, program officials still could not readily provide us the obligation and liquidation status for some purchase requests or determine whether the unliquidated obligations were for completed or ongoing efforts. Instead, program officials generally had to obtain liquidation data from performing entities, such as other government agencies and outside contractors, and in many cases testimonial data was the best data they could provide. Some officials said they did not systematically receive financial reports with liquidation data. Additionally, they gave little priority to deobligating unliquidated balances associated with completed or closed contracts. This lack of attention is partially reflected in the inability of program officials to explain the status of $10.4 million in unliquidated obligations across all funding categories except for military construction. Decentralized Organizational Structure Different organizations are responsible for various elements of the Chemical Demilitarization Program. For example, the U.S. Army Corps of Engineers is responsible for managing military construction funds and most of the program’s procurement funds. FEMA and the U.S. Army Soldier and Biological Chemical Command share responsibility for managing funds appropriated for the Chemical Stockpile Emergency Preparedness Project. FEMA is responsible for off-post emergency preparedness activities and the Soldier and Biological Chemical Command is responsible for on-post activities. In addition, the Program Manager for Assembled Chemical Weapons Assessment manages the Assembled Chemical Weapons Assessment Program funding, and the Program Manager for Chemical Demilitarization manages the execution of funds appropriated for the chemical stockpile, nonstockpile, and alternative technologies and approaches projects. Within the program office for chemical demilitarization, project managers are responsible for the execution of funds provided to their respective projects. Within this decentralized organizational structure, these program elements manage their obligations and liquidations as separate operating entities, and in some cases, there was confusion as to which program element or organization was accountable for tracking and managing the unliquidated obligations. Procedural Delays Procedural delays have accounted for the accumulation of some of the unliquidated obligations. Some liquidation transactions were not reported in accounting records and financial reports prepared by DFAS in a timely way, and program officials have been reluctant to deobligate unliquidated excess funds on completed contracts until DCAA validates labor rates and other contract costs. Accounting and Procedural Delays According to program officials, processing and reporting liquidation data have taken 90 to 120 days before the data were included in accounting records and financial reports prepared by DFAS. For example, contractors have taken several weeks to validate and process liquidations by their subcontractors and report them to the program office, which has its own processes and procedures to complete before reporting to DFAS. 
Furthermore, DFAS requires time to input and report its liquidation data to its financial system. We recently reported that DOD’s payment and accounting processes are complex, generally involving separate functions carried out by individual offices using different systems. These processes can contribute significantly to delays in reporting the liquidation of obligations to responsible program officials. Contract Closure Delays According to Army officials, DCAA has taken several months to review and approve costs associated with completed contracts. Program officials have generally waited until DCAA validated labor rates and other contract costs. These audits may adjust labor rates or other costs, requiring additional payments from the remaining obligated funds. Program Delays Contributing to the unliquidated balances have been delays in executing the program schedule because of environmental permit, technical, and contractual issues. Environmental Permit Delays Program officials found that the time required to actually gain environmental permit approvals, particularly in Oregon, Alabama, and Arkansas, exceeded estimates. The additional time was mainly attributable to a variety of both internally and externally driven requirements. For example, satisfying safety and environmental design changes resulting from programmatic lessons learned, new state and federal regulatory requirements, and new interpretations of existing regulatory requirements in some cases significantly extended the projected schedules. Although funds were obligated to support the three sites based on initial permit issuance projections, the program office could not liquidate some obligations until after construction began, which was contingent on the issuance of the environmental permits. Technical Delays According to program officials, lessons learned from ongoing disposal operations at Johnston Atoll and Utah resulted in technical and design changes for future facilities that required additional time and resources. While these changes were being incorporated, liquidation of obligated funds proved to be slower than program officials expected. Contractual Delays According to program officials, the award of several construction and procurement contracts has been delayed due to protests by losing bidders. For example, the award of the construction contract for the disposal facility in Arkansas was delayed a year due in part to a bid protest. Accordingly, obligations for this contract could not be liquidated until resolution of the protest. Recent Factors Reducing Unliquidated Obligations Recently, approvals of environmental permits by state regulatory agencies at five chemical stockpile disposal sites resulted in initiation of construction activities and procurement actions and greater disbursement of obligated funds. Additionally, actions by the Congress and the Office of the Under Secretary of Defense (Comptroller) to reduce the funding requested for the program decreased the amount of funds available for obligation and better aligned funding with the program’s execution. These actions decrease the likelihood that these funds will be obligated far in advance of when they are needed. The Army’s recent receipt of the required environmental permits and approvals by the state regulatory agencies at five chemical stockpile disposal sites has resulted in initiation of construction activities and procurement actions and greater pay-out of obligated funds. 
The environmental permits for the construction of the disposal facilities in Oregon and Alabama were approved in 1997. The execution of these construction projects has allowed and will continue to allow the program office to liquidate construction and procurement obligations for these locations. In addition, the environmental permits were approved in 1999 for the construction of disposal facilities in Arkansas, Indiana, and Maryland, which should allow the program office to liquidate construction and procurement obligations for these locations. Congressional actions to reduce the funding requested for the program have decreased and are expected to continue to decrease the program’s unliquidated balances. In the DOD and military construction appropriations acts for fiscal year 1999, the Congress appropriated $78 million less than the administration requested for operations and maintenance, procurement, and research and development activities and appropriated $50.5 million less than requested for military construction projects. Similarly, in the fiscal year 2000 DOD and military construction appropriations acts, the Congress appropriated $140 million less than the administration requested for operations and maintenance, procurement, and research and development activities and appropriated $93 million less than requested for military construction projects. These actions reduced the amount of funds available for obligation and better aligned funding with the program’s execution, decreasing the likelihood that these funds will be obligated far in advance of when they are needed. In its review of the Army’s fiscal year 2001 budget request for the Chemical Demilitarization Program, the Office of the Under Secretary of Defense (Comptroller) recommended, and the Deputy Secretary of Defense approved, reductions to the Army’s budget request for fiscal year 2001 to better align funding with the year in which it would be executed. For example, because of the delays in the Alternative Technologies and Approaches Project, DOD reduced the Army’s budget request for construction funds by $25 million for the Maryland site. DOD concluded that the contractor would be unable to execute construction work scheduled for fiscal year 2001. Similarly, DOD reduced fiscal year 2001 funding for equipment installation at Newport, Indiana, by $7.2 million because of the delays in the Alternative Technologies and Approaches Project. Further, DOD reduced the Army’s request for the Assembled Chemical Weapons Assessment Program by $42 million because of expected delays in executing the program during fiscal year 2001. Financial Improvements Have Not Been Consistently and Systematically Implemented Across All Program Elements Although the Army has started to improve its management of obligations and liquidations of obligated funds for the Chemical Demilitarization Program, these improvements have not been consistently and systematically implemented across all program elements. These inconsistencies are due in part to the decentralized financial management structure of the program. In July 1999, the Program Manager for Chemical Demilitarization mandated monthly reporting of obligations, disbursements, and planned and actual cost information by the managers for chemical stockpile, nonstockpile, and the alternative technologies and approaches projects. The manager also reminded all project managers of the importance of effective funds management, including the management of cost as well as schedule and technical performance. 
Consequently, officials started examining the unliquidated obligations and deobligating those determined as no longer needed. In addition, they started working with their contractors and other defense agencies to expedite the reporting of financial transactions and developing methods for capturing and reporting obligations, liquidations, and accrual data. While these are positive steps, the program office has not fully implemented these improvements for the timely capturing and reporting of obligations, liquidations, and accruals and could not explain $10.4 million in unliquidated obligations. The program office did not have an independent, integrated system to track obligations, liquidations, and accrual data and has relied mostly upon data in accounting records or financial reports prepared by DFAS. For example, the Chemical Stockpile Emergency Preparedness Project and the Assembled Chemical Weapons Assessment Program were not included in the Chemical Demilitarization Program Manager’s monthly reporting requirement because the U.S. Army Soldier and Biological Chemical Command manages the funds for the two programs. Further, the managers for the chemical stockpile, nonstockpile, and alternative technologies and approaches projects have implemented different systems to comply with the program manager’s mandate for a monthly reporting of obligations, liquidations, and planned and actual cost information. Conclusions Although program officials have acted and are acting to improve the financial management of the program, problems remain. No systematic approach exists across all program elements to help ensure the consistent, effective execution and expenditure of funds appropriated for the program, and a relatively small amount of unliquidated obligations remain unexplained. Some unliquidated obligations exist because of the lack of management attention and decentralized structure for tracking liquidations. Other unliquidated obligations exist because of procedural delays in reporting financial transactions in the defense financial system, in auditing and liquidating obligated balances on completed contracts, and in deobligating excess funds, and delays in executing the program schedule. Several recent factors, including the recently approved environmental permits and congressional actions to reduce funding for the program, have decreased and will likely reduce the future buildup of unliquidated obligations. However, because the improvements in its financial management have not been consistently and systematically implemented, the Army cannot ensure that its unliquidated obligations will receive consistent attention to bring about a better alignment of funds with the execution of the program on an ongoing basis. In response to a draft of this report, the Army has recently initiated actions to address our concerns over the financial management of the program. Given the long-standing nature of these concerns, management oversight is essential to the effective implementation of the Army’s actions. Recommendations We recommend that the Secretary of Defense monitor the Army’s actions to develop a systematic approach for ensuring the timely, effective expenditure of funds appropriated for all elements of the Chemical Demilitarization Program and direct program officials to account for the $10.4 million in unliquidated obligations that officials could not give an explanation for, or explain why the funds had not been liquidated. 
Agency Comments and Our Evaluation In its written comments on our draft report, DOD agreed with our recommendations to develop a systematic approach for ensuring the timely, effective expenditure of funds appropriated for all elements of the Chemical Demilitarization Program and direct program officials to account for the $10.4 million in unliquidated obligations that they could not explain. However, DOD disagreed that the Secretary of Defense should direct the Secretary of the Army to implement the recommendations because the Program Manager for Chemical Demilitarization had already initiated implementation actions. We are encouraged by the Program Manager’s actions, which once completed should address the concerns raised in our draft report. Given these actions and the long-standing nature of our concerns, we modified our recommendations to call for the Secretary of Defense to monitor the Army’s actions to ensure that the Army completes them fully and in a timely way and that appropriate results are obtained. FEMA concurred with the principle of a systematic approach for ensuring the timely, effective expenditure of funds and elaborated on the actions it is taking to implement that principle. Because the recommendation to explain the $10.4 million in unliquidated obligations pertains to the Army, FEMA had no comment on that recommendation. Program Management Has Been Hindered by Complex Structure and Ineffective Coordination Effective management of the Chemical Demilitarization Program has been hindered by its complex management structure and ineffective coordination among program offices and with state and local officials. Several changes in the organization and structure of the program during 1997-99, including some changes to implement legislative requirements, divided the management roles, responsibilities, and accountability among several different levels within the Departments of Defense and the Army. In addition, accountability for program performance has been unclear, and coordination and communication among certain program elements and state and local officials have been inadequate. Further, officials of the Departments of Defense and the Army have not agreed on whether or when management roles, responsibilities, and accountability should be consolidated for destruction of the chemical stockpiles at Blue Grass, Kentucky, and Pueblo, Colorado. Consequently, state and local officials have raised concerns that no single office is accountable for achieving the desired results of the program’s various elements. In addition, the Congress has expressed concern about the management of the program. Complex Management Structure As the program has been expanded beyond its original single purpose of destroying the stockpile to encompass a broader range of missions, including compliance with the Chemical Weapons Convention, the organization and structure of the Chemical Demilitarization Program have changed and become increasingly complex. At times, these changes have resulted in the fragmentation of the responsibilities for management and oversight of the program. For example, several different levels within the Departments of Defense and the Army now share oversight and management responsibilities. 
Evolution of the Management Structure As provided for in the original legislation establishing the Chemical Demilitarization Program, the Army, as executive agent for the program, established a Program Manager for Chemical Demilitarization who was responsible for management of the destruction of the stockpile. This Program Manager reported directly to the Assistant Secretary of the Army (Installations and Environment). Once the estimated cost of the program reached a certain dollar amount, as required by statute, the Army formally designated it to be a major defense acquisition program subject to congressional reporting requirements and Office of the Secretary of Defense review and approval of various milestones. So that this major defense acquisition program could be managed in the acquisition chain in accordance with the DOD Directive 5000 series, program responsibility was transferred to the Assistant Secretary of the Army (Acquisition, Logistics and Technology) from the Assistant Secretary of the Army (Installations and Environment). To support the enhanced oversight role of the Office of the Secretary of Defense, an office was established in the office of the Assistant to the Secretary of Defense (Nuclear, Chemical, and Biological Defense Programs) to provide oversight responsibility for the Chemical Demilitarization Program regarding policy guidance, budget authority, and annual reporting requirements. The Program Manager continued to report directly to the Assistant Secretary of the Army (Acquisition, Logistics and Technology) in his capacity as the Army Acquisition Executive, and the Program Manager remained responsible for executing the existing elements of the program, except for the Chemical Stockpile Emergency Preparedness Project. Under a memorandum of understanding, the responsibility for the latter project resides with the Assistant Secretary of the Army (Installations and Environment) in conjunction with FEMA. Current Structure Has Three Separate Lines of Authority There are three different lines of authority within the Departments of Defense and the Army for elements of the Chemical Demilitarization Program (see fig. 5). This structure resulted from congressional and DOD actions affecting various elements of the program. For example, the Congress wanted greater emphasis on the management of efforts to research and develop alternative technologies for destroying assembled chemical weapons. To achieve that goal, it directed that these research and development efforts be conducted separately from the baseline incineration activities. As part of its downsizing of the Office of the Secretary of Defense, DOD devolved management responsibilities to the Army. In addition, to improve the management of the Chemical Stockpile Emergency Preparedness Project, the Army and FEMA changed the management structure for the project. The current organization has created a complex management structure and separated responsibilities. For example: In the 1997 Defense Appropriations Act (sec. 8065), the Congress required the Under Secretary of Defense (Acquisition and Technology) to designate a program manager who was not, nor had been, in direct or immediate control of the baseline reverse assembly incineration demilitarization program to carry out a new pilot program. In response to the act, DOD established the office of the Program Manager for Assembled Chemical Weapons Assessment, independent of the Program Manager for Chemical Demilitarization, to implement the pilot program. 
The purpose of this legislation was to separate this pilot program from the baseline incineration activities. Achievement of that goal also meant that two program offices would share responsibilities associated with disposal activities in Kentucky and Colorado. As discussed later, ineffective coordination between these offices has hindered the program management. A 1998 Defense Reform Initiative downsized the Office of the Secretary of Defense and resulted in the devolvement of the DOD office overseeing the Chemical Demilitarization Program from DOD to the Army. Consequently, the Office of the Deputy Assistant Secretary of the Army (Chemical Demilitarization) was formed in February 1998 from a consolidation of the existing DOD oversight office and the Army staff office that assisted the Army Assistant Secretary in performing his chemical weapons demilitarization functions. While this devolvement resulted in a consolidation of staff offices in the Army Secretariat, it still did not clear up the existing ambiguity in responsibilities, lines of authority, and accountability. For example, the responsibility of the Deputy Assistant Secretary of the Army (Chemical Demilitarization) is limited to oversight of the stockpile, nonstockpile, and alternative technologies and approaches projects. While this office reports directly to the Assistant Secretary of the Army (Acquisition, Logistics and Technology), it has no direct management control of the Program Manager for Chemical Demilitarization, who also reports directly to the Assistant Secretary. During our review, we received conflicting descriptions and inconsistent organizational charts concerning the relationship and responsibilities of the offices of the Deputy Assistant Secretary of the Army (Chemical Demilitarization) and the Program Manager for Chemical Demilitarization. This indicates some amount of confusion among those involved in the program regarding who is accountable. DOD, as part of the devolvement, planned to consolidate within the Army Secretariat management responsibility for the Assembled Chemical Weapons Assessment Program. For example, as noted in the transition plan for the reform initiative, program oversight for the Assembled Chemical Weapons Assessment Program was to be delegated to the Assistant Secretary of the Army (Acquisition, Logistics and Technology) and the Program Manager for Assembled Chemical Weapons Assessment was to report directly to the Assistant Secretary. The Under Secretary of Defense (Acquisition and Technology) was to evaluate and certify the effectiveness of the alternative technologies as required by legislation. However, in the Conference Report accompanying the 1999 Defense Authorization Act, the conferees agreed that the Program Manager for Assembled Chemical Weapons Assessment should continue to report directly to the Under Secretary of Defense (Acquisition and Technology) rather than the Assistant Secretary of the Army (Acquisition, Logistics and Technology). In 1997, the Secretary of the Army and the Director of FEMA entered into a memorandum of agreement that revised the management structure of the Chemical Stockpile Emergency Preparedness Project in an effort to streamline and improve the management of the program. Under the agreement, FEMA assumed full responsibility and authority for off-post project activities, and the U.S. Army Soldier and Biological Chemical Command assumed responsibility for the on-post portion of the project. 
As a result of the agreement, the Assistant Secretary of the Army (Installations and Environment) assumed oversight responsibilities for the project. In addition, the Chemical Stockpile Emergency Preparedness Project was removed from the offices of the Program Manager for Chemical Demilitarization and the Assistant Secretary of the Army (Acquisition, Logistics and Technology). Unclear Accountability for Program Results and Inadequate Coordination and Communication The Chemical Demilitarization Program has a complex structure that separates management roles, responsibilities, and accountability for achieving program results. In addition, effective management of the program has been hindered further by ineffective coordination among program offices and with state and local officials. Consequently, accountability for program performance is unclear, and state and local officials have expressed concern about conflicting information and the lack of a single office that is clearly accountable for the execution of the program. We also found instances where coordination and communication among project managers for the program were inadequate. In addition, the Congress has expressed concern about the management of the program. State and Local Concerns Over Conflicting Information In order to comply with congressional direction, program managers for chemical demilitarization and assembled chemical weapons assessment currently share responsibilities associated with disposal of the chemical stockpile at Blue Grass, Kentucky, and Pueblo, Colorado. For example, the Program Manager for Chemical Demilitarization is responsible for the destruction of the chemical stockpile, and at the same time, the Program Manager for Assembled Chemical Weapons Assessment is responsible for developing and testing alternative technologies for disposing of the assembled chemical weapons at these sites. However, the activities of these offices have not always been effectively coordinated. This has led to difficulties in presenting a clear, coordinated message to affected state and local officials regarding the overall program goals for these sites. According to several state and local officials, spokespersons for these two programs have made conflicting and inconsistent statements about the possible disposal methods for the chemical stockpile stored in Kentucky and Colorado. Consequently, this confusion has created a public perception at these two sites that DOD lacks a single vision for destroying the chemical stockpile in a judicious manner. The lack of coordination will pose even greater problems in the future in implementing the decisions on the most appropriate disposal method to use in Kentucky and Colorado. Specifically, if an alternative technology is selected for use at these sites, views differ on which program office should manage the disposal operations after the pilot project starts: Some program officials believe that the Program Manager for Chemical Demilitarization should assume responsibility for disposal operations after the method of destruction is selected for use in Kentucky and Colorado. As provided for in the original legislation establishing the Chemical Demilitarization Program, the Army established the office of the Program Manager for Chemical Demilitarization and made it responsible for management of the destruction of the stockpile. 
Officials believed that this office would be more skilled at managing the construction and operation of these disposal facilities based on its experience at other stockpile sites. Also, it would match the management structure being employed in Aberdeen, Maryland, and Newport, Indiana, where the Program Manager for Chemical Demilitarization has overall responsibility for implementing the pilot projects to test alternative technologies for disposing of chemical agents in bulk containers. Other program officials believe that the Program Manager for Assembled Chemical Weapons Assessment should continue to manage the program through the completion of the pilot-scale testing of alternative technologies for the disposal of assembled chemical weapons stored in Kentucky and Colorado. In section 142 of the 1999 Defense Authorization Act, the Congress directed that the Program Manager continue to manage the development and testing, including the demonstration and pilot-scale testing, of alternative technologies for the destruction of assembled chemical weapons. The Congress further directed the Program Manager to act independently of the Program Manager for Chemical Demilitarization. Program officials believed this could continue the enhanced communications the program office achieved with these states and local communities through the Dialogue initiative discussed previously and retain the expertise the office obtained on alternative technologies for destroying assembled chemical weapons. DOD and the Army have not resolved issues related to future management roles and responsibilities should a full-scale pilot project to demonstrate an alternative technology be implemented in Kentucky and Colorado. In any case, closer cooperation will be required between these program offices in the future. The adoption of any alternative disposal method for pilot-scale testing will depend on a certification to the Congress that the alternative is as safe and cost-effective as the baseline incineration process for disposing of assembled chemical weapons and will meet the destruction deadline. Inadequate Coordination and Communication Among Project Managers In some instances, coordination and communication among project managers for the program were inadequate. For example, as previously discussed, the Program Manager for Chemical Demilitarization’s efforts to improve the office’s management of funds had not been consistently and systematically implemented across all program elements. In another case, officials of the stockpile and nonstockpile projects in Arkansas had not coordinated their efforts to obtain environmental permits and approvals for their disposal operations. This could have a significant effect on the start of one or both disposal operations because the state of Arkansas has limited resources to review and approve permit changes that will be needed to begin operations. Although concerned that nonstockpile activities could delay the state’s approval of permit changes, stockpile officials at the site did not know the status or schedule for nonstockpile activities. Additionally, some public outreach offices for the program did not have information related to emergency preparedness activities, such as information booklets and evacuation route maps. According to outreach officials, they did not routinely provide emergency preparedness information to the public because those activities were managed by the U.S. 
Army Soldier and Biological Chemical Command and FEMA, not by the Program Manager for Chemical Demilitarization. Congressional Concern About the Management of the Program The Congress has expressed concern about the management of the Chemical Demilitarization Program. For example, in the 2000 Defense Appropriations Act, section 8159, the Congress directed the Secretary of Defense to report on the management of the Chemical Demilitarization Program, including an assessment of the Assembled Chemical Weapons Assessment Program. Some in the Congress have also expressed concern that, in recent budget submissions, DOD included the budget for the Chemical Demilitarization Program as part of the Army’s budget. In its report on the 2000 Defense Authorization Act, the House Armed Services Committee reaffirmed its belief that, as required by the original statute establishing the program, chemical demilitarization funds should be set forth in a DOD-wide budget account, not in the budget accounts for any military department. This was to emphasize that destruction of the chemical stockpile is a national issue that affects all of DOD, not just a single military service. It stated that the Committee intended to keep chemical demilitarization funding separate to prevent these funds from being subject to internal service budget priorities and to avoid artificially inflating the budgets of any military department. Conclusions Effective management of the Chemical Demilitarization Program has been hindered by its complex management structure and ineffective coordination among program offices and with state and local officials. This has been the case particularly at the Kentucky and Colorado sites, which were not expected to meet the convention’s 2007 deadline for destruction of their stockpiles. As the program’s mission has been expanded, some fragmentation in the management roles, responsibilities, and accountability among various program participants has resulted. While the Department of the Army is now responsible for most elements of the mission to destroy the stockpile, in accordance with congressional direction, responsibility for the Assembled Chemical Weapons Assessment Program remains with a separate program manager that reports directly to the office of the Secretary of Defense. Regarding the future management of this program, officials of the Departments of Defense and the Army have not agreed on the most appropriate management structure for accomplishing the destruction of the chemical stockpile stored in Kentucky and Colorado. Without resolution, these issues leave the effectiveness of the program at risk. Recommendations We recommend that the Secretary of Defense direct the Secretary of the Army to clarify the management roles and responsibilities of program participants, assign accountability for achieving program goals and results, and establish procedures to improve coordination among the program’s various elements and with state and local officials. Agency Comments Both DOD and FEMA concurred with the recommendation.
Pursuant to a legislative requirement, GAO reviewed the financial management of the Department of Defense's (DOD) Chemical Demilitarization Program, focusing on whether: (1) the program will meet the Chemical Weapons Convention timeframes within the costs projected; (2) obligations and liquidations of funds appropriated for the program have been adequately managed; and (3) the management structure of the program allows for coordinated accountability of the program. GAO noted that: (1) the Army had destroyed approximately 17.7 percent of the chemical weapons stockpile as of January 31, 2000, and could destroy about 90 percent of the stockpile by the convention's 2007 deadline, given its recent progress and projected plans; (2) however, the Army may not meet the deadline for the remaining 10 percent of the stockpile because the incineration method of destruction has not been acceptable to two of the states where the chemical stockpile is located; (3) additionally, some of the nonstockpile materiel may not be destroyed before the deadline because the proposed method of destruction has not been proven safe and effective and accepted by state and local communities; (4) the Army's $14.9 billion estimate for program costs will likely increase due to: (a) the additional time required to develop and select disposal methods for the remaining 10 percent of the stockpile and for some of the nonstockpile chemical materiel; and (b) possible delays in demolition of a former chemical weapons production facility; (5) the Army has experienced significant problems in recent years in effectively managing the use of funds appropriated for the Chemical Demilitarization Program; (6) the Army reported that as of September 30, 1999, $498 million of the $3.1 billion appropriated for the program in fiscal years 1993 through 1998 was unliquidated; (7) in an assessment of interagency orders that account for $495.1 million of the unliquidated obligations, GAO found that $63.1 million had been liquidated but was not recorded in accounting records or included in financial reports prepared by the Defense Finance and Accounting Service; (8) of the remaining $432 million, most was for work completed but not yet billed by the contractor or verified by the Defense Contract Audit Agency or for work in progress but not yet completed; (9) effective management of the Chemical Demilitarization Program has been hindered by its complex management structure and ineffective coordination among program offices and with state and local officials; (10) coordination and communication among officials responsible for elements of the program have been inadequate, thus causing confusion about what actions would be taken at certain sites; (11) DOD and Army officials have not agreed on whether or when management roles, responsibilities, and accountability should be consolidated for destruction of the chemical stockpiles in Kentucky and Colorado; and (12) consequently, state and local officials have expressed concern that no single office is accountable for achieving the desired results of the Chemical Demilitarization Program.
Background State insurance regulators are responsible for enforcing state insurance laws and regulations. They oversee the insurance industry through the licensing of agents, approval of insurance products and their rates, and examination of insurers’ financial solvency and market conduct. The National Association of Insurance Commissioners (NAIC) assists state regulators with various oversight functions, including maintaining databases and coordinating regulatory efforts by providing guidance, model laws and regulations, and information-sharing tools. Federal and state securities regulators oversee the securities markets, in part to protect investors. The U.S. securities markets are subject to a combination of industry self-regulation (with the Securities and Exchange Commission’s (SEC) oversight) and direct SEC regulation. This regulatory scheme was intended to relieve resource burdens on SEC by giving self-regulatory organizations, such as the Financial Industry Regulatory Authority (FINRA), responsibility for most of the daily oversight of the securities markets and broker-dealers under their jurisdiction. In addition, state securities regulators administer state securities laws and regulations, which include registering nonexempt and noncovered securities before they are marketed to investors; licensing broker-dealers, investment advisers, and their agents; and taking anti-fraud and other enforcement actions. Over the years, we have made a number of recommendations to encourage state regulators to implement a consistent set of insurance regulations. Given the difficulties of harmonizing insurance regulation across states through the NAIC-based structure, we reported that Congress could consider the advantages and disadvantages of providing a federal charter option for insurance and creating a federal insurance regulatory entity. We also recently developed a framework composed of nine elements to help Congress and others evaluate proposals for financial regulatory reform. One of these elements is consistent consumer and investor protection: market participants should receive consistent, useful information, as well as legal protections for similar financial products and services, including disclosures, sales practice standards, and suitability requirements. Another element is consistent financial oversight: the regulatory structure should ensure that similar institutions and products are subject to consistent regulation, oversight, and transparency, in part to help minimize negative competitive outcomes. Life Settlement Market Organized Largely as an Informal Network of Specialized Intermediaries The life settlement market is organized largely as an informal network of specialized intermediaries that facilitate the sale of existing life insurance policies by their owners to third-party investors. Policy owners may sell their policies directly to investors in some cases, but owners and investors commonly use intermediaries to assist them with their life settlement transactions. Life settlement brokers represent policy owners for a fee or commission and may solicit bids for policies from multiple life settlement providers with the goal of obtaining the best price. Life settlement providers buy life insurance policies on behalf of investors for a fee or commission or for their own account. The number of brokers and providers varies widely from state to state. 
No comprehensive data exist on the size of the life settlement market, but estimates and other data indicate that the market grew rapidly from its inception around 1998 until the recent financial crisis. Industry estimates of the total face value of policies settled in 2008 ranged from around $9 billion to $12 billion. Life settlement providers responding to our survey reported purchasing policies with a total face value of around $5.50 billion, $9.03 billion, $12.95 billion, and $7.01 billion in 2006, 2007, 2008, and 2009, respectively. Life settlements traditionally have involved high-dollar-amount policies insuring older Americans. Individuals and financial institutions, including some banks, hedge funds, and life insurance companies, have invested in life settlements by buying individual policies, fractionalized interests in individual policies, interests in pools of policies, or other products. State and Federal Regulators Oversee Various Aspects of the Life Settlement Market State insurance and securities regulators and federal securities regulators oversee various aspects of the life settlement market. Life settlements typically comprise two transactions: (1) the sale of a policy by its owner to a provider, which itself is the life settlement contract, and (2) the sale of a policy by the provider to an investor. The majority of states regulate the first transaction, called the front-end transaction, under their insurance laws. The second transaction, called the back-end transaction, is regulated under state and federal securities laws in certain circumstances. NAIC and the National Conference of Insurance Legislators have developed model acts to help states craft legislation to regulate viatical and life settlements. As of February 2010, 38 states had enacted insurance laws and regulations specifically to regulate life settlements—many based on one or both of the model acts. State insurance regulators generally focus on regulating the front-end transaction to protect policy owners, such as by imposing licensing, disclosure, reporting, and other requirements on brokers and providers. Although state insurance laws regulating life settlements generally share basic elements, we identified differences between state laws through our survey of state insurance regulators. State securities regulators and, in certain circumstances, SEC regulate investments in life settlements (the back-end transaction) to protect investors. Variable life policies are securities; thus, settlements involving these policies are securities subject to SEC’s and FINRA’s sales practice rules. SEC also has asserted jurisdiction over certain types of investments in life settlements of nonvariable, or traditional, life insurance policies, but their status as securities is unclear because of conflicting decisions from the U.S. Courts of Appeals for the District of Columbia and the Eleventh Circuit. In 2002, the North American Securities Administrators Association (NASAA) issued guidelines for states to regulate viatical and life settlement investments under their securities laws. According to NASAA and our independent research, all but two states regulate investments in life settlements as securities under their securities laws. Regulatory Inconsistencies May Pose Challenges for Policy Owners, Investors, and Life Settlement Intermediaries Inconsistencies in the regulation of life settlements may pose a number of challenges. 
First, life settlements can provide policy owners with a valuable option, but policy owners in some states may be afforded less protection than policy owners in other states due to regulatory inconsistencies. Consequently, such policy owners may face greater challenges obtaining information needed to protect their interests. Twelve states and the District of Columbia have not enacted laws specifically governing life settlements, and disclosure requirements can differ among the states that have such laws. Based on our survey of state insurance regulators, state regulators have conducted a limited number of broker or provider examinations. For example, 24 of the 34 state regulators that had the authority to examine brokers licensed in their state had not done any such examinations in the past 5 years. Similarly, 22 of the 33 state regulators that had the authority to examine providers licensed in their state had not done so in the past 5 years. In addition to the lack of uniformity, policy owners in some states could complete a life settlement without knowing how much they paid their brokers or whether they received a fair price for their policies, unless such information voluntarily was provided to them. Second, some individual investors may face challenges obtaining adequate information about life settlement investments, including the risks associated with such investments. Because of the conflicting court decisions (noted previously) on whether investments in life settlements are securities and differences in state securities laws, individuals in different states investing in the same life settlement investment may be afforded different regulatory protections and receive different disclosures about their investments. Third, some life settlement brokers and providers may face challenges because of inconsistencies in laws across states. For example, two brokers and four providers told us that regulatory differences between states were burdensome or increased their compliance costs. Also, brokers and providers told us that some states have adopted laws that impede their ability to do business in those states. Conclusion Because life settlements and related investments can have characteristics of both insurance and securities, their regulatory structure involves multiple state and federal regulators. State insurance regulators have played the primary role in protecting policy owners by regulating the sale of in-force policies by their owners to life settlement providers. In turn, state and federal securities regulators have played the primary role in protecting investors by regulating the sale of life settlement investments. We recently developed a framework for assessing proposals for modernizing the financial regulatory system. One element of that framework is consistent consumer and investor protection: market participants should receive consistent, useful information and legal protection for similar financial products and services, including disclosures, sales practice standards, and suitability requirements. Another element is consistent financial oversight: the regulatory structure should ensure that similar institutions and products are subject to consistent regulation, oversight, and transparency, in part to help minimize negative competitive outcomes. These two elements have not been fully achieved under the current regulatory structure of the life settlement market. First, not all states have enacted life settlement laws to provide policy owners with a minimum level of protection. 
Second, licensing, disclosure, and other requirements differ among some states with life settlement laws, resulting in different protections for different policy owners. Third, policy owners also can be afforded different protections, depending on whether the policy being sold is a variable policy subject to FINRA and federal sales practice rules or a nonvariable policy. Although variable policies, unlike nonvariable policies, expose their policy owners to investment risk, life settlements involving both types of policies generally raise the same potential risks for policy owners. A potential federal role in the regulation of insurance has been the subject of debate, which the current financial crisis has renewed. For example, the financial regulation reform legislation currently under consideration by Congress would, among other things, create a Federal Insurance Office, in part to monitor the insurance industry (other than health and crop insurance). The bill contains a provision directing the office to consult with states on matters of national importance and conduct a study on how to modernize and improve insurance regulation, including gaps in state regulation. In the last decade, we have made a number of recommendations to encourage state regulators to implement a consistent set of insurance regulations. In providing a framework for assessing proposals to modernize the financial regulatory system, we recently reported that Congress could consider the advantages and disadvantages of providing a federal charter option for insurance and creating a federal insurance regulatory entity because of the difficulties in harmonizing insurance regulation across states through the NAIC-based structure. Matter for Congressional Consideration As Congress continues to consider how best to reform the regulatory structure of the financial services sector, life settlements offer another example of products that may lack clear comprehensive regulation. Therefore, Congress may wish to consider taking steps to help ensure that policy owners involved in life settlement transactions are provided a consistent and minimum level of protection. Agency Comments and Our Evaluation We provided the Chairman of SEC, the Commissioner of Internal Revenue, and the Chief Executive Officer of NAIC with a draft of this report for their review and comment. We received written comments from SEC and NAIC, which are summarized below and reprinted in appendixes IV and V. SEC also provided us with technical comments that were incorporated in the report where appropriate. The Internal Revenue Service did not provide any written comments. SEC generally agreed with our conclusions and matter for congressional consideration. NAIC did not state whether it agreed or disagreed with our matter for congressional consideration but raised related concerns. In commenting on a draft of the report, SEC stated that it agreed with our matter for congressional consideration and, based on the work of its Life Settlement Task Force, believes that enhanced investor protections should be introduced into the life settlement market. SEC noted that investors often face challenges in obtaining adequate information about life settlement investments and, as indicated in our report, may be afforded different regulatory protections and receive different disclosures, depending on where they reside. According to SEC, these are issues that should be addressed through clarification of regulatory authority. 
In that connection, SEC’s Life Settlement Task Force has focused its review on enhancing investor protections and addressing regulatory gaps in the life settlement market and is expected to make recommendations to the commission along those lines. In commenting on a draft of this report, NAIC’s Chief Operating Officer and Chief Legal Officer summarized our matter for congressional consideration but noted that NAIC disagrees that an option for a federal charter for insurance is an appropriate solution for the life settlement market. He also noted that NAIC objects to the inclusion of a discussion of federal chartering for insurers or the creation of a federal insurance regulatory entity, as neither proposal has included any federal role in the life settlement market. Our references to federal chartering and a federal insurance regulatory entity in the conclusions served to illustrate the debate over the advantages and disadvantages of a federal role in the regulation of insurance, given the difficulties of harmonizing insurance regulation across the states. As discussed in our report, states also have faced difficulties in harmonizing their life settlement regulations. Because of regulatory inconsistencies, policy owners in some states may be afforded less protection than policy owners in other states, and addressing this issue should be part of any regulatory reform effort. Our matter for congressional consideration seeks to raise this as an issue to be considered but does not provide any specific approach that Congress should take. While NAIC discusses potential approaches that it views as inappropriate—regulation through federal chartering or a federal regulatory agency—other approaches have been taken to harmonize state insurance regulations. For example, in 1999, Congress passed the Gramm-Leach-Bliley Act, which encouraged states to enact uniform laws and regulations for licensing insurers or reciprocity among states when licensing insurers that operate across state lines. The NAIC official also commented that our report did not mention that policy owners entering into life settlements have received, in the aggregate, a small fraction of the face value of their policies (based on our provider survey)—indicating that such transactions are a poor financial choice for most consumers. The costs and benefits provided by life settlements to policy owners have been a controversial issue. For example, some have noted that policy owners could maximize their estate value by liquidating assets other than their life insurance policies, and others have noted that life settlements offer policy owners an alternative to surrendering their policies for their cash value, which also typically is a small fraction of the face value of the policies. As we noted in our report, life settlements can provide policy owners with a valuable option, but policy owners can face challenges in assessing whether a life settlement is their best option or knowing whether they are being offered a fair price for their policy. As agreed with your office, unless you publicly release its contents earlier, we plan no further distribution of this report until 30 days from its date of issue. At that time, we will send copies of this report to interested congressional committees, the Chairman of SEC, Commissioner of Internal Revenue, Chief Executive Officer of NAIC, and others. In addition, the report will be available at no charge on GAO’s Web site at http://www.gao.gov. 
If you or your staff have any questions concerning this report, please contact me at (202) 512-8678 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made major contributions to this report are listed in appendix VI. Appendix I: Briefing to Congressional Staff on Life Insurance Settlements Briefing to the Senate Special Committee on Aging Life Insurance Settlements (Life Settlements) As you requested, our review addresses the following three questions. How is the life settlement market organized? How are the life settlement market and its participants regulated? What challenges are policy owners, investors, and life insurance companies facing in connection with the life settlement market? Reviewed and analyzed academic and other studies on life settlements and related investments, materials collected from the Web sites of life settlement brokers and providers, and information from firms offering life settlement investments. Reviewed licensing records in 34 states (where providers were required to be licensed) to compile a list of providers. We then conducted a survey of those 49 providers licensed in two or more states to collect data on their settlement transactions over the past 4 years. We received responses from 25 providers. Interviewed seven providers, four brokers, three institutional investors, the Securities and Exchange Commission (SEC), the Financial Industry Regulatory Authority (FINRA), three state insurance regulators, three state securities regulators, the National Association of Insurance Commissioners (NAIC), the North American Securities Administrators Association (NASAA), the Life Insurance Settlement Association (LISA), the Life Settlement Institute (LSI), the Institutional Life Markets Association (ILMA), the Insurance Studies Institute, and three attorneys specializing in life settlements. We attended two life settlement industry conferences. Scope and Methodology (continued) Reviewed and analyzed state insurance laws and regulations; state and federal securities laws and regulations; federal and state court cases, as well as SEC and state securities enforcement actions involving life settlements or related investments; model acts or similar guidance created by NAIC, the National Conference of Insurance Legislators (NCOIL), and NASAA; academic, regulatory, and other studies on the regulation of life settlements; and related GAO reports. We conducted a survey of state regulators from 50 states and the District of Columbia to obtain information about their life settlement laws and regulations. We received responses from 45 states and the District of Columbia. For this objective, we generally interviewed the same entities identified in objective one. Scope and Methodology (continued) We conducted this performance audit from August 2009 to July 2010 in accordance with generally accepted government auditing standards. Those standards require that we plan and perform the audit to obtain sufficient, appropriate evidence to provide a reasonable basis for our findings and conclusions based on our audit objectives. We believe that the evidence obtained provides a reasonable basis for our findings and conclusions based on our audit objectives. Life settlements are the sale of existing, or in-force, life insurance policies by their owners. A U.S. Supreme Court decision in 1911 (Grigsby v. Russell, 222 U.S. 
149) determined in effect that a valid life insurance policy is personal property that may be sold by its owner. Historically, policy owners who have had unneeded or unaffordable life insurance could surrender their policies to their life insurers for the cash surrender value. Life settlements provide owners with another option—the potential to sell their policies for an amount greater than the cash surrender value of the policies. Life settlements evolved from viatical settlements in the late 1990s. Viatical settlements involved the sale of insurance policies by terminally or chronically ill persons expected to live 2 years or less. Life settlements typically involve the sale of policies by healthier persons expected to live more than 2 years. Background (continued) The life settlement industry distinguishes between viatical and life settlements based on the insured’s life expectancy, but some regulators do not. For example, some state insurance laws broadly define the term “viatical settlements” to include life settlements. We generally use the term “life settlements” to refer to sales of policies covering insured persons expected to live more than 2 years. Life settlements gave rise to stranger-originated life insurance (STOLI) around the early 2000s. STOLI generally is the origination of a life insurance policy for the benefit of a person who has no insurable interest in the insured when the policy is issued. STOLI also has been referred to as investor-originated life insurance and speculator-initiated life insurance. Background (continued) In a life settlement transaction, a policy owner sells a policy to an investor (or other buyer) for an agreed-upon amount, typically for more than the policy’s cash surrender value but less than the policy’s face value (or death benefit). In exchange for the payment, the investor becomes the new policy owner—responsible for paying the policy’s premiums and entitled to receive the policy’s death benefit when the insured dies. Policy owners sell their policies because they no longer need the insurance or can no longer afford to pay the premiums. For example, a policy owner may no longer need the insurance because the intended beneficiary had died. Investors can profit from a life settlement by receiving a death benefit that is greater than the cost of acquiring and owning the policy. The amount of the death benefit is known when the policy is bought, but the date when the insured will die and the death benefit will be paid is not known. Background (continued) State insurance regulators are responsible for enforcing state insurance laws and regulations. State insurance regulators oversee the insurance industry through the licensing of agents, approval of insurance products and their rates, and examination of insurers’ financial solvency and market conduct. NAIC assists state regulators with various oversight functions, including maintaining databases and coordinating regulatory efforts by providing guidance, model laws and regulations, and information-sharing tools. Federal and state securities regulators oversee the securities markets, in part to protect investors. The U.S. securities markets are subject to a combination of industry self-regulation (with SEC oversight) and direct SEC regulation. This regulatory scheme was intended to relieve resource burdens on SEC by giving self-regulatory organizations, such as FINRA, responsibility for much of the daily oversight of the securities markets and broker-dealers under their jurisdiction. 
State securities regulators administer state securities laws and regulations, which include registering nonexempt and noncovered securities before they are marketed to investors; licensing broker-dealers, investment advisers, and their agents; and taking antifraud and other enforcement actions. The life settlement market is organized largely as an informal network of specialized intermediaries that facilitate the sale of existing life insurance policies by their owners to third-party investors. Policy owners and investors can transact directly but commonly use intermediaries. Life settlement brokers represent policy owners and may solicit bids for their policies from multiple life settlement providers with the goal of obtaining the best price. In turn, providers sell policies to investors. The number of brokers and providers varies widely from state to state. No comprehensive life settlement data exist, but estimates indicate the market grew rapidly from 1998 until the recent financial crisis. Estimates of the total face value of policies settled in 2008 ranged from around $9 billion to $12 billion. Life settlements traditionally have involved high-dollar-amount policies insuring older Americans. Individuals and financial institutions, including some banks, hedge funds, and life insurers, have invested in life settlements by buying individual policies, fractionalized interests in individual policies, interests in pools of policies, or other products. Summary (continued) State insurance and securities regulators, combined with federal securities regulators, oversee various aspects of the life settlement market. Life settlements typically comprise a front-end transaction—the sale of a policy by its owner (e.g., the insured) to a provider—and a back-end transaction—the sale of a policy by the provider to an investor. As of February 2010, 38 states have enacted insurance laws and regulations specifically to regulate life settlements. State insurance regulators generally focus on regulating the front-end transaction to protect policy owners, such as by imposing licensing, disclosure, reporting, and other requirements on brokers and providers. State securities regulators and, in certain circumstances, SEC regulate investments in life settlements (the back-end transaction) to protect investors. Variable life policies are securities; thus, settlements involving these policies are securities subject to SEC’s jurisdiction. SEC also has asserted jurisdiction over certain types of investments in life settlements of nonvariable, or traditional, insurance policies, but their status as securities is unclear because of a split between two federal circuit courts. According to NASAA and our own independent research, all but two states regulate investments in life settlements as securities under their securities laws. As noted in the background, we generally use the term “life settlements” to refer to sales of policies covering insured persons expected to live more than 2 years and, thus, draw a distinction between life settlements and viatical settlements. Summary (continued) Regulatory inconsistencies may pose a number of challenges. Life settlements can provide policy owners with a valuable option, but owners may face challenges obtaining adequate information. Twelve states and the District of Columbia have not enacted laws governing life settlements, and disclosure requirements can differ among the other states. 
In addition to the lack of uniformity, the potential exists for policy owners to complete a life settlement without knowing how much they paid their brokers or whether they received a fair price for their policies, unless such information is provided voluntarily to them. Some individual investors may face challenges obtaining adequate information about life settlement investments. Due to conflicting decisions by the U.S. Courts of Appeals for the District of Columbia Circuit and the Eleventh Circuit on whether investments in viatical settlements are securities and differences in state laws, individuals in different states investing in the same type of life settlement investment may be afforded different regulatory protections and receive different disclosures about their investment. Some life settlement brokers and providers may face challenges because of inconsistencies in the life settlement laws across states. For example, brokers and providers told us that some states have adopted laws that impede their ability to do business in those states. Summary (continued) As Congress considers how best to reform the regulatory structure of the financial services sector, it may wish to consider taking steps to help ensure that policy owners involved in life settlement transactions are provided a consistent and minimum level of protection. Life Settlement Market Organized Largely as an Informal Network of Specialized Intermediaries Policy owners may sell their policies directly to investors in some cases, but owners and investors commonly use intermediaries, including agents, life settlement brokers, and life settlement providers, to assist them with their life settlement transactions (see fig. 1). Often the provider transfers the commission payment to the broker from the proceeds of the sale. Agents typically are insurance agents who assist policy owners with their transactions for a fee or commission. Agents may help policy owners determine whether to sell their policies, complete a life settlement application, and hire a life settlement broker. Generally, in states that regulate life settlements, a life insurance agent licensed by the state may serve as a life settlement broker, subject to the duties and responsibilities imposed on such brokers, but does not have to register as one. In nonregulated states, an agent may not be subject to similar duties and requirements. In regulated states, financial planners, accountants, and attorneys retained and paid by the policy owner are not regulated as life settlement brokers. Life settlement brokers negotiate the sale of a life insurance policy between the policy owner and a buyer, namely a life settlement provider, for a fee or commission. Brokers represent policy owners, and policy owners can negotiate their broker’s commission. State laws typically provide that regardless of the manner in which the broker is compensated, the broker owes a fiduciary duty to the policy owner. According to four providers we interviewed, commissions are negotiated between policy owners and their brokers, but providers pay brokers their commissions from the proceeds provided by investors. One provider said that this approach is similar to the way commissions are paid in real estate transactions. Broker services may include obtaining a life expectancy estimate on the insured, gathering required documents (such as medical forms), and soliciting offers for the policy from multiple providers with the goal of obtaining the best price for the policy.
We surveyed insurance regulators in all 50 states and the District of Columbia. Forty-five states and the District of Columbia completed our survey. Life settlement providers purchase life insurance policies from policy owners, agents, or life settlement brokers on behalf of investors for a fee or commission or for their own account. Providers sell policies to investors. Provider activities may include ensuring that documents comply with applicable laws, representing investors in the bidding process, and servicing policies after transactions are completed. Based on our survey of state insurance regulators, we found that the number of licensed life settlement providers varied considerably across the 32 states that imposed a licensing requirement on providers and provided us with data on the number of their licensed providers (see fig. 3). The life settlement market is organized largely as an informal network of specialized intermediaries that facilitate the sale of existing life insurance policies by their owners to investors. To sell their policies, owners or brokers typically solicit bids for the policies from providers. The value of a policy depends on a range of factors, including the life expectancy of the insured and the policy’s death benefit. Life settlement brokers can play a key role in settlement transactions by controlling which providers are permitted to bid on a policy. Brokers establish working relationships with a number of providers and may have a process for reviewing and approving the providers with which they will do business. Likewise, providers may have a process for reviewing and approving brokers. Brokers solicit bids on policies from one or more providers, in part depending on whether (1) the policy’s parameters (for example, policy’s face value and insured’s life expectancy) match the specifications of the providers and (2) the providers are licensed, if required. Providers value the policies and, if interested, bid on them. Some life settlement providers buy policies through other intermediaries, such as insurance agents, financial planners, or securities broker-dealers. Electronic trading platforms have been developed to help facilitate the buying and selling of life insurance policies. However, two brokers and three providers told us such platforms generally provide little cost savings and are not widely used. No comprehensive life settlement data exist, but various estimates indicate that the market grew rapidly until the recent financial crisis. A securities research firm estimated that the total face value of policies settled in 1998, around the time life settlements emerged, was $0.2 billion. A provider and consulting firm separately estimated that the total face value of policies settled in 2008 was about $9 billion to $12 billion. Two brokers and three providers told us that the recent credit crisis generally has led to a reduction in investor demand for life settlements and an excess in supply of policies for sale in 2008 and 2009. To compile data on the size of the life settlement market, we conducted a survey of 49 providers that were licensed in two or more states, and 25 providers responded to our survey. We identified 34 states that required providers to be licensed and obtained a list of providers licensed in each of these states (as of September 2009). Based on these lists, we identified 98 providers, of which 55 were licensed in two or more states. However, we were able to contact only 49 of these providers for our survey. 
Because no comprehensive life settlement data exist, we were not able to estimate the share of the market held by the 25 providers responding to our survey. Table 1 summarizes some of our survey results, including the total commissions paid to brokers (in billions of dollars). Various data indicate that life settlements traditionally have involved high-face-value policies insuring older Americans. Based on a sample of 1,020 policies settled in 2008, Life Policy Dynamics, a consulting firm, found that the average face value per policy was nearly $2.3 million and the average ages of the insured males and females were 76.8 years and 81.1 years, respectively. Based on a sample of 3,138 policies settled in 2006, LISA reported that the average face value per policy was nearly $2.1 million. Based on our review of 29 provider Web sites, we found these providers were interested in buying policies with the following parameters: the minimum age of the insured ranged from 60 to 70 years old; the minimum face value of the policy ranged from $25,000 to $1 million; the maximum face value of the policy ranged from $5 million to $100 million; the minimum life expectancy ranged from 2 to 4 years; the maximum life expectancy ranged from 10 to 21 years; and the types of life insurance policies included universal, whole, convertible term, and variable policies. Individuals and financial institutions, including banks, hedge funds, and life insurance companies, have invested in life settlements through several different products or instruments. Investors may choose life settlements to diversify their portfolios (viewing life settlement returns as not being correlated with returns on equities and other traditional investments) or for other purposes. However, returns on life settlements depend on when the insured persons die, which cannot be predicted precisely. If the insured persons live longer than estimated, investors may pay more than expected in policy premiums—reducing their return. Products or instruments through which investors can invest in life settlements include individual policies; portfolios of individual policies; fractionalized interests in individual policies; and interests in pools of policies, such as life settlement funds and asset-backed securities. Institutional investors tend to buy individual policies or portfolios of policies, and individual investors tend to buy fractionalized interests in individual policies or interests in pools of policies. State and Federal Regulators Oversee Various Aspects of the Life Settlement Market Life settlements typically comprise two transactions: (1) the sale of a policy by the owner to a provider, which itself is the life settlement contract, and (2) the sale of the policy or an interest in the policy or its proceeds by providers to investors. The majority of states regulate the first transaction, called the front-end transaction, under their insurance laws. However, in at least one circumstance, when the life settlement involves the sale of a variable life insurance policy, the front-end transaction also is regulated under the federal securities laws. The second transaction, called the back-end transaction, is regulated under state securities laws and, in certain circumstances, federal securities laws. To protect policy owners involved in life settlements, NAIC and NCOIL have developed model acts to help states craft legislation to regulate viatical and life settlements. In 1993, following the emergence of viatical settlements, NAIC developed the Viatical Settlements Model Act.
Viatical settlements did not precisely fit within the definition of insurance activity on which regulators usually focused, but insurance consumers were being harmed in these transactions, leading state insurance regulators to develop a model act. In 2000, following the emergence of life settlements, NCOIL developed the Life Settlements Model Act and revised the act in 2004 to address the growing life settlement market. In 2001, NAIC extensively revised its model act and expanded the act’s definition of viatical settlement to include life settlements. In 2007, following the emergence of stranger-originated life insurance (STOLI), NAIC and NCOIL revised their model acts to prohibit, in effect, life settlements involving STOLI policies. STOLI generally is the origination of a life insurance policy for the benefit of a person who has no insurable interest in the insured when the policy is issued. Such arrangements attempt to circumvent state insurable interest laws—under which many states require a person to be related by blood or law, have an interest engendered by affection, or have an economic interest in the continued life of the insured. According to life insurance officials and others, STOLI emerged around 2003, when the supply of existing life insurance policies eligible for life settlements could not meet investor demand for such policies. Unlike life settlements, STOLI involves the issuance of a new policy without an insurable interest, but STOLI policies subsequently can be sold and, thus, become life settlements. Over time, the majority of states have enacted laws or modified existing regulations to regulate life settlements under their insurance laws and regulations—many of which were based on a version of the NAIC model act, a version of the NCOIL model act, or a combination of both. As of February 2010, 38 states have enacted insurance laws or regulations to regulate life settlements, and 12 states and the District of Columbia have not. State insurance laws and regulations covering life settlements focus primarily on protecting policy owners by regulating the activities and professionals involved in the sale of a policy by its owner to a provider (the front-end transaction). State life settlement laws and regulations generally (1) require licensing of providers and brokers; (2) require filing and approval of settlement contract forms and disclosure statements; (3) describe the content of disclosures that must be made by brokers and providers; (4) impose periodic reporting requirements on providers; (5) prohibit certain business practices deemed to be unfair; and (6) provide insurance regulators with examination and enforcement authority. In addition to state insurance regulators, state securities regulators and, in certain circumstances, SEC oversee investments in life settlements under their securities laws to protect investors. Sales of variable life insurance policies—in both the front- and back-end transactions—are securities transactions under the federal securities laws. Variable life insurance policies build cash value through the investment of premiums into separate investment options and offer an income tax-free death benefit to the beneficiaries. The cash value and death benefit vary based on the performance of the underlying investment choices. These policies are similar to traditional, or nonvariable, life insurance, except that the policy owners have investment choices in connection with the underlying assets.
Because policy owners assume investment risk under their variable policies, these policies are securities. As a result, life settlements and related investments involving variable policies are securities transactions subject to SEC jurisdiction. Investments in nonvariable life policies do not expressly fall under the definition of a security but still can be subject to securities laws. As noted above, investors can invest in life settlements by buying individual policies, a portfolio of policies, fractionalized interests in individual policies, or interests in a pool of policies. These policies can include variable or nonvariable insurance policies. Under the federal securities laws, the statutory definition of a security does not expressly include life settlement investments but does include the term “investment contract.” In SEC v. W.J. Howey Co., 328 U.S. 293 (1946), the Supreme Court held that an investment contract is a security if the investors expect profits from a common enterprise that depends upon the efforts of others. This definition is used to determine whether an instrument is an investment contract (called the investment contract test). Providers or other third parties may seek to structure investments in life settlements in a way that makes them fall outside the definition of an investment contract and, thus, not subject to the federal securities laws. Applying the investment contract test, SEC has asserted that certain life settlement investments involving nonvariable insurance policies are investment contracts and, thus, subject to its jurisdiction, but the federal courts have not reached a uniform decision on this issue. In SEC v. Life Partners, Inc., 87 F.3d 536 (D.C. Cir. 1996), SEC brought an enforcement action against a provider for selling fractionalized interests in viatical settlements without registering them as securities. In 1996, the D.C. Circuit Court concluded that the interests were not investment contracts and, thus, not subject to the federal securities laws. In SEC v. Mutual Benefits Corp., 408 F.3d 737 (11th Cir. 2005), SEC brought an enforcement action against a provider for fraud in connection with its sale of fractionalized interests in viatical settlements. In 2005, the Eleventh Circuit found the interests were investment contracts and subject to the federal securities laws. The federal courts have not addressed whether the sale of an individual nonvariable policy by a provider to an investor is a security under the federal securities laws. Life settlement investments that are securities under the federal securities laws must be registered, unless they qualify for an exemption. Moreover, entities selling these investments must be registered as securities broker-dealers and are subject to FINRA’s sales practice rules (discussed below). SEC and FINRA have used various tools to monitor life settlements and related investments. FINRA has issued various notices, reviewed applications by broker-dealers to add life settlements to their business activities, and examined broker-dealers involved in life settlements. SEC has taken enforcement actions to protect investors. SEC recently formed a life settlement task force to examine emerging issues in the life settlement market and advise SEC on whether market practices and regulatory oversight can be improved.
According to SEC staff, the task force may issue a public report based on its work and, if warranted, include recommendations. According to NASAA and our own independent research, all but two states regulate investments in life settlements under their state securities laws. Because of the Life Partners decision, NASAA issued guidelines in 2002 for states to regulate viatical investments under their securities laws. NASAA noted that state securities regulators were not bound by the decision and took the position that investments in viatical settlements, broadly defined to include life settlements, were securities. Under NASAA’s guidelines, a viatical investment is defined as the right to receive any portion of the death benefit or ownership of a life insurance policy for consideration that is less than the death benefit. The guidelines exclude sales of policies by their owners to providers from the definition. Thirty-five states have statutes defining a “security” or “investment contract” to expressly include investments in life settlements under their securities laws. These states generally exempt from the definition sales of policies by their owners to providers. Thirteen other states and the District of Columbia, like SEC, apply the investment contract test to life settlement investments to determine whether these investments fall within the definition of a security and are subject to their securities laws. The majority of state authorities applying the investment contract test have found that their states’ securities laws include viatical or life settlement investments. However, in a 2004 decision, Griffitts v. Life Partners, Inc., 2004 Tex. App. LEXIS 4844 (Tex. Ct. App. May 26, 2004), the Texas Court of Appeals concluded that viatical settlements are not securities under the Texas securities law and instead fall within the law’s exception for insurance products. Investments in life settlements that are subject to state securities laws must be registered, and entities or persons selling these investments must be registered. Regulatory Inconsistencies May Present Challenges for Policy Owners Policy owners may face challenges obtaining adequate information about their life settlement transactions. Although life settlements can provide policy owners with a valuable option, policy owners can face challenges with these transactions, such as assessing whether a life settlement is suitable or the best option for them; knowing whether they are being offered a fair price for their policy, because little information about the market value of policies is publicly available; understanding the potential risks or implications associated with life settlements, including that the proceeds may be taxable or the transaction could limit their ability to obtain insurance in the future; or protecting themselves from potential abuse, such as excessive broker commissions. Regulators and others have raised concerns about the potential for policy owners to be subject to abuse in life settlements. The New York Attorney General and the Florida Office of Insurance Regulation separately took action against a provider for allegedly working with brokers to manipulate the bidding process and not disclosing commissions paid to the brokers. The provider settled both cases without any admission of liability or violation of any laws or regulations. SEC and FINRA have expressed concern about high broker commissions.
Moreover, FINRA has examined six broker-dealers believed to be engaged in life settlements and found problematic practices, primarily with regard to commissions, at two firms. Some industry observers and participants have commented that one of the significant risks faced by consumers is not being adequately advised about whether they should sell their life insurance or pursue another option. Some industry participants identified excessive commissions and not obtaining bids from multiple buyers as bad practices. Based on the data provided by the 25 providers responding to our survey, we found that broker commissions appear to have declined in the past 4 years. Specifically, the share of the total gross proceeds (or amount paid for the policies) received by brokers declined from around 15 percent in 2006 to around 9 percent in 2009; a simple illustration of what these shares mean for an individual policy owner follows this discussion. In our survey of state insurance regulators, 26 states reported that they track consumer complaints about life settlements, but such complaints have been limited in number. Of the 26 states, 22 provided us with the number of complaints they received about life settlements in 2007, 2008, and 2009. Fourteen states reported that they did not receive any complaints during the 3 years. Eight states reported receiving a total of 35, 47, and 36 complaints in 2007, 2008, and 2009, respectively. Figure 4 shows the complaints received by these states. Thirty-eight states (regulated states) have enacted life settlement laws and regulations to protect policy owners, but 12 states and the District of Columbia have not (unregulated states). As discussed above, regulatory protections provided by some regulated states to policy owners include requiring brokers and providers to be licensed; brokers to owe their clients a fiduciary duty; brokers or providers to disclose in writing the risks associated with a life settlement contract; brokers or providers to disclose in writing the amount of the broker’s compensation; and brokers to disclose in writing all offers, counter-offers, acceptances, and rejections relating to a proposed life settlement contract. Although 34 and 33 states reported providing their regulators with the authority to examine brokers and providers, respectively, not all of them provided us with data about the examinations they have conducted. In addition to state life settlement laws, FINRA imposes sales practice requirements on securities broker-dealers or their representatives recommending or facilitating life settlement transactions involving variable life policies. Suitability: FINRA requires firms to have a reasonable basis for believing that the transaction is suitable for the customer. It has noted that a variable life settlement is not necessarily suitable for a customer simply because the settlement price offer exceeds the policy’s cash surrender value. Due diligence: FINRA requires firms to understand the confidentiality policies of providers and brokers and the ongoing obligations that customers will incur. Best execution: FINRA requires firms to use reasonable diligence to ascertain the best market for a security and obtain the most favorable price possible. FINRA notes that firms should make reasonable efforts to obtain bids from multiple providers, either directly or through a broker. Supervision: FINRA requires firms to establish an appropriate supervisory system to ensure that their employees comply with all applicable rules. Commissions: FINRA prohibits firms from charging customers more than a fair and reasonable commission in any securities transaction.
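To make the commission shares reported above concrete, the following is a minimal sketch using a hypothetical settlement offer of $250,000 in gross proceeds (an assumed figure, not data from our survey); only the 15 percent and 9 percent shares come from the survey results discussed above.

```python
# Illustrative only: the $250,000 gross proceeds figure is a hypothetical assumption;
# the 15 and 9 percent commission shares are the approximate survey averages cited above.

def split_proceeds(gross_proceeds, commission_share):
    """Split a settlement's gross proceeds between the broker's commission and the owner."""
    commission = gross_proceeds * commission_share
    owner_net = gross_proceeds - commission
    return commission, owner_net

for year, share in (("2006", 0.15), ("2009", 0.09)):
    commission, net = split_proceeds(250_000, share)
    print(f"{year}: broker commission ${commission:,.0f}, owner receives ${net:,.0f}")
```

Under these assumptions, the owner of the same policy would net about $212,500 at a 15 percent commission share and about $227,500 at a 9 percent share, which is one reason written disclosure of the amount of the broker’s compensation matters to policy owners.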
Policy owners in some states may be afforded less protection than policy owners in other states due to regulatory inconsistencies and, thus, face greater challenges obtaining information needed to protect their interests. Policy owners can ask brokers or providers for information they need to protect their interests. Nonetheless, as recognized by NAIC’s and NCOIL’s adoption of model acts and, in turn, some states’ adoption of life settlement laws, some policy owners may not do so because they might not know to ask for such information or for other reasons. Likewise, some brokers or providers may not provide policy owners with certain information unless asked or required. Policy owners could complete a life settlement without being informed about risks or implications of such a transaction. Many brokers disclose potential implications to policy owners in their application forms, but some brokers in unregulated states, and in regulated states that have not imposed the requirement, do not. Some providers buy policies directly from owners but do not include disclosures in their application forms. Brokers or providers may voluntarily disclose such information later in the process (e.g., as part of the closing documents) but are not required to do so in all states. Policy owners could complete a life settlement without knowing how much they paid their brokers or whether their brokers solicited bids from multiple providers. Institutional investors formed ILMA in part to promote transparency about broker commissions and bids received by brokers. Since 2008, ILMA members have required their providers to disclose broker commissions. ILMA officials told us that about half of the settlement transactions are completed with the level of disclosure required by ILMA. Three providers told us that some brokers have not solicited bids from providers because those providers disclose commissions, and some policy owners have renegotiated commissions once disclosed. One provider told us that it does not disclose broker commissions in unregulated states, unless asked, to avoid being disadvantaged. Brokers may voluntarily disclose information about their commissions or bids received from providers but are not required to do so in unregulated states and regulated states that have not imposed the requirement. Policy owners selling variable life insurance may be afforded greater protection in terms of suitability than policy owners selling nonvariable policies. Regulated states generally hold brokers to a fiduciary duty to policy owners but do not specifically impose a suitability requirement. In contrast, FINRA specifically imposes a suitability requirement on securities broker-dealers with respect to variable life settlements. SEC also has broad antifraud authority over these transactions. According to an attorney who specializes in nonvariable life settlements, few brokers perform a suitability analysis, but the attorney said such analysis should be required. Similarly, a broker told us the lack of a suitability requirement for brokers should be addressed. According to a life settlement provider, life settlements generally have involved policies owned by high-net-worth individuals, who are financially sophisticated and able to protect their own interests. Some market participants we spoke to have called for a federal role in regulating life settlements to protect policy owners.
According to a provider, federal law should set minimum standards for state regulation of life settlements, and the proposed Consumer Financial Protection Agency should supervise life settlement activity in those states that do not provide the minimum level of regulation. Three providers told us that federal regulation of life settlements would promote greater uniformity, but this approach also has potential negatives. For example, one provider told us that it is not clear that a federal regulatory agency would be better than the states in enforcing the standards and protecting consumers. We also recently developed a framework comprising nine elements to help Congress and others evaluate proposals for financial regulatory reform. One of these elements is consistent consumer and investor protection: market participants should receive consistent, useful information, as well as legal protections for similar financial products and services, including disclosures, sales practice standards, and suitability requirements. See, for example, GAO, Insurance Reciprocity and Uniformity: NAIC and State Regulators Have Made Progress in Producer Licensing, Product Approval, and Market Conduct Regulation, but Challenges Remain, GAO-09-372 (Washington, D.C.: Apr. 6, 2009). See GAO, Financial Regulation: A Framework for Crafting and Assessing Proposals to Modernize the Outdated U.S. Financial Regulatory System, GAO-09-216 (Jan. 8, 2009). Individual investors may face challenges obtaining adequate information about investments in life settlements. Life settlement investments raise a number of risks for investors. Some of these risks include the following. Longevity risk: Persons whose lives are insured in life settlements may live significantly longer than expected because of medical advances or other factors. In this case, investors would have to pay more policy premiums than expected, resulting in lower returns than expected. Life expectancy underwriting risk: Medical underwriters use different methodologies to estimate the life expectancies of the insured persons. If an underwriter underestimated the life expectancies of the insured persons, the effect for investors generally would be the same as under longevity risk. Legal risk: Life insurance companies could contest the policy and refuse to pay the death benefit because of a lack of insurable interest. If a company were successful, the investor would suffer a loss on the policy. Liquidity risk: Investors may need to liquidate their investment but may not be able to do so in a timely manner. If they could not continue to pay policy premiums to keep the policies in force, they may have to let the policies lapse. Regulators also have raised concerns about the risks associated with life settlement investments. In a 2009 speech, the SEC Chairman commented that investors may not have a complete understanding of the investment risks associated with a life settlement, including the risks related to the health and life expectancy of the insured. In a 2009 release, FINRA expressed concern about retail investors who purchase these life settlement products because they may not fully understand the risks of such investments. In 2009, NASAA included life settlements in its list of top investment traps. According to SEC staff, the agency received 54 complaints regarding viatical or life settlements between July 2007 and January 2010. Thirty-seven complaints involved two providers for failing to pay investors and other abuses.
Seventeen complaints alleged misrepresentation, lack of suitability, theft of funds, and other abuses. State and federal securities regulators have taken various actions to protect investors in life settlements and related investments. Nearly all states have brought life settlement investments under their securities laws. According to NASAA, state securities regulators have taken enforcement actions against providers for selling unregistered investments and committing fraud and abuse against individual investors. The types of abuses targeted have included life settlement entities deliberately selling nonexistent policies and keeping the investment proceeds (e.g., Ponzi schemes); misrepresenting the medical condition of the policy owners; and making unsupportable claims about the performance of the investment or failing to adequately disclose information about the risks to prospective investors. Since 1994, SEC has brought 19 enforcement cases related to the sale of viatical and life settlement investments. These include actions against providers for making misrepresentations to investors and actions against funds for operating Ponzi schemes involving viatical settlements. Some individual investors may face challenges obtaining sufficient information about life settlement investments because of the potential for such investments not to be subject to state or federal securities laws. Because of a split between two federal circuit court decisions, a lack of uniformity exists as to whether investments in life settlements on nonvariable policies are securities, creating a potential obstacle for SEC and state securities regulators to protect investors. For example, two state securities regulators told us that they often are confronted with defenses based on the D.C. Circuit Court’s Life Partners decision when trying to establish jurisdiction over life settlement investments in enforcement actions. A Texas state court has found certain life settlement products sold by a provider not to be securities, but a Colorado state court has found the same products to be securities. As a result, investors in the same product could be provided different protections and, in turn, different disclosures about the product. In 2002, LSI testified, and more recently NASAA and a life settlement provider told us, that the federal securities laws should be amended to deem life settlement investments to be securities in light of the D.C. Circuit Court’s Life Partners decision. Some life settlement brokers and providers may face challenges because of inconsistencies in the life settlement laws or regulations across states. Brokers and providers generally told us that keeping abreast of the ongoing changes in life settlement laws and regulations across the different states and complying with these laws and regulations is challenging. Some states began regulating life settlements in the early 2000s, but changes are ongoing. For example, California, Illinois, and New York recently modified their laws and regulations to enhance their oversight of life settlements. Following the NAIC’s and NCOIL’s amendment of their model acts in 2007 to address STOLI, numerous states have amended their life settlement laws and regulations. Two providers told us that they spend significant resources tracking changes being made by states to their life settlement laws and regulations.
Two brokers and four providers told us that differences in regulatory requirements among states are burdensome to them or increase their compliance costs. Entities operating in multiple states may need to (1) maintain different application, disclosure, and other forms for different states, (2) obtain approval for such forms from different regulators, and (3) file different data in different states, for example in annual reports. According to ACLI and industry observers, life insurance companies in the broader insurance market can face challenges similar to those that life settlement market participants face in obtaining licenses, reporting information, and obtaining approvals for their products and forms in 51 different jurisdictions, and these requirements increase costs and hamper competition. Brokers and providers told us that some states have adopted laws or regulations that impede their ability to do business in those states. With fewer brokers or providers available to solicit or make bids, policy owners could receive lower offers, according to market participants. Three brokers and one provider told us that some states require brokers to obtain surety bonds to be licensed, but such bonds can be costly or may not be available. One broker told us that this requirement is unnecessary, because brokers do not handle customer funds. Some regulators have recognized that the requirement might be difficult to comply with but consider it important to protect policy owners. Two brokers told us that one state limits broker commissions to 2 percent of the gross proceeds, which is too low given their costs. According to our survey of state insurance regulators, no brokers are licensed in that state. Two providers told us that they do not do business in certain states because it is too difficult to comply with their regulations. ILMA, two providers, and a bank involved in life settlements said that they support greater uniformity in the laws regulating life settlements, in part to lower transaction costs or increase operational efficiencies. STOLI may pose challenges for life insurers, as well as for policy owners and investors, despite various efforts to prevent it. No universally accepted definition of STOLI exists, but the term generally refers to the issuance of a life insurance policy for the benefit of a third party who has no insurable interest in the insured when the policy is issued. According to ACLI, states require the buyer of insurance on the life of another person to have an insurable interest in the life of that person. Despite this requirement, some individuals have been induced to purchase life insurance for the benefit of investors (called STOLI). Although STOLI involves the origination of new policies, STOLI policies can be sold by their owners to providers or investors and, thus, become life settlements. No reliable data exist to measure STOLI, but various industry observers and participants told us that STOLI grew rapidly from around 2003 to 2008. STOLI can pose risks to policy owners, life insurance companies, and investors, including the following. Policy owners participating in STOLI can face a number of risks, including incurring taxes on income generated from the transaction, becoming involved in disputes about the validity of the policy, being unable to purchase additional life insurance (because insurers sometimes will not offer coverage to individuals with total outstanding coverage above certain limits), and facing potential legal liability from the transaction.
Some of these risks are similar to the risks raised in a life settlement transaction. According to ACLI and insurers we interviewed, life insurance companies may suffer damage to their reputation from STOLI and losses on STOLI policies, and they could incur costs in deterring, detecting, or litigating STOLI policies. Investors in life settlements involving STOLI policies face the risk that such policies could be rescinded for violation of the insurable interest laws or fraud. STOLI generally is prohibited under insurable interest laws, but approximately half of the states have enacted additional laws or regulations specifically prohibiting STOLI transactions. In 2007, NAIC and NCOIL modified their model acts to include provisions to address STOLI, but the acts take different approaches. NAIC’s act imposes a 5-year moratorium on the settlement of policies with STOLI characteristics, subject to some exceptions. NCOIL’s act defines STOLI and prohibits such transactions. LISA generally supports NCOIL’s approach, because it does not interfere with the property rights of policy owners. Various insurance associations support using NAIC’s approach as the basis for state legislation but also including aspects of NCOIL’s approach. Based on responses to our survey of state insurance regulators, 26 states have laws that include specific provisions to deter or prohibit STOLI. Of these states, 20 explicitly have defined STOLI transactions and prohibited such transactions. Life insurers and others also have taken action to detect or prevent STOLI, and they and others have indicated that STOLI appears to have decreased due to various factors. Some life insurance companies have sought to prevent STOLI by (1) tightening underwriting standards and developing screening procedures to identify potential STOLI; (2) disciplining or terminating business arrangements with agents selling STOLI policies; and (3) initiating legal actions to rescind STOLI policies. According to life insurers, brokers, and providers, several factors have reduced STOLI—including the recent credit crisis, which reduced investor demand for life settlements and the availability of credit to finance STOLI; efforts taken by life insurers to detect STOLI and prevent the issuance of such policies; and the increase in life expectancy estimates by several life expectancy underwriters, which reduced investor demand for life settlements involving STOLI policies. Life insurance companies may continue to face challenges in detecting and preventing STOLI. Two life insurers and ACLI told us that STOLI promoters are continuing to develop new ways to evade efforts to detect or prevent the issuance of STOLI, such as by using trusts. Two life insurers told us that it is difficult to separate life settlements involving STOLI policies from those involving legitimate life insurance policies, because the STOLI policies are hard to distinguish. The courts recently have found that a person may legitimately buy a policy while planning to sell it, as long as no agreement exists to sell the policy to a third party when the policy is purchased. ACLI supports banning the securitization of life settlements, because securitization would encourage promoters to elicit STOLI, but ILMA, LISA, and others disagree. Our survey also collected data on the time between the issuance and settlement of a policy; the total policies settled in those data do not match the total policies settled shown in table 1, because not all providers provided us with data on the age of their settled policies.
Because life settlements and related investments can have characteristics of both insurance and securities, their regulatory structure involves multiple state and federal regulators. State insurance regulators have played a primary role in protecting policy owners by regulating the sale of in-force policies by their owners to life settlement providers. In turn, state and federal securities regulators have played the primary role in protecting investors by regulating the sale of life settlement investments. We recently developed a framework for assessing proposals for modernizing the financial regulatory system. One of the elements of that framework is consistent consumer and investor protection: market participants should receive consistent, useful information, and legal protection for similar financial products and services, including disclosures, sales practice standards, and suitability requirements. Another element is consistent financial oversight: the regulatory structure should ensure that similar institutions and products are subject to consistent regulation, oversight, and transparency, in part to help minimize negative competitive outcomes. Conclusions (continued) These two elements have not been fully achieved under the current regulatory structure of the life settlement market. First, not all states have enacted life settlement laws to provide policy owners with a minimum level of protection. Second, licensing, disclosure, and other requirements differ between or among some states that have enacted life settlement laws, resulting in different protections for different policy owners. Third, policy owners also can be afforded different protections, depending on whether the policy being sold is a variable policy subject to FINRA and federal sales practice rules or a nonvariable policy. Although variable policies, unlike nonvariable policies, expose their policy owners to investment risk, life settlements involving both types of policies generally raise the same potential risks for policy owners. Conclusions (continued) A potential federal role in the regulation of insurance has been the subject of debate, and the current financial crisis has renewed the debate. For example, the House Financial Services Committee proposed a bill to create a Federal Insurance Office to monitor all aspects of the insurance industry including identifying regulatory gaps. More recently, the Senate Committee on Banking, Housing, and Urban Affairs proposed a bill to create an Office of National Insurance, in part to monitor the insurance industry. In the last decade, we have made a number of recommendations to encourage state regulators to implement a consistent set of insurance regulations. As Congress considers how best to reform the regulatory structure of the financial services sector, life settlements offer another example of products that may lack clear comprehensive regulation. Therefore, Congress may wish to consider taking steps to help ensure that policy owners involved in life settlement transactions are provided a consistent and minimum level of protection. Appendix II: Results of GAO’s Survey of State Insurance Commissioners Regarding Their Regulation of Life Settlements As part of our life settlement review, we surveyed insurance regulators of the 50 states and the District of Columbia to document their laws and regulations applicable to life settlements. Our survey focused on state regulation of life settlements and excluded viatical settlements from our definition of life settlements. 
We defined a life settlement generally as the sale of a life insurance policy by an individual who is not terminally or chronically ill to a third party, namely a settlement provider. We defined a viatical settlement generally as the sale of a life insurance policy by an individual who is terminally or chronically ill to a third party. Forty-five states and the District of Columbia completed our survey. Five states did not complete our survey: Delaware, Georgia, Indiana, Kansas, and South Carolina. California, Illinois, New York, and Rhode Island recently passed life settlement laws that had not yet taken effect. California, Illinois, and Rhode Island completed our survey as if their recently passed laws had taken effect; New York did not. For each question below, we provide the total responses to each possible answer in parentheses. State Viatical and/or Life Settlement Laws 1. Which of the following best describes your state’s laws and regulations covering viatical and/or life settlements? a. Only viatical settlements, generally defined as the sale of a life insurance policy by an individual with a terminal or chronic illness or condition, are covered (5–Massachusetts, Michigan, New Mexico, New York, and Wisconsin) b. Only life settlements, generally defined as the sale of a life insurance policy by an individual without a terminal or chronic illness or condition, are covered (1–Idaho) c. Both viatical and life settlements are covered (33–Alaska, Arkansas, California, Colorado, Connecticut, Florida, Hawaii, Illinois, Iowa, Kentucky, Louisiana, Maine, Maryland, Minnesota, Mississippi, Montana, Nebraska, Nevada, New Jersey, North Carolina, North Dakota, Ohio, Oklahoma, Oregon, Pennsylvania, Rhode Island, Tennessee, Texas, Utah, Vermont, Virginia, Washington, and West Virginia) d. Neither viatical nor life settlements are covered (7–Alabama, Arizona, District of Columbia, Missouri, New Hampshire, South Dakota, and Wyoming) Definition of Broker - Throughout our survey, we use the term “brokers” to refer to persons or entities that negotiate the sale of a life insurance policy between a policy owner and provider or other buyer. 2. Does your state require the policy owner’s life settlement brokers to be licensed? a. Yes (33) b. No (1) c. Don’t know (0) d. Not applicable (0) 3. Does your state require the policy owner’s life settlement brokers to complete continuing education requirements related to life settlements on a periodic basis? a. Yes (16) b. No (17) c. Don’t know (0) d. Not applicable (0) 4. Does your state require the policy owner’s life settlement brokers to demonstrate evidence of their financial responsibility through a surety bond or similar means? a. Yes (11) b. No (21) c. Don’t know (0) d. Not applicable (1) 5. How many life settlement brokers are currently licensed in your state? 6. Does your state impose a fiduciary duty on life settlement brokers to their clients (i.e., policy owners) who are selling their policies? a. Yes (31) b. No (2) c. Don’t know (1) d. Not applicable (0) Definition of Provider - Throughout our survey, we use the term “providers” to refer to persons or entities that purchase a life insurance policy from the policy owner for their own account or on behalf of a third party. 7. Does your state require life settlement providers to be licensed? a. Yes (33) b. No (1) c. Don’t know (0) d. Not applicable (0) 8.
Does your state require life settlement providers to demonstrate evidence of their financial responsibility through a surety bond or similar means? a. Yes (24) b. No (10) c. Don’t know (0) d. Not applicable (0) 9. How many life settlement providers are currently licensed in your state? 10. Does your state require that life expectancy underwriters or consultants (e.g., those companies that conduct analyses of an insured’s life expectancy) be licensed? a. Yes (1) b. No (33) c. Don’t know (0) Approval of Settlement Contracts and Disclosure Statements 11. Does your state require life settlement brokers to have their disclosure statements (provided to policy owners) approved by an appropriate regulator (e.g., insurance commission)? a. Yes (30) b. No (4) c. Don’t know (0) 12. Does your state require life settlement providers to use a settlement contract form that has been approved by an appropriate regulator (e.g., insurance commission)? a. Yes (33) b. No (1) c. Don’t know (0) 13. Does your state require life settlement providers to have their disclosure statements (e.g., forms providing risk or fee disclosures) provided to policy owners approved by an appropriate regulator (e.g., insurance commission)? a. Yes (33) b. No (1) c. Don’t know (0) 14. Does your state require life settlement providers to submit data (e.g., aggregate face value and proceeds of policies settled) periodically on their settlement transactions executed within your state (i.e., executed either on the basis of the location of the policy owner’s residence, or on location where business is conducted)? a. Yes (31) b. No (3) c. Don’t know (0) 15. Does your state require life settlement providers to submit data (e.g., aggregate face value and proceeds of policies settled) periodically on their settlement transactions executed outside of your state (i.e., executed either on the basis of the location of the policy owner’s residence, or on location where business is conducted)? a. Yes (15) b. No (19) c. Don’t know (0) 16. Does your state require life settlement providers to submit data periodically on enforcement actions in which they are involved within or outside of your state? a. Yes (22) b. No (12) c. Don’t know (0) 17. Does your state require life settlement providers to report information on policies settled within a prescribed period of policy issuance (e.g., within 5 years)? a. Yes (24) b. No (10) c. Don’t know (0) 18. Does your state prohibit life settlement brokers, providers, and other life settlement entities with knowledge of the insured’s identity from disclosing the insured’s financial or medical information, except under expressly enumerated circumstances? a. Yes (34) b. No (0) c. Don’t know (0) Examinations and Investigations–Life Settlement Brokers 19. Does your state’s appropriate regulator (e.g., insurance commission) have the authority to examine licensed life settlement brokers? a. Yes (34) b. No (0) c. Don’t know (0) 20. In the past 5 years, has your state conducted any examinations of life settlement brokers based solely on the passage of time and not based on cause (e.g., a customer complaint)? a. Yes (3) b. No (27) c. Don’t know (1) d. Not applicable (3) 21. In the past 5 years, has your state conducted any investigations (or “cause exams”) of a life settlement broker? a. Yes (9) b. No (21) c. Don’t know (1) d. Not applicable (3) 22. In the past 5 years, has your state conducted any on-site examinations of life settlement brokers? a. Yes (5) b. No (25) c. Don’t know (1) d.
Not applicable (3) 23. In the past 5 years, has your state conducted any off-site examinations of life settlement brokers? a. Yes (6) b. No (24) c. Don’t know (1) d. Not applicable (3) 24. In the past 5 years, has your state assessed the controls that life settlement brokers use to protect the confidentiality of an insured’s personal information and to comply with privacy requirements? a. Yes (7) b. No (22) c. Don’t know (1) d. Not applicable (4) 25. In the past 5 years, has your state assessed controls that life settlement brokers use to ensure that life settlement advertisements are not unfair, deceptive, or misleading? a. Yes (9) b. No (21) c. Don’t know (0) d. Not applicable (4) 26. In the past 5 years, has your state assessed controls that life settlement brokers use to detect, investigate, and report possible acts of fraud? a. Yes (8) b. No (21) c. Don’t know (1) d. Not applicable (4) 27. Of the currently licensed life settlement brokers in your state, what percentage of them has been examined in the last 5 years? a. 0 % (24) b. 1 – 25 % (4) c. 26 – 50 % (0) d. 51 – 75 % (1) e. 76 – 100 % (1) f. Don’t know (1) g. Not applicable (3) 28. Have any of your examinations or investigations found instances in which a life settlement broker had improperly disclosed the identity (e.g., name and address) of an insured in the past 5 years? a. Yes (2) b. No (12) c. Don’t know (2) d. Not applicable (18) Examinations and Investigations–Life Settlement Providers 29. Does your state’s appropriate regulator (e.g., insurance commission) have the authority to examine licensed life settlement providers? a. Yes (33) b. No (0) c. Don’t know (0) 30. In the past 5 years, has your state conducted any examinations of life settlement providers based solely on the passage of time and not based on cause (e.g., a customer complaint)? a. Yes (3) b. No (27) c. Don’t know (0) d. Not applicable (3) 31. In the past 5 years, has your state conducted any investigations (or “cause exams”) of a life settlement provider? a. Yes (9) b. No (21) c. Don’t know (0) d. Not applicable (3) 32. In the past 5 years, has your state conducted any on-site examinations of life settlement providers? a. Yes (3) b. No (27) c. Don’t know (0) d. Not applicable (3) 33. In the past 5 years, has your state conducted any off-site examinations of life settlement providers? a. Yes (7) b. No (23) c. Don’t know (0) d. Not applicable (3) 34. In the past 5 years, has your state assessed the controls that life settlement providers use to protect the confidentiality of an insured’s personal information and to comply with privacy requirements? a. Yes (8) b. No (21) c. Don’t know (0) d. Not applicable (3) 35. In the past 5 years, has your state assessed controls that life settlement providers use to ensure that life settlement advertisements are not unfair, deceptive, or misleading? a. Yes (9) b. No (21) c. Don’t know (0) d. Not applicable (3) 36. In the past 5 years, has your state assessed controls that life settlement providers use to detect, investigate, and report possible acts of fraud? a. Yes (12) b. No (18) c. Don’t know (0) d. Not applicable (3) 37. Of the currently licensed life settlement providers in your state, what percentage of them has been examined in the last 5 years? a. 0 % (22) b. 1 – 25 % (5) c. 26 – 50 % (0) d. 51 – 75 % (1) e. 76 – 100 % (2) f. Don’t know (0) g. Not applicable (3) 38. 
Have any of your examinations or investigations found instances in which a life settlement provider had improperly disclosed the identity (e.g., name and address) of an insured in the past 5 years? a. Yes (1) b. No (13) c. Don’t know (1) d. Not applicable (18) 39. Do you track the number of complaints made by consumers about life settlements? a. Yes (26) b. No (6) c. Don’t know (1) 40. How many complaints were made by consumers concerning life settlements in calendar years 2007, 2008, and 2009? Disclosure Requirements of Life Settlement Brokers 41. When does your state require life settlement brokers to provide policy owners with a written disclosure of the risks associated with a life settlement contract (e.g., tax liability, ability to purchase future insurance, effects on the eligibility for public assistance)? a. Not applicable (2) b. At the time of application (13) c. No later than the date the application for the settlement contract is signed by all parties (13) d. No later than the date the life settlement contract is signed (4) e. By another date (2) f. Don’t know (0) 42. Does your state require life settlement brokers to provide policy owners with information on the method (e.g., the percentage of the policy’s face value or gross proceeds) for calculating the broker’s compensation? a. Required verbally (0) b. Required in writing (19) c. Required both verbally and in writing (0) d. Not required (13) e. Don’t know (0) 43. Does your state require life settlement brokers to provide policy owners with information on the amount of the broker’s compensation? a. Required verbally (0) b. Required in writing (22) c. Required both verbally and in writing (0) d. Not required (10) e. Don’t know (0) 44. Does your state require life settlement brokers to provide policy owners with information on all offers, counter-offers, acceptances, and rejections relating to the proposed settlement contract? a. Required verbally (0) b. Required in writing (20) c. Required both verbally and in writing (0) d. Not required (12) e. Don’t know (0) 45. Does your state require life settlement brokers to provide policy owners with information on any affiliation between the broker and any person making an offer for the proposed settlement contract (e.g., a life settlement provider or investor)? a. Required verbally (0) b. Required in writing (23) c. Required both verbally and in writing (0) d. Not required (9) e. Don’t know (0) Disclosure Requirements of Life Settlement Providers 46. When does your state require life settlement providers to provide policy owners with a written disclosure of the risks associated with a life settlement contract (e.g., tax liability, ability to purchase future insurance, effects on the eligibility for public assistance)? a. Not applicable (1) b. At the time of application (11) c. No later than the date the application for the settlement contract is signed by all parties (11) d. No later than the date the life settlement contract is signed (10) e. By another date (1) f. Don’t know (0) 47. Does your state require life settlement providers to notify the insured in the event of transfer of ownership of the policy or change in the beneficiary? a. Required verbally (1) b. Required in writing (22) c. Required both verbally and in writing (0) d. Not required (10) e. Don’t know (0) 48.
48. Does your state require life settlement providers to provide policy owners with information on any affiliation between the provider and the issuer of the policy?
a. Required verbally (0)  b. Required in writing (30)  c. Required both verbally and in writing (0)  d. Not required (4)  e. Don’t know (0)

49. Does your state require life settlement providers to provide policy owners with information on the method for calculating the compensation paid to the broker?
a. Required verbally (0)  b. Required in writing (18)  c. Required both verbally and in writing (1)  d. Not required (15)  e. Don’t know (0)

50. Does your state require life settlement providers to provide policy owners with information on the amount of compensation paid to the broker?
a. Required verbally (1)  b. Required in writing (20)  c. Required both verbally and in writing (1)  d. Not required (12)  e. Don’t know (0)

Information Disclosure for Brokers or Providers in Life Settlement Transactions

51. Does your state require life settlement providers or brokers to provide policy owners with information that alternatives to life settlement contracts exist?
a. Required verbally (0)  b. Required in writing (33)  c. Required both verbally and in writing (0)  d. Not required (0)  e. Don’t know (0)

52. Does your state require life settlement providers or brokers to provide policy owners with information that settlement brokers owe a fiduciary duty to the policy owners?
a. Required verbally (0)  b. Required in writing (22)  c. Required both verbally and in writing (0)  d. Not required (11)  e. Don’t know (1)

53. Does your state require life settlement providers or brokers to provide policy owners with information that some or all of the proceeds of the life settlement contract may be taxable?
a. Required verbally (0)  b. Required in writing (33)  c. Required both verbally and in writing (0)  d. Not required (0)  e. Don’t know (0)

54. Does your state require life settlement providers or brokers to provide policy owners with information that the proceeds from a settlement contract may adversely affect the recipient’s eligibility for public assistance or other government benefits?
a. Required verbally (0)  b. Required in writing (32)  c. Required both verbally and in writing (0)  d. Not required (0)  e. Don’t know (1)

55. Does your state require life settlement providers or brokers to provide policy owners with information that the owner has the right to terminate or rescind a life settlement contract within a prescribed period after the contract is executed?
a. Required verbally (0)  b. Required in writing (34)  c. Required both verbally and in writing (0)  d. Not required (0)  e. Don’t know (0)

56. Does your state require life settlement providers or brokers to provide policy owners with information that entering into a settlement contract may cause other rights or benefits, including conversion rights or waiver of premium benefits under the policy, to be forfeited?
a. Required verbally (0)  b. Required in writing (32)  c. Required both verbally and in writing (0)  d. Not required (2)  e. Don’t know (0)

57. Does your state require life settlement providers or brokers to provide policy owners with information that the insured may be asked to renew his or her permission to disclose all medical, financial, or personal information in the future to someone who buys the policy?
a. Required verbally (0)  b. Required in writing (27)  c. Required both verbally and in writing (1)  d. Not required (5)  e. Don’t know (1)
58. Does your state require life settlement providers or brokers to provide policy owners with information that any person who knowingly presents false information in an application for a life insurance or life settlement contract is guilty of a crime?
a. Required verbally (0)  b. Required in writing (25)  c. Required both verbally and in writing (0)  d. Not required (8)  e. Don’t know (1)

59. Does your state require life settlement providers or brokers to provide policy owners with information that the insured may be contacted for the purpose of determining the insured’s health status?
a. Required verbally (0)  b. Required in writing (31)  c. Required both verbally and in writing (1)  d. Not required (1)  e. Don’t know (1)

60. Does your state require life settlement providers or brokers to provide policy owners with information that a change in ownership could in the future limit the insured’s ability to purchase future insurance on the insured’s life?
a. Required verbally (0)  b. Required in writing (19)  c. Required both verbally and in writing (0)  d. Not required (14)  e. Don’t know (1)

61. Does your state require providers or brokers to provide life insurance companies with information about settlement transactions involving policies that were issued within the past 5 years?
a. Yes (7)  b. No (26)  c. Don’t know (1)

62. Does your state require providers or brokers to provide life insurance companies with a written notice to the issuer when its policy has become subject to a settlement?
a. Yes (24)  b. No (10)  c. Don’t know (0)

63. Does your state require life insurance companies to disclose information about other options (such as life settlements) to their policy holders who want to terminate their policy?
a. Yes (3)  b. No (31)  c. Don’t know (0)

64. Does your state require advertisements or marketing materials by entities soliciting potential policy owners for life settlements to be approved by an appropriate regulator (e.g., insurance commission)?
a. Yes (14)  b. No (20)  c. Don’t know (0)

65. Does your state prohibit life settlement brokers from conducting sales with any provider, financing entity, or related provider trust, that is controlling, controlled by, or under common control with such broker?
a. Yes (15)  b. No (19)  c. Don’t know (0)

66. Does your state prohibit providers from entering into a life settlement contract if, in connection with such contract, anything of value will be paid to a broker that is controlling, controlled by, or under common control with such provider?
a. Yes (16)  b. No (17)  c. Don’t know (0)

67. Does your state require providers entering into a life settlement contract to obtain a written statement from a licensed physician that the policy owner is of sound mind and under no constraint or undue influence to enter into a settlement contract?
a. Yes (28)  b. No (5)  c. Don’t know (1)

68. Does your state require the life settlement provider to obtain a document in which the insured consents to the release of his or her medical records to a licensed provider, broker, or insurance company?
a. Yes (31)  b. No (2)  c. Don’t know (1)

69. Does your state require the life settlement provider to obtain a witnessed document, prior to the execution of the settlement contract, in which the policy owner consents to the contract, represents that the policy owner has a full and complete understanding of not only the contract but also the benefits of the insurance policy, and acknowledges he or she is entering into the contract freely and voluntarily?
a. Yes (28)  b. No (5)  c. Don’t know (1)
70. Which of the following best describes your state’s provisions on a policy owner’s right to terminate (i.e., rescind) a life settlement contract after entering it?
a. Policy owner does not have the right to terminate a contract after entering it (0)
b. Policy owner generally has 15 days or less to terminate a contract after entering it (15)
c. Policy owner generally has 16 to 60 days to terminate a contract after entering it (17)
d. Policy owner generally has more than 60 days to terminate a contract after entering it (0)
e. Don’t know (0)

71. Does your state require fees, commission, or other compensation paid by the provider or owner to the broker in connection with a settlement contract to be computed as a percentage of the offer obtained, not the face value of the policy?
a. Yes (6)  b. No (27)  c. Don’t know (1)

Stranger-Originated Life Insurance (STOLI) Transactions

72. Do your state’s laws include any specific provisions intended to deter or prohibit STOLI or similar types of transactions?
a. Yes (26)  b. No (8)  c. Don’t know (0)

73. Does your state explicitly define STOLI transactions and prohibit such transactions?
a. Yes (20)  b. No (14)  c. Don’t know (0)

74. Within how many years of issuance of a life insurance policy does your state prohibit a life settlement contract on that policy, except under specific enumerated circumstances?
a. Our state does not prohibit a life settlement contract based on the number of years from issuance of that policy to deter or prevent STOLI transactions (3)
b. 2 years (22)
c. 3 years (0)
d. 4 years (1)
e. 5 years (7)
f. 6 or more years (0)
g. Don’t know (1)

75. In efforts to deter or prohibit STOLI transactions, does your state have another approach to deter and prohibit STOLI transactions, aside from those approaches listed in the previous two questions?
a. Yes (16)  b. No (16)  c. Don’t know (2)

76. Does your state require life settlement brokers to have an anti-fraud plan or initiatives to detect, investigate, and report possible fraudulent acts?
a. Yes (22)  b. No (12)  c. Don’t know (0)

77. Does your state require life settlement providers to have an anti-fraud plan or initiatives to detect, investigate, and report possible fraudulent acts?
a. Yes (29)  b. No (5)  c. Don’t know (0)

Appendix III: Results of GAO’s Survey of Licensed Life Settlement Providers

As part of our life settlement review, we surveyed life settlement providers licensed in two or more states about their life settlement transactions. We identified 34 states that required providers to be licensed and obtained a list of providers licensed in each of these states (as of September 2009). Based on these lists, we identified 98 providers, of which 55 were licensed in two or more states. However, we were able to contact only 49 of these providers for our survey. Of the 49 life settlement providers we surveyed, 25 of them completed our survey. For each question below, we provide the aggregated responses of the providers. Some providers did not answer every question on the survey (as noted below where applicable). Because no comprehensive life settlement data exist, we were not able to estimate the share of the market held by the providers responding to our survey.

1. What was the total number of life insurance policies purchased by your firm in calendar year?
a. 2006 – 3,148  b. 2007 – 3,703

2. What was the total face value of the policies purchased by your firm in calendar year?
a. 2006 – $5,501,932,247  b. 2007 – $9,025,862,851  c. 2008 – $12,946,270,383  d. 2009 – $7,005,574,470
3. What was the total amount paid to policy owners (exclusive of broker compensation, such as commissions) for the policies purchased by your firm in calendar year?
a. 2006 – $1,170,878,009  b. 2007 – $1,801,390,695  c. 2008 – $2,319,081,754  d. 2009 – $888,003,867

4. What was the total amount of associated cash surrender value of the policies purchased by your firm in calendar year?
a. 2006 – $99,965,301  b. 2007 – $199,300,307  c. 2008 – $149,741,970  d. 2009 – $109,432,850

5. What was the total amount of compensation (e.g., commissions) paid to brokers for the policies purchased by your firm in calendar year?
a. 2006 – $202,774,451  b. 2007 – $263,454,952  c. 2008 – $275,676,198  d. 2009 – $92,229,350

6. What was the total number of policies purchased by your firm, based on the age of the policy at the time of settlement (i.e., the time between the policy’s issuance and settlement), for each calendar year? (Firms could alternatively report data on the number of policies purchased based on the age of the policy at time of issuance.)
1. less than 2 years old – 37  2. 2 to 5 years old – 844  3. greater than 5 years old – 880
1. less than 2 years old – 21  2. 2 to 5 years old – 1,366  3. greater than 5 years old – 1,296
1. less than 2 years old – 10  2. 2 to 5 years old – 1,790  3. greater than 5 years old – 1,301

Appendix IV: Comments from the Securities and Exchange Commission

Appendix V: Comments from the National Association of Insurance Commissioners

Appendix VI: GAO Contact and Staff Acknowledgments

Staff Acknowledgments

In addition to the contacts named above, Pat Ward (Assistant Director), Joseph Applebaum, Meghan Hardy, Stuart Kaufman, Marc Molino, Barbara Roesmann, Andrew Stavisky, Jeff Tessin, Paul Thompson, and Richard Tsuhara made important contributions to this report.
Since the late 1990s, life settlements have offered consumers benefits but also exposed them to risks, giving rise to regulatory concerns. A policy owner with unneeded life insurance can surrender the policy to the insurer for its cash surrender value. Or, the owner may receive more by selling the policy to a third-party investor through a life settlement. These transactions have involved high-dollar-amount policies covering older persons. Despite their potential benefits, life settlements can have unintended consequences for policy owners, such as unexpected tax liabilities. Also, policy owners commonly rely on intermediaries to help them, and some intermediaries may engage in abusive practices.

As requested, this report addresses how the life settlement market is organized and regulated, and what challenges policy owners, investors, and others face in connection with life settlements. GAO reviewed and analyzed studies on life settlements and applicable state and federal laws; surveyed insurance regulators and life settlement providers; and interviewed relevant market participants, state and federal regulators, trade associations, and market observers.

The life settlement market is organized largely as an informal network of intermediaries facilitating the sale of life insurance policies by owners to third-party investors. Policy owners may sell policies directly to investors in some cases, but owners and investors commonly use intermediaries. Life settlement brokers represent policy owners for a fee or commission and may solicit bids for policies from multiple life settlement providers with the goal of obtaining the best price. Life settlement providers buy life insurance policies for investors or for their own accounts. No comprehensive data exist on market size, but estimates indicate it grew rapidly from its inception around 1998 until the recent financial crisis. Estimates of the total face value of policies settled in 2008 ranged from around $9 billion to $12 billion.

State and federal regulators oversee various aspects of the life settlement market. Life settlements typically comprise two transactions: the sale of a policy by its owner to a provider, and the sale of a policy by the provider to an investor. As of February 2010, 38 states had insurance laws specifically to regulate life settlements. State insurance regulators focus on regulating life settlements to protect policy owners by imposing licensing, disclosure, and other requirements on brokers and providers. The Securities and Exchange Commission (SEC), where its jurisdiction permits, and state securities regulators regulate investments in life settlements to protect investors. One type of policy (variable life) is considered a security; thus, settlements involving these policies are under SEC jurisdiction. SEC also asserted jurisdiction over certain investments in life settlements involving nonvariable, or traditional, life insurance policies, but their status as securities is unclear because of conflicting circuit court decisions. All but two states regulate investments in life settlements as securities under their securities laws.

Inconsistencies in the regulation of life settlements may pose challenges. Policy owners in some states may be afforded less protection than owners in other states and face greater challenges obtaining information to protect their interests.
Twelve states and the District of Columbia do not have laws specifically governing life settlements, and disclosure requirements can differ among the other states. Policy owners also could complete a life settlement without knowing how much they paid brokers or whether they received a fair price, unless such information was provided voluntarily. Some investors may face challenges obtaining adequate information about life settlement investments. Because of conflicting court decisions and differences in state laws, individuals in different states with the same investments may be afforded different regulatory protections. Some life settlement brokers and providers may face challenges because of inconsistencies in laws across states.

GAO developed a framework for assessing proposals for modernizing the financial regulatory system, two elements of which are consistent consumer and investor protection and consistent financial oversight for similar institutions and products. These two elements have not been fully achieved under the current regulatory structure of the life settlement market. Congress may wish to consider taking steps to help ensure that policy owners involved in life settlements are provided a consistent and minimum level of protection. SEC agreed with our matter for congressional consideration, and the National Association of Insurance Commissioners did not agree or disagree with it but raised related concerns.
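The compensation-basis and disclosure issues raised in the survey above (for example, whether broker compensation is computed as a percentage of the offer obtained or of the policy's face value, and whether the amount is disclosed to the owner) can be illustrated with a small, purely hypothetical sketch. All dollar amounts, rates, and names below are assumptions for illustration only; they are not drawn from GAO's survey data or from any actual transaction.

# Hypothetical comparison of a policy owner's net proceeds under two broker
# compensation bases. All figures are illustrative assumptions, not data from
# GAO's survey or any actual transaction.

FACE_VALUE = 1_000_000          # hypothetical policy face value
GROSS_OFFER = 200_000           # hypothetical best offer obtained by the broker
CASH_SURRENDER_VALUE = 60_000   # hypothetical value if surrendered to the insurer

def net_to_owner(gross_offer: float, commission: float) -> float:
    """Amount the policy owner keeps after broker compensation."""
    return gross_offer - commission

# Basis 1: commission computed as a percentage of the policy's face value.
commission_face = 0.06 * FACE_VALUE       # 6 percent of face value (assumed rate)
# Basis 2: commission computed as a percentage of the offer obtained.
commission_offer = 0.20 * GROSS_OFFER     # 20 percent of the offer (assumed rate)

for label, commission in [("% of face value", commission_face),
                          ("% of offer", commission_offer)]:
    net = net_to_owner(GROSS_OFFER, commission)
    print(f"{label}: commission ${commission:,.0f}, "
          f"net to owner ${net:,.0f}, "
          f"vs. cash surrender value ${CASH_SURRENDER_VALUE:,.0f}")

In this illustration the face-value basis consumes a larger share of the offer than the offer-based commission, which is one reason the survey asks whether states require disclosure of both the method and the amount of broker compensation.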
GAO_HEHS-96-80
Background

Cocaine addiction has been associated with a variety of serious health consequences: cardiovascular and respiratory problems, psychiatric disorders, acquired immunodeficiency syndrome (AIDS), sexually transmitted diseases, early child development abnormalities, and death. Because cocaine use became epidemic in the early 1980s, research opportunities have been limited, and a standard cocaine treatment has not yet been found. Many substance abuse centers have provided cocaine users with the same treatment approaches provided to opiate and other drug users. But these treatments have not been as successful for cocaine users, who have demonstrated high relapse and dropout rates. The large-scale Treatment Outcome Prospective Study (TOPS) showed that about one-third of the clients who reported returning to cocaine use in the year after treatment began to do so as early as the first week following treatment termination. Another 25 percent began using the drug within 2 to 4 weeks following treatment termination. Studies of crack cocaine users found that 47 percent dropped out of therapy between the initial clinic visit and the first session; three-quarters dropped out by the fifth session.

Because of this lack of treatment success, in the late 1980s and early 1990s the federal government began playing a more active role in sponsoring cocaine-related treatment research, principally through NIDA and the Center for Substance Abuse Treatment (CSAT). NIDA is the largest federal sponsor of substance abuse-related research, conducting work in treatment and prevention research, epidemiology, neuroscience, behavioral research, health services research, and AIDS. Since 1991, NIDA has funded about 100 cocaine treatment grants and conducted in-house research through its laboratory facilities. CSAT’s mission includes developing treatment services, evaluating the effectiveness of these services, and providing technical assistance to providers and states. Since 1991, CSAT has funded approximately 65 substance abuse research projects with implications for cocaine addiction treatment. CSAT cocaine-related data were not yet available at the time this report was published. Results therefore derive from a literature review of studies published from 1991 through 1995 and ongoing NIDA-supported cocaine studies, for which some outcome data were available.

During the 5-year period, two broad types of cocaine treatment approaches received research emphasis: cognitive/behavioral therapy and pharmacotherapy. Additionally, acupuncture has emerged as a potential therapy in the treatment of cocaine. Much of this research has been conducted in outpatient treatment settings, with a focus on “cocaine-dependent” clients—many of whom are considered to be “hardcore” drug users.

Cognitive/Behavioral Therapies

Cognitive/behavioral therapies aim to modify the ways clients think, act, and relate to others, thereby facilitating initial abstinence and a continued drug-free lifestyle. These therapies include the psychotherapies, behavior therapies, skills training, and other counseling approaches. Three types of cognitive/behavioral therapies have received recent attention: relapse prevention, community reinforcement/contingency management, and neurobehavioral therapy.
Relapse prevention focuses on helping clients to identify high-risk, or “trigger,” situations that contribute to drug relapse and to develop appropriate behaviors for avoiding, or better managing, these situations. For example, Yale University’s Substance Abuse Treatment Unit has three principal elements in its 12-week relapse prevention program. First, clients identify personal triggers by keeping a daily log of the situations in which they crave the drug. Second, they work with therapists to learn more effective ways of coping with and avoiding these and other commonly perceived triggers. And third, therapists help clients extinguish the drug-craving reactions to these triggers. Clients are taught that relapse is a process, that social pressures to use drugs can be formidable, and that lifestyle changes are necessary to discourage future substance abuse.

Community reinforcement/contingency management aims to help the client achieve initial abstinence as well as an extended drug-free lifestyle. The therapy consists of several key community-oriented components, including the participation of a client’s significant other (family member or friend) in the treatment process; providing management incentives or rewards for drug abstinence; providing employment counseling when needed; and encouraging client participation in recreational activities as pleasurable, healthy alternatives to drug use. If clients remain abstinent, they receive vouchers from the program and earn the right to participate in desired activities with their significant other. If clients test positive for drug use, or do not submit to urine testing, negative sanctions are applied (for example, their vouchers are rescinded). In this manner, community reinforcement therapy teaches clients about the consequences of their actions and strengthens family and social ties.

Neurobehavioral treatment is a comprehensive, 12-month outpatient treatment approach that includes individual therapy, drug education, client stabilization, and self-help groups. Relapse prevention techniques are included but constitute only a subset of neurobehavioral treatment. Five major stages of recovery are distinguished during the treatment process—withdrawal, “honeymoon,” “the wall,” adjustment, and resolution—with emphasis on addressing the client’s behavioral, emotional, cognitive, and relational problems at each stage of recovery. For example, in the withdrawal stage, depression, anxiety, self-doubt, and shame (emotional problems) and concentration difficulties, cocaine cravings, and short-term memory disruption (cognitive problems) are addressed. In the first 6 months, individual counseling is emphasized; in the second 6 months, weekly group counseling is provided, with optional individual and couple therapy sessions.

Pharmacotherapy

Pharmacotherapy involves the use of medications to combat cocaine abuse and addiction. Recently, NIDA’s pharmacotherapy research has focused on two objectives: facilitating initial abstinence and supporting an extended, drug-free lifestyle. To facilitate initial abstinence, research has focused on medications that treat the withdrawal symptoms of cocaine addiction and block the euphoric high induced by the drug. To help maintain an extended drug-free lifestyle, research has focused on blocking the client’s craving for cocaine, treating the underlying psychopathologies, and treating the toxic effects of cocaine on the brain.

Acupuncture

The use of acupuncture in drug abuse treatment has not been limited to cocaine addiction.
It has also been used during the past 20 years to treat addictions to opiates, tobacco, and alcohol. A Yale University acupuncture treatment program for cocaine abuse involved the insertion of needles into each ear at five strategic points, for a period of 50 minutes per session, over an 8-week period. Through the first 6 weeks, clients received the acupuncture therapy 5 days a week; in weeks 7 and 8, treatment was reduced to 3 days a week. Treatment was provided in a group context.

Three Cognitive/Behavioral Therapies Appear Favorable, but No Pharmacological Therapy Has Been Consistently Effective

The results from NIDA’s cocaine treatment grants are only now becoming available. Because cocaine therapies are still in their early stages of development, treatment outcome results cannot be generalized to all cocaine users. However, early results from a review of the literature and ongoing NIDA studies reveal the promise of three cognitive/behavioral approaches to treatment. Moreover, while no pharmacological treatment has yet proven consistently effective, NIDA is continuing to actively pursue the biology of cocaine addiction. Further, few methodologically well-designed studies of acupuncture exist, but the limited research in this area demonstrates at least some positive findings.

Three Cognitive/Behavioral Treatments Appear Effective in Outpatient Settings

Early research indicates relapse prevention, community reinforcement/contingency management, and neurobehavioral therapy are potentially promising cocaine-addiction treatment approaches for promoting extended periods of client abstinence and treatment retention in outpatient treatment settings. Table 1 provides an overview of cognitive/behavioral study methodologies and results.

Clients who received relapse prevention treatment have demonstrated favorable abstinence rates not only during the period of treatment, but during follow-up periods as well. Client treatment retention results also appear to be favorable. For example, cocaine-dependent clients participating in a 12-week Yale University program focusing on relapse prevention were able to remain cocaine abstinent at least 70 percent of the time while in treatment. A year after treatment, gains were still evident: clients receiving relapse prevention treatment and a placebo medication were reported to have used cocaine on average fewer than 3 days in the past month. Positive outcome results were also found in two other programs: more than 60 percent of the primarily middle-class, cocaine-addicted clients attending a relapse prevention program at the Washton Institute in New York were abstinent from cocaine during the 6- to 24-month follow-up period. Similarly, in the Seattle area, cocaine-using clients cut their average number of days of cocaine use by 71 percent within 6 months. Among high-severity cocaine addicts participating in another Yale program, it was also found that 54 percent receiving relapse prevention therapy were able to attain at least 3 weeks of continuous abstinence, while only 9 percent of those receiving the interpersonal psychotherapy could remain abstinent for that period of time. Retention rates were also favorable: 67 percent of the relapse prevention clients completed the entire 12-week Yale program and more than 70 percent completed the Washton program.

Community Reinforcement/Contingency Management

Community reinforcement/contingency management programs have also appeared promising in fostering abstinence and retaining clients in treatment.
Almost one-half (46 percent) of the cocaine-dependent clients participating in a 12-week community reinforcement/contingency management program at the University of Vermont were able to remain continuously abstinent from cocaine through 2 months of treatment; when the program was extended to 24 weeks, 42 percent of the participating cocaine-dependent subjects were able to achieve 4 months of continuous abstinence. By comparison, only 5 percent of those in the control group receiving drug abuse counseling alone could remain continuously abstinent for the entire 4 months. A year after clients began treatment, community reinforcement/contingency management treatment effects were still evident: 65 to 74 percent of those in the community reinforcement group reported 2 or fewer days of cocaine use in the past month. Only 45 percent of those in the counseling control group achieved such gains.

Contingency management was also studied independently in an inner-city Baltimore program. Positive results were found when tying the 12-week voucher reward system to cocaine drug testing. Nearly half of the cocaine-abusing and cocaine-dependent clients (who were also heroin users) given vouchers for cocaine-free urine test results were able to remain continuously abstinent for 7 to 12 weeks. Among clients receiving vouchers unpredictably—not tied to urine test results—only 1 client achieved abstinence for more than 2 weeks.

Client treatment retention was also high. Within the Vermont community reinforcement/contingency management group, 85 percent of the clients completed the 12-week program, compared with only 42 percent of those in the 12-step drug counseling control group. The 24-week program was completed by about five times as many clients in the community reinforcement group as those receiving drug counseling therapy (58 percent versus 11 percent).

Neurobehavioral Therapy

Several programs have demonstrated that a neurobehavioral therapeutic approach can also be effective in promoting cocaine abstinence and treatment retention. Thirty-six percent of the cocaine-abusing and cocaine-dependent clients participating in a neurobehavioral therapy program through the Matrix Institute in California succeeded in remaining continuously abstinent from cocaine for at least 8 consecutive weeks while in treatment. Follow-up results obtained 6 months after treatment entry showed that 38 percent of these clients still tested drug free. In a separate examination of two neurobehavioral outpatient treatment sites, at least 40 percent of the cocaine clients in each site remained continuously abstinent through the entire 6-month course of therapy.

Given the high rate of cocaine use among methadone clients, the neurobehavioral model was adapted in New York for use among methadone clients meeting the diagnostic criteria for cocaine dependence. In an intensive 6-month program, a strong relationship was found between the number of treatment sessions attended and cocaine use reduction. Clients attending 3 to 19 sessions experienced a 5-percent reduction in cocaine use during the previous month. Those attending 85 to 133 sessions experienced a 60-percent reduction in their past 30-day use of cocaine. In another New York study with cocaine-addicted methadone clients, those clients receiving neurobehavioral treatment demonstrated a significant decrease in cocaine use between entering treatment and 6-month follow-up; the control group showed no statistically significant decrease.
Neurobehavioral retention rates also proved favorable. In the California study of two treatment sites, clients were retained an average of about 5 months and 3 months, respectively; in the other California study, the average length of stay for cocaine users was about 4-1/2 months. For the first New York study, a total of 61 percent of the cocaine-dependent methadone clients completed the initial 6-month cocaine treatment regimen.

No Effective Medication for Treating Cocaine Addiction Has Yet Been Found

Currently, there is no FDA-approved pharmacotherapy for cocaine addiction. While some medications have proven successful in one or more clinical trials, no medication has demonstrated “substantial efficacy” once subjected to several rigorously controlled trials. Twenty major medications have been considered by NIDA’s Medications Development Division (MDD). Fourteen have been tested with humans, five are in the animal experimentation stage, and one is being tested on both humans and animals for different treatment effects. Table 2 provides a summary of the medications tested, their current phase of testing, and therapeutic uses.

Of the 20 medications tested, MDD has labeled 6 as “disappointing”: buprenorphine, carbamazepine, desipramine, imipramine, mazindol, and nifedipine. The remainder are still under investigation, but numerous clinical trials thus far have yielded mixed results. For example, a 1992 study by Ziedonis and Kosten indicated that amantadine was effective in reducing cocaine craving; yet a 1989 study by Gawin, Morgan, Kosten, and Kleber indicated that this medication was not as effective as a placebo in reducing cocaine craving. Additional pharmacological studies are cited in the bibliography.

Thus, no pharmacotherapy for cocaine exists that compares with methadone, which reduces heroin craving, enables the client to stabilize psychological functioning, and eliminates or reduces the heroin withdrawal process. Nor has any medication proven effective as a supportive therapy, to be used in combination with one or more cognitive/behavioral therapies, to enhance cocaine abstinence. But recent animal research has demonstrated the positive effects of a new immunization procedure in protecting rats against the stimulant effects of cocaine. When vaccinated, rats produced antibodies that acted like biological “sponges” or blockers, diminishing by more than 70 percent the amount of cocaine reaching the brain. As a result, inoculated rats experienced significantly lower cocaine stimulation levels than noninoculated rats. Further research needs to be conducted before human clinical trials can be planned.

Few Well-Designed Acupuncture Outcome Research Studies Exist

Some treatment centers are now offering acupuncture as therapy for cocaine and other substance abuse. For example, in 1993, the Lincoln Hospital Substance Abuse Treatment Clinic treated about 250 clients per day with acupuncture therapy. To date, however, few well-designed evaluation studies have assessed the utility of acupuncture treatment. But the limited research findings are somewhat favorable. Almost 90 percent of a group of inner-city, cocaine-dependent methadone clients who completed an 8-week course of acupuncture remained abstinent for more than a month. These individuals had been regular users of cocaine, on average, for 13 years. Fifty percent of the clients, however, did not complete the 2-month program.
Inner-city, cocaine-dependent methadone clients participating in a second acupuncture research study decreased their frequency of cocaine use and craving for the drug after just 6 weeks of therapy. These participants had been regular cocaine users, on average, for more than 10 years. And chronic crack cocaine users demonstrated a statistically significant tendency toward greater day-to-day reductions in cocaine use during a 4-week course of acupuncture therapy. But they did not differ from the control group in their overall percentage of drug-free test results.

More Research Is Needed to Formulate a Standard Cocaine Treatment Approach

Much has been learned about cocaine treatment in the 15-year period since the epidemic began. Studies show that client abstinence and retention rates can be positively affected through a number of promising treatment approaches. However, according to cocaine treatment experts, additional research is needed before standard, generalizable cocaine treatment strategies can be formulated for cocaine addicts of varying demographic and clinical groups. (See app. II for a summary of the experts’ suggestions.)

In the cognitive/behavioral area, for example, the experts indicated a need for additional clinical research aimed at identifying the important components of promising treatment practices, further development and testing of client reward systems (contingency contracting), additional study of the triggers that promote relapse, and identification of appropriate intensities and durations of treatment. In the pharmacological area, the experts recommended further development and testing of medications to block the effects of cocaine and reduce craving, examining the human toxicity effects of pharmaceutical agents found useful in animal experiments, conducting outcome studies combining cognitive/behavioral and pharmacological therapies, developing maintenance medications, and conducting more longitudinal studies of medication treatment effectiveness. The experts also highlighted the need for further research into client/treatment matching, client retention, client readiness and motivation for treatment, and long-term treatment outcomes.

Agency Comments

NIDA reviewed a draft of this report and provided comments, which are included in appendix IV. NIDA officials generally agreed with our conclusions on the effectiveness of cognitive/behavioral and pharmacological therapies for cocaine treatment. However, they felt we were too positive about the early results of acupuncture treatment, particularly given the lack of well-designed outcome studies. We agreed with NIDA on this point and reworded our statements on acupuncture’s use in treating cocaine addiction to clarify the preliminary nature of the results and the need for more well-controlled studies. Other technical and definitional changes were incorporated, as appropriate.

We are sending copies of this report to the Director of the National Institute on Drug Abuse, the Director of the Center for Substance Abuse Treatment, and other interested parties. We will also make copies available to others on request. If you have any questions about this report, please call me at (202) 512-7119 or Jared Hermalin, the Evaluator-in-Charge, at (202) 512-3551. Dwayne Simpson of Texas Christian University and George DeLeon of the National Development and Research Institutes served as independent reviewers. Mark Nadel and Karen Sloan also contributed to this report.
Methodology

To determine the extent to which cocaine therapies have proven successful, we identified studies with current reportable data on two outcome variables: drug abstinence and treatment retention. We reviewed the literature published between 1991 and 1995; examined Center for Substance Abuse Treatment (CSAT) and National Institute on Drug Abuse (NIDA) agency records of cocaine-related grants awarded during this time period; and, as necessary, contacted project investigators. The approximately 65 cocaine-related grants supported by CSAT were still in progress at the time of this writing; neither abstinence nor retention data were available for inclusion in this report. Most of the approximately 100 NIDA longitudinal studies were also in progress. Our report was therefore based on articles published during the 5-year period, unpublished documents provided by federal drug agencies, and those available abstinence and retention findings from ongoing NIDA-supported studies.

We classified the studies from each of these sources into two treatment categories: cognitive/behavioral and pharmacological treatments. We then classified the cognitive/behavioral studies as either relapse prevention, community reinforcement/contingency management, or neurobehavioral therapy and the pharmacological studies by drug type. We then reviewed those studies with reported abstinence and/or retention findings within each treatment area to determine the utility of each approach. In making determinations about treatment utility, we gave consideration to whether or not the studies had appropriate designs for determining treatment effectiveness. The intent of this report was not to provide an exhaustive evaluation synthesis of the cocaine studies currently available (particularly given the limited number of studies available), nor to assess the qualitative methodology of each study. Rather, the objective was to determine whether particular treatment approaches appeared favorable or promising, and to provide examples of such favorable cocaine treatment approaches in the text. Given the relatively limited number of studies available, additional work is necessary before determinations can be made about the utility of any treatment approach for specific demographic and clinical groups.

To identify additional research initiatives necessary for increasing our knowledge of cocaine treatment effectiveness, we conducted telephone interviews with 20 cocaine treatment experts. Each of the experts we selected was either a principal investigator or coinvestigator on a currently funded cocaine-related federal grant or contract, a member of a federal cocaine grant/contract review committee within the past 2 years, or an author of at least two cocaine peer-reviewed publications. The names and affiliations of the 20 experts who participated are listed below. (Two additional individuals chose not to participate.)
Research Initiatives Necessary for Increasing Understanding of Cocaine Treatment Effectiveness

Following are the responses of the 20 treatment experts to the GAO question, “What important knowledge gaps remain in our understanding of cocaine treatment effectiveness in each of the following two areas: cognitive/behavioral and pharmacological interventions?” Relevant individual response items were placed into six clinical and methodological categories: cognitive/behavioral issues, pharmacological issues, the cognitive/behavioral and pharmacological synergy, clinical assessment/outcome issues, population subgroup treatment issues, and methodological issues. The frequency count for each category is also provided.

Cognitive/Behavioral Issues

Identifying important components of promising treatment practices, developing and testing contingency contracting strategies, recognizing the triggers of relapse, determining appropriate intensity and duration of treatment protocols, assessing the utility of low-intensity treatments, defining and increasing important aspects of social and community support, and codifying appropriate treatment practices. Categorical frequency: 12.

Pharmacological Issues

Developing drugs to diminish the craving for cocaine; developing drugs to block the effects of cocaine; developing maintenance medication for continued relapse prevention; examining the utility of multiple untried drugs indicated in the Physician’s Desk Reference; longitudinally testing the effects of drugs; assessing human toxicity effects of drugs found useful in animal experiments; developing detoxification medication; and further investigating vaccines, agonists, and antagonists. Categorical frequency: 14.

Cognitive/Behavioral and Pharmacological Synergy

Testing drugs as adjuncts to cognitive/behavioral therapies, determining the impact of combined drug and cognitive/behavioral therapies on the extension of relapse prevention, and assessing the combination of drugs and cognitive/behavioral therapies that works best for various subgroups. Categorical frequency: 6.

Clinical Assessment/Outcome Issues

Improving the effectiveness of recruitment and retention of clients in treatment, better assessing readiness and motivation for treatment, better assessing impact of dual disorders on treatment outcome, investigating unknown long-term drug treatment outcomes, developing information on long-term incentives for maintaining drug abstinence, increasing knowledge about “aftercare” treatment planning, increasing knowledge of treatment outcome for managed care/health maintenance organizations to plan client treatments, and improving the effectiveness of outpatient care. Categorical frequency: 11.

Population Subgroup Treatment Issues

Better matching client needs to treatment services as well as determining which clients do well with specific therapies, what groups can be effectively treated, who can become abstinent without use of drugs, what subgroups learn or do not learn about relapse risk factors in treatment settings, and what educational/IQ levels are necessary for making effective use of cognitive approaches. Categorical frequency: 10.
Methodological Issues

Need for the following: more clinical trials to demonstrate the efficacy of basic treatment services; testing treatments on a wider population of cocaine users; more systematic data collection; improved technology for conducting randomized, longitudinal trials; evaluating the patient selection process (volunteers may represent a biased sample); and conducting cost-effectiveness studies. Categorical frequency: 7.

Cocaine Outcomes by Treatment Setting

In addition to the study of particular treatment approaches (such as relapse prevention, community reinforcement/contingency management, and neurobehavioral therapy), researchers are also beginning to examine the results of cocaine treatment in different types of settings (that is, outpatient, inpatient, day-hospital, and therapeutic communities). In general, outpatient and day-hospital stays tend to be less costly than extended inpatient stays. Results of recent studies suggest that cocaine treatment can be effective in these less costly settings, but further replication is necessary before any firm conclusions can be drawn.

Clients attending a California-based Veterans Administration intensive outpatient program with a self-help component were able to remain cocaine abstinent 73 percent of the time, when followed up 24 months after treatment admission. This result was comparable to that found among clients attending a more costly program consisting of both an inpatient stay and a highly intensive outpatient/self-help program. The California-based program results also surpassed those achieved by clients who participated in both an inpatient and a low-intensity outpatient/self-help program (56 percent). These results point to the conclusion that clients with a cocaine problem may be able to do quite well in an intensive outpatient setting that consists of at least four visits per month for at least 6 months.

In a second California study, cocaine-dependent inpatients fared better than outpatients at both 6 and 12 months following treatment entry, although both groups fared well. Allowing for up to two slips (or brief episodes of use), at the 6-month period the inpatient abstinence rate was 79 percent, whereas the outpatient rate was 67 percent. At the 12-month period, the abstinence rates were 72 percent and 50 percent, respectively.

The effects of day-hospital versus inpatient treatment were assessed in Philadelphia. About one-half (53 percent) of those cocaine-dependent clients attending a day-hospital program were able to remain continuously abstinent throughout the 6 months following treatment completion. This rate was comparable to that of inpatients: 47 percent.

And finally, the impact of a day-treatment program (using therapeutic community techniques) was compared with standard methadone maintenance treatment in New York. At 6-month follow-up, only 19.1 percent of those remaining in the day-treatment program had used cocaine during the past 30 days. These results were substantially better than those of participants in the standard methadone maintenance treatment program, where 41.8 percent were using cocaine at 6-month follow-up. The day-treatment therapeutic community group also demonstrated significantly greater reductions in heroin use, needle use, criminal activity, and psychological dysfunction scores.

Bibliography

Alterman, A., M. Droba, R. Antelo, J. Cornish, K. Sweeney, G. Parikh, and C. O’Brien. “Amantadine May Facilitate Detoxification of Cocaine Addicts.” Drug and Alcohol Dependence, Vol. 31 (1992), pp.
19-29. Alterman, A., C.P. O’Brien, A. Thomas McLellan, D.S. August, E.C. Snider, M. Droba, J.W. Cornish, C.P. Hall, A.H. Raphaelson, and F.X. Schrade. “Effectiveness and Costs of Inpatient Versus Outpatient Hospital Cocaine Rehabilitation.” The Journal of Nervous and Mental Disease, Vol. 182, No. 3 (1994), pp. 157-63. Avants, S. Kelly, A. Margolin, P. Chang, T. Kosten, and S. Birch. “Acupuncture for the Treatment of Cocaine Addiction: Investigation of a Needle Puncture Control.” Journal of Substance Abuse Treatment, Vol. 12, No. 3 (1995), pp. 195-205. Batki, S., L. Manfredi, P. Jacob, and R. Jones. “Fluoxetine for Cocaine Dependence in Methadone Maintenance: Quantitative Plasma and Urine Cocaine/Benzoylecgonine Concentrations.” Journal of Clinical Psychopharmacology, Vol. 13 (1993), pp. 243-50. Batki, S., L. Manfredi, Sorenson, and others. “Fluoxetine for Cocaine Abuse in Methadone Patients: Preliminary Findings.” Proceedings of the Annual Meeting of the Committee on Problems of Drug Dependence, National Institute on Drug Abuse Research Monograph #105. Rockville, Md.: National Institute on Drug Abuse, 1991, pp. 516-17. Brewington, V., M. Smith, and D. Lipton. “Acupuncture as a Detoxification Treatment: An Analysis of Controlled Research.” Journal of Substance Abuse Treatment, Vol. 11, No. 4, pp. 289-307. Bridge, P., S. Li, T. Kosten, and J. Wilkins. “Bupropion for Cocaine Pharmacotherapy: Subset Analysis.” Poster abstract submission, enclosed with Dec. 28, 1994, letter from NIDA to GAO. Carroll, K., and C. Nich. Unpublished 12-month data provided to GAO, Oct. 19, 1995. Carroll, K., B. Rounsaville, and F. Gawin. “A Comparative Trial of Psychotherapies for Ambulatory Cocaine Abusers: Relapse Prevention and Interpersonal Psychotherapy.” American Journal of Drug and Alcohol Abuse, Vol. 17, No. 3 (1991), pp. 229-47. Carroll, K., B. Rounsaville, L. Gordon, C. Nich, P. Jatlow, R. Bisighini, and F. Gawin. “Psychotherapy and Pharmacotherapy for Ambulatory Cocaine Abusers.” Archives of General Psychiatry, Vol. 51 (1994), pp. 177-87. Carroll, K., D. Ziedonis, S. O’Malley, E. McCance-Katz, L. Gordon, and B. Rounsaville. “Pharmacologic Interventions for Abusers of Alcohol and Cocaine: Disulfiram Versus Naltrexone.” American Journal of the Addictions, Vol. 2 (1993), pp. 77-9. Condelli, W., J. Fairbank, M. Dennis, and J.V. Rachal. “Cocaine Use By Clients in Methadone Programs: Significance, Scope, and Behavioral Interventions.” Journal of Substance Abuse Treatment, Vol. 8 (1991), pp. 203-12. Covi, L., J. Hess, N. Kreiter, and C. Haertzen. “Three Models for the Analysis of a Fluoxetine Placebo Controlled Treatment in Cocaine Dependence.” Proceedings of the Annual Meeting of the College on Problems of Drug Dependence, National Institute on Drug Abuse Research Monograph #141. Rockville, Md.: National Institute on Drug Abuse, 1994, p. 138. DeLeon, G. “Cocaine Abusers in Therapeutic Community Treatment.” National Institute on Drug Abuse Research Monograph #135. Rockville, Md.: National Institute on Drug Abuse, 1993, pp. 163-89. DeLeon, G., and others. “Therapeutic Community Methods in Methadone Maintenance (Passages): An Open Clinical Trial.” Drug and Alcohol Dependence, Vol. 37 (1995), pp. 45-57. Drug Abuse Warning Network. Annual Medical Examiner Data 1993. Statistical Series 1, No. 13-B (Rockville, Md.: Substance Abuse and Mental Health Services Administration, 1995), p. 21. U.S. General Accounting Office. Drug Abuse: The Crack Cocaine Epidemic: Health Consequences and Treatment. GAO/HRD-91-55FS, Jan. 
30, 1991, p. 24. _____. Methadone Maintenance: Some Treatment Programs Are Not Effective; Greater Federal Oversight Needed. GAO/HRD-90-104, Mar. 22, 1990, p. 18. _____. Treatment of Hardcore Cocaine Users. GAO/HEHS-95-179R, July 31, 1995. Grabowski, J., H. Rhoades, R. Elk, J. Schmitz, C. Davis, D. Creson, and K. Kirby. “Fluoxetine Is Ineffective for Treatment of Cocaine Dependence or Concurrent Opiate and Cocaine Dependence: Two Placebo Controlled Double-Blind Trials.” Journal of Clinical Psychopharmacology, Vol. 15 (1995), pp. 163-74. Havassy, B. Unpublished inpatient/outpatient data provided to GAO, Sept. 25, 1995. Higgins, S. Unpublished 12-month data provided to GAO, June 6, 1995. Higgins, S., A. Budney, W. Bickel, J. Hughes, F. Foerg, and G. Badger. “Achieving Cocaine Abstinence With a Behavioral Approach.” American Journal of Psychiatry, Vol. 150, No. 5 (1993), pp. 763-69. Higgins, S., D. Delaney, A. Budney, W. Bickel, J. Hughes, F. Foerg, and J. Fenwick. “A Behavioral Approach to Achieving Initial Cocaine Abstinence.” American Journal of Psychiatry, Vol. 148, No. 9 (1991), pp. 1218-24. Khalsa, M. Elena, A. Paredes, and M. Douglas Anglin. “A Natural History Assessment of Cocaine Dependence: Pre- and Post-Treatment Behavioral Patterns.” Unpublished manuscript. Kumor, M., M. Sherer, and J. Jaffe. “Effects of Bromocriptine Pretreatment on Subjective and Physiological Responses to IV Cocaine.” Pharmacology, Biochemistry and Behavior, Vol. 33 (1989), pp. 829-37. Lipton, D., V. Brewington, and M. Smith. “Acupuncture and Crack Addicts: A Single-Blind Placebo Test of Efficacy.” Presentation made at Advances in Cocaine Treatment, National Institute on Drug Abuse Technical Review Meeting, Aug. 1990. Magura, S., A. Rosenblum, M. Lovejoy, L. Handelsman, J. Foote, and B. Stimmel. “Neurobehavioral Treatment for Cocaine-Using Methadone Patients: A Preliminary Report.” Journal of Addictive Diseases, Vol. 13, No. 4 (1994), pp. 143-60. Magura, S., Q. Siddiqi, R. Freeman, and D. Lipton. “Changes in Cocaine Use After Entry to Methadone Treatment.” Journal of Addictive Diseases, Vol. 10, No. 4 (1991), pp. 31-45. Margolin, A., S. Kelly Avants, P. Chang, and T. Kosten. “Acupuncture for the Treatment of Cocaine Dependence in Methadone-Maintained Patients.” The American Journal on Addictions, Vol. 2, No. 3 (1993), pp. 194-201. Margolin, A., T. Kosten, I. Petrakis, S. Avants, and T. Kosten. “Bupropion Reduces Cocaine Abuse in Methadone-Maintained Patients.” Archives of General Psychiatry, Vol. 48 (1991), p. 87. Mello, N., J. Kamien, J. Mendelson, and S. Lukas. “Effects of Naltrexone on Cocaine Self-Administration By Rhesus Monkey.” National Institute on Drug Abuse Research Monographs, Vol. 105. Rockville, Md.: National Institute on Drug Abuse, 1991, pp. 617-18. Moscovitz, H., D. Brookoff, and L. Nelson. “A Randomized Trial of Bromocriptine for Cocaine Users Presenting to the Emergency Department.” Journal of General Internal Medicine, Vol. 8 (1993), pp. 1-4. “NIDA Media Advisory,” Dec. 14, 1995. NIDA Notes, Vol. 10, No. 5 (Sept./Oct. 1995), pp. 10, 14. Preston, K., J. Sullivan, E. Strain, and G. Bigelow. “Effects of Cocaine Alone and in Combination with Bromocriptine in Human Cocaine Abusers.” Journal of Pharmacology and Experimental Therapeutics, Vol. 262 (1992), pp. 279-91. RAND. “Treatment: Effective (But Unpopular) Weapon Against Drugs.” RAND Research Review, Vol. 19, No. 1, Spring 1995, p. 4. Rawson, R., J. Obert, M. McCann, and W. Ling. 
“Neurobehavioral Treatment for Cocaine Dependency: A Preliminary Evaluation.” Cocaine Treatment: Research and Clinical Perspectives, National Institute on Drug Abuse Research Monograph #135. Rockville, Md.: National Institute on Drug Abuse, 1993, pp. 92-115. Rosenblum, A., S. Magura, J. Foote, M. Palij, L. Handelsman, M. Lovejoy, and B. Stimmel. “Treatment Intensity and Reduction in Drug Use for Cocaine-Dependent Methadone Patients: A Dose Response Relationship.” Prior version of this paper was presented at the American Society of Addiction Medicine Annual Conference, New York, Apr. 1994. Shoptaw, S., R. Rawson, M. McCann, and J. Obert. “The Matrix Model of Outpatient Stimulant Abuse Treatment: Evidence of Efficacy.” Journal of Addictive Diseases, Vol. 13, No. 4 (1994), pp. 129-41. Silverman, K., R.K. Brooner, I.D. Montoya, C.R. Schuster, and K.L. Preston. “Differential Reinforcement of Sustained Cocaine Abstinence in Intravenous Polydrug Abusers.” In L.S. Harris, ed. Problems of Drug Dependence 1994: Proceedings of the 56th Annual Scientific Meeting, The College on Problems of Drug Dependence, National Institute on Drug Abuse Research Monograph #153. Rockville, Md.: National Institute on Drug Abuse, 1995, p. 212. Silverman, K., C.J. Wong, A. Umbricht-Schneiter, I.D. Montoya, C.R. Schuster, and K.L. Preston. “Voucher-Based Reinforcement of Cocaine Abstinence: Effects of Reinforcement Schedule.” In L.S. Harris, ed. Problems of Drug Dependence 1995: Proceedings of the 57th Annual Scientific Meeting, The College on Problems of Drug Dependence, National Institute on Drug Abuse Research Monograph, in press. Smith, M. “Acupuncture Treatment for Crack: Clinical Survey of 1,500 Patients Treated.” American Journal of Acupuncture, Vol. 16 (1988), pp. 241-47. Vocci, F., B. Tai, J. Wilkins, T. Kosten, J. Cornish, J. Hill, S. Li, H. Kraemer, C. Wright, and P. Bridge. “The Development of Pharmacotherapy for Cocaine Addiction: Bupropion As a Case Study.” Paper presented at the College on Problems of Drug Dependence Annual Scientific Meeting, 1994. Walsh, S., J. Sullivan, and G. Bigelow. “Fluoxetine Effects on Cocaine Responses: A Double-Blind Laboratory Assessment in Humans.” The College on Problems of Drug Dependence Annual Scientific Meeting Abstracts, 1994. Washton, A., and N. Stone-Washton. “Outpatient Treatment of Cocaine and Crack Addiction: A Clinical Perspective.” National Institute on Drug Abuse Research Monograph #135. Rockville, Md.: National Institute on Drug Abuse, 1993, pp. 15-30. Wells, E., P. Peterson, R. Gainey, J. David Hawkins, and R. Catalano. “Outpatient Treatment for Cocaine Abuse: A Controlled Comparison of Relapse Prevention and Twelve-Step Approaches.” American Journal of Drug and Alcohol Abuse, Vol. 20, No. 1 (1994), pp. 1-17.
To receive facsimile copies of the daily list or any list from the past 30 days, please call (202) 512-6000 using a touchtone phone. A recorded menu will provide information on how to obtain these lists.
Pursuant to a congressional request, GAO reviewed the extent to which federally funded cocaine treatment therapies have proven successful and additional research initiatives that are needed to increase knowledge of cocaine treatment effectiveness. GAO found that: (1) cocaine treatment research is still in its early stages; (2) preliminary study results have shown that relapse prevention, community reinforcement and contingency management, and neurobehavioral therapy may produce prolonged periods of abstinence among cocaine users; (3) relapse prevention programs have the highest abstinence rates, followed by community reinforcement and neurobehavioral programs; (4) community reinforcement programs have the highest retention rates, followed by relapse prevention and neurobehavioral programs; (5) pharmacological agents have not proven to be consistently effective in preventing cocaine use, and none have been submitted for Food and Drug Administration approval; (6) animal researchers have demonstrated the positive effects of a new immunization procedure in blocking the stimulant effects of cocaine; (7) few researchers have assessed the effectiveness of acupuncture treatment, but some research findings are favorable; and (8) experts believe that more rigorous treatment evaluation studies that focus on important treatment components, appropriate treatment intensities and durations, and clients' readiness and motivation for treatment are needed before standard cocaine treatment protocols can be formulated.
Background TANF, created as part of the 1996 welfare reforms, gives states the authority to make key decisions about how to allocate federal and state funds to assist low-income families. States generally determine cash assistance benefit levels and eligibility requirements for low-income families seeking support under state welfare programs. When states set their TANF cash assistance benefit levels, the amount a family receives depends, in part, on who is in the assistance unit. An assistance unit is a group of people living together, often related by blood or some other legal relationship. States can exclude adults from the assistance unit but still allow the children to receive some assistance. In these child-only cases, the adults in the family are excluded from the assistance unit and are generally not considered when calculating the benefit amount. States are also generally allowed to spend TANF funds on other services as long as these services support TANF purposes, which are: (1) to provide assistance to needy families so that children may be cared for in their own homes or homes of relatives; (2) to end dependence of needy parents on government benefits by promoting job preparation, work, and marriage; (3) to prevent and reduce out-of-wedlock pregnancies; and (4) to encourage two-parent families. Federal law governing TANF generally refers to the term "assistance" and does not make distinctions between different forms of aid funded by TANF. However, HHS draws distinctions between "assistance" and "nonassistance." HHS regulations define assistance to include cash, payments, vouchers, or other forms of benefits designed to meet families' ongoing, basic needs. 45 C.F.R. § 260.31. HHS also generally includes in assistance services such as child care and transportation assistance for parents who are unemployed. HHS uses the term nonassistance to refer to TANF expenditures that fulfill one of the four TANF purposes but do not meet this regulatory definition. In our report, we refer to HHS's definition of assistance as "cash assistance" and its reference to nonassistance as "non-cash services." To hold states accountable, federal law also requires states to meet work participation rates, which measure the extent to which families receiving cash assistance are engaged in activities focused on participants gaining employment and work-related skills. States that do not meet minimum work participation rates may be penalized by a reduction in their block grant. Several factors may help states meet their work participation rates, such as reductions in their cash assistance caseloads and spending state funds for TANF purposes above the required maintenance of effort (MOE) amount. In addition, states are limited in the amount of time they can provide federal cash assistance to families. In general, states may not use federal TANF funds to provide cash assistance to a family that includes an adult who has received cash assistance for 5 years or more. Such time limits do not apply to child-only cases or to other TANF-funded services. Federal law sets forth the basic TANF reporting requirements for states. For example, states are required to provide information and report to HHS on their use of TANF funds in TANF state plans outlining how each state intends to run its TANF program (generally filed every 2 years), quarterly reports on demographic and economic circumstances and work activities of families receiving cash assistance, quarterly financial reports providing data on federal TANF and state MOE expenditures, and annual reports on state programs funded with MOE funds, among other things. HHS reviews state information and reports to ensure that states meet the conditions outlined in federal law.
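To make the assistance-unit mechanics described above concrete, the short Python sketch below contrasts a regular case with a child-only case in which the adult is excluded from the unit. The benefit schedule and family composition are hypothetical illustrations, not actual state TANF rules.

    # Minimal sketch of how excluding an adult from the assistance unit can
    # change a TANF cash benefit. The benefit schedule below is hypothetical.

    HYPOTHETICAL_BENEFIT_BY_UNIT_SIZE = {1: 150, 2: 250, 3: 350, 4: 430}

    def monthly_benefit(household, child_only=False):
        """Return a benefit based only on the members counted in the assistance unit."""
        if child_only:
            unit = [m for m in household if m == "child"]  # adults excluded from the unit
        else:
            unit = list(household)
        return HYPOTHETICAL_BENEFIT_BY_UNIT_SIZE.get(len(unit), 0)

    family = ["adult", "child", "child"]             # one parent, two children
    print(monthly_benefit(family))                   # regular case: unit of 3
    print(monthly_benefit(family, child_only=True))  # child-only case: unit of 2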
For example, HHS uses the reported information to determine whether states are meeting work participation rates. In creating the TANF block grant, Congress emphasized the importance of state flexibility, and restricted HHS's regulatory authority over the states except to the extent expressly provided in the law. For example, HHS generally has limited authority to impose new TANF reporting requirements on states unless directed by Congress, so many changes to the types of information that states are required to report would require congressional action. As a fixed federal funding stream, the federal TANF block grant amount does not automatically adjust as caseloads or needs change, and the level of the federal grant has not been adjusted for inflation since the program’s creation in 1996. States may reserve federal TANF funds under a “rainy day fund” for use in future years, providing states additional flexibility in their budget decisions. In fact, we reported in 2010 that many states had some TANF reserves that they drew down to meet increasing needs in the recent economic downturn. The federal law that established TANF also created a TANF Contingency Fund that states could access in times of economic distress. Similarly, during the recent economic recession, the federal government created a $5 billion Emergency Contingency Fund for state TANF programs through the American Recovery and Reinvestment Act of 2009, available in fiscal years 2009 and 2010. In addition, TANF supplemental funds had been awarded to 17 states with historically low welfare spending per person and high population growth each year, although these grants expired in June 2011. TANF’s Role in Providing Cash Assistance to Needy Families Has Evolved Fewer Eligible Families Receive Cash Assistance A key TANF purpose stated in law is to provide assistance to needy families so that children may be cared for in their own homes or homes of relatives. With the TANF block grant in effect replacing AFDC—a key federal cash welfare program for needy families—in fiscal year 1997, much attention has focused since then on the decline in the number of families receiving TANF cash assistance and the implications for poor children and families. The law does not explicitly state that poverty reduction is a TANF purpose, and there are generally no federal requirements or benchmarks as to eligibility criteria or benefit amounts, or on the percentage of low-income families who are to be covered by a state’s TANF program. When states implemented TANF during fiscal year 1997, a monthly average of 3.9 million families were receiving cash assistance. This number declined by over half within the first 5 years of TANF. Since that time, the average number of families receiving cash assistance each month has remained well below the initial number of 3.9 million families, and averaged about 1.9 million families in 2011. Our previous work shows that although TANF caseloads have declined, many families with incomes still low enough to receive aid did not do so for a variety of reasons. In a 2010 report, we assessed changes in the number of families eligible for and receiving cash assistance under AFDC and TANF from 1995 to 2005, the most recent data available at that time. The strong economy of the 1990s, TANF's focus on work, and other factors such as additional funding for child care and expansions in the Earned Income Tax Credit contributed to increases in the share of single mothers working and fewer families receiving TANF cash assistance. 
While some families worked more, had higher incomes, and were not eligible for cash assistance, others had income that left them still eligible; however, many of these eligible families were not participating in the program. According to our estimates, the majority—87 percent—of that caseload decline can be explained by the decline in eligible families participating in the program, in part because of changes to state welfare programs. These changes include mandatory work requirements; changes to application procedures; lower benefits; policies such as lifetime limits on assistance; diversion strategies such as providing one-time, non-recurring benefits instead of monthly cash assistance to families facing temporary hardships; and sanctions for non-compliance, according to a review of the research. Among eligible families who did not receive cash assistance, 11 percent did not work, did not receive means-tested disability benefits, and had very low incomes (see fig. 1). We have not updated this analysis; however, some recent research shows that this potentially vulnerable group may be growing. We also reported in 2012 that, during and after the recent significant recession, caseloads increased in most states, and the overall national increase totaled about 15 percent from fiscal years 2008 to 2011. This has been the first test of TANF—with its capped block grant structure—during severe economic times. We noted that almost 40 percent of households with children and income below 200 percent of the federal poverty threshold that had exhausted Unemployment Insurance benefits received aid through the Supplemental Nutrition Assistance Program (SNAP) (formerly known as food stamps); however, less than 10 percent received TANF cash assistance in 2009. The relatively modest increase in TANF caseloads—and decreases in some states—has raised questions about the responsiveness of TANF to changing economic conditions. After initial declines in the poverty rate among children—from 21 percent in 1995 (prior to TANF's implementation) to 16 percent in 2000—the rate had risen to 22 percent in 2011, according to the Bureau of the Census. In our recent work, we identified several actions that states have taken to address increased needs while also experiencing budgetary distress. These include drawing down TANF reserves and accessing TANF Contingency Funds. In addition, nearly all states received a combined total of $4.3 billion of the $5 billion TANF Emergency Contingency Fund, created by Congress under the American Recovery and Reinvestment Act of 2009, in fiscal years 2009 through 2011. States used these funds in part to create or expand subsidized employment programs. Setting eligibility criteria and benefit levels are ways that states may manage the costs of their TANF cash assistance programs, directly affecting the number of families served and the amount of assistance they receive. Our 2012 report cited tension between the need to provide cash assistance and the need to provide other state services during the recent economic downturn. Eligibility criteria and benefit amounts for cash assistance can vary greatly by state. For example, in Arkansas, as of July 2011, for a family of three, earnings had to be equal to or below $279 per month in order to be eligible for cash assistance, and their maximum benefit amount was $204.
In contrast, in California, as of July 2011, a family of three's income had to be equal to or below $1,224 per month to be eligible for cash assistance, and their maximum benefit amount was $714. See Urban Institute, Welfare Rules Databook: State TANF Policies as of July 2011 (Washington, D.C.: Aug. 2012). Some states have also adopted more stringent eligibility criteria and reduced benefit amounts for cash assistance to help manage costs. We estimated in a 2010 report that had certain 2005 TANF eligibility-related rules been in place in 1995, 1.6 percent fewer families overall would have been eligible for cash assistance in 1995. We also noted in that report that the value of TANF cash benefits had fallen over time; average cash benefits under 2005 TANF rules were 17 percent lower than they were under 1995 AFDC rules. States are required to report on some features of their cash assistance programs, but there is no requirement for them to report on eligibility criteria, benefit amounts, or coverage rates. In 2012, HHS officials noted that they do not have the authority to require states to provide basic information about their cash assistance programs, including state TANF eligibility criteria, benefit levels, and other program features. HHS provides support to the Urban Institute to create and maintain the Welfare Rules Database on characteristics of state TANF programs, including features such as eligibility criteria and benefit levels. Regarding information on TANF coverage of low-income families, in our 2005 report on several means-tested programs including TANF, we noted that having participation or coverage rate information is an important tool for program managers and policymakers, even among programs that were not intended to serve everyone eligible for program benefits. However, HHS generally does not include these rates in TANF annual performance plans or the agency's TANF Annual Report to Congress. Composition of the Cash Assistance Caseload Has Changed Much of the federal welfare policy discussion has focused on how to help low-income parents caring for their children become employed and less dependent on government assistance. Yet in 2010, over 40 percent of families receiving TANF cash assistance were "child-only," meaning the adults in the household were not included in the benefit calculation, and aid was provided only for the children. There are four main categories of child-only cases in which the caregiver (a parent or non-parent) does not receive TANF benefits: (1) the parent is receiving Supplemental Security Income; (2) the parent is a noncitizen or a recent legal immigrant; (3) the child is living with a non-parent caregiver, often a relative; and (4) the parent has been sanctioned and removed from the assistance unit for failing to comply with program requirements, and the family's benefit has been correspondingly reduced. Families receiving child-only assistance are generally not subject to federal work requirements and time limits. HHS collects descriptive information from states on the number and selected characteristics of child-only cases; however, information on state policies and plans for specifically assisting these families is not required and not available at the national level. As the number of TANF cases with an adult in the assistance unit has declined significantly, child-only cases have become more prominent. We reported in 2012 that the percentage of child-only cases increased from about 23 percent from July through September 1997 to over 40 percent in fiscal year 2010.
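The state-to-state variation in income ceilings and maximum benefits quoted above (Arkansas and California, family of three, July 2011) can be expressed as a simple eligibility check. The sketch below encodes only those two quoted figures; real state rules involve many additional tests (assets, earned-income disregards, and so on) that are omitted here.

    # Simplified eligibility check for a family of three, using only the
    # income ceilings and maximum benefits quoted in the text (July 2011).
    # Actual state rules include many more tests that are not modeled here.

    FAMILY_OF_THREE_RULES = {
        "Arkansas":   {"income_ceiling": 279,  "max_benefit": 204},
        "California": {"income_ceiling": 1224, "max_benefit": 714},
    }

    def screen(state, monthly_income):
        rule = FAMILY_OF_THREE_RULES[state]
        eligible = monthly_income <= rule["income_ceiling"]
        return eligible, rule["max_benefit"] if eligible else 0

    for state in FAMILY_OF_THREE_RULES:
        print(state, screen(state, monthly_income=300))
    # A family of three earning $300 per month is over the ceiling in Arkansas
    # but within the ceiling in California under these quoted figures.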
Our work and other research have pointed out the need for more attention to child-only cases. Our 2011 report focused on non-parent caregivers in TANF child-only cases, often relatives, who have stepped in to help raise children for a variety of reasons, in some cases due to child abuse or neglect by a parent. (See GAO, TANF and Child Welfare Programs: Increased Data Sharing Could Improve Access to Benefits and Services, GAO-12-2 (Washington, D.C.: Oct. 7, 2011).) The assistance available to children living with non-parents depends on the extent to which a child welfare agency becomes involved in the family's situation, among other things. However, we reported that information sharing between TANF and child welfare services to better serve children living with relative caregivers was a challenge. Another study, prepared under a grant from HHS and issued in December 2012, noted that child-only cases have not been a focus of TANF policies, yet the program can serve as an important source of support for vulnerable children in these situations, although this support is not uniform among the states. It also noted the significant differences among the various types of child-only cases, concluding that future attention needs to take into account the varying policy contexts—child welfare, disability, and immigration policies—involved. Some Potential Options Potential options are available to provide additional information on TANF's role as a cash assistance program that may be useful to Congress and program managers. Such information may also help clarify states' TANF policies for providing income support for low-income families and children (see table 1). Approach to Measuring Work Participation Has Limitations States Have Generally Met Work Participation Rates by Using Credits Allowed by Law One of the four TANF purposes is to end dependence of needy parents on government benefits by promoting job preparation, work, and marriage; TANF's work participation rate requirement is in keeping with the purpose of helping parents prepare for and find jobs. The Personal Responsibility and Work Opportunity Reconciliation Act of 1996 (PRWORA) established higher work participation rate requirements and eliminated many exemptions from these requirements for recipients compared to what was in place prior to TANF. This reflected research that found that mandatory work requirements could reduce welfare receipt and increase employment among single mothers and help address concerns about long-term welfare receipt. (See Pub. L. No. 109-171, 120 Stat. 4 (2006); GAO-10-525; and GAO, Temporary Assistance for Needy Families: Update on Families Served and Work Participation, GAO-11-880T (Washington, D.C.: Sept. 8, 2011).) Reductions in the numbers of families receiving TANF cash assistance over a specified time period are accounted for in each state's caseload reduction credit, which essentially then lowers the state's required work participation rate from 50 percent. For example, if a state's caseload decreases by 20 percent during the relevant time period, the state receives a caseload reduction credit equal to 20 percentage points, which results in the state work participation rate requirement being adjusted from 50 to 30 percent. Because of the dramatic declines in the number of families receiving cash assistance after TANF implementation, caseload reduction credits effectively eliminated work participation rate requirements in some states. For example, we reported that in fiscal year 2006, 18 states had caseload reductions that were at least 50 percent, which reduced their required work participation rates to 0.
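The arithmetic of the caseload reduction credit described above is simple enough to state directly. The sketch below reproduces the example in the text (a 20 percent caseload decline lowers the 50 percent requirement to 30 percent) and the case of a decline of 50 percent or more, which drives the effective requirement to zero; the excess-MOE adjustment discussed next is not modeled.

    # Effective all-families work participation rate requirement after applying
    # a state's caseload reduction credit (base requirement is 50 percent).
    # The excess-MOE adjustment to the credit is not modeled here.

    BASE_REQUIREMENT = 50.0  # percent

    def effective_requirement(caseload_decline_pct):
        """Credit equals the percentage-point caseload decline, floored at zero."""
        return max(0.0, BASE_REQUIREMENT - caseload_decline_pct)

    print(effective_requirement(20))  # 30.0 -- the example given in the text
    print(effective_requirement(50))  # 0.0  -- requirement effectively eliminated
    print(effective_requirement(65))  # 0.0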
We noted that state caseload declines have generally been smaller after the Deficit Reduction Act of 2005 (DRA) changed the base year for measuring caseload reductions from fiscal year 1995 to fiscal year 2005, among other things. However, many states are still able to use caseload declines to help them lower their required work participation rates. For example, in fiscal year 2009, the most recent data available, 38 of the 45 states that met their required work participation rates for all TANF families did so in part because of their caseload declines (see fig. 2). Additionally, we reported that while states' caseload reduction credits before DRA were based primarily on their caseload declines, after DRA, states' spending of their own funds on TANF-related services also became a factor in some states' credits. Specifically, states are required to spend a certain amount of funds every year—their MOE funds—in order to receive all of their federal TANF block grant. However, if states spend in excess of the required amount ("excess MOE"), they are allowed to functionally increase their caseload reduction credits. In fiscal year 2009, 32 of the 45 states that met their required work participation rates for all families receiving cash assistance claimed excess MOE toward their caseload reduction credits. In addition, 17 states would not have met their rates without claiming these expenditures (see fig. 2). Measure Has Limitations as a National Performance Measure for TANF In 2010, we concluded that because of the various factors that affect the calculation of states' work participation rates, the rate's usefulness as a national performance measure for TANF is limited, and changes intended to improve data quality may be creating new challenges for states. In addition to the caseload reduction credits and excess MOE discussed above, we reported that some states have made changes to their TANF programs that may affect which families are counted in their work participation rates, such as providing some families assistance in non-TANF programs, discussed in the next section. Given these various factors, we have noted that the work participation rate does not allow for clear comparisons across state TANF programs or comparisons of individual state programs over time. This is the same conclusion we reached in our 2005 report that recommended changes to improve this measure of states' performance. In that report, we found differences across states that contributed to an inconsistent measurement of work participation. For example, we found that some states reported the hours recipients were scheduled to work, rather than those actually worked, as work participation. DRA contained changes generally expected to increase internal controls and improve data quality; however, it also created new challenges for states. In our 2010 review of work participation rates, many states cited challenges in meeting work performance standards under DRA, such as new requirements to verify participants' actual activity hours and certain limitations on the types and timing of activities that count toward meeting the requirements. Local TANF officials noted that verification of TANF families' work participation requires significant time and collaboration between TANF staff and employers and other staff at work activity sites.
Because of this, some noted that they have had to designate or hire specific staff to manage the tracking and verification of families’ work participation, and yet these activities also remain a routine part of all local TANF staff’s responsibilities. We concluded at the time that the TANF work participation rate requirements may not yet have achieved the appropriate balance between flexibility for states and accountability for federal TANF goals. States May Not Serve Some Families that are Not Work-Ready Work participation rate requirements can play an important role in encouraging states to move TANF recipients into work; however, our work indicates some ways that current policies may be discouraging states from engaging some TANF recipients with complex needs and from providing an appropriate mix of activities. According to the preamble to a TANF final rule from 1999, several provisions of the law, including time limits, higher participation rate requirements, and fewer individual exemptions from participation requirements, taken together, signal that states must broaden participation beyond the "job ready." However, some state TANF officials we interviewed for a 2012 report said the pressure to meet TANF work participation rate requirements causes them to focus on the “ready to work” cash assistance population, which can leave the “harder-to-serve” population without services. States may generally only count a family’s participation in job readiness assistance, which can include mental health and substance abuse treatment, towards the work participation rate for six weeks in a year. A 2012 MDRC study conducted for HHS suggested that combining work-focused strategies with treatment or services may be more promising than using either strategy alone, especially for people with disabilities and behavioral health problems. Additionally, we have reported that some states find the restrictions on the amount of time they are allowed to count vocational educational training towards the work participation rate to be a challenge. State TANF administrators have expressed concerns that the 12-month lifetime limit on vocational educational training may be insufficient for TANF participants to progress to higher-wage employment that will prevent them from needing assistance in the future. Officials we interviewed more recently also noted that the restrictions may not match the needs of workers who lost jobs during the recession, who may require more education or retraining to find a new job. Finally, we have reported that many states choose to provide cash assistance to two-parent families outside of TANF. State officials have told us that two-parent families often have as many or more challenges as single parents, and states’ work participation rate requirement for two-parent families is 90 percent minus any caseload reduction credit the state receives. In 2010, we reported that 28 states provide cash assistance to two-parent families through separate programs funded solely with state dollars, and that families for whom states use these programs to provide cash assistance are those that typically have the most difficulty meeting the TANF work requirements. Some Potential Options In view of our prior work that has identified limitations in the work participation rate’s usefulness, potential options are available that may motivate states to engage more families in work activities and provide a more accurate picture of state performance (see table 2). 
Additional information may be needed before adopting any of these potential options. The work participation rate is complex and has affected significant state policy decisions. Any adjustment to or replacement of the measure would likely have a profound impact on state TANF programs. For example, introducing an employment credit would constitute a significant change in the way states may meet work participation requirements, but the effects this approach would have on participation rates and state TANF programs are unknown. Additionally, it is difficult to anticipate ways that the potential options may interact with one another. We have reported that allowing states to test approaches can foster innovation and help identify possible unintended consequences. Members of Congress have raised concerns about a 2012 announcement by HHS that the agency would use waiver authority to allow states to test various strategies, policies, and procedures designed to improve employment outcomes for needy families. The potential for waivers remains controversial, and the House of Representatives passed a bill in 2013 aimed at preventing HHS from implementing them. According to HHS, as of February 25, 2013, no state had formally submitted a request for a waiver related to TANF work requirements. Still, state experience with many of the potential options outlined above could provide valuable information to policymakers about the effects of changes if they choose to alter the work participation rate as it is currently implemented. If Congress wanted to make changes, it could set parameters for testing some approaches through pilots in selected states, for example, to gather additional information for considering changes to TANF that would maintain or improve its focus on work and self-sufficiency. Information Available to Assess Recent Trends in TANF Spending is Limited Performance Information for Non-Cash Services is Incomplete We reported in 2012 that the TANF block grant has evolved into a flexible funding stream that states use to support a broad range of allowable services, but the accountability framework currently in place in federal law and regulations has not kept pace with this evolution. Declining cash assistance caseloads freed up federal TANF and state MOE funds for states, and over time, states shifted spending to other forms of aid, which we refer to as non-cash services. Non-cash services can include any other services meeting TANF purposes, such as job preparation activities, child care and transportation assistance for parents who are employed, out-of-wedlock pregnancy prevention activities, and child welfare services, as well as some cash benefits such as non-recurring short-term benefits and refundable tax credits to low-income working families. In fiscal year 1997, nationwide, states spent about 23 percent of federal TANF and state MOE funds on non-cash services. In contrast, states spent almost 64 percent of federal TANF and state MOE funds for these purposes in fiscal year 2011. However, there are no reporting requirements mandating performance information specifically on families receiving non-cash services or their outcomes. There is also little information related to TANF's role in filling needs in other areas like child welfare, even though this has become a more prominent spending area for TANF funds in many states.
We reported that while states prepare state plans and expenditure reports that individually provide some information on non-cash services, even when considered together, these do not provide a complete picture of state goals and strategies for uses of TANF funds. For instance, we noted that state plans currently provide limited descriptions of a state's goals and strategies for its TANF block grant, including how non-cash services fit into these goals and strategies, and the amount of information in each plan can vary by state. We reported that HHS is taking some steps to improve expenditure reports from states. Still, we concluded that without more information that encompasses the full breadth of states' uses of TANF funds, Congress will not be able to fully assess how funds are being used, including who is receiving services or what is being achieved. We included a Matter for Congressional Consideration regarding ways to improve reporting and performance information, though Congress has not yet enacted such legislative changes. Questions Exist on Whether Increases in State MOE Reflect New Spending on Low-Income Families Increases in the expenditures states have claimed as MOE, including expenditures by third parties, may warrant additional attention. We reported in 2012 that MOE is now playing an expanded role in TANF programs. As shown in figure 3, according to HHS data, until fiscal year 2006, MOE levels remained relatively stable, hovering around the 80 percent required minimum or the reduced rate of 75 percent for states that met their work participation rate requirements. From fiscal years 2006 through 2009, they increased each year. We reported that several reasons account for the increase during this period: Many states claimed additional MOE to help them meet the work participation rate requirements, as discussed above. During the recession, states accessed TANF Contingency Funds, which required them to meet a higher MOE level, and Emergency Contingency Funds, which required them to have had increases in certain expenditures or in the number of families receiving cash assistance. An interim rule temporarily broadened the types of activities on which states could spend state funds and be countable for MOE purposes. We noted that this greater emphasis on the use of MOE increases the importance of understanding whether effective accountability measures are in place to ensure MOE funds are in keeping with requirements. These recent increases in state MOE have raised questions about how to ensure that state expenditures represent a sustained commitment to spending in line with TANF purposes. We noted in 2012 that if MOE claims do not actually reflect maintaining or increasing service levels, low-income families and children may not be getting the assistance they need and federal funds may not be used in the most efficient manner. However, the recent increases in state MOE spending, which states have used to access contingency funds and meet work participation rate requirements, may not represent new state spending. For example, officials in one state told us in 2012 that they began claiming MOE expenditures for an existing state early-childhood education program for needy families in fiscal year 2008. Officials in two other states said they hired consultants during the economic downturn to identify opportunities to claim MOE expenditures from existing state programs that were not originally used for TANF purposes.
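The 80 percent and 75 percent minimums described above translate into a simple spending floor. The sketch below assumes the floor is computed against a state's historic (fiscal year 1994) expenditure base, which is how the MOE requirement is generally described; the dollar amounts are hypothetical.

    # Sketch of the TANF maintenance-of-effort (MOE) floor: 80 percent of a
    # state's historic spending base, or 75 percent if the state met its work
    # participation rate requirement. Dollar figures are hypothetical.

    def required_moe(historic_base, met_work_participation_rate):
        rate = 0.75 if met_work_participation_rate else 0.80
        return rate * historic_base

    def excess_moe(claimed, historic_base, met_work_participation_rate):
        """Spending above the floor, which states may use toward caseload reduction credits."""
        return max(0.0, claimed - required_moe(historic_base, met_work_participation_rate))

    base = 100_000_000  # hypothetical historic state spending base
    print(required_moe(base, met_work_participation_rate=True))                # 75 million
    print(excess_moe(90_000_000, base, met_work_participation_rate=True))      # 15 million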
For example, one state found that many of its programs could be counted under TANF as "prevention of out-of-wedlock pregnancies," so it claimed funds spent on these programs as MOE. Additionally, we reported in 2012 that many states have recently begun to count third party nongovernmental expenditures to help meet TANF MOE spending requirements. In addition to its own spending, a state may count toward its MOE certain in-kind or cash expenditures by third parties—such as nongovernmental organizations—as long as the expenditures meet other MOE requirements, including those related to eligible families and allowable activities. We reported that between fiscal years 2007 and 2011, about half of all states reported counting third party nongovernmental expenditures toward MOE in at least one year, and 17 states reported that they intend to count these expenditures in the future. Some Potential Options Potential options are available to provide additional information on non-cash services and state MOE expenditures that may be useful for making decisions regarding the TANF block grant and to better ensure accountability for TANF funds (see table 3). In particular, requiring additional information on non-cash services would be consistent with our 2012 Matter for Congressional Consideration on improving performance and reporting information. Concluding Observations We have identified a number of potential options that could improve TANF performance and oversight as the program is currently designed, based on our prior work. These options are not intended to be exhaustive, and it is not the purpose of this report to recommend or endorse any particular policy option. In addition, there may be a number of other options that would warrant further analysis. However, it is clear that TANF has evolved beyond a traditional cash assistance program and now also serves as a source of funding for a broad range of services states provide to eligible families. The past 16 years have shown many changes in how states use TANF funds and the populations they serve. Any extension or reauthorization of TANF presents an opportunity to re-examine how it provides assistance to needy families and whether TANF, as currently structured, continues to address Congress' vision for the program. Agency Comments We provided a draft of our report to HHS for review and comment. HHS provided technical comments, which we incorporated as appropriate. We are sending copies of this report to the appropriate congressional committees, the Secretary of Health and Human Services, and other interested parties. In addition, the report will be available at no charge on the GAO website at http://www.gao.gov. If you or your staff have any questions concerning this report, please contact me at (202) 512-7215 or [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. GAO staff who made key contributions to this report are listed in appendix I. Appendix I: GAO Contacts and Staff Acknowledgments GAO Contact Staff Acknowledgments In addition to the contact named above, Gale Harris (Assistant Director), Nhi Nguyen, and Michael Pahr made significant contributions to all aspects of this report. Also contributing to this report were James Bennett, Caitlin Croake, Alexander Galuten, Almeta Spencer, and Walter Vance. Related GAO Products Temporary Assistance for Needy Families: More Accountability Needed to Reflect Breadth of Block Grant Services. GAO-13-33.
Washington, D.C.: December 6, 2012. Temporary Assistance for Needy Families: More States Counting Third Party Maintenance of Effort Spending. GAO-12-929R. Washington, D.C.: July 23, 2012. Temporary Assistance for Needy Families: Update on Program Performance. GAO-12-812T. Washington, D.C.: June 5, 2012. Temporary Assistance for Needy Families: State Maintenance of Effort Requirements and Trends. GAO-12-713T. Washington, D.C.: May 17, 2012. Unemployment Insurance: Economic Circumstances of Individuals Who Exhausted Benefits. GAO-12-408. Washington, D.C.: February 17, 2012. TANF and Child Welfare Programs: Increased Data Sharing Could Improve Access to Benefits and Services. GAO-12-2. Washington, D.C.: October 7, 2011. Temporary Assistance for Needy Families: Update on Families Served and Work Participation. GAO-11-880T. Washington, D.C.: September 8, 2011. Temporary Assistance for Needy Families: Implications of Caseload and Program Changes for Families and Program Monitoring. GAO-10-815T. Washington, D.C.: September 21, 2010. Temporary Assistance for Needy Families: Implications of Recent Legislative and Economic Changes for State Programs and Work Participation Rates. GAO-10-525. Washington, D.C.: May 28, 2010. Temporary Assistance for Needy Families: Fewer Eligible Families Have Received Cash Assistance Since the 1990s, and the Recession’s Impact on Caseloads Varies by State. GAO-10-164. Washington, D.C.: February 23, 2010. Welfare Reform: Better Information Needed to Understand Trends in States’ Uses of the TANF Block Grant. GAO-06-414. Washington, D.C.: March 3, 2006. Welfare Reform: HHS Should Exercise Oversight to Help Ensure TANF Work Participation Is Measured Consistently across States. GAO-05-821. Washington, D.C.: August 19, 2005. Means-Tested Programs: Information on Program Access Can Be an Important Management Tool. GAO-05-221. Washington, D.C.: March 11, 2005. Welfare Reform: Federal Oversight of State and Local Contracting Can Be Strengthened. GAO-02-661. Washington, D.C.: June 11, 2002. Welfare Reform: States Provide TANF-Funded Services to Many Low- Income Families Who Do Not Receive Cash Assistance. GAO-02-564. Washington, D.C.: April 5, 2002. Welfare Reform: Challenges in Maintaining a Federal-State Fiscal Partnership. GAO-01-828. Washington, D.C.: August 10, 2001.
In 1996, Congress made sweeping changes to federal welfare policy by replacing the previous cash assistance program with the TANF block grant. Since then, through fiscal year 2011, the federal government and states have spent a total of nearly $434 billion for TANF. The block grant was reauthorized under the Deficit Reduction Act of 2005, and is currently authorized through September 30, 2013. To inform a potential reauthorization of TANF, GAO was asked to discuss its key findings on TANF performance and oversight from its previous work and identify potential options that would address these findings. This report discusses issues and options in three selected areas: (1) TANF's role in providing cash assistance to low-income families, (2) measurement of TANF work participation, and (3) information on states' use of TANF funds. In addition to summarizing its previous work on these issues, GAO reviewed relevant federal laws, regulations, and agency documents as well as transcripts from relevant congressional hearings from 2009 through 2012 to identify potential options. GAO also spoke with HHS officials and selected three TANF experts with a range of views to share their perspectives on these issues. Temporary Assistance for Needy Families' (TANF) role in providing cash assistance has evolved; fewer eligible families receive cash assistance and the composition of the caseload has changed. GAO noted in 2010 that 87 percent of the dramatic decline from 1995 through 2005 in the number of families receiving cash assistance was due to a decline in eligible families participating in TANF, rather than increased incomes. Changes to state TANF programs, such as mandatory work requirements and lower benefits, account in part for this decline. Relatively modest caseload increases in recent years nationwide, as well as decreases in some states, have raised questions about TANF's responsiveness to changing economic conditions. GAO also reported in 2011 that the composition of the TANF caseload has changed, with about 40 percent of cases now composed of children only, with the adult not receiving benefits, and little known nationwide about state policies for aiding these children. Potential options to better understand TANF's role as a cash assistance program may include: improving information on the extent to which states provide cash assistance to eligible low-income families, and requiring states to include more information--for example in TANF state plans submitted to the Department of Health and Human Services (HHS)--on features such as benefit amounts and services provided. The current approach used to measure the extent to which states engage TANF recipients in work activities as defined by federal law has limitations. GAO reported in 2010 and 2011 that most states relied on several factors allowed in law, including credits for caseload reductions, to reduce the percentage of families they needed to engage in work to meet their work participation rate requirements. GAO also reported that current policies may be discouraging states from serving some families who are not "work-ready" through TANF, such as those with significant barriers to employment or complex needs.
Potential options to address these issues may include: eliminating, limiting, or modifying some of the credits states may use to reduce their work participation rate requirements; adjusting requirements to better ensure states engage those not work-ready; and developing an additional or alternate set of measures that focus on employment outcomes. However, more information may be needed to assess the potential impacts of any changes to work participation requirements. Limitations exist in the information available to assess states' use of federal TANF funds and state expenditures related to minimum state spending requirements under TANF, known as maintenance of effort (MOE) requirements. GAO reported in 2012 that the TANF block grant has evolved into a flexible funding stream that states use to support a broad range of non-cash services, but information requirements for assessing TANF performance have not kept pace with this evolution. For example, there are no reporting requirements mandating performance information specifically on families receiving non-cash services or their outcomes. GAO also reported in 2012 that states have reported increased levels of MOE spending for a variety of reasons, including helping them reduce their work participation rate requirements as allowed by law. Potential options to better understand federal and state TANF spending may include: improving reporting and performance information to encompass the full breadth of states' use of TANF funds, and requiring a review of MOE expenditures used to meet TANF requirements.
Background In response to legislation, the Immigration and Naturalization Service (INS) established in 2002 an Entry/Exit Program to strengthen management of the pre-entry, entry, visa status, and exit of foreign nationals who travel to the United States. With the creation of DHS in March 2003 and the inclusion of INS as part of the new department, this initiative was renamed US-VISIT. The goals of US-VISIT are to enhance the security of U.S. citizens and visitors, facilitate legitimate travel and trade, ensure the integrity of the U.S. immigration system, and protect the privacy of our visitors. To achieve these goals, US-VISIT is to collect, maintain, and share information on certain foreign nationals who enter and exit the United States; detect fraudulent travel documents, verify traveler identity, and determine traveler admissibility through the use of biometrics; and facilitate information sharing and coordination within the border management community. As of October 2005, about $1.4 billion has been appropriated for the program, and according to program officials, about $962 million has been obligated. Acquisition and Implementation Approach DHS plans to deliver US-VISIT capability in four increments: Increments 1 through 3 are interim, or temporary, solutions that were to fulfill legislative mandates to deploy an entry/exit system by specified dates; Increment 4 is to implement a long-term vision that is to incorporate improved business processes, new technology, and information sharing to create an integrated border management system for the future. For Increments 1 through 3, the program is building interfaces among existing (“legacy”) systems; enhancing the capabilities of these systems; deploying these capabilities to air, sea, and land ports of entry; and modifying ports of entry facilities. These increments are to be largely acquired and implemented through task orders placed against existing contracts. Increment 1 concentrates on establishing capabilities at air and sea ports of entry and is divided into two parts—1 and 1B. Increment 1 (air and sea entry) includes the electronic capture and matching of biographic and biometric information (two digital index fingerscans and a digital photograph) for selected foreign nationals, including those from visa waiver countries. Increment 1 was deployed on January 5, 2004, at 115 airports and 14 seaports. Increment 1B (air and sea exit) collects biometric exit data for select foreign nationals; it is currently deployed at 14 airports and seaports. Increment 2 focuses primarily on extending US-VISIT to land ports of entry. It is divided into three parts—2A, 2B, and 2C. Increment 2A includes the capability to biometrically compare and authenticate valid machine-readable visas and other travel and entry documents issued by the Department of State and DHS to foreign nationals at all ports of entry (air, sea, and land ports of entry). Increment 2A was deployed on October 23, 2005, according to program officials. It is also to include the deployment by October 26, 2006, of technology to read biometrically enabled passports from visa waiver countries. Increment 2B redesigned the Increment 1 entry solution and expanded it to the 50 busiest U.S. land border ports of entry with certain modifications to facilities. This increment was deployed to these 50 ports of entry as of December 29, 2004. 
Increment 2C is to provide the capability to automatically, passively, and remotely record the entry and exit of covered individuals using radio frequency technology tags at primary inspection and exit lanes. In August 2005, the program office deployed the technology to five border crossings (at three ports of entry) to verify the feasibility of using passive radio frequency technology to record traveler entries and exits via a unique identification number embedded within government- issued travel documentation. The program office reported the evaluation results in January 2006, and according to the Increment 2C project manager, the program is planning to move forward with the second phase of this increment. Increment 3 extended Increment 2B entry capabilities to 104 of the remaining 105 land ports of entry as of December 19, 2005. Increment 4 is to define, design, build, and implement more strategic US- VISIT program capability, which program officials stated will likely consist of a further series of incremental releases or mission capability enhancements that will support business outcomes. The first three increments of US-VISIT include the interfacing of existing systems, the modification of facilities, and the augmentation of program staff. Key existing systems include the following: The Arrival Departure Information System (ADIS) is a database that stores noncitizen traveler arrival and departure data received from air and sea carrier manifests and that provides query and reporting functions. The Treasury Enforcement Communications Systems (TECS) is a system that maintains lookout (i.e., watch list) data, interfaces with other agencies’ databases, and is currently used by inspectors at ports of entry to verify traveler information and update traveler data. TECS includes the Advance Passenger Information System (APIS), a system that captures arrival and departure manifest information provided by air and sea carriers. The Automated Biometric Identification System (IDENT) is a system that collects and stores biometric data about foreign visitors. In May 2004, DHS awarded an indefinite-delivery/indefinite-quantity prime contract to Accenture, which has partnered with a number of other vendors. According to the contract, the prime contractor will develop an approach to produce the strategic solution. In addition, it is to help support the integration and consolidation of processes, functionality, and data, and is to assist the program office in leveraging existing systems and contractors in deploying and implementing the interim solutions. Organizational Structure and Responsibilities In July 2003, DHS established the US-VISIT program office, which is responsible for managing the acquisition, deployment, and operation of the US-VISIT system and supporting people, processes, and facilities. Accordingly, the program office’s responsibilities include, among other things, delivering program and system capabilities on time and within budget and ensuring that program goals, mission outcomes, and program results are achieved. Within DHS, the US-VISIT program organizationally reports directly to the Deputy Secretary for Homeland Security, as seen in figure 1. The program office is composed of a number of functional groups. Among these groups, three deal with contractor management. These are the Acquisition and Program Management Office (APMO), the Office of Facilities and Engineering Management, and the Office of Budget and Financial Management. 
As seen in figure 2, all three groups report directly to the US-VISIT Program Director. APMO is to manage execution of the program’s acquisition and program management policies, plans, processes, and procedures. APMO is also charged with ensuring effective selection, management, oversight, and control of vendors providing services and solutions. The Office of Facilities and Engineering Management is to implement the program’s physical mission environment through, for example, developing and implementing physical facility requirements and developing cooperative relationships and partnering arrangements with appropriate agencies and activities. The Office of Budget and Finance is to develop executable budgets to contribute to cost-effective performance of the US-VISIT program and mission; ensure full accountability and control over program financial assets; and provide timely, accurate, and useful financial information for decision support. US-VISIT Relationships with Other DHS and Non- DHS Agencies Since its inception, US-VISIT has relied extensively on contractors to deliver system and other program capabilities; these contractors include both contractors managed directly by the program office and those managed by other DHS and non-DHS agencies. Within the program office, APMO manages the prime contract mentioned earlier, as well as other program management-related contracts. All other contracts were awarded and managed either by other DHS agencies or by two non-DHS agencies, GSA and AERC. For the contracts managed by other DHS agencies, the program office has entered into agreements with these agencies. These agreements allow the program to use previously awarded contracts to further develop and enhance the existing systems that now are part of US- VISIT. By entering into agreements with the various owners of these systems, the program office has agreed to fund US-VISIT–related work performed on the systems by these agencies, which include CBP, which owns and manages TECS; Immigration and Customs Enforcement (ICE), which owned and managed IDENT (until 2004) and ADIS (until 2005), and still provides some information technology support services; and the Transportation Security Administration (TSA), which in 2003 managed the development of the air/sea exit pilot program. In addition, through its Office of Facilities and Engineering Management, the program office has established an interagency agreement with AERC and has established reimbursable work authorizations with GSA. The agreements with GSA and AERC generally provide for management services in support of US-VISIT deployment. When the US-VISIT program office was created in July 2003, the program did not own or manage any of the key systems described earlier. Rather, all systems were owned and managed by other DHS agencies (see fig. 3). As of March 2005, the program office had assumed ownership and management responsibility for IDENT, which was originally managed by ICE; assumed management responsibility for the air/sea exit project, which was originally managed by TSA; and shares responsibility for ADIS, which was initially owned and managed by ICE. US-VISIT owns ADIS, but CBP is responsible for managing the system. These relationships are shown in figure 3. IAAs establish a means for US-VISIT to transfer funds to other DHS and non-DHS agencies for work done on its behalf. The IAAs first give the servicing agencies (that is, the agencies performing the work for US-VISIT) obligation authority to contract for US-VISIT work. 
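The funding flow described above (IAA obligation authority, vendor payment by the servicing agency, then IPAC reimbursement plus the servicing agency's own management fee) can be modeled as a small ledger. The structure below is an illustrative sketch of that flow, not the actual IPAC system or DHS accounting records; agency names and amounts are placeholders.

    # Illustrative model of the US-VISIT interagency reimbursement flow described
    # in the text: servicing agencies pay their vendors, then bill US-VISIT via
    # IPAC for the vendor cost plus their own contract-management fee.
    # Agencies and amounts below are placeholders, not actual figures.

    from dataclasses import dataclass

    @dataclass
    class IpacBilling:
        servicing_agency: str
        vendor_payment: float   # reimbursement of what the agency paid its vendor
        management_fee: float   # the agency's own cost of managing the contract

        @property
        def total_reimbursement(self) -> float:
            return self.vendor_payment + self.management_fee

    billings = [
        IpacBilling("CBP", vendor_payment=5_000_000, management_fee=250_000),
        IpacBilling("GSA", vendor_payment=2_000_000, management_fee=100_000),
    ]

    for b in billings:
        print(b.servicing_agency, b.total_reimbursement)
    print("Total transferred via IPAC:", sum(b.total_reimbursement for b in billings))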
Once the work has been performed, the servicing agencies pay their vendors according to the terms of their respective contracts and then request reimbursement of the vendor payment from US-VISIT via the Intra-governmental Payment and Collection (IPAC) system. In addition, the servicing agencies also receive IPAC payments for the services they themselves provided for US-VISIT— essentially a fee for the cost of managing contracts on the program’s behalf. Table 1 lists the various agencies currently managing US-VISIT–related contracts and summarizes their respective relationships with the program office and the purpose of the contract actions that we reviewed. Summary of DHS Reported Obligations for US-VISIT Contracts Documentation provided by the agencies responsible for managing US- VISIT–related contracts shows that between March 2002 and March 31, 2005, they obligated about $347 million for US-VISIT–related contract work. As shown in figure 4, about $152 million, or less than half (44 percent), of the $347 million in obligations reported to us was for contracts managed directly by the US-VISIT program office. The remaining $195 million, or 56 percent, was managed by other DHS and non-DHS agencies. Specifically, $156 million, or 45 percent of the $347 million in obligations reported to us for contracts, was managed by other DHS agencies (TSA and CBP); $39 million, 11 percent, was managed by non- DHS agencies (GSA and AERC). From the inception of the US-VISIT program office through September 30, 2005, the program reports that it transferred about $96.7 million to other agencies via the IPAC system for direct reimbursement of contract costs and for the agencies’ own costs. Prior Reviews Related to DHS Contractor Oversight and Management In January 2005, we observed the increased use of interagency contracting by the federal government and noted the factors that can make interagency contract vehicles high risk in certain circumstances. One of these factors was that the use of such contracting vehicles contributes to a much more complex environment in which accountability had not always been clearly established, including designation of responsibility for such critical functions as describing requirements and conducting oversight. We concluded that interagency contracting should be designated a high-risk area because of the challenges associated with such contracts, problems related to their management, and the need to ensure oversight. In March 2005, we also reported on challenges facing DHS’s efforts to integrate its acquisition functions. One significant challenge was a lack of sufficient staff in the Office of the Chief Procurement Officer to ensure compliance with the department’s acquisition regulations and policies. Another challenge was that the department’s Office of Procurement Operations, which was formed to support DHS agencies that lacked their own procurement support (such as US-VISIT), did not yet have sufficient staff and relied heavily on interagency contracting. Further, the office had not implemented management controls to oversee procurement activity, including ensuring that proper contractor management and oversight had been performed. We concluded that unless these challenges were addressed, the department was at risk of continuing with a fragmented acquisition organization that provided only stop-gap, ad hoc solutions. 
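The split of reported obligations summarized earlier in this section is straightforward to recompute. The sketch below reproduces the rounded shares cited in the text (about 44, 45, and 11 percent of the roughly $347 million in obligations reported between March 2002 and March 31, 2005).

    # Recompute the approximate shares of reported US-VISIT contract obligations
    # (March 2002 through March 31, 2005) cited in the text.

    obligations_millions = {
        "US-VISIT program office": 152,
        "Other DHS agencies (TSA, CBP)": 156,
        "Non-DHS agencies (GSA, AERC)": 39,
    }

    total = sum(obligations_millions.values())  # about $347 million
    for who, amount in obligations_millions.items():
        print(f"{who}: ${amount}M ({amount / total:.0%})")
    # Prints roughly 44%, 45%, and 11%, matching the rounded figures in the text.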
Importance of Contractor Management Controls Organizational policies and procedures are important management controls to help program and financial managers achieve results and safeguard the integrity of their programs. Agency management is responsible for establishing and implementing financial and nonfinancial controls, which serve as the first line of defense in ensuring contractor performance, safeguarding assets, and preventing and detecting errors and fraud. Pursuant to 31 U.S.C. § 3512 (c),(d), the Comptroller General has promulgated standards that provide an overall framework for establishing and maintaining internal controls in the federal government. Policy and guidance on internal control in executive branch agencies are provided by the Office of Management and Budget (OMB) in Circular A-123, which defines management’s fundamental responsibility to develop and maintain effective internal controls. Specifically, management is responsible for implementing appropriate internal controls; assessing the adequacy of internal controls, including those over financial reporting; identifying needed improvements and taking corrective action; and reporting annually on internal controls. The five general standards in our framework for internal control are summarized below. Control environment. Management and employees should establish and maintain an environment throughout the organization that sets a positive and supportive attitude toward internal control and conscientious management. A key factor relevant to contractor management is having clearly defined areas of authority and responsibility and appropriate lines of reporting. Risk assessment. Internal control should provide for an assessment of the risks the agency faces from both external and internal sources. Control activities. Internal control activities help ensure that management’s directives are carried out. The control activities should be effective and efficient in accomplishing the agency’s control objectives. Key control activities associated with contract management include appropriate documentation of transactions, accurate and timely recording of transactions and events, controls over information processing, reviews by appropriate management in the organization, and segregation of duties. Information and communications. Information should be recorded and communicated to management (and others who need it) in a form, and within a time frame, that enables them to carry out their internal control and other responsibilities. Key contract management activities include identifying, capturing, and distributing information in a form and time frame that allows people to perform their duties efficiently; and ensuring that information flows throughout the organization and to external users as needed. Monitoring. Internal control monitoring should assess the quality of performance over time and ensure that the findings of audits and other reviews are promptly resolved. To complement the standards, we developed a tool to help managers and evaluators determine how well an agency’s internal controls are designed and functioning and what, where, and how improvements may be implemented. This tool is intended to be used concurrently with the standards described above and with OMB Circular A-123. The tool associates each standard with a list of major factors to be considered when users review the controls for that standard, as well as points to be considered that may indicate the degree to which the controls are functioning. 
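To make the use of such a tool more concrete, the short sketch below shows one simple way a reviewer might record an assessment against the five standards described above. It is a minimal, hypothetical illustration in Python, not GAO's actual tool; the standard names come from the framework above, but the factor questions and rating scale are invented for the example.

```python
# Minimal sketch of recording an internal control assessment against the five
# standards described above. The factor questions and ratings are illustrative
# placeholders, not GAO's actual evaluation criteria.

RATINGS = ("met", "partially met", "not met")

assessment = {
    "Control environment": {
        "Areas of authority and responsibility are clearly defined": "partially met",
    },
    "Risk assessment": {
        "Risks from external and internal sources are assessed": "met",
    },
    "Control activities": {
        "Transactions are documented and recorded accurately and promptly": "not met",
    },
    "Information and communications": {
        "Information reaches those who need it in time to act on it": "partially met",
    },
    "Monitoring": {
        "Findings of audits and other reviews are promptly resolved": "not met",
    },
}

def summarize(results):
    """Print each standard whose factors were rated below 'met'."""
    for standard, factors in results.items():
        for factor, rating in factors.items():
            assert rating in RATINGS, f"unexpected rating: {rating}"
        weak = {f: r for f, r in factors.items() if r != "met"}
        if weak:
            print(f"{standard}: follow-up needed")
            for factor, rating in weak.items():
                print(f"  - {factor}: {rating}")

summarize(assessment)
```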
Relevant acquisition regulations and IT acquisition management guidance also provide criteria for effectively managing contractor activities. The Federal Acquisition Regulation (FAR) requires that government agencies ensure that the contractor performs the requirements of the contract and that the government receives the service intended. However, the FAR does not prescribe specific methods for doing so. Such methods and practices can be found in other acquisition management guidance. In particular, the Capability Maturity Model Integration model, developed by the Software Engineering Institute (SEI) of Carnegie Mellon University, explicitly defines process management controls that are recognized hallmarks of successful organizations and that, if implemented effectively, can greatly increase the chances of successfully acquiring software and systems. These controls define a number of practices and subpractices relevant to managing and overseeing contracts. These practices are summarized below. Establish written policies and procedures for performing contractor management. Policies establish the organization's expectations for performing contractor management activities. Procedures provide the "how to" or method to be followed in implementing the policies. Establish and maintain a plan for performing the contract oversight process. The plan should include, among other things, a contractor management and oversight process description, requirements for work products, an assignment of responsibility for performing the process, and the evaluations and reviews to be conducted with the contractor. Assign responsibility and authority for performing the specific contractor management activities. Responsibility should be assigned for performing the specific tasks of the contractor management process. Train the people performing or supporting the contractor management process. Personnel participating in the contract oversight process should be adequately trained and certified, as appropriate, to fulfill their assigned roles. Document the contract. This documentation should include, among other things, a list of agreed-upon deliverables, a schedule and budget, deliverable acceptance criteria, and the types of reviews that will be conducted with the contractor. Verify and accept the deliverables. Procedures for accepting deliverables should be defined; those accepting the deliverables should verify that they meet requirements; the results of acceptance reviews or tests should be documented; action plans should be developed for any products that do not pass their review or test; and action items should be identified, documented, and tracked to closure. Monitor risks involving the contractor and take corrective actions as necessary. Risks should be identified, categorized (e.g., by likelihood or consequence), and then analyzed according to these assigned categories. Conduct technical reviews with the contractor. Reviews should ensure that technical commitments are being met in a timely manner and should verify that the contractor's interpretation and implementation of the requirements are consistent with the project's interpretation. Conduct management reviews. Reviews should address critical dependencies, project risks involving the contractor, and the contract schedule and budget.
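The monitoring and review practices at the end of this list—tracking contractor cost and schedule against plan and flagging variances for corrective action—can be illustrated with a brief sketch. The figures, task order name, and the 10 percent threshold below are invented for illustration only; they are not drawn from SEI's model or from any US-VISIT contract.

```python
# Illustrative sketch of the cost/schedule monitoring done at contractor
# management reviews. All numbers, names, and the variance threshold are
# hypothetical.

def variance_pct(planned: float, actual: float) -> float:
    """Positive result means the contractor is over plan."""
    return (actual - planned) / planned * 100

def review_task_order(name: str, planned_cost: float, actual_cost: float,
                      planned_days: int, actual_days: int,
                      threshold_pct: float = 10.0) -> None:
    """Flag any cost or schedule variance above the threshold for corrective action."""
    for label, planned, actual in (
        ("cost", planned_cost, actual_cost),
        ("schedule", planned_days, actual_days),
    ):
        v = variance_pct(planned, actual)
        status = "flag for corrective action" if abs(v) > threshold_pct else "on track"
        print(f"{name} {label} variance: {v:+.1f}% ({status})")

# Example review of one hypothetical task order: the cost variance (+18.0%)
# would be flagged, while the schedule variance (+4.2%) remains on track.
review_task_order("Program support task order",
                  planned_cost=1_000_000, actual_cost=1_180_000,
                  planned_days=120, actual_days=125)
```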
US-VISIT Established and Implemented Key Controls for Contracts That It Managed Directly, but It Did Not Have Controls for Overseeing Contracts Managed by Others or for Effective Financial Management Given the US-VISIT program’s dependence on contracting, it is extremely important for the program office to effectively manage and oversee its contracts via the establishment and implementation of key contractor management and oversight controls. To its credit, the program office established and implemented most of the key practices associated with effectively managing nonfinancial contractor activities for those contracts that it directly manages. In particular, it established policies and procedures for implementing all but one of the key practices that we reviewed, and it implemented many of these practices—including assigning responsibilities and training key personnel involved in contractor management activities, verifying that contractor deliverables satisfied established requirements, and monitoring the contractor’s cost and schedule performance for the task orders that we reviewed. In doing so, the program has increased the chances that program deliverables and associated mission results will be produced on time and within budget. However, the program office did not effectively oversee US-VISIT–related contract work performed on its behalf by other DHS and non-DHS agencies, and these agencies did not always establish and implement the full range of controls associated with effective management of their respective contractor activities. Without effective oversight, the program office cannot adequately ensure that program deliverables and associated mission results will be produced on time and within budget. Further, the program office and other agencies did not implement effective financial controls. The program office and other agencies managing US- VISIT–related work were unable to reliably report the scope of contracting expenditures. In addition, some agencies improperly paid and accounted for related invoices, including making a duplicate payment and making payments for non-US-VISIT services from funds designated for US-VISIT. Without effective financial controls, DHS cannot reasonably ensure that payments made for work performed by contractors are a proper and efficient use of resources. According to the US-VISIT program official responsible for contract matters, the program office has initially focused on contracts that it manages directly. For US-VISIT contracts managed by other agencies, the program office has decided to rely on those agencies to manage the contracts and associated financial matters. In addition, it has decided to rely on another agency for financial management support of the program office. Program Office Established and Implemented Key Contractor Management Practices The US-VISIT program office is responsible and accountable for meeting program goals and ensuring that taxpayer dollars are expended effectively, efficiently, and properly. Within the program office, APMO is responsible for establishing and maintaining disciplined acquisition and program management processes to ensure the efficient support, oversight, and control of US-VISIT program activities. Accordingly, it is important that APMO establish and implement effective contractor management controls. 
As mentioned previously, federal regulations and acquisition management guidance identify effective contractor management as a key activity and describe a number of practices associated with this activity, including (among other things) establishing policies and procedures for contractor management, defining responsibilities and authorities, providing training, verifying and accepting deliverables, and monitoring contractor performance. These general practices often consist of more detailed subpractices. Appendix III lists the practices and associated subpractices, as well as the extent to which they were performed on each of the contract actions that we reviewed. For contracts that it directly managed, APMO established policies and procedures for all but one of the key nonfinancial practices associated with effective contractor management. For example, it established policies and procedures for performing almost all contractor management activities (practices) through its Contract Administration and Management Plan. This programwide plan, in conjunction with its Acquisition Procedures Guide Deskbook, defines the methodology and approach for performing contractor management for all contracts and task orders managed by APMO. However, APMO neither established policies and procedures requiring a plan for overseeing individual contract actions nor actually developed such a plan. Instead, it relied on its programwide policies and procedures to define which contract management activities it performed and how it implemented them. Without a plan for specific contracting actions, however, the program office cannot be assured that contract management activities will be implemented for each contracting action. Table 2 shows the extent to which APMO, in its documented policies and procedures, requires that the critical contractor management practices be performed; this is shown under the heading "practice established?" Under "practice implemented?" the table also shows the extent to which APMO had actually implemented such practices for those contracting actions that we reviewed, regardless of any documented requirement. APMO also implemented the aforementioned policies and procedures that it established for each of the contracting actions that we reviewed. For example, APMO implemented all of the key subpractices associated with verifying and accepting contract deliverables. Specifically, APMO defined acceptance procedures, verified that deliverables satisfied their requirements, documented the results of the review, developed a plan for addressing deliverable deficiencies, and tracked those issues to closure. With respect to one program support task order, for example, a designated US-VISIT team reviewed a project plan delivered by the contractor and returned it with a "conditionally acceptable" letter; this letter stated that the comments included were to be incorporated into the plan and assigned a date by which the revised plan was due back. The contractor resubmitted the plan by the assigned date, and the contracting officer's technical representative (COTR) accepted it. Throughout the process, APMO tracked the status of this deliverable by means of a database designed to track and update the status of deliverables owed to US-VISIT by its contractors. The database included such information as the current document status and when the revised document was due back to the program office.
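A tracking record of the kind just described can be quite simple. The sketch below is a hypothetical illustration, not the program office's actual database: it records each deliverable's current status (for example, "conditionally acceptable") and the date a revision is due back, and it flags items that are overdue. The deliverable names, statuses, and dates are invented.

```python
# Hypothetical sketch of a deliverable-status tracking record like the one
# described above; it is not based on the program office's actual system.
from datetime import date

# Each entry: deliverable name -> (current status, date the revision is due back)
deliverable_log = {
    "Project plan":    ("conditionally acceptable", date(2004, 9, 15)),
    "Test report":     ("accepted",                 None),
    "Staffing report": ("under review",             date(2004, 9, 30)),
}

def overdue(log, as_of):
    """List deliverables whose revised versions have not come back by the due date."""
    return [
        name
        for name, (status, due_back) in log.items()
        if status != "accepted" and due_back is not None and due_back < as_of
    ]

print(overdue(deliverable_log, as_of=date(2004, 9, 20)))
# -> ['Project plan']
```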
APMO also implemented all critical subpractices associated with contractor technical and management review activities. For example, APMO required that the prime contractor provide monthly cost performance reports that compared actual with budgeted cost and addressed critical dependencies. One such report noted that schedule and costs were affected by a change in resources; in the report, the contractor proposed a corrective action and a resolution date. APMO staff analyzed these reports and, according to APMO officials, distributed the analysis results to program office management for informational purposes (the results focused on the causes of, and planned corrective actions for, the most noteworthy cost and schedule variances). The information contained in the monthly reports was also discussed at quarterly programwide management reviews, which included contractor personnel. In addition to management issues, these reviews addressed technical issues such as deliverable status and requirements. The quarterly reviews were also used to evaluate the contractor's overall performance, as well as the contractor's performance on each task order active during that reporting period. The task orders that we examined were among those reviewed in this way. For each task order, the quarterly reviews included an assessment of schedule, cost and funding, technical performance, staffing, and risks. For example, the information presented on one task order that we reviewed reported that all of these categories were on track and were forecast to remain on track. During these reviews, technical requirements for each of the task orders were discussed among stakeholders, contractor personnel, and management to ensure a common understanding of those requirements and the status of their implementation. The results of these reviews were documented, and key discussion topics and a list and status of action items were identified. Each action item was given a due date and assigned to US-VISIT, the contractor, or specific individuals. In some cases, an action item identified a specific task order, such as a request to restructure a staffing report on a program management task order (in order to more accurately portray the level of contractor staffing); the staffing report item was assigned to a contractor for action. Updated status of open items was also provided. According to APMO's acquisition policy, the office established and implemented these contractor management practices to provide a standard approach for conducting contract activities and to ensure that US-VISIT contracts continue to be managed in accordance with relevant laws, regulations, policies, and acquisition requirements. In doing so, the program has increased the chances that program deliverables and associated mission results will be produced on time and within budget. The Program Office Did Not Effectively Oversee US-VISIT–Related Contracts Managed by Other Agencies The US-VISIT program office's APMO is responsible for the program's contract-related matters. That means that APMO should, among other things, effectively oversee contracts being managed by others on the program's behalf. However, the program office did not establish and implement effective controls for overseeing US-VISIT–related contracts being managed by others.
Specifically, the program office did not know the full range of US-VISIT–related contract actions that had been completed and were under way, and it had not performed key practices associated with gaining visibility into and understanding of contractor performance in meeting the terms of these contracts. This oversight gap is exacerbated by the fact that the other agencies did not always establish and implement the full range of controls associated with effective management of their contractor activities. For example, these agencies did not always implement effective controls for ensuring that contractor deliverables satisfy established requirements. Without effective oversight of all US-VISIT–related contracts, the program office is increasing the risk that program goals and outcomes will not be accomplished on time and within budget. US-VISIT’s Oversight of Other Agencies’ Contracting Activities Has Been Informal and Inconsistent To effectively oversee program-related contracts being managed by others, it is important for a program office to, at a minimum, depending on the nature of the contract, (1) define the roles and responsibilities for both itself and the entities it relies on to manage the contracts, (2) know the full range of such contract work that has been completed and is under way, and (3) define and implement the steps it will take to obtain visibility into the degree to which contract deliverables meet program needs and requirements, which underpin the program goals and outcomes. However, the US-VISIT program office did not effectively perform the following oversight activities for contracts that are being managed by other agencies: Defining roles and responsibilities. The program office did not define and document program office roles and responsibilities for overseeing the contractor work managed by other agencies and did not define the roles and responsibilities of the agencies managing US-VISIT–related contracts. According to the APMO Director, the roles and responsibilities were defined in IAAs between these agencies and the program office. However, the IAAs generally did not define roles and responsibilities. For example, US-VISIT provided us with 12 agreements for the agencies that we reviewed, and only one of them described roles and responsibilities for either APMO or the agency managing the contract work. Although responsibilities were identified, they were at a high level and the same for both the program office and the agency managing the contractor. Specifically, the IAA states that the US-VISIT COTR or point of contact and the servicing agency program office are responsible for technical oversight of the specified product or service identified in the statement of work. However, the IAA does not identify any specific contract oversight practices to be performed. According to the APMO Director, the program office did not define roles and responsibilities because the office is relatively new, and most efforts have been focused on developing policies and procedures for managing contracts that it directly controls. As noted earlier, we have previously reported that the use of IAAs is a high-risk approach to contracting. Although these contract vehicles can offer benefits of improved efficiency and timeliness, effective management of IAAs is challenging. Accordingly, we concluded that the use of IAAs requires, among other things, that the issuing agency clearly define roles and responsibilities for conducting contractor management and oversight. 
Knowing the full range of contract work. The program office was not able to provide us with a complete list of US-VISIT–related contract actions. Instead, US-VISIT told us that we needed to obtain a list of actions from each of the DHS and non-DHS agencies that managed the contract work. Once we compiled the list of contracting actions provided to us by the other agencies, the Director told us that no one in the program office could verify that the list was complete and correct. The Director further stated that APMO is not responsible for overseeing contracts managed outside the program office. Defining and implementing the steps to verify that deliverables meet requirements. According to DHS's directive on IAAs, the issuing agency (US-VISIT, in this case) is to, among other things, monitor the performance of the servicing agency and/or contractor; the directive also assigns responsibility for monitoring performance to the program office (or program point of contact) and the contracting officer. The contracting officer responsible for US-VISIT's IAAs told us that he relied on the program office's designated points of contact to conduct oversight of those IAAs. However, the program office did not define any specific performance monitoring activities. As a result, the oversight activities performed have been informal and inconsistent. For example, on the AERC contracts, the Facilities and Engineering Budget Officer held weekly teleconferences with AERC to discuss project progress and to address contract issues and concerns on an exception basis. However, these meetings were not documented; in other words, any follow-up on open issues and tracking to closure was handled informally. On the CBP contract actions, the US-VISIT Deputy Chief Information Officer (or one of his representatives) attended most, but not all, of the system development milestone progress reviews related to US-VISIT work, and held ad hoc discussions with a CBP program manager to discuss funding and work status. On air/sea exit, the US-VISIT Director of Implementation relied on weekly meetings with TSA and the contractor to keep apprised of project status. However, he relied on a representative from US-VISIT Mission Operations to certify that testing on air/sea exit was completed in a satisfactory manner, and neither he nor a member of his team reviewed the results themselves. According to the Director of APMO, specific activities to monitor contracts managed by other agencies have not been established because the program office's efforts to date have focused on developing policies and procedures for contracts that the program office manages directly. Without clearly defined roles and responsibilities, as well as defined oversight activities for ensuring successful completion of the work across all US-VISIT–related contract activities, the program office cannot be adequately assured that required tasks are being satisfactorily completed. Agencies Managing US-VISIT–Related Contractors Did Not Establish and Implement Key Contractor Management Practices As mentioned previously, acquisition management guidance identifies effective contractor management as a key activity and describes a number of practices associated with this activity, including (among other things) establishing policies and procedures for contractor management, defining responsibilities and authorities, providing training, verifying and accepting deliverables, and monitoring contractor performance.
As mentioned earlier, these practices often consist of more detailed subpractices; appendix III provides further details on the practices and subpractices and on the extent to which agencies performed them for each of the contract actions we reviewed. Table 3 shows the extent to which agencies, in their documented policies or procedures, require that the critical contractor management practices be performed (see columns under "practice established?"); it also shows (under "practice implemented?") the extent to which agencies had actually implemented such practices for the contracting actions that we reviewed, regardless of any documented requirement. As table 3 shows, agencies' establishment and implementation of the key contractor management practices for US-VISIT–related contracts have been uneven. All of the agencies had established policies or procedures for performing some of the key contractor management practices. Only CBP, however, had established policies and procedures for some aspect of all the key practices, while GSA and AERC had established procedures for about half of the key practices. Nevertheless, most of the agencies at least partially implemented most of the practices, even though they did not establish written procedures for doing so. For example, although three of the agencies did not establish documented policies or procedures for conducting technical and management reviews with the contractor, two of them implemented some aspects of the practice. All Agencies Established Some Policies and Procedures for Contractor Management Activities Contractor management policies and procedures define the organization's expectations and practices for managing contractor activities. All of the agencies (DHS and non-DHS) had established policies or procedures governing some key contractor management practices. For example, CBP's Systems Development Life Cycle, augmented by its Office of Information Technology Project Manager's Guidebook, defines policies and procedures for assigning responsibilities and authorities to key contracting personnel and for training those responsible for implementing contractor management activities. Among other things, these documents describe the duties of the contracting officer, the project manager, and the COTR. The documents also require all affected agencies to train the members of their groups in the objectives, procedures, and methods for performing contractor management activities. CBP guidance also addresses contractor management procedures, including verifying and accepting deliverables, monitoring contract risk and taking corrective action, and conducting various reviews with the contractor. Other agencies, such as GSA and AERC, have established fewer procedures for contractor management. For example, GSA had not established procedures for three practices: (1) establishing and maintaining a plan for performing contractor oversight, (2) conducting technical reviews with the contractor, and (3) conducting management reviews with the contractor. According to GSA officials, they have not documented their oversight process in order to allow for as much flexibility as possible in performing the process. Further, they said they relied on the professional expertise of the contracting officer's representative (COR) and/or COTR to ensure the technical accuracy of work produced by a contractor.
Without established policies and procedures for contractor management, the organizations responsible for managing US-VISIT–related contracts cannot adequately ensure that these vital contractor management activities are performed. Agencies’ Implementation of Key Practices Was Uneven Implementation of key practices in the contracting actions that we reviewed was uneven. As table 3 shows, one practice—assigning responsibilities and authorities—was implemented by all agencies. Other key practices were only partially implemented or not implemented by all agencies. The following discussion provides selected examples. Most agencies implemented training of contractor management personnel. Training the personnel performing or supporting contractor management activities helps to ensure that these individuals have the necessary skills and expertise to adequately perform their responsibilities. Most of the agencies had trained some of the key contracting officials responsible for the contracting actions that we reviewed and were able to produce documentation of that training. For example, CBP relied on a DHS-mandated training program to train its key contract personnel. However, that program was not established until March 2004 for contracting officers and December 2004 for COTRs, and so it did not apply to all the contracting actions that we reviewed. Before these programs were established, CBP relied on the previously existing qualifications of its contracting personnel. However, it provided training documentation for only some of the key contracting personnel for the contracting actions that we reviewed. With respect to non-DHS agencies, AERC and GSA records showed that contracting personnel had completed contracting-related training for the contracting actions that we reviewed. Most agencies did not implement all key practices for verifying and accepting contract deliverables. Verifying that contract deliverables satisfy specified requirements provides an objective basis to support a decision to accept the product. Verification depends on the nature of the deliverable and can occur through various means, such as reviewing a document or testing software. Effectively verifying and accepting contract deliverables includes, among other things, (1) defining procedures for accepting deliverables; (2) conducting deliverable reviews or tests in order to ensure that the acquired product satisfies requirements; (3) documenting the results of the acceptance review or test; (4) establishing an action plan for any deliverables that do not pass the acceptance review or test; and (5) identifying, documenting, and tracking action items to closure. All agencies implemented some (but not all) of the key practices associated with verifying and accepting contract deliverables. The following two examples from CBP and TSA illustrate this. CBP implemented most of the subpractices associated with this practice. For one contracting action reviewed (software development for Increment 2B functionality), CBP defined acceptance (testing) procedures, conducted the tests to verify that the deliverables satisfied the requirements, and documented the results. However, it did not develop an action plan to identify, document, and track unresolved action items to closure. Further, CBP accepted the deliverable before verifying that it had satisfied the requirements. 
Specifically, test results were presented at a production readiness review (one of the progress reviews called for in CBP's system development life cycle) on November 4, 2004. The review meeting included a US-VISIT stakeholder representative who signed off on the test results, indicating that US-VISIT accepted the deliverable and concurred that it was ready to operate in a production environment. However, the test analysis report highlighted several issues that called this conclusion into question. For example, the report stated that testing continued after the review (through November 8, 2004), and the report identified 67 issues at severity level 2, which CBP defines as a function that does not work and whose failure severely impacts or degrades the system. The report further stated that some test cases were delayed and subject to further testing. CBP could not provide any documentation that these open issues were resolved or that the test cases were executed. Further, the COTR told us that CBP did not define specific acceptance standards, such as the number and severity of defects permissible for acceptance. Instead, acceptance of the deliverable was subjectively based on the COTR's assessment of whether the software could provide critical functionality. For another contract action (Increment 1 hardware and software installation at ports of entry), CBP did not verify that the equipment was installed according to contract requirements. We were told by both the CBP Director of Passenger Systems (who was involved with much of the US-VISIT work) and the contract task monitor that the formal process for verifying and accepting contract deliverables consisted of a site-specific deployment checklist that recorded acceptance of deployment at each port. Acceptance required a signature from a government employee, a date, and an indication of deployment status (the two options for this status were (1) that the equipment was installed and operational or (2) that it was not installed, along with a description of reasons why it was not). However, as shown in table 4, not all checklists that we reviewed were signed or indicated that the equipment was installed and operational, and CBP could not provide documentation on how the identified issues were resolved. Further, although the deliverable was deployed to 119 sites, CBP provided checklists for 102 sites and was unable to provide them for the other 17 sites. TSA implemented three of the practices associated with verifying and accepting deliverables—defining acceptance procedures, verifying that deliverables satisfy requirements, and documenting the results of the tests. Specifically, TSA tested the air/sea exit software and hardware, and developed a test plan that included test procedures and a traceability matrix. It also documented the test results in a test analysis report that noted that the software was ready for deployment because of the low severity of identified deficiencies. The report included, among other things, a list of system deficiencies identified during testing. The report also included copies of documents provided to a US-VISIT technical representative: a test problem report, a summary of testing defects, and a document indicating that the contractor had approved the test analysis. However, TSA did not provide evidence that the deficiencies were managed and tracked to closure. TSA officials told us that open issues were tracked informally via twice-weekly meetings with a US-VISIT representative, TSA personnel, and contractor staff.
Although these meetings were documented, the minutes did not provide any evidence of testing issues being discussed. According to program officials, this was due to the short development time frame (about 4 months) and the need to bypass traditional TSA milestone reviews in order to ensure that the product was delivered on time. Without adequately verifying that contract deliverables satisfy requirements before acceptance, an organization cannot adequately know whether the contractor satisfied the obligations of the contract and whether the organization is getting what it has paid for. Most agencies performed contractor technical and management reviews. Monitoring contractor performance is essential for understanding the contractor’s progress and taking appropriate corrective actions when the contractor’s performance deviates from plans. Such monitoring allows the acquiring organization to ensure that the contractor is meeting schedule, effort, cost, and technical performance requirements. Effective monitoring activities include conducting reviews in which budget, schedule, and critical dependencies are assessed and documented, and the contractor’s implementation and interpretation of technical requirements are discussed and confirmed. Three of the four agencies implemented some contractor review activities, including, among other things, addressing technical requirements progress against schedule and costs through regular meetings with the contractor. For example, TSA conducted weekly reviews with the contractor to discuss the status of contract performance; material prepared for some of these weekly meetings indicated that topics discussed were “actual dollars expended” versus “budget at project completion,” projected and actual schedule versus baseline, anticipated product delivery dates against planned due dates, and issues and risks. As another example, CBP held weekly documented meetings with its contractor to discuss open issues, the status of the project, and the current stage of the systems development life cycle. Additionally, CBP milestone reviews addressed project schedule, budget, and risk, some of which could be traced to specific contracts. In contrast, AERC did not document the monitoring of contractor performance during the performance period of the contract. Instead, to document contractor performance, it relied solely on end-of-contract evaluations required by the FAR. Program Office and Other Agencies’ Contract Management Was Impaired by Financial Management Weaknesses Financial management weaknesses at both the program office and the other agencies impaired their ability to adequately manage and oversee US-VISIT–related contracting activities. Specifically, well-documented, severe financial management problems at DHS (and at ICE in particular) affected the reliability and effectiveness of accounting for the US-VISIT program. Accordingly, the program office and the other DHS agencies were unable to provide accurate, reliable, and timely accounts for billings and expenditures made for contracts related to US-VISIT. In addition, a number of invoice payments were improperly paid and accounted for. Serious DHS Financial Management Problems Affected the Quality of Financial Data for US-VISIT Contracts DHS’s financial management problems are well-documented. 
When the department began operations in 2003, one of the challenges we reported was integrating a myriad of redundant financial management systems and addressing the existing financial management weaknesses inherited by the department. Since that time, DHS has undergone three financial statement audits and has been unable to produce fully auditable financial statements for any of the audits. In its most recent audit report, auditors reported 10 material weaknesses and 2 reportable conditions. Among the factors contributing to DHS's inability to obtain clean audit opinions were serious financial management challenges at ICE, which provides accounting services for several other DHS agencies, including the US-VISIT program. For fiscal years 2004 and 2005, auditors reported that financial management and oversight at ICE was a material weakness, principally because its financial systems, processes, and control activities were inadequate to provide accounting services for itself and other DHS agencies. According to the auditors, ICE did not adequately maintain its own accounting records or the accounting records of other DHS agencies, including US-VISIT. The records that were not maintained included intradepartmental agreements and transactions, costs, and budgetary transactions. These and other accounts required extensive reconciliation and adjustment at year-end, which ICE was unable to complete. In addition, in fiscal year 2005, ICE was unable to establish adequate internal controls that reasonably ensured the integrity of financial data and that adhered to our Standards for Internal Control in the Federal Government; the Chief Financial Officer of ICE also issued a statement of "no assurance" on internal control over financial reporting. These systemic financial challenges impaired the US-VISIT program's contract management and oversight. As the accounting service provider for the US-VISIT program, ICE is responsible for processing and recording invoice payments both for contractors working directly for the program and for the work ICE procures on the program's behalf. However, because of its financial problems, the reliability of the financial information processed by ICE as the accounting-services provider for the program office was limited. Further, ICE was unable to produce detailed, reliable financial information regarding the contracts it managed on behalf of US-VISIT. Program Office and Other DHS Agencies Did Not Adequately Track Billings and Expenditures Of the DHS agencies we reviewed, the program office and two others managing US-VISIT–related contracts on the program's behalf did not track contract billings and expenditures in a way that was accurate, reliable, and useful for contract oversight and decision making. Specifically, the amounts reportedly billed were not always reliable, and expenditures for US-VISIT were not always separately tracked. Our Standards for Internal Control in the Federal Government identifies accurate recording of transactions and events as an important control activity. In addition, the standards state that pertinent financial information should be identified, captured, and distributed in a form that permits people to perform their duties effectively. To do so, they need access to information that is accurate, complete, reliable, and useful for oversight and decision making.
In the case of US-VISIT, expenditures and billings made for US-VISIT–related contracts should be tracked by the program office and the agencies managing the contracts on the program office's behalf, and controls should be in place to ensure that the information is reliable, complete, and accurate. Furthermore, in order for the information to be useful for oversight and decision making, billings and expenditures made for US-VISIT work should be separately tracked and readily distinguishable from other billings and expenditures. Separately accounting for program funds is an important budgeting and management tool, especially when those funds are reimbursed by another agency for a program-specific purpose, as was the case for US-VISIT. Finally, according to our internal control standards and, more specifically, our Internal Control Management and Evaluation Tool, information should be available on a timely basis for effective monitoring of events, activities, and transactions. The Amounts Reportedly Billed on US-VISIT–Related Contracts Are Not Reliable Because effective internal controls were not in place, the reliability of US-VISIT–related billings by DHS agencies was questionable. First, the program office could not verify the scope of completed and ongoing contracting actions. Second, for the contracting actions that were reported, not all agencies provided billing information that was reliable. The program office did not track all contracting activity and thus could not provide a complete list of contracting actions. In the absence of a comprehensive list, we assembled a list of contracting actions from the program office and from each of the five agencies responsible for contracting for US-VISIT work. However, the APMO Director did not know whether the list of contracting actions was valid. In addition, to varying degrees, other DHS agencies could not reliably report to us what had been invoiced on the US-VISIT–related contracts they managed. In particular, ICE's substantial financial management challenges precluded it from providing reliable information on amounts invoiced against its contracts. Its inability to provide us with key financial documents for US-VISIT–related contracts illustrated its challenges. Over a period of 9 months, we repeatedly requested that ICE provide various financial documents, including expenditure listings, invoice documentation, and a list of all contracting actions managed on behalf of US-VISIT. However, it did not provide complete documentation in time to be included in this report. In particular, ICE was not able to provide complete and reliable expenditure data to date. It did provide a list of US-VISIT–related contracting actions, but it did not include the amounts invoiced on those contracting actions, and program office staff noted several problems with ICE's list, including several contracts that were likely omitted. A comparable list provided by the DHS Office of the Chief Procurement Officer showed ICE's invoiced amounts, but the contracting actions on this list differed from those provided by ICE. Without accurate tracking of financial information related to US-VISIT contracts, the full scope of contracting and spending on the program cannot be known with reasonable certainty. This limitation increases the possibility of inefficiencies in spending, improper payments, and poor management of limited financial resources.
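One basic control that would surface the kind of reporting gaps described above is a routine reconciliation of invoice-level records against the totals an agency reports. The sketch below is a generic illustration using made-up figures; it does not depict any DHS financial system or actual contract amounts.

```python
# Generic sketch of reconciling invoice-level records against a reported total
# for a contracting action. All figures are hypothetical.

def reconcile(invoice_amounts, reported_total, tolerance=0.01):
    """Return the difference between the summed invoices and the reported total."""
    summed = sum(invoice_amounts)
    difference = summed - reported_total
    if abs(difference) > tolerance:
        print(f"Discrepancy: invoices sum to {summed:,.2f}, but {reported_total:,.2f} "
              f"was reported ({difference:+,.2f}).")
    else:
        print("Invoices reconcile with the reported total.")
    return difference

# Hypothetical example in which the invoice file sums to more than the agency reported.
reconcile(invoice_amounts=[2_500_000.00, 3_100_000.00, 3_200_000.00],
          reported_total=7_500_000.00)
```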
For CBP, a list of contracting actions provided by program officials included discrepancies that raised questions about the accuracy both of the list and of the invoiced amounts. First, the task order number of a 2002 contracting action changed during our period of review, and CBP initially reported the task order as two different contracting actions—one issued in 2002 and another issued in 2004. Second, the task order was for services performed bureauwide, not just for US-VISIT, and from the contract documentation it was not discernible which work was specific to US-VISIT. Such discrepancies suggest that the amount invoiced specifically to US-VISIT was not accurate. Finally, our summation of all the invoices on this contracting action through March 31, 2005, totaled about $8.8 million, which was about $1.3 million more than the total invoiced amount that CBP had reported. This discrepancy indicated that CBP was not adequately tracking funds spent for US-VISIT on this contracting action, which increased the risk that the program was improperly reimbursing CBP on this contract. No such discrepancy existed between reported and actual invoiced amounts on the 2003 and 2004 CBP contracting actions we reviewed. TSA was able to provide accurate billing information on the one US-VISIT–related contracting action that it managed, but delays in invoicing on this contracting action increase the risk of future problems. As of February 2005, development on the TSA contract action was finished, and the contract had expired. However, from April 2005 through February 2006 (the latest date available), TSA reported that it continued to receive and process about $5 million in invoices, and that the contractor can still bill TSA for prior work performed for up to 5 years after expiration of the contract. According to TSA, the contractor estimated (as of February 2006) that it would be sending TSA an additional $2 million in invoices to pay for work already completed. TSA officials could not explain this delay in invoicing. Such a significant lag between when work is completed and when it is billed can present a challenge to the proper review of invoices. DHS Agencies Did Not Always Separately Track Expenditures Made to Contractors for US-VISIT Work ICE did not track expenditures made to contractors for US-VISIT work separately from other expenditures, and CBP experienced challenges in its efforts to do so. Reliable, separate tracking of such expenditures is an important internal control for ensuring that funds are being properly budgeted and that the program office is reimbursing agencies only for work performed in support of the program. In the case of ICE, its financial management system did not include unique codes or any other means to reliably track expenditures made for US-VISIT–related contracts separately from non-US-VISIT expenditures. As a result, ICE did not have reliable information on what it spent for the program, which means that it could have requested improper reimbursements from the program office. More specifically, the most detailed list of its US-VISIT–related payments that ICE could provide came from querying its financial management system by contract number, which returned all payments under that contract number. However, each contract's scope of work is generally broad and includes work throughout ICE, not just for US-VISIT. Thus, this method would not give an accurate picture of what expenditures ICE had made for US-VISIT–related work.
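The difference between the two tracking approaches just described—querying by contract number, which sweeps in an agency's unrelated work, versus tagging each expenditure with a program-specific code—can be shown with a small sketch. The records, the contract number, the code, and the amounts below are all hypothetical.

```python
# Hypothetical sketch contrasting two ways of pulling program expenditures
# from a financial system: by contract number (over-inclusive) versus by a
# program-specific project code. Records, codes, and amounts are invented.

expenditures = [
    {"contract": "HSC-001", "project_code": "USVISIT", "amount": 400_000},
    {"contract": "HSC-001", "project_code": "OTHER",   "amount": 250_000},
    {"contract": "HSC-001", "project_code": "USVISIT", "amount": 150_000},
]

def total_by_contract(records, contract):
    # Returns everything charged to the contract, US-VISIT-related or not.
    return sum(r["amount"] for r in records if r["contract"] == contract)

def total_by_project_code(records, code):
    # Returns only the work tagged to the program-specific code.
    return sum(r["amount"] for r in records if r["project_code"] == code)

print(total_by_contract(expenditures, "HSC-001"))      # 800000 (overstates program work)
print(total_by_project_code(expenditures, "USVISIT"))  # 550000 (program-specific)
```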
In the case of CBP, it began using coding in its financial management system to separately track US-VISIT obligations and expenditures in fiscal year 2003, when it first received funding for US-VISIT. At that time, CBP tracked all US-VISIT expenditures under a single project code. However, between fiscal years 2003 and 2004, CBP underwent a system conversion that interrupted its tracking of US-VISIT–related funds, which made it challenging to separately report US-VISIT–related expenditures. During this time, several changes were made to the codes used to track US-VISIT information. When we requested a listing of the US-VISIT–related expenditures by CBP, it took several weeks for CBP finance center staff to document the financial management system coding changes and produce a reasonably complete listing of the US-VISIT–related expenditures that CBP made during the system conversion. In fiscal years 2004 and 2005, CBP again began tracking all US-VISIT–related expenditures separately under a single budget code. Thus, in the future, the tracking and reporting of US-VISIT expenditures by CBP should be more timely and reliable. Several Payments to Contractors for US-VISIT Work Were Improperly Paid and Accounted for Although the program office and the agencies—both DHS and others—doing work on its behalf usually documented approval of contractor invoices before payment, a number of invoices were improperly paid and accounted for, resulting in a potential loss of funds control and, in one case, a duplicate payment on an invoice of over $3 million. Our Internal Control Management and Evaluation Tool states that transactions and events need to be appropriately classified and that pertinent information is to be identified and captured in the right form. Overpayments occurred as a result of two kinds of errors: on one occasion a duplicate payment was made, and on several other occasions incorrect balances were paid. A duplicate payment was made on an invoice for over $3 million. APMO had sent an authorization for payment in full on the invoice to its finance center. Then, 1 month later, APMO sent another authorization for payment in full on the same invoice. The second payment was later noticed, and the contractor refunded the amount. The other set of overpayments, although small in dollar value, exemplifies a significant breakdown in internal control. Invoices billed to AERC on a fiscal year 2005 contract listed the current amount billed on the invoice, as well as a cumulative balance; the cumulative balance included invoice payments that AERC had already made, but that had not been recorded by the contractor when the next invoice was generated. On several of the invoices, AERC mistakenly paid the higher cumulative balance when the current amount should have been paid. As a result, AERC overpaid the vendor by about $26,600. Moreover, it was the contractor that first reported this overpayment in September 2005 and refunded the overpayment amount to AERC. According to DHS officials, the US-VISIT program office had independently identified the overpayment in November 2005 and requested clarification from AERC the following day. Also at APMO, two questionable payments were made that arose from the overriding of controls created for the prime US-VISIT contract. The prime contract has been implemented through 12 task orders with multiple modifications that either increased funding or made other changes to the contract terms.
To account for the obligations made on each task order, the program’s Office of Budget and Finance created separate tracking codes in the financial system for each task order and sometimes for each modification of a task order. The separate tracking of each obligation is a good control for tracking and controlling spending against task order funds. However, APMO overrode this control when it instructed the finance center to pay two invoices—one for about $742,000 and one for about $101,000—out of the wrong account: that is, with funds for task orders other than those for which the invoices were billed. APMO did not provide any justification for payment with funds from the improper account. Our Internal Control Management and Evaluation Tool states that any intervention or overriding of internal controls should be fully documented as to the reasons and specific actions taken. CBP also inappropriately paid for work unrelated to US-VISIT out of funds designated for US-VISIT. For a 2003 contracting action that we reviewed, invoices included a significant amount in travel billings. However, several travel vouchers that accompanied these invoices were for work unrelated to US-VISIT. For example, terms like “Legacy ag/legacy Customs unification,” “Agriculture Notes Installation,” and “Agriculture AQI” were indicated on the vouchers. CBP confirmed that these vouchers were billed to US-VISIT in error. Additionally, other vouchers included descriptions that were vague and not clearly related to any specific program (e.g., emergency hardware replacement), and thus it was not clear that the work being billed was related to the program. Along with the travel expenses, the labor hours associated with the above vouchers were also being billed to the program. This circumstance calls into question not only whether or not the travel charges were inappropriately classified as US-VISIT work, but also whether the time that these employees were charging was inappropriately classified, and thus improperly paid. On one CBP contracting action, some charges that were not related to US- VISIT may have been reimbursed by the program office. The contracting action in question was a 2002 action for CBP-wide disaster recovery services, and thus not all charges were directly related to the US-VISIT program. On this task order, CBP expended about $1.28 million from program-designated funds on items that were not clearly specified as US- VISIT work on the invoices. Of that amount, about $43,000 could be attributed to a contract modification specific to the program. However, CBP stated that one invoice for about $490,000 included in this $1.28 million was paid from the program’s funds to correct two payments for earlier US-VISIT invoices that were erroneously made from nonprogram funds. We also found about $771,000 of invoice dollars that were specified as US-VISIT work, but that were not on the CBP-provided expenditure reports for program funds. As a result of these various discrepancies, the US-VISIT program may have reimbursed CBP for work that was not done on its behalf. Also, the program official responsible, under DHS policy, for monitoring the CBP contracts related to US-VISIT told us that he had not been reviewing invoices on IPAC reimbursement requests from CBP, even though such reviews are required by DHS policy. 
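The payment errors described above—paying the same invoice twice, paying a cumulative balance rather than the current amount, and charging an invoice against the wrong task-order account—are the kind of errors that simple pre-payment checks can catch. The sketch below is a generic illustration with hypothetical invoice numbers, task orders, and amounts; it does not depict any DHS financial system or its actual controls.

```python
# Generic sketch of pre-payment checks that would catch the errors described
# above. Invoice numbers, task orders, and amounts are hypothetical.

paid_invoices = set()                      # invoices already authorized for payment
task_order_funds = {"TO-07": 900_000.00}   # remaining funds by task-order account

def authorize_payment(invoice_id, task_order, current_amount, cumulative_balance):
    """Run basic controls before sending a payment authorization."""
    if invoice_id in paid_invoices:
        return f"REJECT {invoice_id}: duplicate payment authorization"
    if task_order not in task_order_funds:
        return f"REJECT {invoice_id}: no funds tracked for {task_order}"
    if current_amount > task_order_funds[task_order]:
        return f"REJECT {invoice_id}: exceeds remaining {task_order} funds"
    # Pay the current amount billed, never the cumulative balance.
    note = " (cumulative balance ignored)" if current_amount != cumulative_balance else ""
    task_order_funds[task_order] -= current_amount
    paid_invoices.add(invoice_id)
    return f"PAY {invoice_id}: {current_amount:,.2f} from {task_order}{note}"

print(authorize_payment("INV-101", "TO-07", 120_000.00, 120_000.00))
print(authorize_payment("INV-101", "TO-07", 120_000.00, 120_000.00))  # caught as duplicate
print(authorize_payment("INV-102", "TO-07", 80_000.00, 200_000.00))   # pays current amount only
```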
In addition, on the 2003 CBP contracting action that we reviewed, many of the travel vouchers included first-class flights taken by contract personnel, although (with few exceptions) purchase of first-class travel is not allowed on cost-reimbursable contracts. Travel documentation indicated first-class travel in numerous instances, with no explanation or justification of the first-class travel and no documentation indicating that CBP had requested any explanation. CBP officials noted that some frequent fliers are automatically upgraded when purchasing a full-fare flight. Although this is a reasonable explanation, CBP provided no documentation showing that it completed any inquiry or research at the time the travel was invoiced to determine whether first-class travel was being purchased or upgrades were being given, and invoice documentation did not clarify this. Further, in several instances, complete documentation was not provided for all airline travel costs. A final concern regarding payments to contractors is that several of the agencies paid invoices late. Under the Prompt Payment Act, the government must pay interest on invoices it takes over 30 days to pay. Not only do these interest payments deplete funds available for US-VISIT, but excessive late invoice payments are also a signal that the contract payment oversight process is not being effectively managed. CBP and TSA experienced agencywide increases in contract prompt-payment interest. CBP reported that in fiscal year 2004, the year that it converted to a new accounting system, prompt-payment interest accounted for 7.66 percent of all payments, a sharp increase from the prior year's rate of 1.74 percent. In fiscal year 2005, the rate of interest payments at CBP receded to 1.80 percent of total payments. APMO also paid substantial amounts in prompt-payment interest. According to DHS's auditors, ICE, which provides US-VISIT payment services, had not established internal controls to ensure that invoices were paid in a timely manner. Prompt-payment interest was paid on approximately 26 percent of the prime contract invoices that we reviewed, representing over $27,000 in payments. In addition, we could not verify that the proper amount of interest was paid because information in the ICE financial management system was incorrect. For example, in many instances, important dates used for determining prompt-payment interest were entered incorrectly, or the dates in the system could not be validated based on the invoice documentation provided. A program official told us that certain program staff have recently been granted read-only access to ICE's financial management system to monitor invoice payments. If the program office effectively uses this increased oversight ability, it could reduce the number of prompt-payment violations as well as reduce other improper contract payments made by the program office. Conclusions Contractors have played, and will continue to play, a major role in delivering US-VISIT capabilities, including technology, facilities, and people. Therefore, the success of the program depends largely on how well DHS manages and oversees its US-VISIT–related contracts. Establishing and implementing effective contractor management and oversight controls, including financial management controls, can greatly increase the department's ability to manage and oversee US-VISIT–related contracts.
However, the department's management and oversight of US-VISIT–related contracts are not yet at the level needed to adequately ensure, for example, that contract deliverables satisfy program requirements, that cost and schedule commitments are met, that program outcomes are achieved, that funds are not overspent or improperly reimbursed, and that payments are made in a proper and timely manner. Although the program office has generally established and implemented key contractor management controls on those contracts that it manages directly, it has not adequately overseen US-VISIT–related contracts that were managed by other DHS and non-DHS agencies. According to program office officials, this is because they have initially focused on those contracts that they manage directly. However, this narrow focus raises concerns because the agencies managing contracts on the program office's behalf have not implemented the full range of management controls needed to have a full, accurate, reliable, and useful understanding of the scope of contract activities and performance.
Moreover, none of the US-VISIT contracts that we reviewed have been subject to important financial management controls. As previous audits have shown, DHS suffers from numerous material weaknesses in financial management, some of which are directly related to ICE (the DHS component that provides financial management services to the program office). These weaknesses have contributed to the program's inability to know the full scope of contract activities and fully account for expenditures, among other things. By impairing the reliability and effectiveness of accounting for US-VISIT contracts, these weaknesses have diminished the program's ability to effectively manage and oversee work performed by contractors—work that is essential for the program to achieve its goals. Until DHS addresses these contract management and oversight weaknesses, the US-VISIT program will remain at risk of not delivering required capabilities and promised benefits on time and within budget, and it will be vulnerable to financial mismanagement.
Recommendations for Executive Action
Given the US-VISIT program's mission importance, size, and heavy reliance on contractor assistance, we recommend that the Secretary of Homeland Security direct the US-VISIT Program Director to take the following five actions to strengthen contract management and oversight, including financial management:
For each US-VISIT contract action that the program manages directly, establish and maintain a plan for performing the contractor oversight process, as appropriate.
Develop and implement practices for overseeing contractor work managed by other agencies on the program office's behalf, including (1) clearly defining roles and responsibilities for both the program office and all agencies managing US-VISIT–related contracts; (2) having current, reliable, and timely information on the full scope of contract actions and activities; and (3) defining and implementing steps to verify that deliverables meet requirements.
Require, through agreements, that agencies managing contract actions on the program office's behalf implement effective contract management practices consistent with acquisition guidance for all US-VISIT contract actions, including, at a minimum, (1) establishing and maintaining a plan for performing contract management activities; (2) assigning responsibility and authority for performing contract oversight; (3) training the people performing contract oversight; (4) documenting the contract; (5) verifying that deliverables satisfy requirements; (6) monitoring contractor-related risk; and (7) monitoring contractor performance to ensure that the contractor is meeting schedule, effort, cost, and technical performance requirements.
Require DHS and non-DHS agencies that manage contracts on behalf of the program to (1) clearly define and delineate US-VISIT work from non-US-VISIT work as performed by contractors; (2) record, at the contract level, amounts being billed and expended on US-VISIT–related work so that these can be tracked and reported separately from amounts not for US-VISIT purposes; and (3) determine whether they have received reimbursement from the program for payments not related to US-VISIT work by contractors, and if so, refund to the program any amount received in error.
Ensure that payments to contractors are timely and in accordance with the Prompt Payment Act.
Agency Comments and Our Evaluation
We received written comments on a draft of this report from DHS, which were signed by the Director, Departmental GAO/IG Liaison Office, and are reprinted in appendix II. We also received comments from the Director of AERC and the Assistant Commissioner for Organizational Resources, Public Buildings Service, GSA. Both the Department of Defense audit liaison and the GSA audit liaison requested that we characterize these as oral comments. In its written comments, DHS stated that although it disagreed with some of our assessment, it agreed with many areas of the report and concurred with our recommendations and the need for improvements in US-VISIT contract management and oversight. The department disagreed with certain statements and provided additional information about three examples of financial management weaknesses in the report. Summaries of DHS's comments and our response to each are provided below.
The department characterized as misleading our statements that US-VISIT (1) depended on other agencies to manage financial matters for their respective contracts and (2) relied on another agency for US-VISIT's own financial management support. With respect to the former, DHS noted that the decision to use other agencies was based on the nature of the services that were required, which it said were outside the scope of the program office's areas of expertise. We understand the rationale for the decision to use other agencies, and the statement in question was not intended to suggest anything more than that such a decision was made. We have slightly modified the wording to avoid any misunderstanding. With respect to its own financial management, DHS said that for us to declare that US-VISIT depended on another agency for financial management support without identifying the agency and the system, in combination with our acknowledging that we did not examine the effectiveness of this unidentified system, implies that our report's scope is broader than what our congressional clients asked us to review. We do not agree.
First, our report does identify ICE as the agency that the program office relies on for financial management support. Second, although we did not identify by name the ICE financial management system, we did describe in detail the serious financial management challenges at ICE, which have been reported repeatedly by the department’s financial statement auditors and which have contributed to the department’s inability to obtain a clean audit opinion. Moreover, we fully attributed these statements about these serious challenges to the auditors. The department said that our statement regarding the purpose of the contracts managed by AERC needed to be clarified, stating that our report reflects the scope of the two contract actions reviewed and not the broader scope of services under the interagency agreement. We agree that the description of AERC services in our report is confined to the scope of the two contract actions that we reviewed. This is intentional on our part since the scope of our review did not extend to the other services. We have modified the report to clarify this. The department provided additional information about three examples of invoice discrepancies and improper payments cited in the report, including reasons why they occurred. Specifically, the department said that the reason that CBP reported a 2002 contracting action as also a 2004 contracting action was because of the concurrent merger of CBP within DHS and the implementation of CBP’s new financial system. It further stated that the reason that US-VISIT made a duplicate payment to the prime contractor was, at least partially, due to poor communication between US-VISIT and its finance center. Regarding two other duplicate payments, DHS stated that while the cause of the duplicate payments is not completely clear from the available evidence, both are almost certainly errors resulting from processes with significant manual components, as opposed to deliberate control overrides, since adequate funds were available in the correct accounts for each case. The department also noted that communications may have also contributed to one of these two duplicate payments. We do not question the department’s reasons or the additional information provided for the other payments, but neither changes our findings about the invoice discrepancies and improper payments. The department stated that although the contractor initially identified the AERC overpayment on September 13, 2005, the US-VISIT program office independently identified the billing discrepancy on November 1, 2005, and requested clarification from AERC the following day. The department further stated that because we describe the overpayment example in the report as being a small dollar value, we should have performed a materiality test in accordance with accounting principles in deciding whether the overpayment should be disclosed in a public report. We do not dispute whether the US-VISIT program independently identified the overpayment in question. Our point is that an invoice overpayment occurred because adequate controls were not in place. In addition, while we agree that materiality is relevant to determining whether to cite an example of an improper payment, another relevant consideration to significance is the frequency of the error. Our decision to disclose this particular overpayment was based on our judgment regarding the significance of the error as defined in generally accepted government auditing standards. 
It is our professional judgment that this overpayment is significant because of the frequency with which it occurred. Specifically, of the eight invoices that we reviewed, four were improperly paid. In oral comments, the Director of AERC questioned the applicability of the criteria we used to evaluate AERC contract management practices and our assessment of its process for verifying and accepting deliverables. Despite these disagreements, he described planned corrective actions to respond to our findings. The Director stated in general that the Capability Maturity Model Integration (CMMI)® model was not applicable to the contracts issued by the Corps of Engineers, and in particular that a contract oversight plan was not applicable to the two contract actions that we reviewed. In addition, the Director commented that AERC’s practices were adequate to deal appropriately with contractor performance issues had these been raised. Nonetheless, to address this issue, the Director stated that AERC would require the US-VISIT program office to submit an oversight plan describing the project’s complexity, milestones, risks, and other relevant information, and it would appoint qualified CORs or COTRs to implement the plans and monitor contractor performance. We disagree with AERC’s comments on the applicability of our criteria. Although the CMMI model was established to manage IT software and systems, the model’s practices are generic and therefore applicable to the acquisition of any good or service. Specifically, the contractor management oversight practices discussed in this report are intended to ensure that the contractor performs the requirements of the contract, and the government receives the services and/or products intended within cost and schedule. We also do not agree that the contract actions in question did not warrant oversight plans. Although the content of oversight plans may vary (depending on the type, complexity, and risk of the acquisition), each acquisition should have a plan that, at a minimum, describes the oversight process, defines responsibilities, and identifies the contractor evaluations and reviews to be conducted. Since the chances of effective oversight occurring are diminished without documented plans, we support the program manager’s commitment to require these plans in the future. Regarding an overpayment discussed in our report, the Director indicated that this problem was resolved as described in DHS’s comments, and that in addition, AERC has procedures and controls to prevent the government from paying funds in excess on a firm-fixed price contract such as the one in question. Nonetheless, the Director described plans for strengthening controls over contract progress payments and invoices, including having trained analysts review all invoices and ensuring that a program/project manager has reviewed the invoices and submitted written authorization to pay them. The Director also stated that AERC has an established process for controlling and paying invoices, which provides for verifying and accepting deliverables. We do not consider that the AERC process was established because although AERC officials described it to us, it was neither documented nor consistently followed. For example, one contracting action that we reviewed had three invoices that did not have a signature or other documentation of approval, even though such approval, according to AERC, is a required part of the process. 
In oral comments, the GSA Assistant Commissioner disagreed with the applicability of certain of the criteria that we used in our assessment, as well as with our assessment that these and other criteria had not been met. For example, the Assistant Commissioner stated that regulations or policies do not require GSA to establish and maintain a plan for performing the contract oversight process, that its current practices and documents (such as the contract statement of work and COR/COTR delegation letters) in effect establish and maintain such a plan, that GSA documented the oversight process and results to the extent necessary to ensure contractor performance, and that GSA had established a requirement to conduct contractor reviews. Although, as we state in our report, GSA policies do not include a requirement for an oversight plan, we still believe that it is appropriate to evaluate GSA against this practice (which is consistent with sound business practices and applies to any acquisition), and that GSA’s processes and activities did not meet the criteria for this practice and ensure effective oversight of the contracts. We did not find that the delegation letters and contract statements of work were sufficient substitutes for such plans, because, for example, they do not consistently describe the contractor oversight process or contractor reviews. Further, the inclusion of a requirement for contractor reviews in some contracts/statements of work does not constitute agencywide policies and procedures for performing reviews on all contracts. GSA also provided further descriptions of its financial management controls and oversight processes and activities, but these descriptions did not change our assessment of GSA’s financial management controls or the extent to which the oversight processes and activities satisfy the practices that we said were not established or not consistently implemented. Among these descriptions was information on an automated tool that GSA provided its contracting officers; however, this tool was not used during the period under review. GSA also provided certain technical comments, which we have incorporated in our report, as appropriate. As agreed with your offices, unless you publicly announce the contents of this report earlier, we plan no further distribution until 30 days from the report date. At that time, we will send copies of this report to the Chairmen and Ranking Minority Members of the Senate and House Appropriations Committees, as well as to the Chairs and Ranking Minority Members of other Senate and House committees that have authorization and oversight responsibility for homeland security. We will also send copies to the Secretary of Homeland Security, the Secretary of Defense, the Administrator of GSA, and the Director of OMB. Copies of this report will also be available at no charge on our Web site at http://www.gao.gov. Should your offices have any questions on matters discussed in this report, please contact Randolph C. Hite at (202) 512-3439 or at [email protected], or McCoy Williams at (202) 512-9095 or at [email protected]. Contact points for our Offices of Congressional Relations and Public Affairs may be found on the last page of this report. Key contributors to this report are listed in appendix IV. Appendix I: Objective, Scope, and Methodology Our objective was to determine whether the Department of Homeland Security (DHS) has established and implemented effective controls for managing and overseeing contracts related to the U.S. 
Visitor and Immigrant Status Indicator Technology (US-VISIT) program. To address our objective, we assessed the implementation of key contractor management controls at the program office and at other DHS and non-DHS agencies responsible for managing US-VISIT–related contracts. We also evaluated the program office's oversight of US-VISIT–related contracts managed by these other organizations. Finally, we reviewed internal control processes and procedures in place over contract financial management. Besides the US-VISIT program office, the organizations within DHS that we identified as having responsibility for managing US-VISIT–related contracts were Customs and Border Protection (CBP), the Transportation Security Administration (TSA), and Immigration and Customs Enforcement (ICE). The non-DHS agencies performing work in support of US-VISIT were the General Services Administration (GSA) and the Army Corps of Engineers Architect-Engineer Resource Center (AERC).
Contract management controls: To assess key contract management controls and implementation of those controls at US-VISIT and other agencies responsible for managing US-VISIT–related contracts, we identified leading public and private sector practices on contract management, such as those prescribed by the Federal Acquisition Regulation (FAR) and Carnegie Mellon University's Software Engineering Institute, which publishes the Capability Maturity Model Integration. US-VISIT officials identified the contracts being managed by the program, all within the Acquisition Program Management Office (APMO). To evaluate the management of the program's contracts, we assessed APMO's and other agencies' documented policies against the leading practices that we identified. We also determined the extent to which those policies were applied to specific contracting actions and the extent to which, if any, other formal or otherwise established practices were used to manage or oversee the specific contract actions. We also discussed any variances with agency officials to determine why those variances existed.
In determining the extent to which practices/subpractices were established and implemented, we categorized each as established/implemented, partially established/implemented, or not established/implemented. We judged whether the practice was established, partially established, or not established depending on whether the agency had documented policies and procedures addressing the practice and all, some, or none of the subpractices (where applicable). We judged whether a practice was implemented, partially implemented, or not implemented on the basis of documentation demonstrating that the practice and all, some, or none of the subpractices (where applicable) had been implemented for the contracting actions that we reviewed. We judged that an agency had "partially established" the requirement for a practice or subpractice if the agency relied only on the FAR requirement to perform this activity, but did not establish a process (i.e., documented procedures) for how the FAR requirement was to be met. We judged that an agency had "partially implemented" a practice or subpractice if it had implemented some, but not all, facets of the practice (including its own related requirements for that practice).
To select specific contracting actions for review, we analyzed documentation provided by the program and by the DHS and non-DHS agencies responsible for managing US-VISIT–related contracts, to identify all contracting work performed in support of the program.
Program officials were unable to validate the accuracy, reliability, and completeness of the list of contracting actions. Therefore, we did not perform a statistical sampling of the identified contracting actions. Rather, we judgmentally selected from each agency one contracting action for US-VISIT–related work awarded in each fiscal year from March 1, 2002, through March 31, 2005, focusing on service-based contracts. Thus, fiscal years 2002 through 2005 were each reviewed to some extent. Not all organizations awarded contracting actions in every fiscal year covered under our review, in which case an action was not selected for that fiscal year for that organization. The contracting actions selected from ICE were excluded from our analysis of the implementation of management and financial controls because of delays in receiving contract-specific documentation. One program management contract that was reported to us by US-VISIT was transferred to the program from ICE shortly before the end of our review period, and so we were unable to determine, because of the issues with ICE identified above, what management activities were performed on the contract. For each selected contracting action, we reviewed contract documentation, including statements of work, project plans, deliverable reviews, and other contract artifacts, such as contractor performance evaluations. We then compared documentary evidence of contract management activity to leading practices and documented policies, plans, and practices. Finally, we determined what, if any, formal or established oversight practices were in existence at the contract level. Table 5 shows the judgmental selection of contract actions that were reviewed for each agency, including APMO.
Contract oversight controls: To assess the program's oversight of program-related contracts, we used DHS guidance pertaining to intra- and intergovernmental contracting relationships, as well as practices for oversight developed by us. We met with program office officials to determine the extent to which the program office oversaw the performance of US-VISIT–related contracts and identified the organizations performing work in support of the program (as listed earlier). We met with these organizations to determine the extent to which the program office interacted with them in an oversight capacity.
Financial management controls: To assess internal control processes and procedures in place over contract financial management, we reviewed authoritative guidance on contract management found in the following: our Policy and Procedures Manual for Guidance of Federal Agencies, Title 7—Fiscal Guidance; Office of Management and Budget (OMB) Revised Circular A-123, Management's Responsibility for Internal Control; and OMB Revised Circular A-76, Performance of Commercial Activities. We also reviewed DHS's performance and accountability reports for fiscal years 2003, 2004, and 2005, including the financial statements and the accompanying independent auditor's reports, and we reviewed other relevant audit reports issued by us and Inspectors General. We interviewed staff of the independent public accounting firm responsible for auditing ICE and the DHS bureaus for which ICE provides accounting services (including US-VISIT). We obtained the congressionally approved budgets for US-VISIT work and other relevant financial information.
For each of the contracting actions selected for review at US-VISIT, AERC, GSA, CBP, and TSA, listed above, we obtained copies of available invoices and related review and approval documentation. We reviewed the invoice documentation for evidence of compliance with our Standards for Internal Control in the Federal Government and Internal Control Management and Evaluation Tool. Specifically, we reviewed the invoices for evidence of the performance of certain control activities, including the following: review and approval before payment by a contracting officer, contracting officer's technical representative, and other cognizant officials; reasonableness of expenses billed (including travel) and their propriety in relation to US-VISIT; payment of the invoice in the proper amount and to the correct vendor; payment of the invoice from a proper funding source; and payment of the invoice within 30 days as specified by the Prompt Payment Act. We also reviewed the invoices for compliance with requirements of the specific contract provisions for which they were billed. We did not review invoice documentation for the selected contracting actions managed by ICE, because ICE did not provide us with invoice documentation for all requested contracts in time to meet fieldwork deadlines.
We also obtained copies of invoices paid through July 2005 and available payment review and approval documentation on the prime contract from the ICE finance center. We reviewed this documentation for evidence of execution of internal controls over payment approval and processing. In addition, we performed data mining procedures on the list of payments from APMO to identify unusual or unexpected transactions. Based on this analysis, we made a judgmental selection of payments and reviewed their related invoice and payment approval documentation. We interviewed agency officials involved with budgeting, financial management, contract oversight, and program management at the program office, ICE, CBP, TSA, AERC, and GSA. We obtained and reviewed DHS and US-VISIT policies, including the DHS Acquisition Manual; the US-VISIT Contract Management and Administration Plan; the US-VISIT Acquisition Procedures Guide (APG-14)—Procedures for Invoice; DHS Management Directive 0710.1 (Reimbursable Agreements); and CBP and ICE's standard operating procedures regarding financial activities. We also interviewed representatives from the prime contractor to determine how they track certain cost information and invoice the program. In addition, we observed how requisitions and obligations are set up in the financial management system used by the program.
We observed invoice processing and payment procedures at the CBP and ICE finance centers, the two major finance centers responsible for processing payments for program-related work. From the CBP finance center, we obtained data on expenditures for US-VISIT–related work made by CBP from fiscal year 2003 through fiscal year 2005. From the ICE finance center, which processes payments for the program office, we obtained a list of payments made by US-VISIT from August 2004 through July 2005. We did not obtain this level of detail for expenditures at AERC and GSA because these agencies are external to DHS; therefore, we do not report on the reliability of expenditure reporting by either agency.
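The control activities listed above lend themselves to a simple automated screening pass alongside manual review. The Python sketch below is a minimal illustration of such a pass—approval, funding-source, duplicate, and Prompt Payment Act timeliness checks—applied to a hypothetical payment file. The field names, records, and interest rate are assumptions, and the interest computation is simplified to simple daily interest rather than the Treasury rate tables and exact rules used under the act.

```python
# Minimal sketch of the invoice control checks described above, applied to a
# hypothetical payment file. Field names, records, and the interest rate are
# assumptions for illustration; the Prompt Payment Act interest computation is
# simplified to simple daily interest on the days beyond the 30-day due date.
from dataclasses import dataclass
from datetime import date
from typing import List, Optional

ASSUMED_ANNUAL_RATE = 0.045   # placeholder for the Treasury-published rate
PAYMENT_DUE_DAYS = 30

@dataclass
class Invoice:
    invoice_id: str
    vendor: str
    amount: float
    funding_source: str            # account actually charged
    proper_source: str             # account the contract terms call for
    received: date
    paid: date
    approved_by: Optional[str]     # e.g., contracting officer or COTR

def review(invoices: List[Invoice]) -> List[str]:
    findings = []
    seen = set()
    for inv in invoices:
        # Duplicate-payment screen: same vendor, invoice number, and amount.
        key = (inv.vendor, inv.invoice_id, inv.amount)
        if key in seen:
            findings.append(f"{inv.invoice_id}: possible duplicate payment")
        seen.add(key)
        # Approval and funding-source checks.
        if not inv.approved_by:
            findings.append(f"{inv.invoice_id}: no documented review/approval")
        if inv.funding_source != inv.proper_source:
            findings.append(f"{inv.invoice_id}: paid from improper funding source")
        # Prompt Payment Act timeliness check with a rough interest estimate.
        days_late = (inv.paid - inv.received).days - PAYMENT_DUE_DAYS
        if days_late > 0:
            interest = inv.amount * ASSUMED_ANNUAL_RATE * days_late / 365
            findings.append(f"{inv.invoice_id}: paid {days_late} days late, "
                            f"~${interest:,.2f} interest owed")
    return findings

# Hypothetical records; the second entry would be flagged as a possible duplicate.
sample = [
    Invoice("A-100", "Prime Co.", 250_000.00, "US-VISIT", "US-VISIT",
            date(2005, 3, 1), date(2005, 4, 20), "COTR"),
    Invoice("A-100", "Prime Co.", 250_000.00, "US-VISIT", "US-VISIT",
            date(2005, 3, 1), date(2005, 4, 22), "COTR"),
]
print("\n".join(review(sample)))
```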
From ICE's finance center, we also obtained and reviewed a list of Intra-governmental Payment and Collection (IPAC) system transactions paid by the US-VISIT program office to its federal trading partners through September 30, 2005. We requested a list of expenditures on program-related contracts managed by ICE; however, ICE was unable to provide a complete, reliable list. Officials at ICE's Debt Management Center did, however, provide a list of ICE's interagency agreements related to US-VISIT.
In assessing data reliability, we determined that the available data for this engagement were not sufficiently reliable for us to conduct statistical sampling or to base our conclusions solely on the data systems used by the program and other agencies managing US-VISIT–related contracts. Specifically, the contracting actions managed by the program office and these agencies were self-reported and could not be independently validated. Further, recent audit reports found that the financial system used by the program office and ICE was unreliable, and because of this system, among other reasons, the auditors could not issue an opinion on DHS's fiscal year 2004 and 2005 financial statements. Our conclusions, therefore, are based primarily on documentary reviews of individual contracting actions and events, and our findings cannot be projected in dollar terms to the whole program.
We conducted our work at DHS finance centers in Dallas, Texas, and Indianapolis, Indiana; CBP facilities in Washington, D.C., and Newington, Virginia; ICE facilities in Washington, D.C.; TSA facilities in Arlington, Virginia; the US-VISIT program offices in Rosslyn, Virginia; and GSA and AERC facilities in Fort Worth, Texas. Our work was conducted from March 2005 through April 2006, in accordance with generally accepted government auditing standards.
Appendix II: Comments from the Department of Homeland Security
Appendix III: Detailed Agency Evaluations
Staff Acknowledgments
In addition to the contacts named above, the following people made key contributions to this report: Deborah Davis, Assistant Director; Casey Keplinger, Assistant Director; Sharon Byrd; Shaun Byrnes; Barbara Collier; Marisol Cruz; Francine Delvecchio; Neil Doherty; Heather Dunahoo; Dave Hinchman; James Houtz; Stephanie Lee; David Noone; Lori Ryza; Zakia Simpson; and Charles Youman.
The Department of Homeland Security (DHS) has established a multibillion-dollar program--U.S. Visitor and Immigrant Status Indicator Technology (US-VISIT)--to control and monitor the pre-entry, entry, visa status, and exit of foreign visitors. To deliver system and other program capabilities, the program relies extensively on contractors, some of whom are managed directly by US-VISIT and some by other agencies (including both DHS agencies, such as Customs and Border Protection, and non-DHS agencies, such as the General Services Administration). Because of US-VISIT's heavy reliance on contractors to deliver program capabilities, GAO was asked to determine whether DHS has established and implemented effective controls for managing and overseeing US-VISIT-related contracts. US-VISIT-related contracts have not been effectively managed and overseen. The US-VISIT program office established and implemented certain nonfinancial controls for those contracts that it managed directly, such as verifying that contractor deliverables satisfied established requirements. However, it did not implement effective controls for overseeing its contracts managed by other DHS agencies and by non-DHS agencies. Moreover, effective financial controls were not in place on any contracts that GAO reviewed. The program office did not know the full extent of US-VISIT-related contract actions, and it had not performed key nonfinancial practices associated with understanding contractor performance in meeting the terms of these contracts. This oversight gap was exacerbated by the fact that the other agencies had not always established and implemented effective controls for managing their respective contracts. These other agencies directly managed more than half (56 percent) of the total US-VISIT-related contract obligations reported to GAO. The program office and other agencies did not implement effective financial controls. Without these controls, some agencies were unable to reliably report US-VISIT contracting expenditures. Further, the program office and these other agencies improperly paid and accounted for related invoices, including making duplicate payments and payments for non-US-VISIT services with funds designated for US-VISIT. According to the US-VISIT program official responsible for contract matters, the program office has focused on contracts that it manages directly and decided to rely on the responsible agencies to manage the other contracts. Further, it has decided to use other agencies to properly manage financial matters for their respective contracts, and it also decided to rely on another agency for its own financial management services. Without effective contract management and oversight controls, the program office does not know that required program deliverables and mission results will be produced on time and within budget, and that proper payments are made.
GAO/AIMD-98-136
Background Handling increasing service workloads is a critical challenge facing SSA. The agency is processing a growing number of claims for Social Security benefits. SSA estimates that it will face continued growth in beneficiaries over the next few decades as the population ages and life expectancies increase. The number of OASI and DI beneficiaries is estimated to increase substantially between calendar years 1997 and 2010—from approximately 44 million to over 54 million. Recognizing constraints on its staff and resources, SSA has moved to better serve its increasing beneficiary population and improve its productivity by redesigning its work processes and modernizing the computer systems used to support these processes. A key aspect of the modernization effort is the agency’s transition from its current centralized mainframe-based computer processing environment to a highly distributed client/server processing environment. IWS/LAN is expected to play a critical role in the modernization by providing the basic automation infrastructure for using client/server technology to support the redesigned work processes and improve the availability and timeliness of information to employees and appropriate users. Under this initiative, SSA plans to replace approximately 40,000 “dumb” terminals and other computer equipment used in over 2,000 SSA and state DDS sites with an infrastructure consisting of networks of intelligent workstations connected to each other and to SSA’s mainframe computers. The national IWS/LAN initiative consists of two phases. During phase I, SSA plans to acquire 56,500 workstations, 1,742 LANs, 2,567 notebook computers, systems furniture, and other peripheral devices. Implementation of this platform is intended to provide employees in the sites with office automation and programmatic functionality from one terminal. It also aims to provide the basic, standardized infrastructure to which additional applications and functionality can later be added. The projected 7-year life-cycle cost of phase I is $1.046 billion, covering the acquisition, installation, and maintenance of the IWS/LAN equipment. Under a contract with Unisys Corporation, SSA began installing equipment for this phase in December 1996; it anticipates completing these installations in June 1999. Through fiscal year 1997, SSA had reported spending approximately $565 million on acquiring workstations, LANs, and other services. Phase II is intended to build upon the IWS/LAN infrastructure provided through the phase I effort. Specifically, during this phase, SSA plans to acquire additional hardware and software, such as database engines, scanners, bar code readers, and facsimile and imaging servers, needed to support future process redesign initiatives and client/server applications. SSA plans to award a series of phase II contracts in fiscal year 1999 and to carry out actual installations under these contracts during fiscal years 1999 through 2001. Currently, SSA is developing the first major programmatic software application to operate on IWS/LAN. This software—the Reengineered Disability System (RDS)—is intended to support SSA’s modernized disability claims process in the new client/server environment. 
Specifically, RDS is intended to automate and improve the Title II and Title XVI disability claims processes from the initial claims-taking in the field office, to the gathering and evaluation of medical evidence in state DDSs, to payment execution in the field office or processing center and the handling of appeals in hearing offices. In August 1997, SSA began pilot testing RDS for the specific purposes of (1) assessing the performance, cost, and benefits of this software and (2) determining supporting IWS/LAN phase II equipment requirements. Agencies, in undertaking systems modernization efforts, are required by the Clinger-Cohen Act of 1996 to ensure that their information technology investments are effectively managed and significantly contribute to improvements in mission performance. The Government Performance and Results Act of 1993 requires agencies to set goals, measure performance, and report on their accomplishments. One of the challenges that SSA faces in implementing IWS/LAN is ensuring that the planned systems and other resources are focused on helping its staff process all future workloads and deliver improved service to the public. In a letter and a report to SSA in 1993 and 1994, respectively, we expressed concerns about SSA’s ability to measure the progress of IWS/LAN because it had not established measurable cost and performance goals for this initiative. In addition, SSA faces the critical challenge of ensuring that all of its information systems are Year 2000 compliant. By the end of this century, SSA must review all of its computer software and make the changes needed to ensure that its systems can correctly process information relating to dates. These changes affect not only SSA’s new network but computer programs operating on both its mainframe and personal computers. In October 1997, we reported that while SSA had made significant progress in its Year 2000 efforts, it faced the risk that not all of its mission-critical systems will be corrected by the turn of the century. At particular risk were the systems used by state DDSs to help SSA process disability claims. Objectives, Scope, and Methodology Our objectives were to (1) determine the status of SSA’s implementation of IWS/LAN, (2) assess whether SSA and state DDS operations have been disrupted by the installations of IWS/LAN equipment, and (3) assess SSA’s practices for managing its investment in the IWS/LAN initiative. To determine the status of SSA’s implementation of IWS/LAN, we analyzed key project documentation, including the IWS/LAN contract, project plans, and implementation schedules. We observed implementation activities at select SSA field offices in Alabama, Florida, Georgia, Minnesota, South Carolina, Texas, and Virginia; at program service centers in Birmingham, Alabama, and Philadelphia, Pennsylvania; and at teleservice centers in Minneapolis, Minnesota, and Fort Lauderdale, Florida. In addition, we reviewed IWS/LAN plans and observed activities being undertaken by state DDS officials in Alabama, Georgia, and Minnesota. We also interviewed representatives of the IWS/LAN contractor—Unisys Corporation—to discuss the status of the implementation activities. To assess whether SSA and state DDS operations have been disrupted by the installations of IWS/LAN equipment, we reviewed planning guidance supporting the implementation process, such as the IWS/LAN Project Plan, and analyzed reports summarizing implementation activities and performance results identified during pilot efforts. 
We also interviewed SSA site managers, contractor representatives, and IWS/LAN users to identify installation and/or performance issues, and observed operations in selected SSA offices where IWS/LAN equipment installations had been completed. In addition, we discussed IWS/LAN problems and concerns with DDS officials in 10 states: Alabama, Arkansas, Arizona, Delaware, Florida, Louisiana, New York, Virginia, Washington, and Wisconsin, and with the president of the National Council of Disability Determination Directors, which is a representative body for all state DDSs. To assess SSA's management of the IWS/LAN investment, we applied our guide for evaluating and assessing how well federal agencies select and manage their investments in information technology resources. We evaluated SSA's responses to detailed questions about its investment review process that were generated from the evaluation guide and compared the responses to key agency documents generated to satisfy SSA's process requirements. We also reviewed IWS/LAN cost, benefit, and risk analyses to assess their compliance with OMB guidance. We did not, however, validate the data contained in SSA's documentation. We performed our work from July 1997 through March 1998 in accordance with generally accepted government auditing standards. We requested comments on a draft of this report from the Commissioner of Social Security or his designee. The Commissioner provided us with written comments, which are discussed in the "Agency Comments and Our Evaluation" section and are reprinted in appendix I.
SSA Met Its IWS/LAN Milestones Through March 1998, but Future Milestones May Be Missed
Using a strategy that includes installing workstations and LANs in up to 20 sites per weekend, SSA, through mid-March 1998, had generally met its phase I schedule for implementing IWS/LAN. However, the contractor installing IWS/LAN has expressed concerns about the availability of the workstations specified in the contract, raising questions as to whether they can continue to be acquired. In addition, the pilot effort that SSA began in August 1997 to assess the performance, cost, and benefits of RDS and identify IWS/LAN phase II requirements has experienced delays that could affect the schedule for implementing phase II of this initiative.
IWS/LAN Phase I Implemented on Schedule Through March 1998
Under the phase I schedule, 56,500 intelligent workstations and 1,742 LANs are to be installed in approximately 2,000 SSA and state DDS sites between December 1996 and June 1999. The schedule called for approximately 30,500 workstations and about 850 LANs to be installed by mid-March 1998. According to SSA records, the agency generally met this schedule with the actual installation of 31,261 workstations and 850 LANs by March 15, 1998. These installations occurred at 753 SSA sites and 20 DDS sites (covering 12 states and the federal DDS). SSA reported in its fiscal year 1997 accountability report that the percentage of front-line employees using IWS/LAN increased to 50.2 percent—exceeding the fiscal year 1997 Results Act goal by 2.2 percentage points. The standard intelligent workstation configuration includes a 100-megahertz Pentium personal computer with 32 megabytes of random access memory, the Windows NT 4.0 operating system, a 1.2-gigabyte hard (fixed) disk drive, a 15-inch color display monitor, and a 16-bit network card with adaptation cable.
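This contracted configuration is, in effect, a baseline that can be compared against the requirements of the software SSA intends to run on it, such as the additional memory that RDS testing later surfaced (discussed below). The Python sketch below illustrates that kind of comparison; it is not SSA tooling, and apart from the contract baseline and the 64-megabyte memory figure noted later in this report, the requirement values are placeholders.

```python
# A small illustration (not SSA tooling) of treating the contracted workstation
# baseline as structured data and checking it against an application's minimum
# requirements. The 64 MB figure reflects the memory need noted in the report;
# the other requirement values are placeholders for illustration.
from dataclasses import dataclass
from typing import List

@dataclass
class WorkstationSpec:
    cpu_mhz: int
    ram_mb: int
    disk_gb: float
    os: str

CONTRACT_BASELINE = WorkstationSpec(cpu_mhz=100, ram_mb=32, disk_gb=1.2, os="Windows NT 4.0")

def shortfalls(baseline: WorkstationSpec, required: WorkstationSpec) -> List[str]:
    """Return the attributes on which the baseline falls short of a requirement."""
    gaps = []
    if baseline.cpu_mhz < required.cpu_mhz:
        gaps.append(f"CPU: {baseline.cpu_mhz} MHz < {required.cpu_mhz} MHz")
    if baseline.ram_mb < required.ram_mb:
        gaps.append(f"RAM: {baseline.ram_mb} MB < {required.ram_mb} MB")
    if baseline.disk_gb < required.disk_gb:
        gaps.append(f"Disk: {baseline.disk_gb} GB < {required.disk_gb} GB")
    return gaps

# Hypothetical minimum profile for RDS-era client/server software.
rds_profile = WorkstationSpec(cpu_mhz=100, ram_mb=64, disk_gb=1.2, os="Windows NT 4.0")
print(shortfalls(CONTRACT_BASELINE, rds_profile))   # -> ['RAM: 32 MB < 64 MB']
```

A check of this kind becomes more useful as phase II features—database engines, imaging, and the like—add their own minimum requirements to the baseline.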
Last year the contractor, Unisys, submitted a proposal to upgrade the intelligent workstation by substituting a higher speed processor at additional cost. Unisys noted that it was having difficulty obtaining 100-megahertz workstations. However, SSA did not accept Unisys’ upgrade proposal. Further, the Deputy Commissioner for Systems stated that SSA did not believe it was necessary to upgrade to a faster processor because the 100-megahertz workstation meets its current needs. For its modernization efforts to succeed, SSA must have the necessary workstations to support its processing needs. This is particularly important given the agency’s expressed intent to operate future client/server software applications on IWS/LAN to support redesigned work processes. Adding database engines, facsimile, imaging, and other features like those planned by SSA during phase II of the IWS/LAN initiative could demand a workstation with more memory, larger disk storage, and a processing speed higher than 100 megahertz. Personal computers available in today’s market operate at about three times this speed. Preliminary testing of the RDS software has already shown the need for SSA to upgrade the workstation’s random access memory from 32 megabytes to 64 megabytes. However, systems officials told us that their tests have not demonstrated a need for a faster workstation. As discussed in the following section, SSA is encountering problems and delays in completing its tests of the RDS software. In addition, at the conclusion of our review, SSA had begun holding discussions with Unisys regarding the availability of the 100-megahertz workstations. Problems in RDS Pilot Could Delay IWS/LAN Phase II Implementation SSA has experienced problems and delays in the pilot effort that it initiated in August 1997 to assess the performance, cost, and benefits of RDS and identify IWS/LAN phase II requirements. Under the pilot, an early release of the software is being tested in one SSA field office and the federal DDS to acquire feedback from end users regarding its performance. SSA planned to make improvements to the software based on these pilot results and then expand its testing of the software to all SSA and DDS components in the state of Virginia. The results of the pilot testing in Virginia were to be used in determining hardware and software requirements to support IWS/LAN phase II acquisitions, beginning in fiscal year 1999. SSA encountered problems with RDS during its initial pilot testing. For example, systems officials stated that, using RDS, the reported productivity of claims representatives in the SSA field office dropped. Specifically, the officials stated that before the installation of RDS, each field office claims representative processed approximately five case interviews per day. After RDS was installed, each claims representative could process only about three cases per day. At the conclusion of our review, systems officials stated that because the RDS software has not performed as anticipated, SSA has entered into a contract with Booz-Allen and Hamilton to independently evaluate and recommend options for proceeding with the development of RDS. In addition, SSA has delayed expanding the pilot by 9 months—from October 1997 to July 1998. This is expected to further delay SSA’s national roll-out and implementation of RDS. 
Moreover, because RDS is essential to identifying IWS/LAN phase II requirements, the Deputy Commissioner for Systems has stated that delaying the pilot will likely result in slippages in SSA's schedule for acquiring and implementing phase II equipment.
SSA Offices Reported Smooth Transition to IWS/LAN, but Network Management Concerns in State Offices Could Affect Service to the Public
Nationwide implementation of IWS/LAN is a complex logistical task for SSA, requiring coordination of site preparation (such as electrical wiring and cabling) in over 2,000 remote locations, installation of contractor-supplied furniture and intelligent workstation components, and training of over 70,000 employees in SSA and DDS locations. Moreover, once installed, these systems must be managed and maintained in a manner that ensures consistent and quality service to the public. During our review, staff in the 11 SSA offices that we visited generally stated that they had not experienced any significant disruptions in their ability to serve the public during the installation and operation of IWS/LAN. They attributed the smooth transition to SSA's implementation of a well-defined strategy for conducting site preparations, equipment installations, and employee training. Part of that strategy required equipment installation and testing to be performed on weekends so that the IWS/LAN equipment would be operational by the start of business on Monday. In addition, staff were rotated through training and client service positions and augmented with staff borrowed from other field offices to maintain service to the public during the post-installation training period. Further, because the new workstations provide access to the same SSA mainframe software applications as did the old terminals and LAN equipment, staff were able to process their workloads in much the same manner as in the previous environment.
State DDSs generally were less satisfied with the installation and operation of IWS/LAN in their offices. Administrators and systems staff in the 10 DDS sites that we visited expressed concerns about the loss of network management and control over IWS/LAN operations in their offices and dissatisfaction with the service and technical support received from the contractor following the installation of IWS/LAN equipment. In particular, SSA initially planned to centrally manage the operation and maintenance of IWS/LAN equipment. However, DDS officials in 7 of the 10 offices expressed concern that with SSA managing their networks and operations, DDSs can no longer make changes or fixes to their equipment locally and instead must rely on SSA for system changes or network maintenance. Eight of the 10 DDSs reported that under this arrangement, the IWS/LAN contractor had not responded in a timely manner to some of their requests for service, resulting in disruptions to their operations. For example, a DDS official in one state told us that at the time of our discussion, she had been waiting approximately 2 weeks for the IWS/LAN contractor to repair a hard disk drive in one of the office's workstations. In January 1998, the National Council of Disability Determination Directors (NCDDD), which represents the state DDSs, wrote to SSA to express the collective concerns of the DDSs regarding SSA's plan to manage and control their IWS/LAN networks. NCDDD recommended that SSA pilot the IWS/LAN equipment in one or more DDS offices to evaluate options for allowing the states more flexibility in managing their networks.
It further proposed that IWS/LAN installations be delayed for states whose operations would be adversely affected by the loss of network control. At least one state DDS—Florida—refused to continue with the roll-out of IWS/LAN in its offices until this issue is resolved. Because IWS/LAN is expected to correct Year 2000 deficiencies in some states’ hardware, however, NCDDD cautioned that delaying the installation of IWS/LAN could affect the states’ progress in becoming Year 2000 compliant. At the conclusion of our review, the Deputy Commissioner for Systems told us that SSA had begun holding discussions with state officials in early March 1998 to identify options for addressing the states’ concerns about the management of their networks under the IWS/LAN environment. SSA Will Not Measure Benefits Derived From IWS/LAN Federal legislation and OMB directives require agencies to manage major information technology acquisitions as investments. In implementing IWS/LAN, SSA has followed a number of practices that are consistent with these requirements, such as involving executive staff in the selection and management of the initiative and assessing the cost, benefits, and risks of the project to justify its acquisition. However, SSA’s practices have fallen short of ensuring full and effective management of the investment in IWS/LAN because it did not include plans for measuring the project’s actual contributions to improved mission performance. Management Oversight and Analysis Supported IWS/LAN Implementation According to the Clinger-Cohen Act and OMB guidance, effective technology investment decision-making requires that processes be implemented and data collected to ensure that (1) project proposals are funded on the basis of management evaluations of costs, risks, and expected benefits to mission performance and (2) once funded, projects are controlled by examining costs, the development schedule, and actual versus expected results. These goals are accomplished by considering viable alternatives, preparing valid cost-benefit analyses, and having senior management consistently make data-driven decisions on major projects. SSA followed an established process for acquiring IWS/LAN that met a number of these requirements. For example, senior management reviewed and approved the project’s acquisition and has regularly monitored the progress of the initiative against competing priorities, projected costs, schedules, and resource availability. SSA also conducted a cost-benefit analysis to justify its implementation of IWS/LAN. This analysis was based on comparisons of the time required to perform certain work tasks before and after the installation of IWS/LAN equipment in 10 SSA offices selected for a pilot study during January through June 1992. For example, the pilot tested the time savings attributed to SSA employees not having to walk from their desks or wait in line to use a shared personal computer. Based on the before and after time savings identified for each work task, SSA projected annual savings from IWS/LAN of 2,160 workyears that could be used to process growing workloads and improve service. In a review of the IWS/LAN initiative in 1994, the Office of Technology Assessment (OTA) found SSA’s cost-benefit analysis to be sufficient for justifying the acquisition of IWS/LAN. 
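The arithmetic behind a projection like this is straightforward: minutes saved per task, multiplied by annual task volume, summed across tasks, and divided by the productive hours in a workyear. The sketch below illustrates that roll-up in Python; the task names, minutes, volumes, and hours-per-workyear figure are hypothetical placeholders rather than the values SSA used, and the point is only the pattern of the calculation (the report's actual projection was 2,160 workyears).

```python
# Hypothetical illustration of how per-task time savings of the kind measured in
# SSA's 1992 pilot roll up to a workyear-savings estimate. None of the task
# names, minutes, or volumes below come from the report; only the arithmetic
# pattern is illustrated.
PRODUCTIVE_HOURS_PER_WORKYEAR = 1_720   # assumed value for illustration

# (task, minutes saved per occurrence, occurrences per year) -- all hypothetical
task_savings = [
    ("claims interview lookup", 2.0, 30_000_000),
    ("walk/wait for shared PC",  4.0, 10_000_000),
    ("post-entitlement action",  1.5, 20_000_000),
]

total_hours_saved = sum(minutes * volume for _, minutes, volume in task_savings) / 60
workyears_saved = total_hours_saved / PRODUCTIVE_HOURS_PER_WORKYEAR
print(f"Estimated savings: {workyears_saved:,.0f} workyears per year")
```

The same roll-up, rerun with measured post-implementation data rather than pilot assumptions, is essentially what a performance-measurement process would need in order to validate the projected savings discussed in the following section.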
SSA Is Not Using Key Performance Measures to Assess the Impact of IWS/LAN on Mission Performance Although SSA followed certain essential practices for acquiring IWS/LAN, it has not yet implemented performance goals and measures to assess the impact of this investment on productivity and mission performance. Under the Clinger-Cohen Act, agencies are to establish performance measures to gauge how well their information technology supports program efforts and better link their information technology plans and usage to program missions and goals. Successful organizations rely heavily upon performance measures to operationalize mission goals and objectives, quantify problems, evaluate alternatives, allocate resources, track progress, and learn from mistakes. Performance measures also help organizations determine whether their information systems projects are really making a difference, and whether that difference is worth the cost. The Clinger-Cohen Act also requires that large information technology projects be implemented incrementally and that each phase should be cost effective and provide mission-related benefits. It further requires that performance measures be established for each phase to determine whether expected benefits were actually achieved. In our September 1994 report, we noted that as part of an effort with the General Services Administration (GSA) to develop a “yardstick” to measure the benefits that IWS/LAN will provide the public, SSA had initiated actions aimed at identifying cost and performance goals for IWS/LAN. SSA identified six categories of performance measures that could be used to determine the impact of IWS/LAN technology on service delivery goals and reengineering efforts. It had planned to establish target productivity gains for each measure upon award of the IWS/LAN contract. GSA was to then use these measures to assess IWS/LAN’s success. As of March 1998, however, SSA had established neither the target goals to help link the performance measures to the agency’s strategic objectives nor a process for using the measures to assess IWS/LAN’s impact on agency productivity and mission performance. In addition, although the Clinger-Cohen Act and OMB guidance state that agencies should perform retrospective evaluations after completing an information technology project, SSA officials told us that they do not plan to conduct a post-implementation review of the IWS/LAN project once it is fully implemented. According to the Director of the Information Technology Systems Review Staff, SSA currently does not plan to use any of the measures to assess the project’s impact on agency productivity and mission performance because (1) the measures had been developed to fulfill a specific GSA procurement requirement that no longer exists and (2) it believes the results of the pilots conducted in 1992 sufficiently demonstrated the savings that will be achieved with each IWS/LAN installation. It is essential that SSA follow through with the implementation of a performance measurement process for each phase of the IWS/LAN effort. Measuring performance is necessary to show how this investment is contributing to the agency’s goal of improving productivity. Among leading organizations that we have observed, managers use performance information to continuously improve organizational processes, identify performance gaps, and set improvement goals. 
The performance problems that SSA has already encountered in piloting software on IWS/LAN make it even more critical for SSA to implement performance measures and conduct post-implementation reviews for each phase of this initiative. SSA believes that the results of its pilot effort undertaken in 1992 to justify the acquisition of IWS/LAN sufficiently demonstrate that it will achieve its estimated workyear savings. However, the pilot results are not an acceptable substitute for determining the actual contribution of IWS/LAN to improved productivity. In particular, although the pilots assessed task savings for specific functions performed in each office, they did not demonstrate IWS/LAN’s actual contribution to improved services gained through improvements in the accuracy of processing or improvements in processing times. In addition, OTA noted in its 1994 review that the relatively small number of pilots may not have adequately tested all the potential problems that could arise when the equipment is deployed at all of SSA’s sites. Further, information gained from post-implementation reviews is critical for improving how the organization selects, manages, and uses its investment resources. Without a post-implementation review of each phase of the IWS/LAN project, SSA cannot validate projected savings, identify needed changes in systems development practices, and ascertain the overall effectiveness of each phase of this project in serving the public. Post-implementation reviews also serve as the basis for improving management practices and avoiding past mistakes. Conclusions SSA is relying on IWS/LAN to play a vital role in efforts to modernize its work processes and improve service delivery, and it has made good progress in implementing workstations and LANs that are a part of this effort. However, equipment availability and capability issues, problems in developing software that is to operate on the IWS/LAN workstations, and concerns among state DDSs that their equipment will not be adequately managed and serviced by SSA, threaten the continued progress and success of this initiative. Moreover, absent target goals and a defined process for measuring performance, SSA will not be able to determine whether its investment in each phase of IWS/LAN is yielding expected improvements in service to the public. Recommendations To strengthen SSA’s management of its IWS/LAN investment, we recommend that the Commissioner of Social Security direct the Deputy Commissioner for Systems to immediately assess the adequacy of workstations specified in the IWS/LAN contract, and based on this assessment, determine (1) the number and capacity of workstations required to support the IWS/LAN initiative and (2) its impact on the IWS/LAN implementation schedule; work closely with state DDSs to promptly identify and resolve network management concerns and establish a strategy for ensuring the compliance of those states relying on IWS/LAN hardware for Year 2000 corrections; establish a formal oversight process for measuring the actual performance of each phase of IWS/LAN, including identifying the impact that each IWS/LAN phase has on mission performance and conducting post-implementation reviews of the IWS/LAN project once it is fully implemented. Agency Comments and Our Evaluation In commenting on a draft of this report, SSA generally agreed with the issues we identified and described actions that it is taking in response to our recommendations to resolve them. 
These actions include (1) determining remaining IWS/LAN workstation needs, (2) addressing state DDS network management concerns and related Year 2000 compliance issues, and (3) implementing a performance measurement strategy for the IWS/LAN initiative. These actions are important to the continued progress and success of the IWS/LAN initiative, and SSA must be diligent in ensuring that they are fully implemented.

In responding to our first recommendation to assess the adequacy of workstations specified in the IWS/LAN contract, SSA stated that it had determined the number of workstations required to complete the IWS/LAN implementation and was working on a procurement strategy and schedule for this effort. SSA also stated that its current tests do not show a need for workstations with a processing speed higher than 100 megahertz. The agency further noted that terms and conditions in the IWS/LAN contract will enable it to acquire higher powered computers or other technology upgrades when the need arises. As discussed earlier in our report, it is important that SSA have the necessary workstations to support its processing needs in the redesigned work environment. Therefore, as SSA continues its aggressive pace in implementing IWS/LAN, it should take all necessary steps to ensure that it has fully considered its functional requirements over the life of these workstations. Doing so is especially important because SSA has encountered problems and delays in completing tests of the RDS software that are vital to determining future IWS/LAN requirements.

Our second recommendation was that SSA work closely with state DDSs to identify and resolve network management concerns and establish a strategy for ensuring the compliance of those states relying on IWS/LAN hardware for Year 2000 corrections. SSA identified various actions that, if successfully implemented, could help resolve DDS concerns regarding network management and the maintenance of IWS/LAN equipment and could facilitate their efforts in becoming Year 2000 compliant.

In responding to our final recommendation that it establish a formal oversight process for measuring the actual performance of each phase of IWS/LAN, SSA agreed that performance goals and measures should be prescribed to determine how well information technology investments support its programs and provide expected results. SSA stated that it is determining whether expected benefits are being realized from IWS/LAN installations through in-process and post-implementation assessments. SSA further noted that its planning and budgeting process ensures that it regularly assesses the impact of IWS/LAN on agency productivity and mission performance. However, during the course of our review, SSA could not provide specific information to show how its planning and budgeting process and its data on workyear savings resulting from IWS/LAN installations were being used to assess the project’s actual contributions to improved productivity and mission performance. In addition, two of the three measures that SSA identified in its response (the number of IWS/LANs installed per month and existing terminal redeployment and phase-out) provide information that is more useful for assessing the progress of SSA’s IWS/LAN installation and terminal redeployment efforts than for gauging the project’s impact on mission performance. To ensure that its investments are sound, it is crucial that SSA develop measures that assess mission-related benefits and use them in making project decisions.
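To illustrate the difference between a progress measure (such as installations completed per month) and a mission-related benefit measure, the following sketch shows one way a post-implementation review could test whether projected workyear savings were actually realized for a given installation phase. The phase labels and savings figures are hypothetical assumptions for illustration; they are not SSA estimates or results.

```python
# Minimal sketch of a post-implementation savings check, assuming hypothetical figures.

projected_workyear_savings = {"phase 1": 1200, "phase 2": 1500}
actual_workyear_savings = {"phase 1": 950, "phase 2": None}  # None = no review completed yet


def post_implementation_review(phase: str) -> str:
    """Compare projected and observed workyear savings for one installation phase."""
    projected = projected_workyear_savings[phase]
    actual = actual_workyear_savings.get(phase)
    if actual is None:
        return f"{phase}: no post-implementation review yet; projected savings remain unvalidated"
    realized = actual / projected
    shortfall = projected - actual
    return (f"{phase}: {actual} of {projected} projected workyears realized "
            f"({realized:.0%}); shortfall of {shortfall} workyears to investigate")


for phase in projected_workyear_savings:
    print(post_implementation_review(phase))
```

A review built on this kind of comparison, rather than on installation counts alone, would give decision makers evidence about whether each phase is delivering the benefits used to justify the investment.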
We will continue to monitor SSA’s efforts in assessing the benefits of IWS/LAN installations through its in-process and post-implementation assessments and its planning and budgeting process.

We are sending copies of this report to the Commissioner of Social Security; the Director of the Office of Management and Budget; appropriate congressional committees; and other interested parties. Copies will also be made available to others upon request.

Please contact me at (202) 512-6253 or by e-mail at [email protected] if you have any questions concerning this report. Major contributors to this report are listed in appendix II.

Comments From the Social Security Administration

Major Contributors to This Report

Accounting and Information Management Division, Washington, D.C.
Atlanta Field Office

Pamlutricia Greenleaf, Senior Evaluator
Kenneth A. Johnson, Senior Information Systems Analyst
Pursuant to a congressional request, GAO reviewed the Social Security Administration's (SSA) ongoing efforts to implement its intelligent workstation/local area network (IWS/LAN) project, focusing on: (1) determining the status of SSA's implementation of IWS/LAN; (2) assessing whether SSA and state disability determination service (DDS) operations have been disrupted by the installations of IWS/LAN equipment; and (3) assessing SSA's practices for managing its investment in IWS/LAN. GAO noted that: (1) SSA has moved aggressively in installing intelligent workstations and LANs since initiating IWS/LAN acquisitions in December 1996; (2) as of mid-March 1998, it had completed the installation of about 31,000 workstations and 850 LANs, generally meeting its implementation schedule for phase I of the initiative; (3) the contractor that is installing IWS/LAN has expressed concerns about the future availability of the intelligent workstations that SSA is acquiring; (4) problems encountered in developing software intended to operate on IWS/LAN could affect SSA's planned schedule for proceeding with phase II of this initiative; (5) staff in SSA offices generally reported no significant disruptions in their ability to serve the public during the installation and operation of their IWS/LAN equipment; (6) some state DDSs reported that SSA's decision to manage and control DDS networks remotely and the IWS/LAN contractor's inadequate responses to DDS' service calls have led to disruptions in some of their operations; (7) because IWS/LAN is expected to correct year 2000 deficiencies in some states' hardware, delaying the installation of IWS/LAN could affect states' progress in becoming year 2000 compliant; (8) consistent with the Clinger-Cohen Act of 1996 and Office of Management and Budget guidance, SSA has followed some of the essential practices required to effectively manage its IWS/LAN investment; (9) SSA has not established essential practices for measuring IWS/LAN's contribution to improving the agency's mission performance; (10) although the agency has developed baseline data and performance measures that could be used to assess the project's impact on mission performance, it has not defined target performance goals or instituted a process for using the measures to assess the impact of IWS/LAN on mission performance; (11) SSA does not plan to conduct a post-implementation review of IWS/LAN once it is fully implemented; and (12) without targeted goals and a defined process for measuring performance both during and after the implementation of IWS/LAN, SSA cannot be assured of the extent to which this project is improving service to the public or that it is actually yielding the savings anticipated from this investment.