Technological Decision-Making at the National Level

by William F. Schreiber
Massachusetts Institute of Technology

wfs@image.mit.edu


1. Introduction

In 1987, the Federal Communications Commission initiated an Inquiry, initially intended to last two years, that evolved into a process for setting standards for digital terrestrial television broadcasting in the US. I participated in and followed this process carefully up to its culmination with the Fourth Report and Order 96-493 in December 1996, available at the FCC Web site. The FCC has run many such Inquiries, and has established elaborate procedures intended to ensure that each issue is thoroughly and properly explored and that decisions will be rendered that serve the "public interest, convenience, and necessity." In spite of having all the required apparatus and procedures in place, the final decision, in my opinion, had many serious flaws, as a result of which the transition to digital television (DTV) may well not be successful.

Having spent so much time and effort since 1983 on this subject, and having been convinced, after careful study, that a standard could have been developed that would have greatly eased the transition from analog broadcasting and that would have met the needs of all the participants, I have been highly motivated to study the reasons for failure. This study has led me to believe that the failure was methodological, and was likely to occur in any governmental decision-making involving technology. The main flaw appears to have been failure to separate the political from the scientific or technological (sci/tech) aspects of the issue.

All acts of government affect different entities and individuals in the society differently. By "political" decision, I mean the selection of winners and losers. By "sci/tech" decision, I refer to a process that can be carried out by objective methods based on careful analysis. I believe that the problems in the DTV Inquiry were mainly due to the failure to separate these two kinds of decisions.

If the decision-making process that failed in the FCC's DTV case can be improved, the improvements could likely be applied to many other regulatory agencies. In this paper, I have attempted to devise a scheme that would avoid these problems and therefore produce a better result. A key element is the separation of the political from the technological or scientific aspects of decisions. In a democracy, it is the people's sovereign right to determine the distribution of the good things that the country provides. The political aspects would therefore be dealt with by Congress or perhaps, in some cases, by regulatory agencies following detailed Congressional guidelines. The technological aspects would be dealt with by the regulatory agencies using the best unbiased sci/tech advice available.

2. The Proper Locus of Decision-Making in a Democratic Society

Before we can address the question of just how government should make these decisions, we must deal with a point of view that has gained a substantial following in the last 25 years. Those holding this view believe that most decisions, especially those concerning money, can be left to the unsupervised market. Faith in this principle has been shaken by recent events in the economies of Asian countries, but the view that the market is always right is far from dead. My own opinion is that, in order to achieve the proper aims of a democratic society, among which is the promotion of the general welfare, government action is often necessary. No one argues that we can do without traffic lights, for example, and there is growing awareness that "traffic lights" are needed in many places besides roads. There is no necessary conflict between promoting the general welfare by government action and allowing each person to seek his own maximum benefit, since everyone generally benefits from ensuring that everyone else has a reasonable standard of living. That is a large topic, far beyond the scope of this paper; here we will assume that we are dealing with issues on which there is general agreement that at least some government action is required at the national level.

2.1 Issues Properly Decided by Markets

The classical market idea is that there are many buyers and sellers, no one of whom alone affects prices very much. In the market for goods and services, supply and demand are balanced at equilibrium, and prices settle at a point where the buyers are just able to afford to fulfill their needs and producers make a very small profit. As products of perceived superiority come to market, they displace inferior products. In the market for labor, employers pay as little as possible, and workers try to maximize their pay. At equilibrium, the cost of labor permits the employer to make a very small profit while producing just what can be sold, and wages are just sufficient to sustain workers and their families. There is no role for government in this arrangement, since the "invisible hand" of the market keeps everything in balance. However, there is nothing in the model that sets the absolute level of production or that controls the distribution of income. This rather grim picture is lightened by the principle that, as productivity increases and more goods and services can be produced with a given amount of labor, everyone's standard of living rises. A common metaphor is that a rising tide -- i.e., the GDP -- lifts all boats, but the economy has not worked that way for the last 25 years. A more accurate metaphor for this period is that unless all the boats rise together, a phenomenon that never occurs without government intervention, a rising tide sinks the lowest ones.

Obviously, the economic jungle of the classical free market would not meet the aims of the founding fathers in crafting the Constitution. In addition, the real market is not "free," in that it does not have perfect competition; monopoly power exists at least to some extent. All developed countries have added regulations to establish a criminal code, to protect health and safety, to provide a "safety net" for those who are not able to compete, to protect children and the elderly, to reduce fraud, and for other agreed-upon purposes. All developed countries also levy taxes that are used for defense, education, construction and maintenance of physical infrastructure, police and judicial functions, etc. However, in spite of these various limitations and imperfections, the somewhat regulated free market that exists does work rather well in many areas, particularly the provision of consumer goods and services.

Since the early seventies, the idea that the country would be better off with less regulation has become the prevailing viewpoint of a majority of mainstream economists and politicians. Airline deregulation is an example. The results are mixed, as there are fewer airlines and less competition than ever. The larger airlines have all introduced the "hub-and-spoke" route system, which has made travel less convenient by reducing the number of direct flights between many cities. Average fares are lower, but the fare structure has become extremely complicated and tickets not bought well in advance are much more expensive. It is of interest that no one has yet suggested deregulating airline safety. After all, in a free market, the public would surely give more business to airlines with better safety records. Perhaps this is related to the fact that business executives and members of Congress, who fly a lot, are not willing to leave their personal safety to the free market. In addition, they have assistants to arrange their travel plans and almost never pay their own fares, making them indifferent to the complexities just mentioned.

The Telecommunications Act of 1996 was intended to promote competition in the telecommunications industry through deregulation, but the main effect so far seems to be consolidation of the industry into a rather small group of companies of unprecedented size. Deregulation of the electric power industry is now underway, and many observers, including this author, feel that we are in for some unpleasant surprises.

2.2 Issues That Are Properly Governmental Decisions

Among the issues that require action by government, the appropriate method of deciding what action to take depends on the degree to which technology is involved.

Purely political issues. By a purely political issue, I mean one in which governmental action produces winners and losers, and in which the technology to be used is not a significant issue. For example, if the United States adopted a single-payer health-care system of the kind used in most other developed countries, those profiting from the present system would lose out and could be expected to object vigorously. Of course, the objections would be cast as predictions of a decline in the quality of health care, while the prospect of lost profits would not be mentioned. If the nation decided to pay the tuition for every student attending college, everyone in the higher-education business would profit, but the advantages of such a system would surely be put in terms of the prospective benefit to students and country from raising the educational level of the populace. The structure of the Social Security system, regulation of financial markets, minimum wage laws, and the welfare and unemployment-insurance systems are all examples of purely political issues.

Deciding on the merits of such cases, which have nothing to do with the technology that might be employed, is a political matter to be decided by Congress. One would hope that such decisions would be made on the basis of careful analysis rather than as a result of pressure from vested interests. The legislative system is not perfect, but it is the best that can be done in a democracy. Improving the system so as to get better decisions is primarily a question of devising an electoral system that produces officeholders of high quality who are truly representative of the electorate and are not beholden to campaign contributors.

Purely technological issues. By a purely technological issue, I mean one in which the decision can be made entirely on an objective sci/tech basis. Generally, the objective or function of the structure or process would have first been specified by a political body. For example, the route and capacity of a new highway might have been specified and the question to be decided is the design of the roadbed. A good design can be chosen on the basis of cost and performance by a government agency, using expert knowledge obtained from unbiased sources. The decision will usually have political (win/lose) consequences, but these should not enter into the decision. For example, concrete or macadam could be selected, producing a benefit for the producer of the chosen material. What would be wrong (and corrupt) would be to specify the material so as to give the business to a particular company. A problem of exactly this nature is under consideration at the moment. It concerns paper for use in printing US currency, which has been supplied only by the Crane Paper Company for many years.

Just because an issue is truly sci/tech does not mean that it can be settled quickly or easily, or that all qualified professionals will agree. Global warming is a case in point. However, on issues of this kind, everyone at least agrees on what has to be done and what kind of information is needed in order to make a judgment highly likely to be correct. Even when we do not have all the information, we can at least make an informed judgment.

Mixed issues. Most issues up for consideration have both political and technological aspects, corresponding roughly to the questions of what to do and how to do it. Examples of such mixed issues are telecommunication networks and environmental regulations. Both come under the interstate commerce clause of the constitution. There is little argument as to whether the federal government has the power to enact regulations in these fields, and it is clear that both involve political as well as sci/tech questions.

It is obvious that the market will not produce clean air and water by itself. Government action is therefore called for, but there is a great deal of argument as to the degree of cleanliness that is desirable and the amount of money that ought to be spent to achieve it. Cost/benefit analysis has been suggested as a help in decision-making. This might be useful in comparing two methods of reducing infant mortality, for example, but it certainly does not eliminate all quandaries, as the basic assumptions of the electorate, such as the idea that all children deserve an equal chance to live a satisfying life, are hard to quantify. Attempts are being made to reduce pollution by charging for the right to emit noxious material rather than simply forbidding it. Some have even suggested establishing a market for trading in pollution rights. It is too early to tell whether these attempts to achieve the aims of regulation in a market-like manner will be effective, or whether this approach has any advantages over straightforward regulation.

In the case of telecommunications networks, the demise of the concept of natural monopoly has made this field extremely complicated. It was once thought that, in each area of the country, the existence of a single electric-power company, a single water supplier, and a single telephone company (and more recently, a single cable company) would be more efficient than allowing competition, with its implied duplication of very expensive infrastructure. In return for the monopoly, the chosen company was typically closely regulated by a public-utility commission with respect to price of services, level of investment, and permissible profit. Indeed, in many countries and in some localities in the US, government itself operates such utilities. For example, the federal government still operates the enormous flood-control and power generation projects built during the New Deal years. The city of Cambridge, Massachusetts operates its own water-supply system. In the particular case of terrestrial ("over-the-air") broadcasting, the limited spectrum, which certainly is owned by the people, requires a government regulator, in this case the FCC, to decide who gets the license and under what conditions.

Adding to the complication in this field is the rapid development of technologies such as digital compression and transmission, packet-switched networks, cellular broadcasting, interactive systems (still in their infancy, as far as actual deployment is concerned), and computer control of communication systems.

With respect to services that use the airwaves, one proposed solution, advanced by the-market-is-always-right crowd, is to auction the spectrum and to allow successful bidders to use it in any way that they think will be profitable. With respect to services that do not use the airwaves but have some of the characteristics of a public utility, such as the Internet, confusion reigns, as nobody has come up with an intellectual or philosophical approach that has gained general support. How such complex subjects might be approached will become more apparent after discussing what actually happened in the FCC Inquiry intended to deal with digital TV.

3. The Digital Television Decision

The FCC Inquiry that produced the DTV decision is an example of an attempt to solve a mixed-issue problem of the kind just discussed. It lasted nine years and involved hundreds of television professionals and many companies. The procedure, as in other Inquiries, involved the appointment of an advisory committee, eventually referred to as the Advisory Committee on Advanced Television Service (ACATS), and the establishment of a complex committee structure to look into various aspects of the problem. The Commission issued a series of Reports representing the FCC's view of the problem and soliciting comments from the public. Submitters could also send in Comments on the submissions of others. The subcommittees undertook various tasks and issued reports. I don't think anyone has calculated the cost, which was borne by the participating companies. My opinion is that the process was a failure, in spite of the very large effort, in that the "standard" that was issued is likely to cause so much confusion as to place the success of the transition to digital broadcasting in danger. This matter is dealt with in more detail in my paper "The FCC Digital Standards Decision," Prometheus, vol. 16, no. 2, June 1998, pp. 155-172, which is also available at http://www.nytimes.com/tech/schreiber/.html.

3.1 The Federal Communications Commission

The FCC issues licenses for the use of radio spectrum and sets transmission standards and conditions of use. It makes both political and sci/tech decisions.

Origin and functions of the FCC. In the infancy of radio broadcasting, there was no regulation; each group that wanted to transmit radio signals for any purpose simply chose a frequency and a modulation scheme and went on the air. It was not long before even the most ardent free-marketeers among radio users realized that this chaotic situation was to no one's benefit. Government control of spectrum assignments began with the Wireless Ship Act of 1910. At the behest of the then-existing broadcasters, various laws were later enacted giving the government authority to issue licenses for the use of specific frequencies. Today's FCC dates from the Communications Act of 1934, which was heavily amended in 1996. Obviously, one of the most important duties of the Commission is to grant licenses for the use of the radio spectrum and to establish, for each licensee, a reception area within which the signals can be received reliably and without serious interference. This task has become much more difficult as the demand for spectrum now greatly exceeds the supply.

Since satellite broadcasting uses radio spectrum, licenses are also required. Cable, which does not use spectrum, does not require a federal license but is regulated so that outward signal leakage does not interfere with terrestrial broadcasting. However, regulation of cable goes well beyond that, which I find surprising. One of the clearest cases of delegating to a regulatory commission a purely political issue is the "must-carry" rule, under which cable companies were once required to carry the signals of all local TV stations. This may or may not be a good idea -- it surely is convenient for subscribers -- but a decision like this, which significantly affects the profits of communication companies, belongs in Congress.

Among its more obvious responsibilities, the FCC established specifications for terrestrial (over-the-air) radio and television transmission so that manufacturers could design receivers guaranteed to work with the signals that were broadcast. The NTSC monochrome standard was adopted in 1941 and the NTSC color standard in 1953. The DTV "standard" was set in 1996, but, as we shall see, it is incomplete. No standards are in place for satellite and cable broadcasting. The former uses digital transmission, but the various broadcasters use noncompatible standards at present. The latter uses NTSC, some programs being encrypted to prevent reception without payment.

3.2 The HDTV Inquiry

In 1987, at the request of the TV broadcasters, who allegedly feared that they would need more spectrum to compete with the HDTV system designed in Japan, the FCC initiated an Inquiry and appointed an advisory committee to investigate the effect of this new development on the existing service, which is both popular and profitable. Broadcasters generally regarded HDTV as a threat rather than an opportunity. From the beginning, the FCC acknowledged that the Inquiry was governed by the Federal Advisory Committee Act (FACA), which requires that all meetings be held in public (this was faithfully carried out) and that all interested parties be represented (this was largely disregarded, especially with respect to the representation of the public, women, minorities, labor, and academia). The change in purpose of the Inquiry to a standard-setting process, first for HDTV, then for advanced television (ATV), and finally for digital TV (DTV), was not the result of any public discussion. FACA does not require the Commission to oversee the actions or decisions of advisory committees, and there was no oversight in this case. The committees were to make their decisions by consensus; there was no basis for voting, since any company that wished to participate was welcome to do so. The intentional lack of oversight is not surprising, since it is the fervent wish of all regulatory agencies that the industries under regulation agree among themselves as to the regulations, in which case the agency can simply adopt the industry position, knowing that there will be no complaints, at least from those regulated.

The extremely complex committee organization and the appointment of particular individuals to key positions, both done in private, led many to conclude that the process would be the means by which the already developed Japanese HDTV system (the NHK system) would be adopted as the US standard, but that is not what happened. The original NHK system had been developed for satellite, not terrestrial, transmission. Narrow MUSE, the version developed by NHK that could be transmitted in the 6-MHz US terrestrial channels, never performed very well. What really caused the death of the NHK system for transmission service, however, was the development in the US of digital systems that had much better performance.

Compatibility vs. simulcasting. At the outset of the Inquiry, almost the entire American TV establishment favored making the HDTV system backward-compatible with NTSC, just as NTSC color was backward-compatible with monochrome NTSC. A noncompatible hybrid analog/digital system developed at MIT that was intended to be used with simulcasting was ridiculed. Later on, it became apparent that a compatible 6-MHz HDTV system was technically impossible, as the required amount of enhancement data could not be hidden within the NTSC signal format. In addition, making the signal usable on NTSC receivers would have preserved the extreme vulnerability of NTSC to interference, together with its very poor spectrum efficiency. (By spectrum efficiency, we mean the amount of service that can be provided within a given spectrum allocation.) Zenith then proposed a simulcast system similar to that of MIT and claimed that its signal could be transmitted in the so-called "taboo" channels that cannot be used for NTSC because of cochannel interference. This turned the tide against compatible systems. Then the FCC ruled against 12-MHz enhancement systems (NTSC in one channel and enhancement data in a second channel) on the grounds of poor spectrum efficiency. Finally, on the last day for submission of proposals in 1990, General Instrument entered an all-digital system, and, within months, three other digital schemes were announced. Note that, in order to use digital coding and transmission, one must accept that the new signals will not be receivable on the 200 million existing TV sets without a set-top converter.
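
To see why digital coding was decisive, consider the arithmetic of fitting high definition into a 6-MHz channel. The sketch below is purely illustrative (my own, not taken from any submission in the Inquiry), assuming a 1920x1080 source at 30 frames/s with 8-bit 4:2:0 color sampling and a payload of about 19.4 Mb/s, roughly what the Grand Alliance transmission system delivers in 6 MHz:

    # Illustrative arithmetic: the compression needed to fit HDTV
    # into one 6-MHz channel. All parameter values are assumptions
    # stated in the text above, not figures from the Inquiry record.
    width, height, fps = 1920, 1080, 30
    bits_per_sample, samples_per_pixel = 8, 1.5      # 4:2:0 sampling
    raw_rate = width * height * fps * bits_per_sample * samples_per_pixel
    channel_payload = 19.4e6                         # assumed Mb/s payload
    print(f"raw source rate: {raw_rate / 1e6:.0f} Mb/s")                # ~746 Mb/s
    print(f"compression required: {raw_rate / channel_payload:.0f}:1")  # ~38:1

Ratios of this order were attainable with digital motion-compensated coding, but nothing remotely close could be hidden inside an analog NTSC waveform.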

The Submissions. All submissions were read by Commission staff, as is evident from the discussions in subsequent Reports. However, they appear not to have been read very critically. The Commission seemed to have forgotten that submissions from organizations hoping to make money from the new system were likely to be self-serving. This was particularly evident in the discussion of interlace, which has no place in any new television system.

3.3 System Testing

The first round of laboratory testing resulted in the withdrawal of NHK's MUSE system as well as a compatible system from the Sarnoff Laboratory, leaving four digital systems whose performance was sufficiently similar that there were no grounds for selecting any one over the others. (Why the FCC thought that the new system, even including the audio coder, had to come from one company is a mystery.) ACATS is universally believed to have then forced the four competitors to join together in a "Grand Alliance" (GA), a union that had more the character of a shotgun wedding. (This must have been done with the tacit agreement of the Commission.) Since no competitor was willing to give up his format, all were retained as variants in the final system. Thus a 1080-line 30-fps interlaced version, a 720-line 60-fps progressive version, and a 24-fps version for film were all included.
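
It is worth noting that the two main variants are closer in raw information rate than the dispute over them might suggest. The comparison below is my own illustration, not part of the test record:

    # Illustrative pixel-rate comparison of the two main GA formats.
    formats = {
        "1080 lines, 30 fps, interlaced": (1920, 1080, 30),
        "720 lines, 60 fps, progressive": (1280, 720, 60),
    }
    for name, (w, h, rate) in formats.items():
        print(f"{name}: {w * h * rate / 1e6:.1f} Mpixels/s")
    # -> 62.2 and 55.3 Mpixels/s: within about 12% of each other, so
    #    neither variant could be excluded on channel-capacity grounds.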

In the laboratory testing of the GA system, somewhat better performance was achieved than in the first round. One field test was conducted, with less-than-perfect results, but this did not raise a flag. Although nearly 2/3 of American homes have cable service, more than half of all receivers use antennas. Satisfactory reception of the digital signals on antennas is thus not optional; it is essential to the acceptance of the system by the public.

3.4 Role of the Advanced Television Systems Committee

ATSC is a nongovernment organization of companies in the television industry. Although its name was selected to make it appear to be the current incarnation of NTSC, which developed the existing standard, the decimation of the American electronics industry has had the result that ATSC is dominated by foreign-owned companies. Furthermore, ATSC has a powerful executive committee that campaigned for years to make the NHK system an American standard. At one point, ATSC even convinced the US State Department to support the Japanese system as an international standard, much to the consternation of our European allies.

ATSC was given the function (by what means I never discovered) of producing the formal system proposal to be presented to ACATS, which it did in 1995. ACATS, in turn, presented the proposal to the Commission for final decision. In the meantime, the possibility of transmitting a number of standard-definition (SD) programs in each 6-MHz channel, rather than one high-definition program, had become attractive to some broadcasters. ATSC then held a meeting, which I attended, to choose the scanning format for these SD transmissions. No laboratory or field testing had been done on any of these formats. Nevertheless, the T3/S6 committee chose several standards by ballot, contrary to the procedure mandated by the Commission for the conduct of the Inquiry. (I was outvoted on what I thought was much the best SD format -- 360 lines/frame, progressively scanned.) The number of formats was thus increased to 14. Actually, it is more than that, since both 60 fps and 59.94 fps are allowed. ATSC stated that all receivers on the market would work with all the standards in the list, called Table 3. Not only will this increase the cost of receivers, but ATSC does not have the power to enforce this policy, raising the possibility that not all of the receivers to be put on sale will work with all of the formats that will be transmitted.
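
The 59.94 figure is the familiar NTSC-related rate, equal to 60 x 1000/1001. A toy enumeration (mine; the picture sizes are assumptions for illustration, and this is not the actual Table 3) shows how the permitted combinations multiply:

    # Toy enumeration, not the actual Table 3: picture sizes, frame
    # rates, and the 1000/1001 "NTSC-friendly" variants multiply out.
    pictures = [(1920, 1080), (1280, 720), (704, 480), (640, 480)]  # assumed
    rates = [24, 30, 60]
    variants = [(w, h, r * f) for (w, h) in pictures
                              for r in rates
                              for f in (1, 1000 / 1001)]
    print(len(variants), "combinations")                    # 24 in this toy list
    print(f"60 fps also appears as {60 * 1000 / 1001:.3f} fps")   # 59.940

Every such variant is one more mode that a receiver claiming full compliance must detect and decode.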

The Grand Alliance system, as modified by ATSC, contained three main faults: it had no migration path to higher quality, it had too many formats (including some with interlace), and it made no provision for inexpensive receivers. The version of the proposal that was adopted by the FCC, as we shall see shortly, added a fourth fault: the lack of a fully specified transmission standard.

3.5 Conflict Between Television and Computer Interests

As soon as the ATSC proposal was made public, objections came from the computer industry, whose future profitability depends on displaying TV signals on computer screens. The most important reason for disagreement was the inclusion of interlaced formats, which the computer industry had given up years ago for good reasons, mainly interline flicker. In addition, the computer industry favored a "layered," or multiresolution, system, in which a standard-definition baseline signal would be transmitted along with additional enhancement signals that could be combined with the baseline signal to produce higher definition. This scheme, which I greatly favored, would reduce the cost of the cheapest receivers and provide a clear migration path to higher definition. Nondisruptive improvement over time, a characteristic lacking in NTSC, had been on the FCC's list of desiderata from the beginning.
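
The layered idea is easy to express concretely. The following is a minimal sketch of the principle only, assuming simple block averaging and pixel replication; it is not the coding scheme that any party actually proposed:

    # Minimal sketch of layered (multiresolution) coding: a base layer
    # that a cheap SD receiver can use alone, plus an enhancement layer
    # that an HD receiver adds back to recover full definition.
    import numpy as np

    def down2(img):
        # Base layer: average each 2x2 block (crude downsampling).
        h, w = img.shape
        return img.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

    def up2(img):
        # Upsample by pixel replication; a real system would interpolate.
        return img.repeat(2, axis=0).repeat(2, axis=1)

    hd = np.random.rand(720, 1280)       # stand-in for one HD frame
    base = down2(hd)                     # decodable by itself (360x640)
    enhancement = hd - up2(base)         # residual for HD receivers only

    # An SD set displays `base`; an HD set combines the two layers.
    assert np.allclose(up2(base) + enhancement, hd)

Cheap receivers decode only the base layer; premium receivers add the enhancement; and further layers can be added later without disturbing either, which is precisely the kind of nondisruptive migration path the FCC had asked for.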

The Commission evidently felt that it would be unwise to proceed without some agreement between the contending parties. Commissioner Susan Ness, in flagrant violation of FACA, then appointed a new advisory committee, composed primarily of executives and lawyers from the television industry -- both broadcasters and receiver manufacturers -- and the computer industry, with no representatives of the public. (Some Hollywood interests were also invited, but declined.) The new committee met in private -- another FACA violation -- and produced an extraordinary proposal, in which the ATSC system would be adopted without the Table 3 that listed the scan formats. Before this committee met, there had been very little support for such a bizarre possibility, which was bound to introduce a totally unnecessary element of uncertainty into the transition. Nevertheless, the Commission approved the proposal after an unprecedentedly short comment period. The Commission's approval may well have been influenced by the deregulatory views of Chairman Reed Hundt, who had been voicing the view that the "market," and not the FCC, should decide standards.

My own opinion was that the two positions were irreconcilable, and therefore the Commission should make a decision on its own. I felt that the positions of both parties were superior to the "compromise," but that an even better solution could have been devised that would have met the needs of all the parties, including the public, which will bear nearly the entire cost. Because of the secrecy of the proceedings, it is not possible to state exactly what the participants had in mind, but it appears that both sides were concerned about possible Congressional action should no agreement be forthcoming.

3.6 Final Result

Nearly two years have elapsed since the Fourth Report and Order was issued. Broadcasts are to start in November, 1998, and no receivers are yet in the stores. Many broadcasters have not yet chosen the scanning standards that they will use, and production equipment is only slowly coming to the market. No agreement has been reached as to how to connect digital receivers to set-top boxes used in cable or satellite service. Field tests conducted in the last year have shown that the primitive indoor antennas used by many viewers will not provide reliable reception in many cases. In other words, the "standard" has many problems, most of which could have been avoided had the FCC taken a different route to the final decision.

A most unfortunate but likely result of the current confusion is that, by 2006, when NTSC is to be shut down, many viewers will still be relying on their NTSC receivers. Faced with pending loss of service, viewers will complain to their congressmen, and Congress will order the FCC to keep NTSC on the air. There is no reason at all for making the transition to digital broadcasting unless NTSC can be shut down and the spectrum reassigned to other wealth-creating services. The public is not complaining about NTSC picture quality. On the contrary, the service is very popular. If NTSC stays on the air, or if the shutdown is delayed by a significant period, then DTV will probably fail, and a great deal of money will be lost.

4. An Improved Method of Making Decisions Having a Strong Technological Component

In trying to devise a better process for the kinds of decision-making described above, it is useful to examine the FCC process carefully to see which elements were primarily responsible for the faulty result. One obvious problem was that the Commission was called upon to make many political decisions, for which it was ill-equipped. One should expect that, on political questions, the proponents of one policy or another will make arguments that are structured to maximize the benefits of the pending decision to a particular group. That is exactly what was done in many of the submissions of companies hoping for profits, but the arguments masqueraded as objective sci/tech discussions. "Interlace is better for sports" really meant "Interlace is better for my bottom line." The arguments did, of course, have a good deal of technical content, but since it was immersed within a political pleading, it was difficult for the FCC staff, which understands the technology but evidently not the politics, to deal with it scientifically.

Another observation is that the FCC proceedings are much like a civil judicial procedure. In such procedures, the responsibility of the lawyers is not to judge, but to present their client's case as persuasively as possible. The judge or jury then makes the decision, knowing full well that the pleadings are very one-sided. That is not the only way to run a courtroom. At the International Trade Commission, a government agency that enforces American trade laws, the court has its own lawyers who participate in the trial on an equal basis with lawyers for the parties to the suit. Their duty includes representing the public interest, studying all the evidence, questioning the witnesses as seems appropriate, and making an unbiased recommendation on both the law and the facts to the administrative law judge. The judge's decision is automatically reviewed by the full Commission. The equivalent of this, at the agency level, would be a procedure to use the agency staff to study the problem, review submissions, and make a recommendation to the Commission heading the agency.

4.1 Separation of Political and Technological Issues

Since regulatory agencies are well equipped to deal with technological but not with political issues, the issues should be separated. Political questions should be settled by Congress, which is designed specifically for that purpose, and which represents, or should represent, all the people. Technological questions should be dealt with by the cognizant regulatory agency, which should be staffed as needed to carry out this function.

Making the separation. Congressional committees generally exercise oversight over the agencies -- in the case of the FCC, the Subcommittee on Telecommunications and Finance -- so the required apparatus already exists. The congressional oversight committee is in the best position to make a first try at the separation. One possible procedure is for the committee to direct the agency to deal with all questions that the committee does not understand because of their technical content. In the case of television systems, this obviously would include the formulation of transmission standards. A decision concerning the date for turning off NTSC, on the other hand, is certainly political and should be dealt with by the committee. This may involve specific technical opinions, such as the date by which the required products might be available. The committee would ask the agency for this information.

It may well turn out that, during the formulation of a technical standard, an unexpected political issue might surface, such as balancing the costs and benefits affecting competing industries. In that case, the agency may call this issue to the attention of the oversight committee and ask for direction. A degree of cooperation and back-and-forth communication will be required, which should not be hard to implement. In complicated cases, several iterations may be needed to make the proper separation.

4.2 Dealing with the Political Issues

Political decisions, which determine how the costs and benefits of government actions are divided among the various elements of society, should be made by Congress. That is the reason why that body exists. It is not to be expected that this Congressional duty can be carried out without rancor. In addition, members of Congress are always, and with good reason, concerned about the effect of their votes on their chances for reelection. In certain cases, especially where there is a conflict between the national interest and the interest of a particular state or district, Congress appoints a nonpartisan commission to take the pressure off individual members. This was done when military bases had to be selected for closing. More importantly, the responsibility for the balance between inflation and unemployment, which affects nearly every citizen, was given, incorrectly in my opinion, to the Federal Reserve System just because it is so contentious.

4.3 Dealing with the Technological Issues

An important precondition for good technological decision-making is the collection of accurate and unbiased technical analyses of the issue at hand, made by persons with no financial interest in the subject under study. The analyses should include, but not be limited to, an evaluation of the proposed technology (Is it ready to go? Is the cost acceptable? Are there better alternatives?) and a prediction of the likely outcome of particular decisions. For example, if the FCC had adopted either the original proposals of the computer industry or those of the broadcasters rather than trying for a compromise, what would have happened?

Submissions by interested parties should be subjected to the same kind of rigorous, unbiased analysis as the issue to be decided, before they are considered by the commission that makes the decision. It is a law of hierarchical organizations that the decision-makers almost never understand the technology and therefore may be unduly influenced by self-serving submissions. They can be protected against this, in part, by seeing the analysis at the same time.

The FCC staff is the main resource to be used in evaluating sci/tech issues. Unbiased advice can also be requested from the appropriate government or private agency. There is no shortage of places to look for such analyses: the Office of Technology Assessment (OTA, a congressional agency that was unwisely eliminated by the Republicans), the National Telecommunications and Information Administration (NTIA, part of the Department of Commerce and the principal advisor to the president on telecommunications), the National Academy of Sciences, the National Academy of Engineering, the National Institute of Standards and Technology (NIST, formerly the National Bureau of Standards), the Department of Justice (legal aspects), the Department of Defense (very strong in telecommunications), the Library of Congress, and the Congressional Budget Office (CBO). In addition, many experts can be found in academia and private think tanks. In coming to a decision, the agency should be required to place the public interest before any other considerations. If any advisory committees are appointed to help with the work of the agency, they must, of course, adhere strictly to the FACA rules, and some oversight should be provided to ensure that this actually happens.

4.4 Automatic Periodic Review

All decisions, both political and technological, must contain within them provision for periodic review of the actual results, since, in spite of one's best efforts, things may not turn out as expected. This is true, for example, of the Telecommunications Act of 1996, which has promoted monopolies rather than competition, as allegedly intended.

4.5 Feasible Half-Measures

In the event that this proposal is seen as too far-reaching for early adoption, there are some useful half-measures that might be taken. Since the separation of political and technological issues is the biggest change suggested, we might consider what improvements might be made without going this far.

Strict adherence to FACA. It would not seem too much to propose that regulatory agencies act in full compliance with FACA, since it is the law of the land. In cases where it appears that exceptions should be made, those exceptions should themselves be put into law. For example, the National Academy of Sciences last year obtained a partial exemption so that some of its deliberations could be carried out privately. Although I think this was an error, at least it was done properly.

Getting good advice. No change in law is needed for regulatory agencies to make it standard practice to subject all issues to unbiased expert analyses and to go outside, if appropriate, to get analysis beyond the expertise of the staff. These analyses, as well as staff studies of the issues, which no doubt are already being done, at least in some cases, should be made public.

Analysis of submissions of interested parties. One of the most important elements that contributed to the poor quality of the FCC's DTV decision was the apparent acceptance, at face value, of self-serving submissions from groups that expected to profit from the final decision. In the case of the International Trade Commission, I believe that the analysis done by staff attorneys has proved invaluable for separating truth from falsehood. All regulatory agencies should adopt this practice, which requires no change in law.

5. Conclusion

Based on the experience of the FCC's DTV case, we have proposed changes in the way in which the federal government makes decisions having a significant technological or scientific component. The most important of these changes is to separate political questions from technological questions. Congress would make the political decisions and regulatory agencies would make the technological decisions. The separation would be done by agreement between the agency and its congressional oversight committee(s). All issues, as well as submissions by interested parties, would be subject to unbiased public analysis by agency staff as well as by the staffs of other agencies, both within and without the government. Individuals involved in the review process would be required to have no financial interest in the outcome. All decisions would be automatically reviewed on a periodic basis to ensure that the results were those intended.

In the event that the changes discussed are seen to be too far-reaching, then it is proposed that at least the new procedure for analysis be adopted, and that FACA be complied with. Neither step would require any change in law.

It is believed that these proposals would markedly improve the quality of technological decision-making.

Presented at the International Conference on Image Processing, Chicago, 5 October 1998

NOTE: The statements in this paper are the opinion of the author, who is not in the pay of any company having an interest in the DTV standard.


Copyright 1998, William F. Schreiber, All Rights Reserved

This article appeared in the Fall 1998 issue of 21st, The VXM Network, https://vxm.com
