The government paid rewards only when investigators verified an actual case of smallpox, and made sure villagers did not transport smallpox cases from other villages just to collect the reward. The initial search focused on the four major endemic states of Bihar, Madhya Pradesh, Uttar Pradesh, and West Bengal. Dr. Foege and the smallpox teams found that in the two states of Bihar and Uttar Pradesh, severe endemic smallpox had infected ninety percent of the population across a span of two thousand villages.[xli] While the information seemed bleak and daunting, Dr. Foege and the smallpox team now had the valuable information needed to implement the containment and surveillance strategy. By the end of January 1974, smallpox cases had declined to 198 and remained on this trajectory until June 12, 1975, when the WHO sent a letter to all smallpox workers in India announcing that there were no new reported cases.
Dr. Foege explained, “In twenty months, the surveillance/containment approach had proved ideally suited for eradicating a virus that had eluded the best efforts of mass vaccination programs for 175 years.”[xlii] Thus, India, one of the last countries with endemic smallpox, had zero smallpox cases. Afterward, twenty-four WHO epidemiologists and thousands of Somali health workers ended the global transmission chain of naturally occurring smallpox in Somalia in 1977.
The WHO and the international community worked together to finally eradicate a disease that had plagued mankind throughout history. The last country to end the global transmission chain of smallpox was Somalia in 1977. The cooperation between the U.S. and the USSR, facilitated by the WHO, contributed to the success of the smallpox eradication program. Two bitter enemies in the Cold War found themselves in an alliance that benefited the world. However, Cold War politics still loomed large, as both superpowers secretly undermined the eradication of smallpox by developing extensive biological weapons, including smallpox. The U.S. developed a biological weapons program not only for defensive use but also for offensive capabilities.
The U.S. biological weapons (BW) program, which operated between 1941 and 1969, was intended first to deter the use of disease against the U.S. and second to retaliate if deterrence failed.[xliii] The program began when Secretary of War Henry L. Stimson requested that the National Academy of Sciences appoint a committee to assess the threat of a possible biological weapons attack. The committee found that the U.S. was susceptible to attacks from biological weapons and suggested to President Roosevelt that he take the necessary steps to reduce U.S. vulnerability to them.
President Roosevelt approved the formation of the War Reserve Service in August 1942, which was based at Camp Detrick, Maryland. The War Reserve Service’s first task was to develop defensive measures against biological weapons through research, development, testing, and evaluation (RDTE).[xliv] At its peak during WWII, the U.S. Army led the biological weapons program with a staff of 3,900 personnel at four different locations: Maryland as the headquarters, Mississippi and Utah as field testing facilities, and Indiana as a production facility. By the end of WWII, however, activities gradually phased out and were reduced to research only.
For example, the production plant in Indiana, Vigo Ordnance Works, ceased retaliatory operations before producing any infectious biological weapons. However, unclassified documents in the U.S. Army Activity in the U.S. Biological Warfare Programs Volume II reveal that Pine Bluff Arsenal, in Arkansas, developed antipersonnel and anti-crop biological weapons between 1954 and 1967.[xlv] The research and development team at Fort Detrick developed munitions of “burster type bombs available from the British and was extended to improved burster type munitions, submunitions, gas explosion bombs, various types of line source spray tanks, and highly specialized projectiles and generators as well as insect vectors” for offensive capabilities.[xlvi] However, the Nixon administration questioned the necessity of the BW program.
In May 1969, President Nixon directed the National Security Council Political-Military Group (PMG) to conduct a study of current U.S. policy on chemical and biological weapons. The PMG focused its study on the current threat to the U.S., the research and development (R&D) of chemical and biological weapons, U.S. chemical and biological defensive and offensive capabilities, and finally the U.S. position on arms control, including ratification of the Geneva Protocol.[xlvii] The PMG study found that the U.S. BW program cost $36.4 million annually.
The study also found that the U.S. had only small quantities of both lethal and incapacitating biological agents, maintained in special warfare devices.[xlviii] Additionally, the study found that the U.S. remained vulnerable to BW because it lacked biological detection systems. Thus, the study reported arguments both for and against biological weapons. The PMG reported that funding a BW program would contribute to deterrence by providing the capability to retaliate if attacked, that R&D of BW would eventually lead to an early detection system, and that BW remained a strategic option.
On the other hand, the PMG found that biological agents have unpredictable effects on an area and cannot be controlled, that a BW capability seemed unnecessary to deter the strategic use of biological weapons, and that maintaining one limited U.S. flexibility in supporting arms control arrangements.[xlix] After the National Security Council and the White House administration were briefed on the PMG study, President Nixon decided to end the BW program in November 1969.