Thursday, July 25, 2019

Computing Laws Part 1


Over my many years in the computing industry, I have been introduced to several “laws” of computing that seem to fight against computing ever reaching its maximum potential. Unless you have a degree in computer science or have taken at least some introductory computer courses, you have probably never heard of any of these “laws” of computing.
First, I would like to say that most of them are not really laws, like the law of gravity for instance, but more like observations of phenomena that seem to guide the computing industry. Most of them relate primarily to computing hardware trends, parallel software performance and computing system efficiency. Over the next couple of weeks I will talk about several of these laws, and in the final week I will introduce a law of my own.
The first of these laws is Amdahl’s Law. Gene Amdahl observed a phenomenon in parallel computing that allowed him to predict the theoretical speedup of an application when using multiple processors. He summarized the law like this: “The speedup of a program using multiple processors is limited by the time needed for the sequential fraction of the program.” In other words, every computer program, just like every task you complete in a given day, has a serial part that can only be done in a particular order and cannot be split up. A great example is mowing a lawn: you have to fill the lawn mower with gas and start it before you can mow. If you have a bunch of friends with a bunch of lawn mowers, you still cannot get any faster than the amount of time it takes to fill the mowers with gas. The mowing can be done in parallel, up to the number of friends with mowers, but you still have the limiting factor of the sequential step. It follows from Amdahl’s Law that even with no limit on the number of processors working on a given task, you still hit a ceiling on maximum performance.
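To put a little arithmetic behind the lawn-mowing picture, here is a small Python sketch of the textbook form of Amdahl’s Law; the five percent serial fraction (the gas-filling step) is just a number I picked for illustration.

    # A minimal sketch of Amdahl's Law in its usual textbook form:
    # speedup = 1 / (serial_fraction + parallel_fraction / processors)

    def amdahl_speedup(serial_fraction: float, processors: int) -> float:
        """Theoretical speedup when only the parallel portion benefits from more processors."""
        parallel_fraction = 1.0 - serial_fraction
        return 1.0 / (serial_fraction + parallel_fraction / processors)

    # Assume filling the mowers (the serial part) is 5 percent of the whole job.
    for n in (1, 4, 16, 64, 1024):
        print(f"{n:5d} processors -> speedup {amdahl_speedup(0.05, n):5.1f}x")
    # Even with 1,024 processors the speedup stalls below 1 / 0.05 = 20x.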
A law related to Amdahl’s Law is Gunther’s Law, also known as the Universal Scalability Law. In 1993 Neil Gunther observed and presented a conceptual framework for understanding and evaluating the maximum scalability of a computing system. He understood that for any given task, even one that can be done entirely in parallel, like having many friends help mow your lawn, you still reach a limit at which adding more friends can actually cause mowing your lawn to take longer. The same thing happens in computer algorithms. They all have a sweet spot where the number of processes running balances out with the size of the problem and the hardware available to give the best performance for the program. I have always liked to think of Gunther’s Law as the law of too many friends: if all your friends and their mowers cover your entire lawn, they get in each other’s way and your lawn never gets mowed.
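For the curious, here is a rough Python sketch of the commonly cited form of the Universal Scalability Law; the contention and coherency coefficients (alpha and beta) below are invented purely so the “too many friends” peak is visible.

    # Gunther's Universal Scalability Law, commonly written as
    # C(N) = N / (1 + alpha*(N - 1) + beta*N*(N - 1)),
    # where alpha models contention (waiting on the serial part) and
    # beta models coherency cost (workers getting in each other's way).

    def usl_capacity(n: int, alpha: float = 0.05, beta: float = 0.002) -> float:
        return n / (1 + alpha * (n - 1) + beta * n * (n - 1))

    for n in (1, 8, 16, 32, 64, 128):
        print(f"{n:4d} workers -> relative throughput {usl_capacity(n):5.1f}")
    # Throughput climbs, peaks somewhere in the middle, then falls off:
    # past the sweet spot, adding workers makes the job slower.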
In 1988 John L. Gustafson and Edwin Barsis wrote an article, “Reevaluating Amdahl’s Law,” in which they showed that you can retain scalability by increasing the problem size. In other words, you can overcome the impact of the serial part of a problem by making the problem sufficiently large, minimizing the impact of the serial component, and along the same lines improve upon the maximum performance predicted by Gunther’s Law. If you have many friends with many lawn mowers and a small lawn, you are limited to scaling to only a small number of friends; but if you increase the size of the lawn, the serial work of filling the mowers with gas and starting them becomes far smaller than the task of mowing, and you are also able to give each friend a significant amount of lawn to mow, avoiding the problems caused by having too many mowers.
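Here is the same kind of back-of-the-envelope sketch for Gustafson’s scaled speedup, again with assumed serial fractions, showing how a bigger lawn lets more mowers stay useful.

    # Gustafson's scaled speedup: speedup(N) = s + (1 - s) * N,
    # where s is the serial fraction of the (now larger) workload.

    def gustafson_speedup(serial_fraction: float, processors: int) -> float:
        return serial_fraction + (1.0 - serial_fraction) * processors

    # A bigger lawn shrinks the serial fraction from 5 percent to 0.5 percent.
    for s in (0.05, 0.005):
        print(f"serial fraction {s:5.3f}: 64 mowers -> {gustafson_speedup(s, 64):5.1f}x")
    # The larger the problem, the closer 64 processors get to a full 64x speedup.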

50th anniversary of lunar landing


In honor of the 50th anniversary of the lunar landing, I am dedicating this week’s column to rerunning the local news items covering the mission. I was disappointed to find that most of our area newspapers ran very little about the missions. I was able to find one article in the archives here at The Licking News.
This article came out of Huntsville, Ala. and appears to be a press release sent to the hometown of people connected with the Apollo 11 missions. I found it interesting to learn that a Licking High School graduate was among the engineers that helped to design the Saturn V rocket that powered the Apollo series of spacecraft.
The headline read, “Connected With Apollo 11 Mission” in the July 17, 1969 edition of The Licking News. The content of the article follows.
HUNTSVILLE, ALA. – Donald E. Routh son of A. C. Routh of R. R., Licking, Mo., is a member of the organization that has played a major role in the Apollo 11 lunar landing mission.
He is an aerospace engineer in the National Aeronautics and Space Administration’s Marshall Space Flight Center, Huntsville, Ala.
The huge Saturn V rocket that lifted Apollo 11 from earth was developed under the direction of the Marshall Center, NASA’s largest organization.
Routh, a graduate of Licking High School, received his B.S. of E.E. degree in 1960 from Washington University in St. Louis.
His wife, Marie, is the daughter of Mr. and Mrs. Howard Quick of R.R., Licking.
I had really hoped for more to share from the local archives, but it seems that only major city newspapers covered the event at any level of detail as it was very heavily covered by television and radio. 
One of the best archives I have been able to find came from the New York Daily News of July 21, 1969, in a story written by Mark Bloom.
“Two men landed on the moon today and for more than two hours walked its forbidding surface in mankind’s first exploration of an alien world.
In the most incredible adventure in human history, these men coolly established earth’s first outpost in the universe, sending back an amazing panorama of views to millions of awed TV viewers.”
It saddens me to realize that much of the history of the event has been lost due to the instability of the media used to store video archives and our lack of foresight as a nation to preserve this history in print. Coming from a technology guru, it might seem odd for me to stress the importance of putting ink to paper. However, as can be clearly seen throughout history, it is the written word that survives the test of time.

Thursday, July 11, 2019

Apollo: giant leaps in technology


The Apollo lunar missions resulted in what I believe to be some of the giant leaps in technology over the last century. You might be surprised to find out all the everyday things that came out of landing man on the moon.
Gel pens, my favorite writing utensil, came out of the space program. The astronauts needed a way of recording events during the mission that would work in low gravity. The gel pens allowed the ink to flow in the low gravity of space; these pens can even write on the ceiling for a reasonable period of time. Prior to the space program, pens only worked because gravity pulled the ink against the ball.
The materials used in the “Moon Boots” revolutionized athletic footwear. They improved shock absorption, provided more stability, and provided better motion control. Al Gross substituted DuPont’s Hytrel plastic for the foam used in a shoe’s midsole to eliminate the cushioning loss caused by body weight in the shoe. He also adapted the “blow-molding” techniques used to manufacture the space suits to improve shoe manufacturing.
The fabrics developed for the spacesuits were also used to create fabric roofs, like the one on Houston’s Reliant Stadium. The fabric is stronger and lighter than steel and weighs less than five ounces per square foot. It is translucent, flexible and reflective, reducing lighting, cooling and maintenance costs. This fabric is also used in temporary military structures. The fabric lasts up to 20 years and is a cheap alternative to steel and concrete structures. You will see many of these dome-like structures in use by the Missouri Department of Transportation to house the rock salt mixture used to treat our winter roads.
NASA, along with General Motors, developed technology for moving heavy equipment on cushions of air. Rolair Systems, Inc. commercialized the technology, and it is used today in stadiums around the world to move large equipment, stages, and even stadium seating. Hawaii’s Aloha Stadium uses this technology to rearrange the stadium, moving entire 7,000-seat sections.
After the 1967 Apollo fire disaster, NASA needed to find ways to protect the flight crew in the event of a fire. Monsanto Company developed a chemically treated fire-proof fabric called Durette. Firemen wear the same fabric and utilize the same air tanks to fight fires on Earth.
Along with these high-tech devices designed to protect and entertain, there were also many things invented to just make life easier such as cordless tools, Chlorine-free pools, heart monitors, Black & Decker’s Dustbuster, quartz clocks and watches, and precision medical syringes. The technologies developed by NASA during the Apollo missions crossed all boundaries of our lives. 
To me, the most surprising area to see an impact from the Apollo missions is agriculture. Many do not know that the NASA missions helped to develop feeding technologies for pig farmers. Roughly 15 to 25 percent of piglets die before they are weaned, usually as a result of accidental crushing by the sow. Farmatic, Inc. used NASA miniature electronic heaters to warm the body of a synthetic sow. This synthetic sow can be used to replace the mother in the event of oversized litters, rejected piglets or physical disorders.

Thursday, July 4, 2019

Space travel pre-1969


This week, let’s look back at the rocketry technology that sent us to the moon on July 20, 1969. The times were different then, as nations all over the globe were racing to be first in the great space race, with people having dreamed of reaching space since the turn of the twentieth century. The first realistic means of space travel was documented by Konstantin Tsiolkovsky in his famous work “The Exploration of Cosmic Space by Means of Reaction Devices,” published in 1903.
It was 16 years later (1919) when Robert H. Goddard published a paper, “A Method of Reaching Extreme Altitudes,” in which he applied the de Laval nozzle to liquid-fueled rockets, making interplanetary travel possible. His paper influenced the key men in space flight, Hermann Oberth and Wernher von Braun.
The first rocket to reach space, putting Germany in the lead of the space race, was launched in June 1944. After the war, captured German V-2 rockets were test-launched by the British in “Operation Backfire,” and the Backfire report remains to date the most extensive technical documentation of the V-2 rocket. This work prompted the British Interplanetary Society to propose the Megaroc, a manned suborbital flight vehicle adapted from the V-2, with test pilot Eric Brown slated to fly it; the proposal was never funded and the Megaroc never flew.
Over a decade later true orbital space flight, both manned and unmanned, took place during the “Space Race,” a fierce competition during the Cold War between Russia and the United States. The race began in 1957 with both nations announcing plans to launch artificial satellites. The U.S. announced a planned launch of Vanguard by spring 1958 and Russia claimed to be able to launch by the fall of 1957.
Russia won the first round with the launch of three successful missions: Sputnik 1 on October 4, 1957; Sputnik 2 on November 3, 1957, the first to carry a living animal, a dog named Laika; and Sputnik 3 on May 15, 1958, carrying a large array of geophysical research instruments.
The U.S., on the other hand, faced a series of failures until its successful mission with Explorer 1, the first U.S. satellite, on February 1, 1958. Explorer 1 carried instruments that detected the theorized Van Allen radiation belt. The shock over Sputnik 1 triggered the creation of the National Aeronautics and Space Administration (NASA) and gave it responsibility for the nation’s civilian space programs, beginning the race for the first man in space.
Unfortunately, the U.S. lost again on April 12, 1961, when Yuri Gagarin made a 108-minute, single-orbit flight aboard Vostok 1. Between this first flight and June 16, 1963, the USSR launched a total of six men into space, two pairs flying concurrently, resulting in 260 orbits and just over 16 days in space.
The U.S. was falling further behind in the race to space. It had only one successful manned flight, by Alan Shepard on May 5, 1961, in the Freedom 7 capsule; Shepard reached space but only achieved a sub-orbital flight. It was not until February 20, 1962, that John Glenn became the first U.S. orbital astronaut, making three orbits on Friendship 7. President John F. Kennedy had announced, in May 1961, a plan to land a man on the moon by 1970, officially starting Project Apollo.
Not to be outdone, the USSR put the first woman in space on June 16, 1963. Valentina Tereshkova flew aboard Vostok 6. Tereshkova married fellow cosmonaut Andrian Nikolayev and on June 8, 1964, gave birth to the first child conceived by two space travelers.
On July 20, 1969, the U.S. succeeded in achieving President Kennedy’s goal with the landing of Apollo 11. Neil Armstrong and Buzz Aldrin became the first men to set foot on the moon. Six successful moon landings were achieved through 1972, with only one failure on Apollo 13.
Unfortunately for the USSR, its N1 rocket suffered the largest rocket explosion in history just weeks before the first U.S. moon landing. The N1’s first stage was the most powerful rocket stage ever built, but all four attempted launches ended in failure. The largest failure, on July 3, 1969, destroyed the launch pad. These failures led the USSR government to officially end its manned lunar program on June 24, 1974.

Thursday, June 27, 2019

Apollo 11 Computer System


Many people believe the computer systems of the Apollo 11 spacecraft are ancient and that little can be learned from them. I disagree. The computer systems aboard Apollo 11 were, in some regards, more advanced than the computers of the 1990s and early 2000s. There are a couple of things that lead me to that conclusion.
The first is that the computers on both the lunar module, “Luminary,” and the command module, “Comanche,” were networked wirelessly via radio frequency and were also in direct communication with the computer at mission control, “Tranquility.” This was in July of 1969. We did not develop robust wireless networking technologies for nearly another three decades, until the 802.11 “Wi-Fi” protocols arrived in 1997. Taking this a step further, even wired communication between computers was not standardized until the release of the 802.3 Ethernet standards in 1983. Needless to say, the communications for these computing systems were decades ahead of their time.
The second major advancement that was ahead of its time was the processor itself. Though the system performed around 1,000 times slower than a modern computer, it had features that were not developed commercially in processors until the early 2000s. Among those features was the capability to run multiple threads. Threads are independent streams of work that can run at the same time; for example, the computer could track the exact location of the spacecraft, using computer vision to determine the viewing angle of particular stars, at the same time as it calculated the firing thrust of the engines to properly align the craft for a safe landing. We did not have true multi-threaded processors in home computers until the early 2000s, when Intel introduced hyper-threading and, later, multi-core chips. You will also notice that I mentioned computer vision in the example. You might be surprised to learn that the computer in Luminary did live image processing to determine the angle and location of the craft, both for the lunar landing and for the reentry angle needed to make it safely back to earth.
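As a loose illustration of the idea of threads (plain modern Python, not anything resembling the actual Apollo guidance code), here is a toy sketch in which two independent streams of work run at the same time instead of one after the other.

    # Two threads doing independent work concurrently, loosely analogous to the
    # guidance computer tracking stars while also computing engine burns.
    import threading
    import time

    def track_star_angles():
        for i in range(3):
            print("navigation thread: measuring star angle", i)
            time.sleep(0.1)

    def compute_engine_burn():
        for i in range(3):
            print("guidance thread: updating thrust solution", i)
            time.sleep(0.1)

    t1 = threading.Thread(target=track_star_angles)
    t2 = threading.Thread(target=compute_engine_burn)
    t1.start(); t2.start()
    t1.join(); t2.join()   # the two streams of output interleave rather than running back to back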
The third advanced feature of the computer in the Apollo spacecraft was its ability to “self-heal.” During the mission, an unlikely set of circumstances caused the guidance computer to begin throwing alarms. These alarms were caused by the radar system that was tracking the command module for recovery in case of a mission abort; its processing began consuming too much computing power during a critical phase, starving the thread that was helping to land the craft. The robust design of the system allowed the computer to decide to terminate the radar process and focus on landing the craft. Self-healing computer code and systems are still an advanced field of computer science.
I am fascinated by the excellent work done by the team at the Massachusetts Institute of Technology under the direction of Margaret Hamilton. The code was publicly released in July of 2016, and computer scientists all over the world got to see the human-ness shining through in the very complex code. There was a persistent habit of typing “WTIH” for “WITH,” which appears about 20 times in the code comments. The routine names in the code were fascinating and made it easy to picture the imaginations of the young engineers, sparked by working on the spacecraft, as they wrote code segments like “LUNAR_LANDING_GUIDANCE_EQUATIONS” and “BURN_BABY_BURN-MASTER_IGNITION_ROUTINE.” There was even apparently a problem in an early version of the code that left extraneous data in DVTOTAL, based on a comment block that states very clearly, “don’t forget to clean out leftover DVTOTAL data when GROUP 4 RESTARTS and then BURN, BABY!” It is difficult to figure out what DVTOTAL is just from reading the code, but, very clearly, it is important to clear it before this routine. I am fairly certain that “BURN BABY!” means let’s get this rocket into space.

Thursday, June 20, 2019

Amateur Radio


In honor of the upcoming Amateur Radio Day celebration, Saturday, June 22, in Texas County between noon and 5 p.m. in front of Pizza Express in Houston, amateur radio, its history and its future, seems to be a good topic for this week.
Amateur Radio, also known as ham radio, is the use of radio frequency devices for the purpose of exchanging messages, wireless experimentation, self-training, private recreation, and emergency communication, by individuals for non-commercial purposes. The term amateur in this context means a person with a purely personal interest in radioelectric practice, expecting no monetary or similar reward from its use.
The amateur radio and satellite service is established and controlled by the International Telecommunication Union (ITU) through the Radio Regulations division. ITU regulates all the technical and operational characteristics of radio transmission, both amateur and commercial broadcasting. In order to become an amateur radio operator you must pass a series of tests to show your understanding of the concepts in electronics and the government regulations.
Over two million people worldwide are amateur radio operators and use their transmission equipment for a variety of tasks, including radio communication relays, computer networks over the airwaves, and even video broadcasts. Because these radio waves can travel internationally as well as into space, the regulatory board needs to be international. Currently that board is the International Amateur Radio Union (IARU), which is organized into three regions and has member associations in most countries.
My first experience with ham radio was as a teenage boy. One of my neighbors was a ham radio operator with the highest level of license. I remember him saying he could operate at 100,000 watts. He only had a 50,000-watt antenna, and one night he just wanted to see what 100,000 watts would do. I remember seeing a blue glow coming off of his antenna tower that night for about ten minutes. He climbed his tower the next day to repair a cable that had melted. I remember sitting in his basement studio, watching him talk to friends in China and thinking how great it would be to become an operator myself. More than 30 years later, I still have not taken that step.
I helped my neighbor set up one of the first radioteletype (RTTY) systems; he took his mechanical Morse code relay and controlled it by computer to send digital signals around the globe. The idea behind it is actually still used today for computer wireless networks, though at a much higher frequency and using transistor-based switches rather than mechanical relays. The opportunities available to amateur radio enthusiasts today are endless, and I am sure that any club member would be happy to help you get started. A great place to begin would be the Amateur Radio Day coming up this weekend.

Thursday, June 13, 2019

Television broadcast signals


Last week we talked about radio broadcast signals and the difference between AM and FM signals. This week I thought we could move on to television broadcast signals. I am sure many of you in this area have experienced the same things that I have in recent years with broadcast television stations. If you receive your local news via antenna rather than as a cable subscriber, there is a big difference in the quality of the TV signal since the changes in 2009.
I noticed one thing which bothered me a lot; before the digital broadcast switchover in 2009, I could still get KY3 during a heavy storm. The picture had a lot of static and the sound was a little unclear, but I could still hear major weather alerts and be informed. Just recently a small tornado went through the outskirts of Edgar Springs. I heard the tornado warning and then my screen went dark. No signal, and therefore no information. The old analog stations never went completely away like this during a storm.
So what is the difference between analog and digital signals, and why does the quality look so great on digital stations right up until they drop off with a no-signal message? With analog, the quality degraded until you could no longer get the station, but there was never a complete cutoff. The difference has a lot to do with how the signals are transmitted.
The old analog broadcasts used a dual-carrier technique, overlaying both the AM and FM signals we talked about last week to transmit the picture and the audio. The picture is transmitted using AM signals and the sound using FM signals. These signals are prone to noise from interference from other stations or from the signal bouncing off of walls, trees, and even people. That interference is what caused poor color quality, ghosting, and weak sound. The NTSC standard for television broadcast was adopted in 1941 and transmitted 525 lines of image data at 30 frames per second. NTSC worked well and still works today with older analog devices, like VCRs and older DVD players, but because color was not added until 1953, the standard became jokingly referred to by professionals as “Never Twice the Same Color” because of color inconsistencies between broadcast stations.
The new Advanced Television Systems Committee (ATSC) standard uses the same methods that store video information on DVDs or Blu-ray Discs to transmit the television signal. These methods use a digital signal consisting of a series of ones and zeros, or “on” and “off.” This new standard resulted in better quality images and sound for multiple reasons. The first is that it was designed from the ground up with things like color, surround sound audio, and text transmission taken into consideration.
The digital signal is also much smaller, allowing stations to use the same bandwidth to broadcast multiple stations, or sub-channels, in addition to the main channel, using the same broadcast equipment. Digital signals also allowed for the broadcast of wide-screen formats and high-definition signals. The only downfall of digital broadcast is the inability to receive partial information from a weak signal. Digital is an all-or-nothing type of broadcast; missing information in a digital signal cannot be interpreted by the receiver, causing errors and the nice “no signal” message on your TV.
You can think of a digital transmission as transmitting in code; if a single piece of the code is missing, it cannot be deciphered, resulting in unusable images and sound that cannot be displayed. Analog transmissions transmit the original image and sound, so if pieces are missing, the sound gets static and the picture gets missing spots or becomes fuzzy. So even though the picture is clearer with digital, it is less reliable over long distances and in high-noise situations like severe storms.
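If you want to see the difference in miniature, here is a deliberately simplified Python sketch; the numbers are invented, and real digital TV adds error correction on top of this, but it contrasts an analog signal that merely gets noisy with a digital code that breaks when a single bit is lost.

    # Analog degrades gracefully; digital symbols either decode or they don't.
    import random

    random.seed(1)
    samples = [0.0, 0.5, 1.0, 0.5, 0.0]            # a tiny made-up analog waveform

    # Analog: storm noise makes the picture fuzzy, but you still get *something*.
    noisy = [s + random.uniform(-0.2, 0.2) for s in samples]
    print("analog with noise:", [round(s, 2) for s in noisy])

    # Digital: the same information sent as an 8-bit code; one flipped bit
    # produces a completely different value, so the receiver shows "no signal".
    code = 0b10110010                               # one transmitted byte
    corrupted = code ^ 0b00010000                   # a single bit flipped by the storm
    print("sent", code, "received", corrupted)      # 178 became 162: a different symbol entirely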

Thursday, June 6, 2019

Radio Signals


My six-year-old son Obie was riding in the car with me Saturday night and asked a question that brought about the topic for this week. He said, “Hey, Dad, I know that FM is the radio stations, and Aux lets us listen to music from your phone, but what is the AM button for?”
Photo by Ivan Akira: FM radio carries the audio signal by modifying the frequency of the carrier wave proportionally to the audio signal’s amplitude.
So just in case there are others out there who want to know what the AM button on your radio is for, here is a lesson on radio signals. Every radio station in the world operates on one of two different broadcast technologies: amplitude modulation (AM) or frequency modulation (FM). Both pass electrical current through a broadcast antenna to send a signal through the air, but they carry, or modulate, the sound wave in very different ways.
To understand radio broadcasts, it is first important to understand that electrical signals can travel through the air, just as sound and light do, using waves. There are two main properties of a wave. The first is the wavelength, which is inversely related to the frequency and tells the distance between individual peaks in the wave; the frequency tells us how many peaks will reach us in a second. It can be thought of as how high or low a sound is, and it actually determines the color of light. The second is the amplitude, or height of the wave; it can be thought of as how loud a sound is or how bright a light appears.
FM, the best-known radio signal, carries the sound over the air by modifying the frequency of the radio wave. As the sound wave being broadcast changes, the frequency of the carrier changes within a given range. FM operates in a frequency range of 88 to 108 megahertz (MHz), which means between 88 million and 108 million peaks of a wave hit your antenna every second. Each FM station is assigned a channel roughly 200 kilohertz (kHz) wide, meaning the carrier is allowed to vary by a couple hundred thousand waves per second. You can think of it as carrying the sound by changing the length of the wave. FM signals do not travel as far as AM signals because of atmospheric effects on the signal.
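For readers who like to tinker, here is a toy Python sketch of frequency modulation; the carrier, deviation and sample-rate numbers are scaled far below real broadcast values simply to keep the arithmetic visible.

    # Frequency modulation in miniature: the instantaneous frequency of the
    # carrier follows the audio signal's amplitude.
    import math

    sample_rate = 8000        # samples per second (toy value)
    carrier_hz = 1000         # stand-in for a ~100 MHz broadcast carrier
    deviation_hz = 75         # how far the audio may push the carrier frequency

    def fm_modulate(audio):
        """Return FM samples; `audio` is a list of values between -1 and 1."""
        phase, out = 0.0, []
        for a in audio:
            phase += 2 * math.pi * (carrier_hz + deviation_hz * a) / sample_rate
            out.append(math.sin(phase))
        return out

    # Modulate a 440 Hz test tone and peek at the first few carrier samples.
    audio = [math.sin(2 * math.pi * 440 * n / sample_rate) for n in range(80)]
    print(fm_modulate(audio)[:5])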
Photo by Ivan Akira: AM radio carries the audio signal by modifying the amplitude of the carrier wave proportionally to the audio signal’s amplitude.

AM, which is less well known, also happens to be less expensive to operate, and its signals can travel much longer distances. This has to do with the longer wavelengths, which cause the signal to bounce off of the upper atmosphere, whereas FM signals pass through it. AM signals carry the sound wave by varying the amplitude, or height, of the wave in proportion to the height of the sound wave being broadcast. They operate on a much lower frequency than FM, around 540 to 1,600 kHz, which means between 540 thousand and 1.6 million waves hit your antenna in a second, roughly 100 times fewer than FM. AM stations are assigned a given frequency that never changes during their broadcast. This allows a receiver to work with a much weaker signal, making it possible on a clear night to listen to AM radio stations as far away as northern Canada and southern Mexico.
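And here is the companion sketch for AM, with the same made-up, scaled-down numbers; notice that the frequency term never changes, only the height of the wave.

    # Amplitude modulation in miniature: the carrier's height tracks the audio.
    import math

    sample_rate = 8000
    carrier_hz = 1000         # stand-in for a ~1,000 kHz AM carrier

    def am_modulate(audio, depth=0.8):
        """Return AM samples; the carrier amplitude follows the audio (values in -1..1)."""
        return [(1 + depth * a) * math.sin(2 * math.pi * carrier_hz * n / sample_rate)
                for n, a in enumerate(audio)]

    audio = [math.sin(2 * math.pi * 440 * n / sample_rate) for n in range(80)]
    print(am_modulate(audio)[:5])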
Due to the higher sound quality of FM over AM broadcast signals, most FM stations are used to broadcast music and most AM stations are used to broadcast talk radio. So now you know what the AM button on your radio means.
Another really neat fact about AM radio is that if you tune to a weak AM station during a thunderstorm, you can hear the lightning on the radio at nearly the same time as you see the flash, giving you an audio warning of the impending clap of thunder.

Thursday, May 30, 2019

Additive Manufacturing


Photo by 3DPrinthuset (Denmark): The Building On Demand (BOD) printer developed by COBOD International (formerly known as 3DPrinthuset, now its sister company), CC BY-SA 4.0.
This week we are going to review some advanced manufacturing techniques. A majority of parts manufacturing is done with a process known as subtractive manufacturing. This is the process of taking a piece of metal, plastic, wood or other material and removing portions of it in a controlled fashion, resulting in a finished part. This method is used to make parts like gears, screws, engine blocks and crankshafts.

There has been a lot of talk in recent years about additive manufacturing techniques. This is where you start with nothing and build a part by adding material in a controlled fashion. The most popular additive manufacturing process is called 3-D printing.
The very first 3-D printers used chemical compounds that harden when exposed to ultraviolet (UV) radiation, the invisible light that causes sunburns. It was in 1987 that stereolithography (SL) techniques were developed to create acrylic components. A UV laser is aimed at the reactive resin, which instantly hardens. The object is then lowered layer by layer into the liquid, and the final part is pulled from the vat. This technique produces the highest level of detail in the parts, but it is the most complex process.
The second method of additive manufacturing is called jetting and is similar to how an inkjet printer works. The same reactive resin is now sprayed from a nozzle onto a surface and exposed to the UV light, hardening it, before another layer is sprayed. There are also similar methods using powder and spraying an adhesive to create the layers. This is heavily used in industrial manufacturing facilities.
Photo by Bre Pettis, Flickr: A MakerBot three-dimensional printer using PLA extrusion methods. CC BY 2.0.
A very similar method to jetting and SL is Selective Laser Sintering (SLS). A powdered material that can be rapidly fused by laser heat, such as polyamides and thermoplastic elastomers, is placed in thin layers on a metal surface. A powerful laser then sinters (fuses without fully melting) the powder into layers, forming a very durable object. This technique has been used in the manufacturing of custom hearing aids molded to fit an individual’s ear canal perfectly.
The final method among readily available 3-D printing technologies is definitely the cheapest and the most popular among home users. It is called extrusion printing. This method runs strands of PLA or ABS plastic through a temperature-controlled nozzle, which melts the plastic and creates the object layer by layer. The extrusion method has been used not only with plastics, but with concrete, metal, ceramics, and even food, such as chocolate.
Over the last two decades, Missouri University of Science and Technology has been involved in bleeding-edge research in the field of additive manufacturing. Among its claims to fame is the freeze-form extrusion machine. This machine uses extremely low temperatures (-16 to -40 degrees Celsius) to freeze ceramic pastes, consisting of boron and aluminum tri-oxide compounds, to form ceramic components capable of withstanding extreme heat greater than 2,400 degrees Celsius. It is a two-step process in which the part is first frozen together and then baked at a very high temperature to fuse the ceramics. Their latest research extends this process to fabricate titanium alloy components.
Additive manufacturing can be used today to make everything from key chains to houses, and even human skin grafts and organs. We have come a long way in manufacturing technologies over the last 30 years.


Thursday, May 23, 2019

The slow demise of an industry

It is a sad week in the High Performance Computing (HPC) industry as one of the best-known giants in the field is acquired. Hewlett Packard Enterprise (HPE) is in the process of closing a $1.3 billion deal to purchase supercomputer maker Cray. This is the second supercomputing company to be purchased by HPE within three years; HPE purchased Silicon Graphics Incorporated in 2017.
This acquisition brings the global total of true supercomputing manufacturers selling product in the U.S. down to three. There are other companies offering HPC platforms, but only HPE, IBM and Atos remain as manufacturers of truly integrated HPC platforms.
I know this raises a question: what is an integrated HPC platform? I will start by telling you a little history of HPC. In 1995, the biggest computers were very expensive mainframe systems manufactured by only a couple of companies. They could do the equivalent work of 10 to 12 normal computers, but cost two to three hundred times more. Out of the need for more powerful computing systems at a lower cost, a group of engineers developed a programming method that allowed computers to pass messages between one another. The Message Passing Interface (MPI) was born. I was among the team members who developed the standards behind that interface.
MPI brought about a whole new realm of computing called distributed computing. It is the backbone of almost every HPC system, including some aspects of modern cloud computing. MPI allowed smaller research organizations with a low budget to take even their used desktop computer systems that were being pulled out of service and link them together to create massive computing platforms. They were termed Beowulf clusters, named after the first one created by Thomas Sterling and Donald Becker at NASA in 1994.
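As a taste of what message passing looks like from the programmer’s side, here is a minimal sketch using mpi4py, the Python binding for MPI; the file name hello_mpi.py and the toy partial-sum workload are just examples of my own.

    # A minimal message-passing sketch using mpi4py.
    # Run (if mpi4py and an MPI runtime are installed) with:
    #   mpiexec -n 4 python hello_mpi.py
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()        # this process's ID within the job
    size = comm.Get_size()        # how many processes were launched

    # Each process computes a partial sum, and rank 0 gathers the total.
    partial = sum(range(rank * 1000, (rank + 1) * 1000))
    total = comm.reduce(partial, op=MPI.SUM, root=0)

    if rank == 0:
        print(f"{size} processes cooperated; total = {total}")

The same pattern, spread across hundreds or thousands of machines, is what turns a pile of ordinary computers into a Beowulf cluster.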
This research sparked a brand new industry, and a few short years ago there were hundreds of HPC manufacturing companies. Over the years, a few big users of HPC felt the need for more robust systems than the hodge-podge, stick-it-together-with-duct-tape-and-network-switches type of systems. This birthed companies like Cray and SGI. IBM and Atos/Bull had been around manufacturing expensive mainframes before entering the HPC market.
The thing that sets the likes of HPE, IBM, SGI, Cray and Atos apart is that they build these Beowulf-style clusters as integrated racks, including customized computer motherboards, cooling methods, and interconnects, in order to shrink the size and power utilization, effectively reducing the total cost of ownership of these building-sized computing platforms. The other manufacturers stuck to racking commodity server hardware and manually cabling the networks and infrastructure. The initial cost of those systems is lower, but they cost more to maintain over time.
It saddens me to see us down to three companies working on the bleeding edge of HPC hardware because, much like any other industry, as the competition dies off or merges, the prices increase and the innovation decreases. I miss the days when every researcher in computer science was building their own HPC systems and we all worked together to solve the problems and innovate new solutions. Quantum Computing will probably save the day with innovation, but that’s a topic for another day.