Thursday, September 19, 2019

High Performance Computing


High Performance Computing (HPC) consists of two main types of computing platforms. There are shared memory platforms that run a single operating system and act as a single computer, where each processor in the system has access to all of the memory. The largest of these available on the market today are Atos’ BullSequana S1600 and HPE’s Superdome, which max out at 16 processor sockets and 24TB of memory. Coming out later this year, the BullSequana S3200 will supply 32 processor sockets and 48TB of memory.

The other type of HPC platform is called a distributed memory system; it links multiple computers together with a software stack that allows the separate systems to pass data from one to another through a message passing library. These systems first came about to replace expensive shared memory systems with commodity computer hardware, like your laptop or desktop computer. Standards for sharing data through message passing were first developed about three decades ago and formed a new computing industry. These systems made a shift from commodity hardware to specialized platforms about twenty years ago, with companies like Cray, Bull (now a part of Atos), and SGI leading the pack.
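
For readers who like to see what message passing looks like in practice, here is a rough sketch in Python using the mpi4py bindings to an MPI library. It only illustrates the basic send-and-receive idea; the file name and the data being passed are made up for the example.

# Minimal message-passing sketch using the mpi4py bindings to MPI.
# Launch with something like: mpirun -n 2 python ping.py
from mpi4py import MPI

comm = MPI.COMM_WORLD          # every process started by mpirun joins this group
rank = comm.Get_rank()         # this process's ID within the group

if rank == 0:
    # Process 0 sends a chunk of data to process 1 over the interconnect.
    payload = {"step": 1, "values": [3.14, 2.72]}
    comm.send(payload, dest=1, tag=0)
    print("rank 0 sent", payload)
elif rank == 1:
    # Process 1 waits for the message; in a distributed memory cluster the
    # two processes may be running on different physical machines.
    data = comm.recv(source=0, tag=0)
    print("rank 1 received", data)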

Today the main specialized hardware manufacturers are Atos, with their Direct Liquid Cooled Sequana XH2000, and HPE, with a combination of technologies from its acquisitions of both SGI and Cray in the last few years. It is unclear in the industry which product lines will be kept through the mergers. HPC used to be purely a scientific research platform used by the likes of NASA and university research programs, but in recent years it has made a shift to being used in nearly every industry, from movies, fashion and toiletries to planes, trains, and automobiles.

The newest use cases for HPC today are in data analytics, machine learning, and artificial intelligence. However, I would say the leading use case for HPC worldwide is still in the fields of computational chemistry and computational fluid dynamics. Computational fluid dynamics studies how fluids or materials move or flow through mechanical systems. It is used to model things like how a detergent pours out of the detergent bottle, how a diaper absorbs liquids, and how air flows through a jet turbine for optimal performance. Computational chemistry uses computers to simulate molecular structures and chemical reaction processes, effectively replacing large chemical testing laboratories with large computing platforms.

The two latest innovations causing a shift in HPC are cloud computing, offered through services such as Google Cloud, Amazon Web Services, and Microsoft Azure, and quantum computing, the next generation of computers, which is still under development and not likely to be readily available for ten years or more.

If you are interested in learning more about HPC, there are a couple of great places to start. The University of Oklahoma hosts a conference that is free and open to the public every September. This year the event is being held on Sept. 24 and 25; more information about the event can be found at http://www.oscer.ou.edu/Symposium2019/agenda.html. There is also professional training available from the Linux Cluster Institute (http://www.linuxclustersinstitute.org). Scott Hamilton, Senior Expert in HPC at Atos, has also published a book on the Message Passing Interface designed for beginners in the field. It is available on Amazon through his author page (http://amazon.com/author/techshepherd). There are also several free resources online; just search for HPC or MPI.

Thursday, September 12, 2019

Big Data and data structures


There is a recent term in computer science, “Big Data,” which has a very loose definition and causes a lot of confusion within the industry. Big Data has been in existence since before computers existed. A perfect example of Big Data is ancient history that was recorded on scrolls. A scroll could hold only so much information before it was full. A single scroll was not too much to handle and carry around, but the amount of recorded data quickly expanded to hundreds and thousands of scrolls. This is what we refer to as Big Data, and we are still trying to come up with a solution to handle data that grows too large to be managed easily.

Most companies have an entire division dedicated to the management of data, and they have a big issue to face, as we produce and gather more data in a single day than was collected and gathered for all of history prior to the computer age. Big Data is not a problem that will just go away, and one of the ways we have begun to manage this landslide of data is to form tighter data structures.

I know, I have now introduced another new term to talk about an old problem. Don’t worry, data structures are easy. Remember the scroll: it had a linear data structure. Things were recorded on the scroll in the order that they happened and stored as characters of a written language. This is a very loose structure that is usually referred to as unstructured data, because you can write anything on a scroll. To have real data structure, you need a set format for recording the data. A great example of a data structure you have all seen is your federal income tax return form. It provides a set number of blocks to record your information, and the form is rejected if you go outside of the boundaries. This is a data structure in paper format.

So how do data structures help to manage Big Data? The biggest way is by keeping the data in a known order, with a known size and known fields. For example, you might want to keep an address book; it would have all your friends’ names, addresses, phone numbers, and birthdays. What if you just started writing your friends’ information on a blank sheet of paper in a random order?

Bill, 9/1/73, 123 Main Street, Smith, MO, Licking, 65462, John Licking, Stevens, MO, 4/23/85, 573-414-5555, 65462, 573-341-5565, 123 Cedar Street. 

It would quickly become impossible to find anyone’s contact information in your address book, and even with the two friends in my example, you already have a Big Data problem: we don’t know what information belongs together.

If we take the same two people and provide a structure for the data, it suddenly becomes much more usable. 

Bill Smith, 123 Main Street, Licking, MO 65462, 573-414-5555, 9/1/73; John Stevens, 123 Cedar Street, Licking, MO 65462, 573-341-5565, 4/23/85.

It is still not easily readable by a computer: even though there is now a known order and a field separator, the comma, there is no known length, which complicates things for computer software. A computer likes to store data structures of a known length, so you need to define a size for each data field and a character to represent empty space. In my example we will use 15 characters for every field, and ^ will represent an empty space.

Our address book now looks like this:

Bill Smith^^^^^
123 Main Street
Licking,_MO^^^^
65462^^^^^^^^^^
573-414-5555^^^
09/01/1973^^^^^
John Stevens^^^
123 Cedar Stree
Licking,_MO^^^^
65462^^^^^^^^^^
573-341-5565^^^
04/23/1985^^^^^
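
For the curious, here is a rough Python sketch of how software might pack and unpack records like these. The 15-character width and the field names simply follow the example above; they are not any official format.

# Sketch of 15-character fixed-width records, mirroring the address book
# above ('^' marks unused space, and values that are too long are truncated).
FIELDS = ["name", "street", "city_state", "zip", "phone", "birthday"]
WIDTH = 15

def pack_record(values):
    # Pad or truncate every field to exactly WIDTH characters.
    return "".join(v[:WIDTH].ljust(WIDTH, "^") for v in values)

def unpack_record(record):
    # Because every field has a known size, field number i always starts
    # at character i * WIDTH; no searching for separators is needed.
    return {name: record[i * WIDTH:(i + 1) * WIDTH].rstrip("^")
            for i, name in enumerate(FIELDS)}

bill = pack_record(["Bill Smith", "123 Main Street", "Licking, MO",
                    "65462", "573-414-5555", "09/01/1973"])
print(bill)
print(unpack_record(bill))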


Thursday, September 5, 2019

Blockchain outstanding questions


Last week we left a few unanswered questions while talking about blockchain technologies. I would like to address those unanswered questions this week. The first one that comes to mind is the problem of double spending. Double spending can occur with a digital currency or any digital payment processing system unless all payments are authorized by a single, central authority. Blockchain does not have a single central authority, so in the early stages of development the main problem that had to be addressed was the ability for the same currency to be used in two transactions simultaneously, resulting in a double spend.

The double spend problem was solved by allowing only a single path in the chain and making each link depend on the hash code of the prior link. If two attempts to spend from the same chain occurred at exactly the same time, only one transaction would be processed; the other would have an invalid hash code linking to the previous transaction and be ruled invalid. This solution raised the second question, “What is a hash?”

A hash is created by a computational function called a hash function. A hash function maps data of any size to a fixed-size value. There are three basic rules for a hash function. The first is that each time you encode, or hash, the same data, you get the same result. The second is that small changes, even a single character change in the data, must result in a different hash. The third is that a hash function cannot be reversed, meaning that you cannot use the hash to recreate the original data. Two pieces of data with a large difference can result in the same hash. An example of a trivial hash function is one that maps names to a two-digit number: John Smith is 02, Lisa Smith is 01, Sam Doe is 04, and Sandra Dee is also 02. We won’t get into exactly how the mapping takes place, because it is a very advanced topic. All we need to know to understand how a hash function works is that it maps input data to a given set of specific values, like the example that maps names to numbers between 00 and 99.
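
For readers who want to see those rules in action, here is a tiny Python sketch using the standard hashlib library. SHA-256 is just one common hash function, chosen here only for illustration.

import hashlib

def sha256_hex(text):
    # Map data of any size to a fixed-size (256-bit) value.
    return hashlib.sha256(text.encode()).hexdigest()

print(sha256_hex("John Smith"))   # always the same 64 hex characters
print(sha256_hex("John Smyth"))   # one letter changed: a completely different hash
print(sha256_hex("A" * 10000))    # a huge input still produces 64 hex characters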

We mentioned that the hash function is used to tie the links in the blockchain together. The links form a Merkle tree. In cryptography and computer science, a Merkle tree is a data structure that links data in a single direction, from leaf to parent. In a Merkle tree, each leaf node is labeled with the hash of the data it contains, and every parent node is labeled with the cryptographic hash of the labels of its child nodes. Merkle trees allow for efficient and secure verification of the contents of large data structures, like the transactional databases used in blockchains.
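
Here is a simplified Python sketch of the idea. It is not the exact scheme any particular blockchain uses (for one thing, it simply duplicates the last node when a level has an odd count), but it shows how one small root hash can stand in for a whole list of transactions.

import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    # Label each leaf with the hash of the data it contains.
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2 == 1:        # odd count: duplicate the last node
            level.append(level[-1])
        # Each parent is labeled with the hash of its two children's labels.
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

transactions = [b"Alice pays Bob 5", b"Bob pays Carol 2", b"Carol pays Dave 1"]
print(merkle_root(transactions).hex())
# Changing any single transaction changes the root, which is how the contents
# of a large block can be verified against one small hash.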

Merkle trees are named for Ralph Merkle, who patented the technology in 1979. They are used to verify stored data that is handled and transferred in and between computers. They ensure that data blocks are not changed by other peers in a peer-to-peer network, either by accidental corruption or by fake blocks created by malicious systems on the network. This makes it difficult, but not impossible, to introduce fake data into a blockchain, as it merely requires creating a data block that matches the hash of the block you are replacing, in effect corrupting the tree. However, generating the fake data block is a time-consuming process and is unlikely to be completed by the time the next real block is generated, making it nearly impossible to inject your change.

Join me again next week for an overview of data structures and their applications.


Thursday, August 29, 2019

Blockchain


Anyone who has watched any technology channel news or followed any technology blog in the last few years has probably heard about blockchain. It has been touted as everything from the future of computing to the formation of a global currency and payment method. None of us know what the future of any technology holds, but blockchain has a very promising future.

So what is blockchain? Blockchain is a growing list of related records, called blocks, that are linked using cryptography. Each block contains a cryptographic hash of the previous block, a timestamp and transaction data. A hash is an abbreviated version of the actual data; it represents the data but cannot be reversed to reveal the data. This makes it possible for a block to verify the previous block by the matching hash, without knowing the data stored in the previous block. The timestamp works to track the transaction in time and prevents the block from being modified once written, as any change to the data or the timestamp will change the hash, breaking the chain. You can think of a blockchain loosely like a notebook with a carbon copy of the previous page overlaid at the top of the current page, where a page is only a valid part of the book if it matches the previous page.
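
For the technically inclined, here is a toy sketch in Python of that linking idea. It is only an illustration, not any real blockchain implementation, and the transactions are invented.

import hashlib, time

def block_hash(prev_hash, timestamp, data):
    # A block's hash covers the previous block's hash, the timestamp, and the data.
    return hashlib.sha256(f"{prev_hash}|{timestamp}|{data}".encode()).hexdigest()

def add_block(chain, data):
    prev = chain[-1]["hash"] if chain else "0" * 64   # the first block has no parent
    ts = time.time()
    chain.append({"prev": prev, "time": ts, "data": data,
                  "hash": block_hash(prev, ts, data)})

def is_valid(chain):
    # Recompute every hash; any edit to data, timestamps, or order breaks a link.
    for i, block in enumerate(chain):
        expected_prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev"] != expected_prev:
            return False
        if block["hash"] != block_hash(block["prev"], block["time"], block["data"]):
            return False
    return True

chain = []
add_block(chain, "Alice pays Bob 5")
add_block(chain, "Bob pays Carol 2")
print(is_valid(chain))                     # True
chain[0]["data"] = "Alice pays Bob 500"    # tamper with an earlier block
print(is_valid(chain))                     # False: the stored hash no longer matches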

The high resistance to changing the data makes it a great tool for representing an open ledger that can be widely distributed across multiple systems. Each system in this network can independently create new blocks in the chain, linked to a previous block, and the blocks are all shared across the network to every other system. Once a block is added to the chain, it cannot be modified without altering all the subsequent blocks in the chain, and to modify those subsequent blocks, every system in the network must agree to the change. This allows very large groups of people to access the ledger and track transactions without the worry that someone else in the network can modify the data.

Blockchain was invented by an anonymous person or group of people under the pen name Satoshi Nakamoto in 2008 to serve as the transactional ledger for bitcoin. This allowed bitcoin to become the first electronic currency to solve the problem of double-spending without the need for a trusted authority or central server. The first work on a secure chain of data blocks was described in 1991 by Stuart Haber and W. Scott Stornetta. Their work was designed to prevent timestamps of documents from being modified after creation. In 1992, Bayer, Haber and Stornetta incorporated Merkle trees into the design, improving the efficiency by allowing several document certificates to be collected into a single block. Nakamoto improved their design by using a hash function to link the blocks without requiring new blocks to be signed by an authority. This modification allowed blocks to be added by any source and still be considered trusted because they could not be altered.

The words block and chain were used separately in Nakamoto’s original paper but were popularized as a single term in early 2016. What makes blockchain such a powerful tool is that it creates a digital ledger that does not require a central server, can be made entirely public and is shared across many computers. Any record involved cannot be altered retroactively without altering all the subsequent transactions. This allows any participant in the chain to verify and audit transactions independently and inexpensively. The mass collaboration between the peers within the system creates a robust workflow by its very nature. A blockchain cannot be independently reproduced, enforcing that each unit of value can be transferred only once, solving a long-standing problem of double-spending in digital currencies.

Blockchain is another technology like cryptography that will be impacted by quantum computers, but blockchain, unlike cryptography, will be impacted in a positive manner. Quantum computing will improve the speed of blockchain transactions but will have no impact on the security of the chain.
I realize that there were many technical terms used in this article that may be unfamiliar to you, so in the coming weeks I will cover hashes and Merkle trees in more detail. For now, it is enough to state that they are methods of linking, storing and verifying data heavily used in blockchain technologies.

Thursday, August 22, 2019

Quantum Computing and what it means for you


Do you use online banking? Buy things on Amazon? Or post stories on Facebook? If so, then Quantum Computing can have a major impact on how you do things on the internet. To explain how, we must start with understanding internet security.

Have you ever noticed that some websites, like your banking site for example, start with https and show a little lock next to the address bar, while others start with http and do not have the lock? The https and the lock mean that it is a secure site and that any information you enter on a form is encrypted so that it can only be read by the official owners of the site. This works through a process of encryption using the Secure Sockets Layer (SSL).

Current encryption techniques work off the precept that factoring numbers, especially the product of two extremely large prime numbers, is an extremely hard problem for a computer to solve. Modern encryption techniques multiply two large prime numbers together to form a 2048-bit key that establishes the encryption scheme. The client and server each know the secret prime factors, but no one else listening on the line knows either prime.

Factoring such a large number would take a nearly infinite time with current computer technology; even with the most powerful computers it would take longer than the current age of the universe to recover the primes. And even if someone eventually managed it, your bank transaction information would be useless long before they finished.
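
Here is a small Python sketch of just how lopsided that problem is. The primes used are tiny toys compared to the primes of hundreds of digits used in real keys.

# Multiplying two primes is instant; recovering them by trial division is not.
# These primes are tiny toys; real keys use primes hundreds of digits long.
p, q = 1000003, 1000033
n = p * q                        # the easy direction: one multiplication

def factor(n):
    d = 2
    while d * d <= n:            # worst case, about sqrt(n) divisions
        if n % d == 0:
            return d, n // d
        d += 1
    return n, 1

print(factor(n))                 # quick here only because the primes are tiny;
                                 # the work grows exponentially with the number of
                                 # digits, which is what keeps encryption safe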

This is where quantum computers come into play. A quantum computer works off the concept of a qubit. A qubit can be thought of as a coin flipping in the air: we do not know whether it is heads or tails until we catch it and look, and it has equal probability of being either one before it lands. This state is known in the quantum computing world as superposition. Another power of a qubit is called entanglement. Entanglement is where two qubits exactly mirror each other; regardless of what happens, they are always in either equal and opposite states or exactly the same state, depending on how they became entangled.

These two properties of qubits allow a quantum computer to explore all the possible outcomes for a given input at once. As a result, it can factor very large numbers very quickly, in effect breaking modern cryptography. This is bad news for your bank account, but the good news is that it requires a large, fault-tolerant quantum computer.

Today we have small, noisy, intermediate-scale quantum (NISQ) computers. NISQ systems have relatively small qubit counts of less than 50 qubits. The qubits in these systems are noisy and error prone, creating problems with getting successful results for tasks like breaking encryption, and they are far too small; at best they could attack numbers of around 50 bits, nowhere near the 2048-bit keys used today.

Current predictions are that we are between 10 and 20 years away from having a fault-tolerant quantum computer with the thousands of stable qubits needed to break modern encryption, and the banking industry is already working on quantum-safe encryption techniques in preparation for the future. So there is no real need to worry for at least a decade.

Thursday, August 15, 2019

Next generation computing


Over the last few weeks we have discussed several computing laws that have driven computing development over the past decades. We noticed that many of them have slowed down and some of them are dead. This means that our current development trends to build faster, more efficient computers using classic methods have come to an end. So what’s next?

There are several new technologies being developed to overcome the speed barriers we have hit with classical computing. One of those technologies is Quantum Computing, which will be the focus of this week’s article.

Quantum Computing is not as new as people may believe. It was already a theoretical concept around 1980, when Richard Feynman and Yuri Manin expressed the idea that a quantum computer could simulate things that a classical computer could not. A great example of this is the simulation of molecular structures. Molecules increase in complexity exponentially as the number of electrons in the molecule increases, making it nearly impossible to simulate a large molecule entirely on today’s largest computing platforms.

So how is this possible? To understand the power of a quantum computer, first we need to understand a little about how classical computers work. So here it goes. A classical computer stores information in bits. You can think of a bit as a tiny on/off switch; it is always either on or off, and cannot get stuck somewhere in between. Each additional switch increases the amount of information a computer can process. Modern computers have a 64-bit address space, which allows them to store and process single chunks of information as large as 64 binary digits. That is a number large enough to count seconds for 292 billion years. It seems like a lot, but we are currently reaching the computational limits of these systems.
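
Here is the quick arithmetic behind that claim, assuming a signed 64-bit integer (63 usable bits plus a sign bit):

seconds = 2 ** 63                      # largest count a signed 64-bit integer can hold
seconds_per_year = 365.25 * 24 * 3600
print(seconds / seconds_per_year)      # about 2.9e11, roughly 292 billion years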

A quantum computer works in a very different way. It utilizes the strange properties of quantum physics to store and manipulate data. There are two properties of quantum bits, or qubits, that make them ultrapowerful for certain computing problems. The first is superposition, which basically means that a qubit can represent a one, a zero, or an infinite number of states in between one and zero; it holds a probability distribution of how likely it is to be a one. The second is entanglement, which allows two qubits to become linked in such a manner that a modification to one qubit affects the other.

So what exactly does this mean? Well, a classical computer can only hold a single state, so although it can represent and compute numbers as large as 64 bits, it can only represent one number at a time. The power of superposition allows a quantum computer to hold all the states simultaneously. As an example, the first home computers were 8-bit systems. They could represent 256 possible values and would hold, at most, three values simultaneously. An 8-qubit quantum computer can hold all 256 values simultaneously and do operations on all of them in a single operation; in that sense it is 256 times more powerful. But it is important to remember that a quantum computer is not really storing all the states, but rather the probabilities of each state occurring. This means it can actually give a wrong answer, because it has no definite state; it runs entirely on the probability of being in a particular state.
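
For readers who like to see the bookkeeping, here is a rough Python sketch of what simulating an 8-qubit register on an ordinary computer involves. It is only an illustration of the storage cost and the probabilities, not a model of real quantum hardware.

import numpy as np

n_qubits = 8
n_states = 2 ** n_qubits                 # 8 qubits span 256 basis states

# A classical 8-bit register holds exactly one of these 256 values at a time.
# A simulated 8-qubit register carries an amplitude for every one of them.
# Here we build an equal superposition, so every outcome is equally likely.
amplitudes = np.full(n_states, 1 / np.sqrt(n_states), dtype=complex)

probabilities = np.abs(amplitudes) ** 2  # chance of seeing each outcome when measured
print(n_states, probabilities.sum())     # 256 amplitudes; the probabilities sum to 1.0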

The most powerful algorithm proven to work on quantum computers today, provided we overcome the stability issues, is Shor’s Algorithm, which is designed to factor very large numbers into their prime factors. This algorithm would allow someone with access to a large enough quantum computer to break modern cryptography schemes, making current network security impossible to maintain. Post-quantum cryptography is becoming a major field of study, aimed at figuring out how to make cryptography quantum-proof before the processors become powerful enough to break today’s encryption. Next week we will talk a little about cryptography and what it means to you.


Thursday, August 8, 2019

The Circle of Life


Three weeks ago, I began a series on computing laws and promised I would propose a law of my own in the final article of the series. The time has come to introduce Hamilton’s Law, “The Circle of Life,” which is meant to show my observation of the cyclical nature of computing platforms and how we have completed what I believe to be the first of many cycles in computer development.
The first computers known in history were probably better classified as memory devices than as calculation devices. The first known was the abacus, which was used primarily for counting and tracking large numbers without having to record them on paper; it could store a single number. The next early computer was the slide rule, which was basically a method of packing a large table of lookup values, like sines and cosines, square roots, and other hard-to-calculate values, into a small form factor. The calculations were still primarily done by people.
In the 1930s, computers were people hired to do computations on paper; usually several “computers” worked the same complex problem and the results were compared to check for accuracy. In the 1940s we began to build very large vacuum tube-based computers the size of houses that could only handle a few bits of information (30 bits), which meant they could only effectively handle a single 10-digit number at a time.
The 1950s brought about the first large-scale universal computer. Universal, meaning it was not limited to simple arithmetic and logic functions but could solve much more complex problems. The Univac was a vacuum tube-based computer roughly 1,000 times more powerful than the code-breaking computers of the 1940s. It could handle calculation and storage of up to 1,000 12-digit numbers.
The 1960s brought about the first transistor-based computers, which allowed the size to go from that of a building to that of a refrigerator. We were still far from a portable device, or even a home computer, but we were getting closer. The first transistor-based computers were once again smaller and more powerful by about 1,000 times over the prior decade.
The 1970s brought with it a lot of exciting things for the general public. Before the 1970s only government organizations and large academic institutions had access to computers. Around 1970 the internet was born, in the form of the ARPANET, allowing these institutional computers to communicate with each other. Methods of putting multiple transistors into a single device, called an integrated circuit (IC), came into production, and computers got even smaller. By the mid 1970s you could buy an IC-based desk calculator that could do everything the computers of the 1930s could do and then some. These desk calculators could be carried in one hand but used too much power to be truly portable and needed to be plugged in to operate.
The 1980s brought the first true computers into the home. It was the era of the home computer. These machines exceeded the computing power of the Univac and were small enough to be placed on or under a desk. It was not until the 1990s that computers got both light enough and efficient enough to become portable. They were still the size of a briefcase but affordable enough that most middle-class families could buy one if they were interested. The World Wide Web was born in the 1990s, allowing people to share information openly from their computers.
The 2000s were the beginning of the portable computing era. Computers were finally small enough and efficient enough to be carried in one hand, or even a pocket. They were battery powered and could last a few hours without a charge. Wireless networks were coming about, allowing us to utilize the web without being connected to a wire. Yet there was more to come.
The 2010s have been the era of ultraportable computers. The iPhone, tablets, Apple Watches, Fitbits, and other wearable computers came out. I remember in the early ’80s talking about how someday we would have computers everywhere, but I never imagined computers fitting in a watch.
Looking to the future is where we see the cycle begin again; over the last few years the technology to develop quantum computers has taken hold, and the 2020s will be the decade of the quantum computer. The leading quantum computers have around 30 qubits, about the same as the 30-bit systems of the 1940s. They also weigh nearly the same and take up nearly the same amount of space. We hope that the quantum computing cycle will move faster than the digital cycle discussed above, or we will be waiting until the year 2100 for portable quantum computers. We have traveled full circle back to early technology; granted, these quantum computing systems are infinitely more capable than the current digital systems, just as the current digital systems are infinitely more capable than the early analog systems. We are just beginning with the technology to build them. Look forward to next week, when I talk about quantum computing and what it means for you.

Thursday, August 1, 2019

Computing Laws Part 2


Last week we touched on a few of the laws governing algorithm performance on computers. These laws talked a lot about the nature of computers, how they work and communicate with each other, and the impact that this interaction has on the performance of software. This week we are going to talk about a few of the laws that govern what is possible in the development of computer hardware.
The most well-known of these laws is Moore’s Law, which has recently reached the end of its usefulness. In 1965, Gordon Moore, who went on to co-found Intel, observed that the density of transistors on an integrated circuit doubled about every two years. There are several arguments that its usefulness with regard to computational power ended in 2001, but Intel, as a leader in the processor industry, argues that it is still in effect and improving computing hardware today. For a couple of reasons, I am in the group that believes Moore’s Law died in 2001. The first is that in 2001 we produced the fastest single-core processor ever created, and we have not gotten any faster; in fact, single-core speeds are now about half of what they were then. The second is that starting in 2001, the speed benefit of smaller transistors went away; as they got smaller they also got slower, so Intel, as well as other processor manufacturers, started putting more cores on a single chip. We now have processors with 40 or more cores on a chip, but the speed is around half the speed of those single-core chips. It is generally agreed that Moore’s Law no longer applies, as transistors have reached sizes nearing that of a single atom and can no longer get smaller, so Moore’s Law is in effect dead, but still worthy of mention.
Koomey’s Law is very similar to Moore’s Law but relates to the energy consumption of a processor. Jonathan Koomey observed that as transistors got smaller, they became more energy efficient, with efficiency doubling on average every 1.5 years; as Moore’s Law slowed, however, the gains fell off, and efficiency currently doubles only every 2.6 years.
Dennard’s Law, usually referred to as Dennard scaling, was an observation made in 1974 by Robert H. Dennard and his colleagues. He was attempting to explain how processor manufacturers were able to increase the clock frequency, and thus the speed of the processor, without significantly increasing the power consumption. Dennard basically discovered that, “As transistors get smaller, their power density stays constant, so that the power use stays in proportion with area; both voltage and current scale downward with the length of the transistor.” Dennard scaling broke down around 2005-2006 because it ignored some key factors in the overall performance of the transistor. These factors include the leakage current, which is the amount of current lost across the gate of the transistor, and the threshold voltage, which is the minimum voltage necessary to open the transistor gate. Together these establish a minimum power consumption per transistor, regardless of its size.
Rock’s Law is an economic observation related to the cost of processor manufacturing, and probably one of the main reasons Moore’s Law scaled over years rather than months. Arthur Rock, an investor in many early processor companies, observed that the cost of a semiconductor fabrication plant doubles every four years. Rock’s Law is also known as Moore’s second law and is the economic flip side of Moore’s Law.
The last law I will talk about on computer processor design is Bell’s Law. In 1972, Gordon Bell observed that over time, low-cost, general-purpose computing architectures evolve, become mainstream, and eventually die out. You can see this with your cell phone; most people replace their cell phones every two years. However, his law was dealing with larger scale systems. For example, roughly every decade a new class of computers results in new usage and establishes a new industry: 1960s mainframes, 1970s minicomputers, 1980s personal computers, 1990s the worldwide web, 2000s cloud computing, and 2010s handheld devices and wireless networks. Predictions indicate that the 2020s will be the decade of quantum computing.

Death of an Interconnect

I was interviewed yesterday about my reaction to the news that Intel is discontinuing development of the next generation of Omni-Path. I was shocked to hear that they canceled the project, but like many others over the years, they lost to market leader Mellanox. Every company I have seen attempt to compete in the high-speed interconnect market has folded within five years, so I was not entirely surprised. To read the full article, including my comments, see the link below.
Intel Kills 2nd-Gen Omni-Path Interconnect For HPC, AI Workloads

Thursday, July 25, 2019

Computing Laws Part 1


Over my many years in the computing industry, I have been introduced to several “laws” of computing that seem to fight against computing ever reaching its maximum potential. Unless you have a degree in computer science or have taken at least some introductory computer courses, you have probably never heard of any of these “laws” of computing.
First, I would like to say that most of them are not really laws, like the law of gravity for instance, but more like observations of phenomena that seem to guide the computing industry. Most of them relate primarily to computing hardware trends, parallel software performance and computing system efficiency. Over the next couple of weeks I will talk about several of these laws, and introduce a law of my own in the final week.
The first of these laws is Amdahl’s Law. Gene Amdahl observed a phenomenon in parallel computing that allowed him to predict the theoretical speedup of an application when using multiple processors. He summarized the law like this: “The speedup of a program using multiple processors is limited by the time needed for the sequential fraction of the program.” In other words, every computer program, just like every task you complete in a given day, has a serial part that can only be done in a particular order and cannot be split up. A great example: if you are mowing a lawn, you have to first fill the lawn mower with gas and start the mower before you can mow the lawn. If you have a bunch of friends with a bunch of lawn mowers, you still cannot get any faster than the amount of time it takes to fill the lawnmowers with gas. The mowing can be done in parallel up to the number of friends with mowers, but you still have the limiting factor of the sequential step. It is obvious from Amdahl’s Law that even with no limit on the number of processors that can work on a given task, you still hit a limit of maximum performance.
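
Amdahl’s prediction fits in a few lines of Python; the 5 percent serial fraction below is just an invented number standing in for the gas-and-start step.

def amdahl_speedup(parallel_fraction, n_workers):
    # Speedup = 1 / (serial part + parallel part spread across n workers)
    return 1.0 / ((1 - parallel_fraction) + parallel_fraction / n_workers)

# Suppose filling and starting the mowers is 5 percent of the whole job.
for friends in (1, 4, 16, 1000):
    print(friends, round(amdahl_speedup(0.95, friends), 1))
# Even with 1,000 mowers the speedup stalls below 1 / 0.05 = 20x.
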
A law related to Amdahl’s Law is Gunther’s Law, also known as the Universal Scalability Law. In 1993 Neil Gunther observed and presented a conceptual framework for understanding and evaluating the maximum scalability of a computing system. He understood that for any given task, even one that can be done entirely in parallel, like many friends helping mow your lawn, you still reach a limit at which adding more friends can actually cause mowing your lawn to take longer. The same thing happens in computer algorithms. They all have a sweet spot where the number of processes running balances out with the size of the problem and the hardware available to give the best performance for the program. I have always liked to think of Gunther’s Law as the law of too many friends: if your friends and their mowers cover your entire lawn, they get in each other’s way and your lawn never gets mowed.
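
Here is a rough sketch of Gunther’s formula in Python. The two coefficients, one for waiting on shared resources and one for workers coordinating with each other, are invented values chosen only to show the rise-and-fall shape.

def usl_capacity(n, alpha=0.05, beta=0.002):
    # Universal Scalability Law: relative throughput with n workers.
    # alpha models waiting on shared resources; beta models workers
    # coordinating with each other (friends getting in each other's way).
    return n / (1 + alpha * (n - 1) + beta * n * (n - 1))

for n in (1, 8, 16, 32, 64, 128):
    print(n, round(usl_capacity(n), 1))
# Throughput rises, peaks near the sweet spot, then falls: past that point,
# adding more friends makes the lawn take longer.
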
In 1988 John L. Gustafson and Edwin Barsis showed, in the article “Reevaluating Amdahl’s Law,” that you can retain scalability by increasing the problem size. In other words, you can overcome the impact of the serial part of a problem by making the problem significantly larger, minimizing the impact of the serial component, and along the same lines improve upon the maximum performance predicted by Gunther’s Law. If you have many friends with many lawn mowers and a small lawn, you are limited to scaling to only a small number of friends; but if you increase the size of the lawn, the serial task of filling the mowers with gas and starting them becomes significantly smaller than the task of mowing the lawn. You are also able to give all of your friends a significant amount of lawn to mow, avoiding the problems caused by having too many mowers.
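
And here is the matching sketch of Gustafson’s scaled speedup, again with an invented 5 percent serial fraction.

def gustafson_speedup(n_workers, serial_fraction):
    # Scaled speedup: grow the problem with the worker count so the serial
    # part (filling the mowers) stays a small, fixed slice of the run time.
    return n_workers - serial_fraction * (n_workers - 1)

for mowers in (4, 16, 64):
    print(mowers, gustafson_speedup(mowers, 0.05))
# With a big enough lawn, 64 mowers still deliver roughly 60 times the work.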

50th anniversary of lunar landing


In honor of the 50th anniversary of the lunar landing, I am dedicating this week’s column to rerunning the local news items covering the mission. I was disappointed to find that most of our area newspapers ran very little about the missions. I was able to find one article in the archives here at The Licking News.
This article came out of Huntsville, Ala. and appears to be a press release sent to the hometown of people connected with the Apollo 11 missions. I found it interesting to learn that a Licking High School graduate was among the engineers that helped to design the Saturn V rocket that powered the Apollo series of spacecraft.
The headline read, “Connected With Apollo 11 Mission” in the July 17, 1969 edition of The Licking News. The content of the article follows.
HUNTSVILLE, ALA. – Donald E. Routh son of A. C. Routh of R. R., Licking, Mo., is a member of the organization that has played a major role in the Apollo 11 lunar landing mission.
He is an aerospace engineer in the National Aeronautics and Space Administration’s Marshall Space Flight Center, Huntsville, Ala.
The huge Saturn V rocket that lifted Apollo 11 from earth was developed under the direction of the Marshall Center, NASA’s largest organization.
Routh, a graduate of Licking High School, received his B.S. of E.E. degree in 1960 from Washington University in St. Louis.
His wife, Marie, is the daughter of Mr. and Mrs. Howard Quick of R.R., Licking.
I had really hoped for more to share from the local archives, but it seems that only major city newspapers covered the event at any level of detail as it was very heavily covered by television and radio. 
One of the best archives I have been able to find came from the New York Daily News of July 21, 1969, in a story written by Mark Bloom:
“Two men landed on the moon today and for more than two hours walked its forbidding surface in mankind’s first exploration of an alien world.
In the most incredible adventure in human history, these men coolly established earth’s first outpost in the universe, sending back an amazing panorama of views to millions of awed TV viewers.”
It saddens me to realize that much of the history of the event has been lost due to the instability of the media used to store video archives and our lack of foresight as a nation to preserve this history in print. Coming from a technology guru, it might seem odd for me to state the importance of putting ink to paper. However, as can be clearly seen throughout history, it is the written word that survives the test of time.

Thursday, July 11, 2019

Apollo: giant leaps in technology


The Apollo lunar missions resulted in what I believe to be some of the giant leaps in technology over the last century. You might be surprised to find out all the everyday things that came out of landing man on the moon.
Gel pens, my favorite writing utensil, came out of the space program. The astronauts needed a way of recording events during the mission that would work in low gravity. The gel pens allowed the ink to flow in the low gravity of space; these pens are capable of writing on the ceiling for a reasonable period of time. Prior to the space program, pens only worked because gravity pulled the ink against the ball.
The materials used in the “Moon Boots” revolutionized athletic footwear. They improved shock absorption, provided more stability, and provided better motion control. Al Gross substituted DuPont’s Hytrel plastic for the foam used in the shoe’s midsole to eliminate the cushioning loss caused by body weight in the shoe. He also used the “blow-molding” techniques used to manufacture the space suits to improve the techniques used to manufacture shoes.  
The fabrics developed for the spacesuits were also used to create fabric roofs, like the one on Houston’s Reliant Stadium. The fabric is stronger and lighter than steel and weighs less than five ounces per square foot. It is translucent, flexible and reflective, reducing lighting, cooling and maintenance costs. This fabric is also used in temporary military structures. The fabric lasts up to 20 years and is a cheap alternative to steel and concrete structures. You will see many of these dome-like structures in use by the Missouri Department of Transportation to house the rock salt mixture used to treat our winter roads.
NASA, along with General Motors, developed technology for moving heavy equipment on cushions of air. Rolair Systems, Inc. commercialized on the technology and it is used today in stadiums around the world to move large equipment, stages, and even stadium seating. Hawaii’s Aloha Stadium uses this technology to rearrange the stadium, moving entire 7,000 seat sections.
After the 1967 Apollo 1 fire disaster, NASA needed to find ways to protect the flight crew in the event of a fire. Monsanto Company developed a chemically treated fire-proof fabric called Durette. Firefighters wear the same fabric and use the same style of air tanks to fight fires on Earth.
Along with these high-tech devices designed to protect and entertain, there were also many things invented to just make life easier such as cordless tools, Chlorine-free pools, heart monitors, Black & Decker’s Dustbuster, quartz clocks and watches, and precision medical syringes. The technologies developed by NASA during the Apollo missions crossed all boundaries of our lives. 
To me, the most surprising area to see an impact from the Apollo missions is agriculture. Many do not know that the NASA missions helped to develop feeding technologies for pig farmers. Roughly 15 to 25 percent of piglets die before they are weaned, usually as a result of accidental crushing by the sow. Farmatic, Inc. used NASA miniature electronic heaters to warm the body of a synthetic sow, which can be used to replace the mother in cases of over-sized litters, rejected piglets or physical disorders.

Thursday, July 4, 2019

Space travel pre-1969


This week, let’s look back at the rocketry technology that sent us to the moon on July 20, 1969. The times were different then, as nations all over the globe were racing to be first in the great space race, and people had been dreaming of reaching space since the turn of the twentieth century. The first realistic means of space travel was documented by Konstantin Tsiolkovsky in his famous work “The Exploration of Cosmic Space by Means of Reaction Devices,” published in 1903.
It was 16 years later (1919) when Robert H. Goddard published a paper, “A Method of Reaching Extreme Altitudes,” in which he applied the de Laval nozzle to liquid-fueled rockets, making interplanetary travel possible in principle. His paper influenced the key men in space flight, Hermann Oberth and Wernher von Braun.
The first rocket to reach space, putting Germany in the lead of the space race, was a German V-2 launched in June 1944. After the war, captured V-2 rockets were test-fired in the British “Operation Backfire,” and the Backfire report remains to this day the most extensive technical documentation of the V-2. This work prompted the British Interplanetary Society to propose the Megaroc, a V-2 modified for manned suborbital flight, with test pilot Eric Brown slated to fly it; the proposal was turned down in the late 1940s and the Megaroc was never built.
Over a decade later, true orbital space flight, both manned and unmanned, took place during the “Space Race,” a fierce Cold War competition between Russia and the United States. The race began with both nations announcing plans to launch artificial satellites for the International Geophysical Year of 1957-58. The U.S. announced a planned launch of Vanguard by spring 1958, and Russia claimed to be able to launch by the fall of 1957.
Russia won the first round with three successful missions: Sputnik 1 on October 4, 1957; Sputnik 2 on November 3, 1957, the first to carry a living animal, a dog named Laika; and Sputnik 3 on May 15, 1958, carrying a large array of geophysical research instruments.
The U.S., on the other hand, faced a series of failures until its successful mission with Explorer 1, the first U.S. satellite, on February 1, 1958. Explorer 1 carried instruments that detected the theorized Van Allen radiation belt. The shock over Sputnik 1 triggered the creation of the National Aeronautics and Space Administration (NASA) and gave it responsibility for the nation’s civilian space programs, beginning the race for the first man in space.
Unfortunately, the U.S. lost again on April 12, 1961, when Yuri Gagarin made a 108-minute single-orbit flight on board Vostok 1. Between this first flight and June 1963, the USSR launched a total of six Vostok missions, two pairs flying concurrently, resulting in some 260 orbits and just over 16 days in space.
The U.S. was falling further behind in the race to space. It had only one successful manned flight, by Alan Shepard on May 5, 1961, in the Freedom 7 capsule; Shepard reached space, but his flight was only sub-orbital. It was not until February 20, 1962, that John Glenn became the first U.S. orbital astronaut, making three orbits in Friendship 7. President John F. Kennedy had announced a plan in May 1961 to land a man on the moon before the end of the decade, officially starting Project Apollo.
Not to be outdone, the USSR put the first woman in space on June 16, 1963, when Valentina Tereshkova flew aboard Vostok 6. Tereshkova married fellow cosmonaut Andrian Nikolayev and on June 8, 1964, gave birth to the first child born to two space travelers.
On July 20, 1969, the U.S. succeeded in achieving President Kennedy’s goal with the landing of Apollo 11, when Neil Armstrong and Buzz Aldrin became the first men to set foot on the moon. Six successful moon landings were achieved through 1972, with only one failure, Apollo 13.
Unfortunately for the Soviets, the USSR’s N1 rocket suffered the largest rocket explosion in history just weeks before the first U.S. moon landing. The N1’s first stage was the most powerful rocket stage ever built, but all four attempted launches ended in failure; the worst, on July 3, 1969, destroyed the launch pad. These failures led the USSR government to officially end its manned lunar program on June 24, 1974.

Thursday, June 27, 2019

Apollo 11 Computer System


Many people believe the computer systems of the Apollo 11 spacecraft are ancient and that little can be learned from them. I disagree. The computer systems in Apollo 11 were, in some regards, more advanced than the computers of the 1990s and early 2000s. There are a couple of things that lead me to that conclusion.
The first is that the computers on both the lunar module, “Luminary,” and the command module, “Comanche,” were networked wirelessly via radio frequency and were also in direct communication with the computer at mission control, “Tranquility.” This was in July of 1969. We did not develop robust wireless networking technologies for nearly another three decades, until the 802.11 “Wi-Fi” protocols arrived in 1997. Taking this a step further, even wired networking between computers did not appear until Ethernet was invented four years after the mission, in 1973, and it was not standardized until the release of the 802.3 Ethernet standards in 1983. Needless to say, the communications for these computing systems were decades ahead of their time.
The second major advancement that was ahead of its time was the processor itself. Though the system performed around 1,000 times slower than a modern computer, it had some features that were not developed commercially in processors until the early 2000s. Among those features was the capability to run multiple threads. Threads are independent processes that can run at the same time; for example, the computer was capable of tracking the exact location of the spacecraft, using computer vision to determine the view angle of particular stars, at the same time as it was calculating the firing thrust of the engines to properly align the craft for a safe landing. We did not have true multi-threaded processors in home computers until the early 2000s, when simultaneous multithreading and then multi-core chips from Intel and others reached the desktop. You will also notice that I mentioned computer vision in the example. You might be surprised to realize that the lunar module’s computer did live image processing to determine the angle and location of the craft, both for the lunar landing and for the reentry angle needed to make it safely back to Earth.
The third advanced feature of the Apollo computer was the ability of the system to “self-heal.” During the mission, an unlikely set of circumstances caused the guidance computer to begin throwing alarms. These alarms were caused by the radar system that was tracking the command module for recovery in the case of a mission abort; its program started using too much computing power during a critical phase of the thread that was landing the craft. The robust design of the system allowed the computer to make the decision to terminate the radar process and focus on landing the craft. Self-healing computer code and systems are still an advanced field of computer science.
I am fascinated by the excellent work done by the team at the Massachusetts Institute of Technology under the direction of Margaret Hamilton. The code was publicly released in July of 2016, and computer scientists all over the world got to see the human-ness shining through in the very complex code. There was a terrible habit of typing “WTIH” for “WITH,” which appears about 20 times in the code comments. The routine names in the code were fascinating and made it easy to picture the imaginations of the young engineers, sparked by working on the spacecraft, as they wrote code segments like “LUNAR_LANDING_GUIDANCE_EQUATIONS” and “BURN_BABY_BURN-MASTER_IGNITION_ROUTINE.” There was even apparently a problem in an early version of the code with leaving extraneous data in DVTOTAL, based on a comment block in the code that states very clearly, “don’t forget to clean out leftover DVTOTAL data when GROUP 4 RESTARTS and then BURN, BABY!” It is difficult to figure out what DVTOTAL is from reading the code, but, very clearly, it is important to clear it before this routine. I am fairly certain that “BURN BABY!” means let’s get this rocket into space.

Thursday, June 20, 2019

Amateur Radio


In honor of the upcoming Amateur Radio Day celebration, Saturday, June 22, from noon to 5 p.m. in front of Pizza Express in Houston in Texas County, amateur radio, its history and its future, seems like a good topic for this week.
Amateur Radio, also known as ham radio, is the use of radio frequency devices for the purpose of exchanging messages, wireless experimentation, self-training, private recreation, and emergency communication, by individuals for non-commercial purposes. The term amateur in this situation means a person who has an interest in radio electric practices with a purely personal interest and no monetary or similar reward expected from the use.
The amateur radio and satellite services are established and controlled by the International Telecommunication Union (ITU) through its Radio Regulations. The ITU regulates all the technical and operational characteristics of radio transmission, both amateur and commercial broadcasting. In order to become an amateur radio operator, you must pass a series of tests to show your understanding of electronics concepts and government regulations.
Over two million people worldwide are amateur radio operators and use their transmission equipment for a variety of tasks, including radio communication relays, computer networks over the airwaves, and even video broadcasts. Because these radio waves can travel internationally as well as into space, the regulatory board needs to be international. Currently that board is the International Amateur Radio Union (IARU), which is organized into three regions and has member associations in most countries.
My first experience with ham radio was as a teenage boy. One of my neighbors was a ham radio operator with the highest level of license. I remember him saying he could operate at 100,000 watts. He only had a 50,000 watt antenna and one night he just wanted to see what 100,000 watts would do. I remember seeing a blue glow coming off of his antenna tower that night for about ten minutes. He climbed his tower the next day to repair a cable that had melted. I remember sitting in his basement studio and watching him talk to friends in China and thinking how great it would be to become an operator myself. I still have not taken that step more than 30-years later.
I helped my neighbor set up one of the first radioteletype (RTTY) systems; he took his mechanical Morse code relay and controlled it by computer to send digital signals around the globe. The technology behind it is actually still used today for computer wireless networks, though at a much higher frequency and using transistor-based switches rather than mechanical relays. The opportunities available to amateur radio enthusiasts today are endless, and I am sure that any club member would be happy to help you get started. A great place to begin would be the Amateur Radio Day coming up this weekend.

Thursday, June 13, 2019

Television broadcast signals


Last week we talked about radio broadcast signals and the difference between AM and FM signals. This week I thought we could move on to television broadcast signals. I am sure many of you in this area have experienced the same things that I have in recent years with broadcast television stations. If you receive your local news via antenna rather than as a cable subscriber, there has been a big difference in the quality of the TV signal since the changes in 2009.
I noticed one thing that bothered me a lot: before the digital broadcast switchover in 2009, I could still get KY3 during a heavy storm. The picture had a lot of static and the sound was a little unclear, but I could still hear major weather alerts and be informed. Just recently a small tornado went through the outskirts of Edgar Springs; I heard the tornado warning and then my screen went dark. No signal, and therefore no information. The old analog stations never went completely away like this during a storm.
So what is the difference between analog and digital signals, and why does the quality look so great on digital stations right up until they drop off with a no-signal message? With analog it seemed that the quality degraded until you could no longer get the station, but there was never a complete cutoff. It has a lot to do with how the signals are transmitted.
The old analog broadcasts used a dual-carrier technique, overlaying the AM and FM signals we talked about last week, to transmit both the picture and the audio: the picture was transmitted using an AM signal and the sound using an FM signal. These signals are prone to noise from the interference of other stations or from the signal bouncing off walls, trees, and even people. That interference is what caused poor color quality, ghosting, and weak sound quality. The NTSC standard for television broadcast was adopted in 1941 and transmitted 525 lines of image data at 30 frames per second. NTSC worked well and still works today with older analog devices, like VCRs and older DVD players, but because color was not added until 1953, the standard became jokingly referred to by professionals as “Never Twice the Same Color” because of color inconsistencies between broadcast stations.
The new Advanced Television Systems Committee (ATSC) standard uses the same methods that store video information on DVDs or Blu-ray Discs to transmit the television signal. These methods use a digital signal consisting of a series of ones and zeros, or “on” and “off.” The new standard resulted in better quality images and sound for multiple reasons. The first is that it was designed from the ground up with things like color, surround sound audio, and text transmission taken into consideration.
The digital signal is also much smaller, allowing stations to use the same bandwidth and the same broadcast equipment to broadcast multiple stations, or sub-channels, in addition to the main channel. Digital signals also allow for the broadcast of wide-screen formats and high-definition pictures. The only downfall of digital broadcasting is the inability to receive partial information from a weak signal. Digital is an all-or-nothing type of broadcast; missing information in a digital signal cannot be interpreted by the receiver, causing errors and the nice “no signal” message to display on your TV.
You can think of a digital transmission as transmitting in code; if a single piece of the code is missing, it cannot be deciphered, resulting in unusable images and sound that cannot be displayed. Analog transmissions transmit the original image and sound, so if pieces are missing, the sound gets static and the picture gets missing spots or goes fuzzy. So even though the picture is clearer with digital, it is less reliable over long distances and in high-noise situations like severe storms.

Thursday, June 6, 2019

Radio Signals


My six-year-old son Obie was riding in the car with me Saturday night and asked a question that brought about the topic for this week. He said, “Hey, Dad, I know that FM is the radio stations, and Aux lets us listen to music from your phone, but what is the AM button for?”
Photo by Ivan Akira: FM radio carries the audio signal by modifying the frequency of the carrier wave proportionally to the audio signal's amplitude.
So just in case there are others out there who want to know what the AM button on your radio is for, here is a lesson on radio signals. Every radio station in the world operates on one of two broadcast technologies: amplitude modulation (AM) or frequency modulation (FM). Both pass electrical current through a broadcast antenna to send a signal through the air, but they carry, or modulate, the sound wave in very different ways.
To understand radio broadcasting, it is first important to understand that electrical signals can travel through the air as waves, just like sound and light. A wave has two main properties. The first is the wavelength, which is inversely related to the frequency and measures the distance between individual peaks of the wave; the frequency tells us how many peaks reach us each second. Frequency can be thought of as how high or low a sound is, and it is what determines the color of light. The second is the amplitude, or height of the wave, which can be thought of as how loud a sound is or how bright a light appears.
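As a quick illustration of that inverse relationship, here is a short Python snippet using wavelength = speed of light / frequency. The station frequencies are just round example numbers I picked for the calculation, not specific stations.

# Wavelength = speed of light / frequency. The example frequencies below
# are illustrative round numbers.
SPEED_OF_LIGHT = 3.0e8  # meters per second

def wavelength_m(frequency_hz):
    return SPEED_OF_LIGHT / frequency_hz

print(f"FM station near 100 MHz: {wavelength_m(100e6):.0f} meter waves")
print(f"AM station near 1000 kHz: {wavelength_m(1000e3):.0f} meter waves")
# Roughly 3-meter waves for FM versus 300-meter waves for AM, which is
# part of why the two bands behave so differently in the atmosphere.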
FM, the better known of the two, carries the sound over the air by modifying the frequency of the radio wave. As the sound wave being broadcast changes, the frequency of the carrier wave changes within a given range. FM operates in a frequency range of 88-108 megahertz (MHz), which means between 88 million and 108 million peaks of a wave hit your antenna every second. Each FM station is assigned a range of roughly 100 kilohertz (kHz), meaning the signal varies by about 100 thousand waves per second. You can think of it as carrying the sound by changing the length of the wave. FM signals do not travel as far as AM signals because of how the atmosphere affects them.
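For readers who like to see the math, here is a minimal Python sketch of frequency modulation. It assumes a single 1 kHz audio tone and a scaled-down 100 kHz carrier so the arrays stay small; real FM broadcasting uses carriers near 100 MHz.

# A minimal FM sketch: the carrier's frequency swings up and down in
# proportion to the audio signal, while its height stays constant.
import numpy as np

sample_rate = 1_000_000                     # samples per second
t = np.arange(0, 0.002, 1 / sample_rate)    # 2 milliseconds of signal

carrier_freq = 100_000                      # Hz (scaled down for the example)
deviation = 5_000                           # Hz of frequency swing per unit of audio
audio = np.sin(2 * np.pi * 1_000 * t)       # the sound being broadcast

# The instantaneous frequency is carrier + deviation * audio; the phase is
# its running integral, so the wave speeds up and slows down with the audio.
phase = 2 * np.pi * np.cumsum(carrier_freq + deviation * audio) / sample_rate
fm_signal = np.cos(phase)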
Photo by Ivan Akira: AM radio carries the audio signal by modifying the amplitude of the carrier wave proportionally to the audio signal's amplitude.

AM, which is less well known, also happens to be less expensive to operate, and its signals can travel much longer distances. This has to do with the longer wavelengths, which cause the signal to bounce off of the upper atmosphere, whereas FM signals pass through it. AM signals carry the sound wave by varying the amplitude, or height, of the carrier wave in proportion to the height of the sound wave being broadcast. AM operates at a much lower frequency than FM, around 540-1600 kHz, which means between 540 thousand and 1.6 million waves hit your antenna each second, roughly 100 times fewer than FM. An AM station's carrier frequency never changes during the broadcast; only its amplitude varies. This allows a receiver to work with a much weaker signal, making it possible on a clear night to listen to AM radio stations as far away as northern Canada and southern Mexico.
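Here is the matching amplitude modulation sketch under the same toy assumptions, a 1 kHz tone and a scaled-down carrier (real AM carriers sit between roughly 540 and 1600 kHz): the carrier frequency never moves, only its height does.

# A minimal AM sketch: the carrier frequency stays fixed while its
# amplitude (height) follows the audio signal.
import numpy as np

sample_rate = 1_000_000
t = np.arange(0, 0.002, 1 / sample_rate)

carrier_freq = 100_000                      # Hz (scaled down for the example)
modulation_depth = 0.8                      # how strongly the audio scales the carrier
audio = np.sin(2 * np.pi * 1_000 * t)

am_signal = (1 + modulation_depth * audio) * np.cos(2 * np.pi * carrier_freq * t)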
Due to the higher sound quality of FM over AM broadcast signals, most FM stations are used to broadcast music and most AM stations are used to broadcast talk radio. So now you know what the AM button on your radio means.
Another really neat fact about AM radio is that if you tune to a weak AM station during a thunderstorm, you can hear the lightning on the radio at nearly the same time as you see the flash, giving you an audio warning of the impending clap of thunder.

Thursday, May 30, 2019

Additive Manufacturing


Photo by 3DPrinthuset (Denmark), CC BY-SA 4.0: The Building On Demand (BOD) printer developed by COBOD International (formerly known as 3DPrinthuset, now its sister company).
This week we are going to review some advanced manufacturing techniques. The majority of parts manufacturing is done with a process known as subtractive manufacturing: taking a piece of metal, plastic, wood, or other material and removing portions of it in a controlled fashion, resulting in a finished part. This method is used to make parts like gears, screws, engine blocks, and crankshafts.
There has been a lot of talk in recent years about additive manufacturing techniques, where you start with nothing and build a part by adding material in a controlled fashion. The most popular additive manufacturing process is 3-D printing.
The very first 3-D printers used chemical compounds that harden when exposed to ultraviolet (UV) radiation, the invisible light that causes sunburns. In 1987, stereolithography (SL) techniques were developed to create acrylic components. A UV laser is aimed at the reactive resin, which hardens instantly. The object is lowered into the liquid layer by layer, and the finished part is pulled from the vat. This technique produces the highest level of detail, but it is also the most complex process.
The second method of additive manufacturing is called jetting and is similar to how an inkjet printer works. The same reactive resin is sprayed from a nozzle onto a surface and exposed to UV light, hardening it, before another layer is sprayed. There are also similar methods that spray an adhesive onto layers of powder. Jetting is heavily used in industrial manufacturing facilities.
Photo by Bre Pettis (Flickr), CC BY 2.0: A MakerBot three-dimensional printer using PLA extrusion methods.
A very similar method to jetting and SL is selective laser sintering (SLS). A powdered material that can be rapidly fused by laser heat, such as polyamides and thermoplastic elastomers, is placed in thin layers on a metal surface. A powerful laser then fuses (not melts) the powder into layers, forming a very durable object. SLS has been used to manufacture custom hearing aids molded to fit an individual's ear canal perfectly.
The final readily available 3-D printing technology is definitely the cheapest and the most popular among home users: extrusion printing. This method runs strands of PLA or ABS plastic through a temperature-controlled nozzle that melts the plastic and builds the object layer by layer. The extrusion method has been used not only with plastics, but also with concrete, metal, ceramics, and even food, such as chocolate.
Over the last two decades, Missouri University of Science and Technology has been involved in bleeding-edge research in additive manufacturing. Among their claims to fame is the freeze-form extrusion machine. This machine uses extremely low temperatures (-16 to -40 degrees Celsius) to freeze ceramic pastes consisting of boron and aluminum tri-oxide into components capable of withstanding extreme heat, greater than 2400 degrees Celsius. It is a two-step process: the part is first frozen together and then baked at a very high temperature to fuse the ceramics. Their latest research extends this process to fabricate titanium alloy components.
Additive manufacturing can be used today to make everything from key chains to houses, and even human skin grafts and organs. We have come a long way in manufacturing technologies over the last 30 years.