Friday, March 27, 2020

Covid-19 impact on the internet

By Scott Hamilton

I would like to open by stating that this week’s article is a lot more opinion and observation than pure fact about the impacts of Covid-19 on the internet. There have been clear impacts to network infrastructure as a result of the social distancing and work-from-home orders across the country, but there simply is not enough hard data yet to put a real number on the impact.

Over the past week there have been numerous articles published around the world questioning whether the current internet infrastructure could handle the extra traffic from everyone being forced to work from home. The Wall Street Journal predicted that the infrastructure was not ready for such an extra workload and that we could expect websites to fail to load and online meeting platforms to overload. The New York Times reported exactly the opposite, that we would see minimal impact from the extra load. As a work-from-home high-performance computing engineer and cloud architect, I had some questions myself over whether the infrastructure could handle the extra load. The European Union even requested that Netflix stop streaming high-definition video to reduce the network load across Europe during the outbreak, as a precaution to prevent failures.

We are over a week into a new society, at least for a short period of time, in which a majority of us are working from home. From my experience as a user of rural internet service, split between a mix of cellular network tethering, satellite service, and microwave-based internet, the impact has been minor. In fact, I have found performance increasing since the onset, which came as a surprise. Here’s the deal: my providers all lifted their imposed bandwidth limits during the outbreak.

I wonder if they will reimpose the limits following the outbreak. If so, it will lead to a lot of questions from customers. If you ask the reason for the limits, providers will say it is because their networks cannot handle the full load of all their users. The fact that the limits have been lifted proves that their networks can handle that load. Now is the perfect time to stream all your favorite television shows, download all your favorite books and use the internet to your fullest ability, because the limits will come back.

The real impact this virus has had on technology is that the imbalance of network connectivity among students has come into full light. There are students all over Texas and surrounding counties who do not have the necessary bandwidth to stream online classes during the school-from-home period. This has prompted many providers, not only in our area, to lift limits on educational sites and cloud-based services.

Among the companies to lift bandwidth restrictions are AT&T, Verizon, T-Mobile, Sprint, HughesNet, U.S. Cellular and Comcast, to name a few. Many others are offering steep discounts on new service installations, taking advantage of our need for speed. I would say that now is a great time to look into getting high-speed internet service if you do not already have it, mainly because these deals will likely never come around again.

There are also several 30-day free trials for streaming and education services to keep us entertained and educated during the social distancing period. I challenge you to take a look around for special offers and enjoy trying some new things online. Hey, if you can’t explore the community, you may as well get out there and explore the virtual world.

Next week I plan to do an article on live-streaming, with details on how many of our local churches are beginning to offer online worship services and Bible teaching. I will provide pointers for those who are looking for ways to continue their services online. I have also seen local dance studios, fitness trainers and others offer online classes in lieu of face-to-face training. Take a moment to enjoy the flexibility technology has brought during this time of crisis, and who knows, maybe you will find something new to enjoy even after this is all over.

Wednesday, March 18, 2020

Cloud and HPC workloads part 3

   The final reason HPC workloads are slow to migrate to the cloud has to do with storage. HPC systems generate and process very large quantities of data; most have storage capacities well above 1 petabyte.
  There are three main factors impacting HPC and cloud storage. The first is that HPC applications expect a POSIX file system, usually implemented on block devices where file system objects point to linked blocks of information that can be accessed either sequentially or randomly. Many times these applications use file storage like shared memory, modifying individual blocks within a file. This requires low latency from the file system as well as organized storage patterns. Cloud storage uses block storage deep under the hood, but it limits access to those blocks and instead serves files as objects in its storage platform, in effect simplifying their structure and returning them as a single stream of data. You cannot easily modify the content of an object, so using objects as memory addresses does not really work out.
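To make the contrast concrete, here is a minimal sketch in Python. The POSIX half is real, runnable code; the object-storage half is only a hypothetical, commented-out client (the bucket, key and client calls are placeholders, not any specific vendor's API), but it shows why patching one block of a huge file does not map cleanly onto objects.

    import os

    BLOCK = 4096
    # Create a small dummy file so the example runs on its own.
    if not os.path.exists("checkpoint.dat"):
        with open("checkpoint.dat", "wb") as f:
            f.write(b"\xff" * (64 * BLOCK))

    # POSIX file: an HPC code can seek to any offset and rewrite one block in place.
    with open("checkpoint.dat", "r+b") as f:
        f.seek(42 * BLOCK)           # jump straight to block 42
        f.write(b"\x00" * BLOCK)     # overwrite just that block; nothing else moves

    # Object storage (hypothetical client, placeholder names): there is no
    # seek-and-overwrite. To change one block you generally download the whole
    # object, patch it in memory and upload the whole thing again.
    #
    #   data = bytearray(client.get_object("bucket", "checkpoint.dat"))
    #   data[42 * BLOCK:43 * BLOCK] = b"\x00" * BLOCK
    #   client.put_object("bucket", "checkpoint.dat", bytes(data))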
  The second shortcoming is the limited access control in object storage, which makes it difficult to secure these large data sets at a granular level in the cloud. Controlling access for individual users at the object level becomes extremely challenging in object storage scenarios.
  The final limitation is the unpredictable performance of object-based storage. The location of an object impacts both the speed of retrieval and the refresh rate of the object, making it impossible to treat the object as a shared memory space for HPC applications; there is no guarantee that object updates get stored before the object is accessed by neighboring processes, which causes major issues in code performance.
    There are global experts working on solutions to these and other problems relating to HPC workloads in the cloud, but I feel we are still a few years away from seeing mainstream HPC use of cloud architectures.

Tuesday, March 17, 2020

Happy late pi day!

In honor of the never-ending number pi, we have one of the few truly international celebrations. Pi Day was first celebrated on March 14, 1988, as part of an Exploratorium staff retreat in Monterey, Calif.; the date also happens to coincide with Einstein’s birthday, March 14, 1879. In March 2009, the U.S. House of Representatives passed a resolution officially recognizing March 14 as National Pi Day.

As part of my recognition of Pi Day, I would like to explore the history of the number: who first discovered it, how it was originally estimated, and simple ways you can estimate it yourself. For starters, Pi is the ratio of a circle’s circumference to its diameter, or the length all the way around a circle divided by the distance directly across the circle. No matter how large or small a circle is, its circumference is always Pi times its diameter. Pi = 3.14159265358979323846… (the digits go on forever, never repeating, and so far no one has found a repeating pattern in over 4,000 years of trying).

One of the most ancient manuscripts from Egypt, a collection of math puzzles, shows pi to be 3.1. About a thousand years later, the book of 1 Kings in the Bible implies that pi equals 3 (1 Kings 7:23), and around 250 B.C. the greatest ancient mathematician, Archimedes, estimated pi at about 3.141. How did Archimedes attempt to calculate pi? He did it through a series of extremely accurate geometric drawings, sandwiching a circle between two straight-edged regular polygons and measuring the polygons. He simply added more and more sides and measured pi-like ratios until he could not draw any more sides to get closer to an actual circle.

Hundreds of years later, Gottfried Leibniz proved, using his newly developed methods of integration, that pi/4 is exactly equal to 1 – 1/3 + 1/5 – 1/7 + 1/9 – …, going on forever, with each added term bringing the running total a little closer to pi/4. The big problem with this method is that to get just 10 correct digits of pi, you have to follow the sequence for about 5 billion fractions.
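If you want to watch just how slowly that series converges, here is a short Python sketch (the term counts are simply illustrative):

    # Leibniz series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ...
    def leibniz_pi(terms):
        total = 0.0
        for k in range(terms):
            total += (-1) ** k / (2 * k + 1)
        return 4 * total

    for n in (10, 1_000, 1_000_000):
        print(n, leibniz_pi(n))
    # Even a million terms only gives roughly 3.1415916 -- about six correct digits.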

It was not until the early 1900s that Srinivasa Ramanujan discovered a very complex formula for calculating pi, and his method adds roughly eight correct digits for each term in the sum. Starting in 1949, calculating pi became a job for computers, and ENIAC, one of the first electronic computers in the U.S., was used to calculate pi to over 2,000 digits, nearly doubling the pre-computer record.

In the 1990s the first Beowulf-style “homebrew” supercomputers came on the scene, and they were put to work early on calculating pi and other irrational numbers to as much accuracy as possible. Some of these systems ran over several years to reach 4 billion digits. Using the same techniques over the years, the record currently stands at more than 22 trillion digits. That is overkill, considering that with only about 40 digits of pi you could calculate the circumference of the visible universe to within less than the width of a hydrogen atom. So why do it? President John F. Kennedy said we do things like this, “not because they are easy, but because they are hard; because that goal will serve to organize and measure the best of our energies and skills.”

Attempting to calculate pi to such high accuracy drove the supercomputing industry, and as a result we have the likes of Google’s search engine, which sifts through an index of billions of webpages in a fraction of a second, computers that can replace physics research labs by simulating the real world, and artificial intelligence systems that can beat the world’s best chess players. Where would we be today without the history of this number?

Now, as I promised, there is a way you can estimate pi with very simple math. You play a simple game called “Pi Toss.” You will need a sheet of paper, a pencil and a bunch of toothpicks; the more toothpicks, the closer your estimate will be. Step 1: Turn the paper to landscape orientation and draw two vertical lines on it, top to bottom, exactly twice the length of a toothpick apart. Step 2: Randomly toss toothpicks, one at a time, onto the lined paper, and keep tossing until you are out of toothpicks. Count them as you toss them; don’t count any that miss the paper or hang off its edge. Step 3: Count all the toothpicks that touch or cross one of your lines. Step 4: Divide the number of toothpicks you tossed by the number that touched a line, and the result will be approximately equal to pi. How close did you come? To find out how this works, read more about Pi Toss at https://www.exploratorium.edu/snacks/pi-toss.
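If you would rather let a computer do the tossing, the same experiment (a variation of Buffon’s needle) can be simulated in a few lines of Python. The toss count below is just for illustration:

    import math
    import random

    def pi_toss(tosses, needle=1.0):
        """Simulate Pi Toss: parallel lines spaced two toothpick-lengths apart."""
        spacing = 2 * needle
        hits = 0
        for _ in range(tosses):
            center = random.uniform(0, spacing / 2)   # distance from center to nearest line
            angle = random.uniform(0, math.pi / 2)    # acute angle between toothpick and lines
            if center <= (needle / 2) * math.sin(angle):
                hits += 1                             # this toothpick touches a line
        return tosses / hits                          # tossed / touching is roughly pi

    print(pi_toss(1_000_000))   # typically prints something close to 3.14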

Cloud and HPC workloads part 2

    The second reason HPC workloads are slow to migrate to the cloud is related to cloud networks. A majority of cloud service providers designed their network infrastructure for the individual compute resources common in IT services data centers. For example, your corporate file servers do not need a high-speed network between them as long as they each have good connectivity to the client systems. The same is true for web servers, database servers and most other IT workloads.
    HPC systems rely heavily on high-speed, low-latency network connections between the individual servers for optimal performance. This is because of how they share memory resources across processors in other systems. They use a library called MPI (Message Passing Interface) to share information between processes, and the faster this information can be shared, the higher the performance of the overall system.
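Here is a minimal sketch of that kind of message passing, assuming the mpi4py package and an MPI library such as Open MPI are installed (the file name and data are just placeholders). Launched as "mpirun -n 2 python ping.py", rank 0 hands a chunk of boundary data to rank 1:

    # ping.py -- a tiny MPI exchange between two processes
    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        comm.send({"boundary": [1.0, 2.0, 3.0]}, dest=1, tag=0)   # share edge data
    elif rank == 1:
        data = comm.recv(source=0, tag=0)
        print("rank 1 received", data)

    # Every microsecond of network latency in an exchange like this is time the
    # processors spend waiting, which is why HPC fabrics chase low latency.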
   HPC systems use networks that are non-blocking, meaning that every system has 100 percent of the available network bandwidth to every other system in the network. They also use extremely low-latency networks, reducing the delay in sending a packet from one system to another to as little as possible.
    In cloud-based systems there is usually a high blocking factor between racks and a low one within a rack, resulting in a very unbalanced network and increased latency for high-performance workloads, sometimes so poor that certain HPC applications will not execute to completion. In recent months some cloud providers have made efforts to redesign their network infrastructure to support HPC applications, but there is more work to be done.

Monday, March 16, 2020

Cloud and HPC workloads

    Companies are making rapid migrations to cloud-based resources across industry, except for a lag in one area: high performance computing. There are a few reasons for this lag. First, most companies utilize their HPC systems at nearly 90 percent capacity, which leaves much less cost savings in a migration. Heavily utilized systems are usually less expensive to run in your own data center than to pay the premium for using the cloud. The exception is high-speed storage. Cloud storage turns out to be fairly performant and much less costly than on-premises storage, but the latency between it and the compute power makes it unusable with on-premises compute.
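A rough back-of-envelope calculation shows why utilization is the deciding factor. All of the prices below are made-up, illustrative assumptions, not quotes from any provider; the point is only how the comparison flips as utilization rises:

    HOURS_PER_YEAR = 8760
    NODES = 100
    ONPREM_PER_NODE_HOUR = 0.55   # assumed: amortized hardware + power + admin
    CLOUD_PER_NODE_HOUR = 1.20    # assumed: on-demand HPC-class instance price

    for utilization in (0.30, 0.90):
        busy_hours = HOURS_PER_YEAR * utilization
        onprem = NODES * HOURS_PER_YEAR * ONPREM_PER_NODE_HOUR  # paid whether busy or idle
        cloud = NODES * busy_hours * CLOUD_PER_NODE_HOUR        # paid only while running
        print(f"{utilization:.0%} utilized: on-prem ${onprem:,.0f} vs cloud ${cloud:,.0f}")

    # At 30 percent utilization the cloud comes out ahead; near 90 percent
    # the always-on, fully used cluster is the cheaper option.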
    Tomorrow I will talk about the second reason HPC migrations to the cloud are lagging behind conventional datacenter migrations.

Wednesday, March 11, 2020

A brief history of computer encryption

     Encryption has existed longer than computers have been around. In short, encryption is a secure method of communication between two parties. They both must know some “secret” that allows them to share messages no one else can read. The simplest form of encryption is letter substitution, for example, shifting letters: A becomes D and Z becomes C, with each letter replaced by the one three places ahead of it, starting over at A when you pass Z. The secret in this case would be the number three. The sender and receiver would both know that the letters were shifted three characters to the right, allowing them to communicate without someone else easily reading the message.
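A toy version of that shift cipher takes only a few lines of Python (the message and key below are just examples):

    def shift(text, key):
        """Caesar-style shift cipher: the shared 'secret' is the shift amount."""
        out = []
        for ch in text.lower():
            if ch.isalpha():
                out.append(chr((ord(ch) - ord('a') + key) % 26 + ord('a')))
            else:
                out.append(ch)
        return "".join(out)

    secret = 3
    scrambled = shift("attack at dawn", secret)
    print(scrambled)                  # dwwdfn dw gdzq
    print(shift(scrambled, -secret))  # shifting back recovers the message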
     In June 1944, Bailey Whitfield Diffie was born. Diffie was always very independent; he did not learn to read until age 10. He didn’t have any disability; he just preferred that his parents read to him. They followed his wishes and patiently waited for him to learn. In the fifth grade he started reading, above grade level. Mary Collins, his teacher at P.S. 178, spent an afternoon with Diffie explaining the basics of cryptography. He never forgot the lessons learned that day.
     Diffie loved cryptography and took an interest in learning more about the topic. He learned that those with the secret keys practice decryption, and those who don’t have the secret key but try to access the secret information are practicing cryptanalysis. In order to avoid the draft, Diffie took up computer programming and went to work at the Mitre Corporation. He shifted to working with the MIT AI lab in 1966 and began the first discussions on using cryptography to protect computer software and digital information.
     Diffie’s research contradicted the National Security Agency and work being done by IBM in conjunction with the National Bureau of Standards to institute the Data Encryption Standard (DES). Diffie and his Stanford colleague, Marty Hellman, regarded DES as tainted and potentially fraudulent due to the possibility of an NSA trapdoor which would allow the NSA, and conceivably IBM, to decrypt messages without knowing the secret. This brought about further research into the difficult problem of allowing two people or devices that had never communicated before to communicate securely. They could not exchange secret keys if they had never communicated, so how could they share these keys in a secure way? How do you create a system where all conversations could be protected with cryptography? How can you receive a message from someone you have never met and ensure that they were the sender and no one else could read the message? This is the conundrum of secure computer communications.
     This is where our current public key encryption infrastructure was born. Keeping keys secret was difficult; the very thing an eavesdropper would need to read a secure communication, the key itself, had to be passed unencrypted between the two people, increasing the chances of compromise. Diffie came up with the idea of using a key pair instead of a single key. It took more than half a decade for him to perfect the technology, but he eventually solved the issue. Here is how it worked.
     Let’s say Alice wants to send a secret message to Bob. She simply asks Bob for his public key, or looks it up in a “phone directory” of public keys. Alice then uses Bob’s public key to scramble the message; now only Bob’s private key can decrypt the message. Let’s say George intercepts the message; without Bob’s private key, George only gets a scrambled mash of data. Bob can read the message because he is the only person in the world with both halves of the key (public and private). Alice can also encrypt a small part of the message with her private key that can only be decrypted with her public key, so Bob can know for certain the message came from Alice. This is the key to all modern secure communication, including secure phone conversations, and was the result of the research of one key individual, Whit Diffie. 
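The RSA-style systems later built on these ideas make the Alice-and-Bob story concrete. The following is a textbook-style toy with deliberately tiny primes, purely to show the public/private relationship; real systems use enormous keys and padding, and the numbers here are standard classroom examples, not anything you could actually secure with. (The modular-inverse call needs Python 3.8 or later.)

    p, q = 61, 53
    n = p * q                    # 3233: the modulus, part of both keys
    phi = (p - 1) * (q - 1)      # 3120
    e = 17                       # public exponent: Bob's public key is (e, n)
    d = pow(e, -1, phi)          # 2753: private exponent, which Bob keeps secret

    message = 65                         # a message encoded as a number
    ciphertext = pow(message, e, n)      # Alice encrypts with Bob's PUBLIC key
    recovered = pow(ciphertext, d, n)    # only Bob's PRIVATE key undoes it
    print(ciphertext, recovered)         # recovered == 65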

Gresik official release.

GNOME 3.36 "Gresik" was officially released today after six months of development effort. The part that excites me the most is the massive performance improvement included in this release. In the past I have felt that GNOME was a drain on overall system performance, and I cannot wait to test the new version out.
I'll be posting results of the tests in the next few days. For more details please visit http://www.gnome.org/news

Monday, March 9, 2020

1-23-2020 Signals Around Us

You may not realize just how many electromagnetic signals pass through your body every day. It is not really known whether there are health effects from these signals, but I find it very interesting to see how much they have increased in the last decade. Just looking at cellular towers, in 2010 there were 250,000 in the U.S. and today there are 350,000, and this number will increase to over one million in the coming few years to support 5G technology.

So, what do these signals look like to our body? The human body has an electrical frequency that it likes to operate within and this matches the natural frequency of the earth. The earth and its atmosphere vibrate at the fundamental frequency of 7.83Hz. If you encounter signals at harmonic frequencies with the earth, it amplifies the impact of the signal.

What is a harmonic frequency? It is a signal whose peaks and valleys overlap the peaks and valleys of the fundamental frequency. The harmonic frequencies of the earth and the human body are known as the Schumann resonances and are 14.3, 20.8, 27.3 and 33.8 Hz. Our electric grid in the U.S. operates at a frequency of 60 Hz, which falls outside the natural harmonic range.

We create signals every day that impact the natural resonance of earth and this is a good thing. We would not want to match the resonant frequencies, as it would cause some dangerous situations. Let me give you an example. Have you ever seen videos where an opera singer breaks a glass with their voice? This happens because their voice matches the resonant frequency of the glass, causing vibrations in the glass to build on each other and eventually vibrating the glass apart. Imagine what might happen if we produced enough vibrations at resonant frequencies of the earth. Would it be possible to shatter the planet? Let’s hope we never find out.

So, is it a bad thing to be impacted by non-resonant frequencies as well? There is another frequency interaction, called damping. A damping frequency causes vibrations to decrease in amplitude, and we use damping techniques to create things like noise-canceling headphones and engine mufflers. If you can create a damping signal at the same amplitude as the resonant vibration of an object, you can completely stop its natural vibrations.

This means that it is possible to stop or amplify the natural frequency vibration of your body by exposure to radio waves. We know that increasing this vibration by applying resonant frequency harmonics can cause damage, but we don’t really know if stopping the vibration can cause damage or not.

It just so happens that our cellular phones operate in the 800-megahertz range, which just happens to be very close to the 800.8-megahertz harmonic frequency of our bodies. So every time our cell phone sends a message, we make a call or we upload a video or picture, we amplify the natural vibrations of our body. Granted it is by a tiny amount, but is there an impact to our health? There are a few studies that indicate extreme exposure to these frequencies can cause mental health issues.

Although most of these studies have been dismissed as having no basis in science, there is still a question of how these magnetic fields impact our well-being, and much research is continuing to better understand the impacts. If you want to see the radio signals around you, there is an application for your cell phone that monitors electromagnetic signals. There are several out there; if you search for EMF Meter, you will find a wide range of them. Some claim to be “ghost” hunters, but really they just measure the amount of electrical energy in the air around you. My favorite is the EMF Radiation Meter by IS Solution, though it has quite a few ads. There is also a paid application for $2.99 that provides a map of where the radio signal is coming from called “Architecture of Radio.” If you are interested in studying the radio signals, it is worth the price.

Friday, March 6, 2020

Katherine Johnson, human computer

Katherine Johnson was instrumental in making the lunar landing mission a success through her work as a mathematician with NASA. She died on Feb. 24, 2020, at the age of 101. It is very fitting to me that we honor her, not only because of the loss of a great mathematician, but also for her contributions to society in general. As Black History Month came to a close over the weekend, it was another reminder of her contributions and those of others.
Johnson was born in White Sulphur Springs, W.Va., in 1918. Her “intense curiosity and brilliance with numbers” allowed her to skip several grade levels in school, and she attended high school on the campus of the historically black West Virginia State College. In 1937, she graduated with the highest honors, earning degrees in mathematics and French. She was among the first three students of color offered admission to West Virginia University’s graduate program in 1939; she never completed her graduate studies but went on to become a wife and mother.
In 1952, after her children were grown, she and her husband moved to Newport News, Va., where she pursued a position in the all-black West Area Computing section at NASA. Last July the NASA Independent Verification and Validation Facility in Fairmont, W.Va., was renamed in her honor as the Katherine Johnson Independent Verification and Validation Facility. I find it interesting that this very facility is where I began my career in computing.
Johnson not only drove our nation’s space program to new frontiers, but she also blazed the trail for women of color to enter scientific fields dominated by men. It is unfortunate that Johnson, as well as the women who worked alongside her, Dorothy Vaughan and Mary Jackson to name a couple, were relatively unknown until the release of the movie “Hidden Figures” in 2016. Jackson and Vaughan did not live long enough to see the well-deserved film honoring their work at NASA, as Jackson died in 2005 and Vaughan in 2008.
Johnson stated that her greatest contribution to space exploration was “the calculations that helped synchronize Project Apollo’s Lunar Lander with the moon-orbiting Command and Service Module." Her work was instrumental in putting men on the moon in 1969. More accurately, she helped get men back safely from the moon, as docking with the Command and Service module was required for a safe flight back home.
Ted Skopinski, along with other male lead scientists at NASA, would have normally taken full credit for the work of the “computers,” but Skopinski shared the credit with Johnson, making her the first female to receive credit as an author on a research report detailing the equations describing an orbital space flight.
Her first major work in orbital space flight was running the trajectory analysis for Alan Shepard’s 1961 mission, Freedom 7, the first American manned spaceflight. She also contributed to John Glenn’s flight, the first American orbital space flight. In 1962, space flight trajectory tracking required the construction of a “worldwide communications network” linking computers around the world back to NASA mission control in Washington, D.C., Cape Canaveral, and Bermuda.
Electronic computers were new to the scene and there was not much trust in their accuracy or reliability, so in the case of Glenn’s space flight, he refused to fly until Johnson ran the calculations by hand. Throughout her career at NASA, Johnson authored or co-authored 25 research reports, contributing to NASA programs as recent as the Space Shuttle and the Earth Resources Satellite. She retired from NASA in 1986, after more than three decades of work.

Thursday, March 5, 2020

1-16-2020 The Hype of 5G

Cellular network providers have been touting their new technology for over a year now, each promoting the claim that they were the first to provide 5G service in the country. The question is, how much of what is being said about 5G is accurate and how much of it is marketing hype? I plan to address the facts of 5G in this week’s article and let the reader decide.

First of all, there are three different types of 5G being built in the U.S.: low-band, mid-band and high-band mmWave implementations. The term mmWave refers to the radio frequency band between 30 GHz and 300 GHz, at the very top of the microwave range. Historically this band has been used mostly for things like satellite links and other short-range, line-of-sight communications. It provides a solid carrier signal for high-speed internet service, but signals at these frequencies have a very short range and do not penetrate walls or weather well.

The first technology, which provides the fastest service, is the high-band 5G adopted primarily by AT&T and Verizon, with a few market areas from T-Mobile. This technology is about ten times faster than the 4G technology in widespread use today. It also has very low latency, which means messages get sent nearly instantaneously, but the downside is that to get the maximum speed out of the network, you have to be standing very near a cellular tower. In the best case, you could download a standard-definition full-length movie in 32 seconds, compared to over five minutes on today’s networks. However, you would have to be within 80 feet of a tower to achieve those transfer speeds.
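The arithmetic behind those download times is straightforward. The movie size and link speeds below are assumptions chosen to line up with the figures above, not measured values:

    MOVIE_GIGABYTES = 4                   # assumed size of a full-length SD movie
    movie_bits = MOVIE_GIGABYTES * 8e9

    for label, bits_per_second in (("high-band 5G", 1e9), ("today's 4G", 1e8)):
        seconds = movie_bits / bits_per_second
        print(f"{label}: about {seconds:.0f} seconds")

    # Roughly 32 seconds at ~1 Gbps near a mmWave tower, and about 320 seconds
    # (over five minutes) at ~100 Mbps on a good 4G connection.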

The mid-band technology in use by Sprint is about six times faster than 4G; it has a longer range than high-band 5G, but its range is still much shorter than 4G’s. What this means is that nearly twice as many towers will need to be installed to provide 5G service to all the areas that receive 4G today, increasing the overall power consumption of cellular providers.

The low-band 5G in use by T-Mobile and AT&T achieves only about a 20 percent performance increase over 4G technologies. The low-band solution has nearly the same coverage area per tower as 4G, making low-band 5G much less expensive to roll out. This is likely the type of 5G network we will see in our area.

Secondly, you cannot purchase a phone today that supports all three technologies, so your awesome new 5G cellular phone is likely to work at 5G speeds only on your own provider’s towers, and incompatibilities between the technologies may prevent data roaming. This turns out to be a problem not only for you as an end user of the technology, but also for the providers of the technology. The only way to keep compatibility for roaming is to keep 4G transmitters and receivers in operation, increasing the cost of both the provider gear and consumer cellular phones.

Lastly, every provider has their own approach to providing 5G services, using a mix of technologies. This creates problems both for companies in regards to data roaming and for end users in regards to being locked in to not only a provider, but also a geographical area.

T-Mobile has a nationwide low-band 5G network and a smaller but much faster high-band 5G network in six major U.S. cities. There is currently only one phone that works on the high-band network, and it will not work on the low-band network. The two phones they have released for low-band also will not work on the high-band network, so their service is localized based on the 5G phone model that you own.

Sprint is in the process of building their mid-band network in parts of nine U.S. cities. You are stuck with a limited choice of four devices that will operate on their network, one data device and three phones.

AT&T has low-band networks in 20 markets for “consumers” and high-band networks in small areas of 35 markets focused primarily on providing service to businesses. AT&T currently sells only two phone models and a single wifi hotspot device that can utilize this new network. AT&T is claiming to offer 5G today in all markets, but is actually just upgrading its existing 4G networks.

Verizon is working on building out the largest high-band 5G network. It is currently providing service in 40 markets, but you need to be within 80 feet of a tower to get a 5G connection, and they charge extra for 5G access.

I guess ultimately what I am saying is that 5G is currently a mess for consumers, leaving us somewhat in the dark as to the best choices for future cellular phones and plans. Both Samsung and Apple have planned releases as early as February 11, 2020, that are expected to support all three network types and configurations. These new 5G phones will solve a lot of the consumer-based issues with 5G, and we can expect wireless network speeds to improve drastically in the coming months.

Monday, March 2, 2020

12-26-19 Tracking Santa

 
Photo: Wikimedia user Bukvoed, used under a Creative Commons CC0 license. The type of radar used by NORAD to track Santa rotates steadily, sweeping the sky with a narrow beam searching for aircraft.
Every year NORAD brings up their official Santa tracking radar. You can track Santa’s location for yourself at www.noradsanta.org. In light of the Christmas season, I wanted to share a few facts about Santa’s travels around the globe and the technology behind NORAD.
Santa travels about 56 million miles to reach every home in the world. Not counting the time he is in your house, he would have to travel at 560 miles a second to make his global trek. This is 3000 times faster than the speed of sound, but still 300 times slower than light. This is a good thing for NORAD since they use radio waves that travel at the speed of light to track moving objects. This also means that you might see Santa fly overhead, but you will never hear him coming.
So how does NORAD work? NORAD uses a network of satellites, ground-based radar, airborne radar and fighter jets to detect, intercept and, if necessary, engage any threat to Canada and the United States. Lucky for Santa, he is too fast to intercept. One of the fastest man-made vehicles, the Ulysses space probe, topped out at only 27.4 miles per second, more than ten times slower than Santa’s sleigh.
NORAD can track Santa because of how radar systems work. In the simplest of explanations, you can think of a radar station as throwing pulses of radio waves, which travel at the speed of light, at the sky in a known pattern. If a pulse hits something, part of it bounces straight back to the radar unit. The radar unit knows how long it takes a pulse to travel a certain distance and can then determine where the object was when the pulse hit it.
The crazy part is that they can only track where Santa was, and maybe guess where he is going, but not where he is right now. The speed at which he moves means that if he is 3,000 miles from the radar unit, it will take about 0.03 seconds for the pulse to travel out and bounce back from his sleigh. In that time he will have traveled roughly 18 miles. This means that we never know if Santa is in town, only that he was here, because by the time we detect him on radar he is already gone.
So can we tell exactly where Santa is with satellite imagery? Actually, it gets much worse with satellites. Since the satellites orbit at around 22,000 miles above the surface, by the time a satellite sees Santa, the light showing his position is already about 0.12 seconds old, and he has cleared a distance of roughly 66 miles. In this case the satellite can tell he has been in the county, but not until long after he has left.
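For anyone who wants to check the numbers, here is the arithmetic as a short Python sketch (the 560-miles-per-second figure comes from the estimate above):

    LIGHT_MPS = 186_282     # speed of light in miles per second
    SANTA_MPS = 560         # Santa's estimated cruising speed

    def lag_miles(signal_miles, round_trip=True):
        """How far Santa moves while the signal is still in flight."""
        travel_time = signal_miles * (2 if round_trip else 1) / LIGHT_MPS
        return SANTA_MPS * travel_time

    print(lag_miles(3_000))                      # radar echo: about 18 miles of lag
    print(lag_miles(22_000, round_trip=False))   # satellite view: about 66 miles of lag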
So kids, let’s just say your chances of catching Santa in person on Christmas night are very poor. You might catch him on a very high-speed camera as a blur, but even with a camera a few feet away, in the fractions of a second it takes the camera to capture the image, he will have moved across the room.