If you are looking for fairly random ramblings of a rather average guy who happens to be a Grandfather, Soccer Dad, Pastor, and Expert in Hardware and Firmware, you have found the right blog. This blog will have a variety of posts from how my kids performed in their soccer games, ballet, basketball, or acting; to how I fixed a problem on a server, or repaired my car, or played with my kids, or sermon notes.
Wednesday, March 18, 2020
Cloud and HPC workloads part 3
Tuesday, March 17, 2020
Happy late pi day!
In honor of the never-ending number pi, we have one of the few international holidays devoted to mathematics. Pi Day was first celebrated on March 14, 1988, a date that also happens to be Einstein’s birthday (March 14, 1879). The first celebration took place as part of an Exploratorium staff retreat in Monterey, Calif. In March 2009, the U.S. House of Representatives passed a resolution officially recognizing Pi Day.
As part of my recognition of Pi Day, I would like to explore the history of the number, who first estimated it, how it was originally calculated, and simple ways you can estimate it yourself. For starters, pi is the ratio of a circle’s circumference to its diameter, or the length all the way around a circle divided by the distance directly across it. No matter how large or small a circle is, its circumference is always pi times its diameter. Pi = 3.14159265358979323846… (the digits go on forever without ever settling into a repeating pattern, and people have been chasing them for over 4,000 years.)
One of the most ancient manuscripts from Egypt, an ancient collection of math puzzles, puts pi at about 3.16. About a thousand years later, the book of 1 Kings in the Bible implies that pi equals 3 (1 Kings 7:23), and around 250 B.C. the great ancient mathematician Archimedes pinned pi down to between 3.1408 and 3.1429. How did Archimedes calculate pi? He sandwiched a circle between two straight-edged regular polygons, one drawn just inside the circle and one just outside, and computed the perimeters of both. By repeatedly doubling the number of sides, from a hexagon all the way up to a 96-sided polygon, he squeezed the two perimeters, and the value of pi trapped between them, ever closer to the true circle.
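Archimedes’ side-doubling trick is easy to reproduce in a few lines of modern code. Here is a sketch of my own in Python (it uses square roots for convenience, where Archimedes worked entirely with rational bounds), starting from an inscribed hexagon and doubling the sides:

```python
import math

def archimedes_pi(doublings: int) -> float:
    """Approximate pi from below using inscribed regular polygons in a unit circle."""
    n = 6      # start with a hexagon...
    s = 1.0    # ...whose side length in a unit circle is exactly 1
    for _ in range(doublings):
        # Side length of the polygon with twice as many sides
        s = math.sqrt(2 - math.sqrt(4 - s * s))
        n *= 2
    return n * s / 2  # half the perimeter approximates pi
```

Four doublings give Archimedes’ 96-sided polygon, which already pins pi down to three decimal places.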
Hundreds of years later, Gottfried Leibniz used his newly developed calculus to prove that pi/4 is exactly equal to 1 – 1/3 + 1/5 – 1/7 + 1/9 – . . . going on forever, with each additional fraction bringing the sum a little closer. The big problem with this method is that to get just 10 correct digits of pi, you have to add up about 5 billion fractions.
It was not until the early 1900s that Srinivasa Ramanujan discovered a very complex formula for calculating pi; his method adds roughly eight correct digits for each term in his sum. Starting in 1949, calculating pi became a job for computers, and ENIAC, one of the first electronic computers in the U.S., was used to calculate pi to over 2,000 digits, nearly doubling the pre-computer record.
In the 1990s the first Beowulf style “homebrew” supercomputers came on the scene, and calculating pi and other irrational numbers to as much accuracy as possible became a favorite way to push them to their limits. Some of these systems ran over several years to reach 4 billion digits. Using the same techniques over the years, we are currently in the tens of trillions of digits. This is a little overkill considering that, using only about 40 digits of pi, you could calculate the circumference of the observable universe to within the width of a hydrogen atom. So why do it? President John F. Kennedy said we do things like this, “not because they are easy, but because they are hard; because that goal will serve to organize and measure the best of our energies and skills.”
Attempting to calculate pi to such high accuracy drove the supercomputing industry, and as a result we have the likes of Google’s search engine that can search an index of billions of webpages in a fraction of a second, computers that can stand in for physics research labs by simulating the real world, and artificial intelligence systems that can beat the world’s best chess players. Where would we be today without the history of this number?
Now, as I promised, there is a way you can estimate pi with very simple math. You play a simple game called “Pi Toss.” You will need a sheet of paper, a pencil and a bunch of toothpicks; the more toothpicks, the closer your estimate will be.
Step 1: Turn the paper to landscape orientation. Draw two vertical lines on the paper, top to bottom, exactly twice the length of a toothpick apart.
Step 2: Randomly toss toothpicks, one at a time, onto the lined paper, counting them as you toss. Keep going until you are out of toothpicks. Don’t count any that miss the paper or stick off its edge.
Step 3: Count all the toothpicks that touch or cross one of your lines.
Step 4: Divide the number of toothpicks you tossed by the number that touched a line; the result will be approximately equal to pi.
How close did you come? To find out why this works, read more about Pi Toss at https://www.exploratorium.edu/snacks/pi-toss.
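If you don’t have toothpicks handy, the same experiment can be simulated. The sketch below (Python; the random seed and toss count are arbitrary choices of mine) drops virtual toothpicks of length 1 onto vertical lines spaced 2 apart and counts the crossings, just like the paper version:

```python
import math
import random

def pi_toss(tosses: int, seed: int = 1) -> float:
    """Estimate pi as (toothpicks tossed) / (toothpicks touching a line)."""
    rng = random.Random(seed)
    length = 1.0              # toothpick length
    spacing = 2.0 * length    # lines are twice a toothpick length apart
    crossed = 0
    for _ in range(tosses):
        x = rng.uniform(0.0, spacing)       # center's position between two lines
        theta = rng.uniform(0.0, math.pi)   # toothpick orientation
        reach = (length / 2.0) * abs(math.cos(theta))  # horizontal half-extent
        if x <= reach or x >= spacing - reach:
            crossed += 1
    return tosses / crossed
```

With a couple hundred thousand virtual tosses the estimate typically lands within a few hundredths of pi; with a real handful of toothpicks, expect something much rougher.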
Cloud and HPC workloads part 2
HPC systems rely heavily on high-speed, low-latency network connections between the individual servers for optimal performance. This is because they spread a single computation across processors in many different systems, which must constantly exchange intermediate results. They utilize a library called MPI (Message Passing Interface) to share information between processes; the faster this information can be shared, the higher the performance of the overall system.
HPC systems use networks that are non-blocking, meaning that every system has 100% of the available network bandwidth to every other system in the network. They also use extremely low-latency networks, reducing the delay for a packet traveling from one system to another to as little as possible.
In cloud based systems there is usually a high blocking factor between racks and a low one within a rack, resulting in a very unbalanced network that adds latency for high-performance workloads, sometimes so much that an HPC application will not run to completion. In recent months some cloud providers have made efforts to redesign their network infrastructure to support HPC applications, but there is more work to be done.
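To see why latency matters so much here, consider the simple “alpha-beta” model often used to reason about message-passing performance: transfer time = latency + message size / bandwidth. The sketch below (Python; the latency and bandwidth figures are illustrative assumptions of mine, not measurements of any particular system) compares an HPC-class fabric with a typical cloud network for a small message:

```python
def transfer_time_us(msg_bytes: int, latency_us: float, bandwidth_gbps: float) -> float:
    """Alpha-beta model: time = latency + size / bandwidth, in microseconds."""
    bits = msg_bytes * 8
    bits_per_us = bandwidth_gbps * 1000.0  # 1 Gbps = 1000 bits per microsecond
    return latency_us + bits / bits_per_us

# Assumed figures: ~1 us InfiniBand-class fabric vs. ~50 us cloud Ethernet path.
hpc = transfer_time_us(8, latency_us=1.0, bandwidth_gbps=100.0)
cloud = transfer_time_us(8, latency_us=50.0, bandwidth_gbps=25.0)
```

For an 8-byte message the transfer is almost pure latency, so under these assumptions the cloud path is roughly 50 times slower even though its bandwidth is only 4 times lower. This is exactly the regime that latency-sensitive MPI applications live in.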
Monday, March 16, 2020
Cloud and HPC workloads
Wednesday, March 11, 2020
A brief history of computer encryption
Gresik official release.
Monday, March 9, 2020
1-23-2020 Signals Around Us
You may not realize just how many electromagnetic signals pass through your body every day. It is not really known whether these signals have health effects, but I find it very interesting to see how much they have increased in the last decade. Just looking at cellular towers: in 2010 there were 250,000 towers in the U.S.; today there are 350,000, and that number is expected to grow to over one million in the coming few years to support 5G technology.
So, what do these signals look like to our body? The human body has an electrical frequency that it likes to operate within, and this matches the natural frequency of the earth. The earth and its atmosphere vibrate at a fundamental frequency of 7.83 Hz. If you encounter signals at frequencies harmonic with the earth’s, the impact of the signal is amplified.
What is a harmonic frequency? It is a signal whose peaks and valleys overlap the peaks and valleys of the fundamental frequency. The harmonic frequencies of the earth and the human body are known as the Schumann resonances and are 14.3, 20.8, 27.3 and 33.8 Hz. Our electric grid in the U.S. operates at a frequency of 60 Hz, which falls outside this natural harmonic range.
We create signals every day that impact the natural resonance of earth and this is a good thing. We would not want to match the resonant frequencies, as it would cause some dangerous situations. Let me give you an example. Have you ever seen videos where an opera singer breaks a glass with their voice? This happens because their voice matches the resonant frequency of the glass, causing vibrations in the glass to build on each other and eventually vibrating the glass apart. Imagine what might happen if we produced enough vibrations at resonant frequencies of the earth. Would it be possible to shatter the planet? Let’s hope we never find out.
So, is it a bad thing to be impacted by non-resonant frequencies as well? There is another frequency interaction called damping. A damping frequency causes vibrations to decrease in amplitude. We use damping techniques to create things like noise-canceling headphones and engine mufflers. If you can create a damping signal at the same amplitude as the resonant vibration of an object, you can completely stop its natural vibrations.
This means that it is possible to stop or amplify the natural frequency vibration of your body by exposure to radio waves. We know that increasing this vibration by applying resonant frequency harmonics can cause damage, but we don’t really know if stopping the vibration can cause damage or not.
It just so happens that our cellular phones operate in the 800-megahertz range, which just happens to be very close to the 800.8-megahertz harmonic frequency of our bodies. So every time our cell phone sends a message, makes a call, or uploads a video or picture, we amplify the natural vibrations of our body. Granted, it is by a tiny amount, but is there an impact on our health? There are a few studies that indicate extreme exposure to these frequencies can cause mental health issues.
Although most of these studies have been dismissed as having no basis in science, there is still a question of how these magnetic fields impact our well-being, and much research is continuing to better understand the impacts. If you want to see the radio signals around you, there is an application for your cell phone that monitors electromagnetic signals. There are several out there; if you search for EMF Meter, you will find a wide range of them. Some claim to be “ghost” hunters, but really they just measure the amount of electrical energy in the air around you. My favorite is the EMF Radiation Meter by IS Solution, though it has quite a few ads. There is also a paid application for $2.99 that provides a map of where the radio signal is coming from called “Architecture of Radio.” If you are interested in studying the radio signals, it is worth the price.
Friday, March 6, 2020
Katherine Johnson, human computer
Thursday, March 5, 2020
1-16-2020 The Hype of 5G
Cellular network providers have been touting their new technology for over a year now, including promoting the fact that they are first to provide 5G service in the country. The question is, how much of what is being said about 5G is accurate and how much of it is marketing hype? I plan to address the facts of 5G in this week’s article and let the reader decide.
First of all, there are three different types of 5G being built in the U.S.: low-band, mid-band, and high-band mmWave implementations. The term mmWave refers to the radio frequency band between 30 GHz and 300 GHz, at the upper end of the microwave range. This band has historically been used primarily for applications like satellite links and short-range, point-to-point communications. It provides a solid carrier for high-speed internet service, but its signals have a very short range and are easily blocked by obstacles.
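The name “millimeter wave” comes straight from the physics: wavelength equals the speed of light divided by frequency. A quick sanity check (Python sketch):

```python
C = 299_792_458.0  # speed of light in meters per second

def wavelength_mm(freq_ghz: float) -> float:
    """Wavelength in millimeters for a signal at the given frequency in GHz."""
    return C / (freq_ghz * 1e9) * 1000.0
```

At 30 GHz the wavelength is about 10 mm, and at 300 GHz it is about 1 mm, which is exactly why this band is called millimeter wave.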
The first technology, which provides the fastest service, is the high-band 5G adopted primarily by AT&T and Verizon, with a few market areas from T-Mobile. This technology is about ten times faster than the current 4G technology in widespread use today. It also has very low latency, which means a message gets sent nearly instantaneously, but the drawback is that to get maximum speed out of the network, you have to be standing very near a cellular tower. In the best-case scenario, you could download a standard-definition full-length movie in 32 seconds, compared to over five minutes on today’s networks. However, you would have to be within 80 feet of a tower to achieve those transfer speeds.
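Those download numbers check out with simple arithmetic: download time is just file size divided by connection speed. A sketch (Python; the 2 GB movie size and the 500 Mbps and 50 Mbps speeds are my own illustrative assumptions):

```python
def download_seconds(file_gb: float, speed_mbps: float) -> float:
    """Seconds to download a file of file_gb gigabytes at speed_mbps megabits/second."""
    megabits = file_gb * 8 * 1000  # gigabytes -> gigabits -> megabits
    return megabits / speed_mbps

five_g = download_seconds(2.0, 500.0)   # assumed high-band 5G rate near a tower
four_g = download_seconds(2.0, 50.0)    # assumed typical 4G rate
```

Under these assumptions the movie takes 32 seconds on high-band 5G versus over five minutes on 4G, matching the figures quoted above.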
The mid-band technology in use by Sprint is about six times faster than 4G; it has a longer range than high-band 5G but still much shorter than 4G. What this means is that nearly twice as many towers will need to be installed to provide 5G service to all the same areas that receive 4G today, increasing the overall power consumption of cellular providers.
The low-band 5G in use by T-Mobile and AT&T achieves only about a 20 percent performance increase over 4G technologies. However, the low-band solution has nearly the same coverage area per tower as 4G, making low-band 5G much less expensive to roll out. This is likely the type of 5G network we will see in our area.
Secondly, you cannot purchase a phone today that supports all three technologies, so your awesome new 5G cellular phone is likely to work at 5G speeds only on your own provider’s towers, and incompatibilities between the technologies prevent data roaming. This turns out to be a problem not only for you as an end user, but also for the providers of the technology. The only way to keep compatibility for roaming is to keep 4G transmitters and receivers in operation, increasing the cost of both the provider gear and consumer cellular phones.
Lastly, every provider has its own approach to providing 5G services, using a mix of technologies. This creates problems both for companies with regard to data roaming and for end users, who are locked in not only to a provider but also to a geographical area.
T-Mobile has a nationwide low-band 5G network and a smaller but much faster high-band 5G network in six major U.S. cities. There is currently only one phone that works on the high-band network, and it will not work on the low-band network. The two phones they released for low-band likewise will not work on the high-band network, so your service area is determined by the 5G phone model that you own.
Sprint is in the process of building their mid-band network in parts of nine U.S. cities. You are stuck with a limited choice of four devices that will operate on their network, one data device and three phones.
AT&T has low-band networks in 20 markets for “consumers” and high-band networks in small areas of 35 markets, focused primarily on providing service to businesses. AT&T currently sells only two phone models and a single Wi-Fi hotspot device that can utilize this new network. AT&T is claiming to offer 5G today in all markets, but is actually just upgrading its existing 4G networks.
Verizon is working on building out the largest high-band 5G network. It is currently providing service in 40 markets, but you need to be within 80 feet of a tower to get a 5G connection, and they charge extra for 5G access.
I guess ultimately what I am saying is that 5G is currently a mess for consumers, leaving us somewhat in the dark as to the best choices for future cellular phones and plans. Both Samsung and Apple have planned releases as early as February 11, 2020, that are expected to support all three network types and configurations. These new 5G phones should solve a lot of the consumer-side issues with 5G, and we can expect wireless network speeds to improve drastically in the coming months.
Monday, March 2, 2020
12-26-19 Tracking Santa
[Image: a rotating radar antenna, from Wikimedia by user Bukvoed, used under Creative Commons CC0 License. The type of radar used by NORAD to track Santa rotates steadily, sweeping the sky with a narrow beam searching for aircraft.]