Wednesday, May 6, 2020

Digital Censorship

I saw something today that I thought I would never see in my lifetime. A doctor in Madagascar developed a homeopathic remedy for COVID-19 that is apparently being used across the nation and has allowed it to reopen and return to life as normal. However, if you try to search for pricing or availability of this drink, Covid-Organics, in Google's shopping category, you get blocked.

It even blocked my ability to take a screenshot of the block, which is the reason for the low-quality photo. I wanted to make my readers aware of what is starting to happen with regard to free speech in our country and the world.

Blocked Facebook post.

This is not the only thing being censored. The website www.plandemicmovie.com is asking for help getting their video documentary onto the major platforms. It gets blocked within minutes of being posted because it is speaking what I believe to be the truth about COVID-19 and vaccines in general. I will be posting their video here later tonight, as well as sharing it across my social media platforms, and I ask my readers to do the same, even if you don't agree with the views in the video, because this is more about protecting free speech and freedom of the press than a shared opinion on medical care.
In fact, if you have a link to videos that are being blocked on the other side of the issue leave a comment here and I will be glad to attempt to get the word out about them as well, even though I may not agree with the content.
We need to flood the gatekeepers of internet censorship and speak out for our freedom.

I just found out that a joke I posted yesterday was censored by Facebook as well. So much for free speech.


Thursday, April 23, 2020

Which is more important to you, health or privacy?

In light of the “new normal” brought about by the COVID-19 virus, there are some technically possible, but ethically questionable, ideas on how to track and prevent the spread of the virus. I am talking about the utilization of the Bluetooth chip in your cell phone to track other people, or rather the other cell phones that have come within a given radius of yours.

The technology has existed for a long time and many of us use it daily. Bluetooth is a short-distance, high-frequency radio transmitter and receiver pair found in a majority of cell phones, radios, computers, televisions and vehicles manufactured in the last 10 years. The technology was introduced by Ericsson in 2001 in their T36 phone. It allows devices to communicate with each other across distances of 33 feet or less. The crazy part is that the latency of the Bluetooth signal, which is the time it takes the signal to make a round trip between two devices, can be used to determine the distance between the devices very accurately.

Both Apple and Google are working together to engineer a supposedly anonymous method of using this signal to determine when two devices come in close proximity to each other. Currently they are engineering around a distance of six feet to match the World Health Organization’s social distancing recommendations. The plan is to send tracking information back to Google and Apple in a centralized, cloud-based database. Google is likely to host the data for both parties. A COVID-19 patient would use their device to scan a QR-code supplied by their doctor. This device then becomes a trigger mechanism and notifies every device that has been within six feet of it that the owner may have come in contact with the virus and should be tested.

I believe it is a technologically feasible use case for Bluetooth and a great way to notify people of possible contact with the virus, but what about the privacy aspects? There are claims that they will use randomized Bluetooth IDs that change hourly on every device participating in the program. These random IDs will be stored in a central database, but never cross-referenced to the previous ID. Each device will keep a list of every ID it has broadcast for a period of 14 days, based on the incubation period of the disease. If a device owner is infected, the list of all its IDs is cross-referenced in the database to notify any device that recorded one of those IDs. The big problem is the extremely large data transfer required to preserve that privacy.
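To make the idea concrete, here is a minimal Python sketch of how a device might rotate random IDs, log IDs it hears nearby, and later check itself against a published list of IDs from infected users. The rotation interval, retention window and class names are my own assumptions for illustration; the real Apple/Google protocol derives its rolling identifiers cryptographically rather than storing raw random values.

```python
import secrets
import time

ROTATION_SECONDS = 3600      # assumed hourly ID rotation
RETENTION_DAYS = 14          # matches the 14-day incubation window

class ContactTracer:
    """Toy sketch of rotating-ID contact logging (not the real Apple/Google protocol)."""

    def __init__(self):
        self.my_ids = []      # (timestamp, random_id) pairs this device has broadcast
        self.seen_ids = []    # (timestamp, random_id) pairs heard from nearby devices

    def current_id(self):
        # Generate a fresh random ID once per rotation period.
        now = time.time()
        if not self.my_ids or now - self.my_ids[-1][0] >= ROTATION_SECONDS:
            self.my_ids.append((now, secrets.token_hex(16)))
        self._expire(self.my_ids)
        return self.my_ids[-1][1]

    def record_contact(self, other_id):
        # Called whenever Bluetooth reports another device within roughly six feet.
        self.seen_ids.append((time.time(), other_id))
        self._expire(self.seen_ids)

    def check_exposure(self, published_infected_ids):
        # Compare locally stored contacts against IDs published for infected users.
        return any(rid in published_infected_ids for _, rid in self.seen_ids)

    @staticmethod
    def _expire(entries):
        # Drop anything older than the retention window.
        cutoff = time.time() - RETENTION_DAYS * 86400
        while entries and entries[0][0] < cutoff:
            entries.pop(0)
```

Even in this simplified form you can see why the data volumes matter: every device carries two growing lists, and an exposure check means shipping an infected user's full two-week ID history to everyone else.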

Another major problem is, as I have mentioned in previous articles on internet safety, that once something is on the internet, it is there forever. This includes the private random IDs and every other random ID they have contacted. If you believe for a minute that data processing giant Google does not have a market plan for this data, you are fooling yourself. They claim they will keep the information completely private and that only authorized health care organizations will be able to access the tracking data. I take issue with them gathering the data in the first place. Even knowing how many devices I come in contact with in a given two-week period is a big enough invasion of privacy to give me concern.

I also worry that if Google and Apple are able to create this level of tracking, they are not the only ones capable of creating such a system. Your private life is no longer private if you carry a cell phone. They will be able to track where you are, when you got there and who is there with you. So my question to you is, how much privacy are you willing to give up for protection from an illness? I know the question is going to be asked of us all soon. You can read full details about the plans and technology on The Verge at https://www.theverge.com/2020/4/10/21216484/google-apple-coronavirus-contract-tracing-bluetooth-location-tracking-data-app.

Tuesday, April 21, 2020

Drones using sonar for sight

One of the most interesting innovations in robotics in the last year was the development of a sonar system for robots that utilized a simple system of a single speaker and two microphones to mimic the sonar of bats. These “Robats” were developed at Tel Aviv University in Israel early last year.

You might ask what the advantage is of using sonar for robotics when we already have excellent computer vision techniques and LiDAR. There are two main factors to take into consideration. The first is that LiDAR units are both heavy and expensive, which are real problems for developing flight-worthy drones. The trade-off is detail: LiDAR can model the environment at a resolution of about one centimeter, versus the five-centimeter resolution of sonar.

A typical LiDAR unit bounces a laser beam off of materials in the environment several times a second, resulting in a lot of data to process in order to map the environment. The sonar unit developed in Tel Aviv sends three pulses every 30 seconds and processes them into a map accurate enough for navigation, with much less data.
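The core of any sonar (or SODAR) ranging system is simple time-of-flight math: send a chirp, time the echo, and convert the delay into a distance using the speed of sound. Here is a minimal Python sketch of that calculation; the echo time in the example is made up purely for illustration.

```python
SPEED_OF_SOUND = 343.0  # meters per second in air at about 20 C

def distance_from_echo(round_trip_seconds):
    """Convert a sonar echo's round-trip time into a one-way distance in meters."""
    return SPEED_OF_SOUND * round_trip_seconds / 2.0

# Example: an echo that returns 5.8 milliseconds after the chirp
print(distance_from_echo(0.0058))  # roughly 1 meter to the reflecting object
```

A real unit repeats this for each microphone and uses the differences between the two echo times to estimate direction as well as distance.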

A typical LiDAR unit costs around $150 for a low-end model with a resolution similar to the sonar used in the Robat. The students at Tel Aviv replaced this unit with $15 in parts, an overall cost reduction of 90 percent. They also reduced the weight of the mapping unit substantially, from around five pounds to around five ounces.

Just last month they modified the system from using a single speaker and two microphones to using two speakers and four microphones to allow for 3-D mapping from a flying drone, versus the walking Robat introduced in January 2019.  

I took great interest in this project as I have a lot of junk electronic components lying around and wondered if it would be possible to reproduce their results with simple electronics found around the house. I discovered that not only is it possible, but it is basically how they started the project.  

If you would like to learn how to do something like this, you can follow a project on Instructables.com to build a spinning SODAR device (the name, a blend of sonar and LiDAR, keeps it from being confused with a real radar or sonar system). I plan on building one as soon as I can gather all the components. You can find the project at https://www.instructables.com/id/DIY-360-Degree-SODAR-Device/.

I foresee a lot of people tinkering with robotics as we are ordered to stay at home. I recommend picking up an Arduino kit that comes with a fairly large number of electronic components and taking some time to learn how to build your own robot. Who knows? You might make the next big robotics discovery in your own home.

Friday, April 10, 2020

Supercomputing and COVID-19

You might think that computers and the COVID-19 virus have very little to do with one another. However, there are several things that have happened in the computing industry as a result of COVID-19. The first was the massive increase in working from home, which caused an unexpected spike in the amount of video streaming traffic on the internet. The second was in the supercomputing industry.

The largest supercomputers in the world, usually used by government agencies for defense and energy research, have been repurposed for medical research. The Summit supercomputer at Oak Ridge National Lab had the primary purpose of designing next-generation batteries, nuclear power reactors and nuclear weapons systems; it has been completely retooled in recent weeks to search for drug treatments.

How exactly does a supercomputer search for drugs that can help fight viral infections? It is a very interesting science. Every virus has a unique chemical shape, as does every drug. Through a history of clinical trials and laboratory testing, scientists now know how to determine if drugs will help to treat viruses by looking at how the “shapes” of the chemical in the drug and virus fit together. It’s like trying to put together a trillion-piece puzzle. 

As it turns out, drugs that fit tightly against the surface of the virus seal it off from the body, cutting it off from the cells it needs to interact with and blocking it from causing symptoms in the carrier. Since we have a very large database of the chemical compounds in every manufactured drug, and we have the chemical profile of the virus, it is just a matter of finding the compounds that will be effective in sealing off the virus.

Finding these compounds requires a very deep search through millions of drugs that can be combined in more than several trillion possible combinations, each reacting with the virus in different ways. These computational reactions require an extreme amount of computing power, and most molecular biologists do not have access to the resources needed to simulate models of this size. They normally run these models over the course of several months to come up with a weighted list of possible treatments, then manually test those compounds in a lab before coming to a useful conclusion. The process of drug matching and design for each new disease normally takes a year or more.
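The overall shape of such a screen is simple even though the chemistry is not: score every candidate against the target, rank the results, and send the top hits to the lab. The sketch below is deliberately a toy; the compound names and the random scoring function are stand-ins for real structure files and a real docking code, and the only point is to show the embarrassingly parallel screen-and-rank pattern that supercomputers accelerate.

```python
import random
from multiprocessing import Pool

# Hypothetical compound library; a real screen would read millions of structures.
COMPOUND_LIBRARY = [f"compound_{i}" for i in range(100_000)]

def docking_score(compound):
    """Stand-in for a real docking simulation: returns a fake binding score."""
    random.seed(compound)           # deterministic per compound, purely illustrative
    return random.random()

def screen(library, top_k=10):
    """Score every compound in parallel and return the best-scoring candidates."""
    with Pool() as pool:
        scores = pool.map(docking_score, library)
    ranked = sorted(zip(library, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:top_k]

if __name__ == "__main__":
    for name, score in screen(COMPOUND_LIBRARY):
        print(f"{name}: {score:.4f}")
```

Swap the fake scoring function for a docking simulation that takes minutes per compound and the library for millions of molecules, and it becomes clear why thousands of Summit's nodes get used at once.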

Summit is not the only supercomputer being used to research drug treatments for COVID-19. In fact, if the virus is under control before November, I expect there will be a large number of research papers based around COVID-19 presented at this year's supercomputing conference, SC20. The US has opened 30 supercomputers, or at least portions of them, to global researchers for use in finding a treatment for COVID-19. Europe, Russia, China and Japan are all doing the same.

If you want to know more about supercomputing and drug research, just hit Google and you will find countless articles on supercomputers by IBM, HPE, Hitachi, Dell, Atos and Penguin Computing, to name a few, being utilized for drug research. If only the power of quantum computing were ready for real computational chemistry and biology research, we might have a cure already.

Stay safe and use the time away from friends and work to learn something new. I highly recommend picking up a programming language. Python is fairly easy to learn and there are a lot of free tutorials for all ages. 

Saturday, April 4, 2020

Livestreaming for churches


By Scott Hamilton


Last week I wrote about the impact of COVID-19 on internet service providers. I had promised an article this week regarding the impact on local churches and their need to live stream their services. Many of your churches may already be using Facebook Live to keep in touch. That seems to be the easiest way for people to gain access to video content, and it is also quite simple for churches to use. However, if you want a more professional look for your online service, I have some recommended tools.

The first software I would recommend is called Open Broadcaster Software, or OBS for short. This is a piece of software that allows you to connect to any IP-based web camera, including the one integrated into most cell phones. This will allow you to use multiple camera angles as well as stream content directly from your computer to give your service a more personal feel. The best part about OBS is that it is open source software, meaning that it is free. You can safely download it for MacOS, Windows or Linux at https://obsproject.com.

I downloaded OBS last week and began to play with it myself. I discovered it is as simple to use as PowerPoint. You simply set up your cameras, screen captures and audio devices, then drag and drop any of the devices from the list to a display window. You can display one at a time or overlay them with each other. It is a drag-and-drop, “what you see is what you get” interface for video broadcast. You can stream videos to Facebook Live, YouTube, Twitter or any number of free, online streaming services.

The second thing I would recommend is a high-quality HDMI capture card. This will allow you to use a secondary computer to display lyrics during worship, Bible verses during teaching, or any other text or image content, just like you use on your current projector in the sanctuary. This greatly simplifies the process of overlaying visual content with live video streams of the speaker or the worship team. 

The third recommendation is the proper cable for connecting your soundboard directly to the computer. This will create higher quality sound than relying on the microphone on your cell phone, camera or computer. It is probably best to feed the sound signal from a separate channel on your existing soundboard; this way you can adjust the audio levels for high quality broadcast. Otherwise you may pick up background noise, echoes and other acoustical problems arising from the shape, size and acoustical properties of your sanctuary. 

Lastly, I would recommend one of the top sites for both supporting and streaming Christian content for churches. Life Church, at http://www.lifechurch.tv, offers free websites and streaming services for churches regardless of denomination; they also offer technical support via e-mail or chat to help you get things going. Head on over and check them out.

I would like to thank all the area churches for the efforts they have put forward to continue the positive message of Christ during the crisis of COVID-19. I especially applaud the churches in the area with very little technical expertise. They have made the effort to learn technology in order to continue reaching their community. In conclusion, I am willing to help if you are stuck with a technological question. Feel free to email me at 3kforme@gmail.com. Enjoy your week and stay safe.

Friday, March 27, 2020

Covid-19 impact on the internet

By Scott Hamilton

I would like to open by stating that this week’s article is a lot more opinion and observation than pure fact about the impacts of COVID-19 on the internet. There have been clear impacts to network infrastructure as a result of the social distancing and work-from-home orders across the country, but there are just not enough hard numbers available yet to measure the impact precisely.

Over the past week there have been numerous articles published around the world questioning whether the current internet infrastructure could handle the extra traffic from everyone being forced to work from home. The Wall Street Journal predicted that the infrastructure was not ready for the extra workload and that we would be looking at websites failing to load and online meeting platforms overloading. The New York Times reported exactly the opposite, that we would see minimal impact from the extra load. As a work-from-home, high performance computing engineer and cloud architect, I had some questions myself over whether or not the infrastructure could handle it. The European Union requested that Netflix stop streaming high definition video to reduce the network load across Europe during the outbreak as a precaution against failures.

We are over a week into a new society, at least for a short period of time, where a majority of us are working from home. The impact has been minor in my experience as a user of rural internet service split between a mix of cellular network tethering, satellite service and microwave-based internet. I have actually found performance increasing since the onset. I must admit it came as a surprise. Here’s the deal: my providers all lifted their imposed bandwidth limits during the outbreak.

I wonder if they will reimpose the limits following the outbreak. If so, it will lead to a lot of questions from customers. If you ask the reason for the limits, they will say it is because their network cannot handle the full load of all the users. The fact that the limits have been lifted proves that their networks can handle that load. Now is the perfect time to stream all your favorite television shows, download all your favorite books and use the internet to your fullest ability, because the limits will come back.

The real impact this virus has had on technology is that the imbalance of network connectivity among students has come into full light. There are students all over Texas County and the surrounding counties who do not have the bandwidth needed to stream online classes during the school-from-home period. This prompted many providers, not only in our area, to lift limits on educational sites and cloud-based services.

Among the companies to lift the bandwidth restrictions are AT&T, Verizon, T-Mobile, Sprint, HughesNet, U.S. Cellular and Comcast, to name a few. Many others are offering steep discounts on new service installations, taking advantage of our need for speed. I would say that now is a great time to look into getting high speed internet service if you do not already have it, mainly because these deals will likely never come around again.

There are also several 30-day free trials for streaming and education services to keep us entertained and educated during the social distancing period. I challenge you to take a look around for special offers and enjoy trying some new things online. Hey, if you can’t explore the community, you may as well get out there and explore the virtual world.

Next week I plan on doing an article on live-streaming and give details as to how many of our local churches are beginning to offer online worship services and Bible teaching. I will provide pointers to those that are looking for ways to continue their services online. I have also seen local dance studios, fitness trainers and others offer online classes in lieu of face-to-face training. Take a moment to enjoy the flexibility technology has brought during this time of crisis, and who knows, maybe you will find something new to enjoy even after this is all over. 

Wednesday, March 18, 2020

Cloud and HPC workloads part 3

   The final reason HPC workloads are slow to migrate to the cloud has to do with storage. HPC systems generate and process very large quantities of data; most have storage capacities well above one petabyte.
  There are three main factors impacting HPC and cloud storage. The first is that HPC applications expect a POSIX file system, usually implemented on block devices, where file system objects point to linked blocks of data that can be accessed either sequentially or randomly. Many of these applications use file storage like shared memory, modifying individual blocks within a file, which requires low latency from the file system as well as organized access patterns. Cloud storage uses block storage deep under the hood, but it hides those blocks and instead serves files as objects in its storage platform, in effect simplifying their structure and returning them as a single stream of data. You cannot easily modify the content of an object, so using objects as memory addresses does not really work.
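The difference is easy to see in code. Below is a minimal sketch contrasting an in-place POSIX block update with the download-modify-reupload cycle that object storage forces on you. The file path, bucket and key names are hypothetical, and the object half assumes the boto3 S3 client; the point is simply that the object store has no partial write.

```python
import os
import boto3

# POSIX block-backed file: a single 4 KB block can be rewritten in place.
fd = os.open("/scratch/simulation.dat", os.O_RDWR)   # hypothetical scratch file
os.lseek(fd, 4096 * 1000, os.SEEK_SET)               # jump straight to block 1000
os.write(fd, b"\x00" * 4096)                         # overwrite just that block
os.close(fd)

# Object storage: there is no partial write; the whole object is replaced.
s3 = boto3.client("s3")
obj = s3.get_object(Bucket="my-hpc-bucket", Key="simulation.dat")   # hypothetical names
data = bytearray(obj["Body"].read())                                # download everything
data[4096 * 1000:4096 * 1001 * 1] = b"\x00" * 4096                  # modify one "block"
s3.put_object(Bucket="my-hpc-bucket", Key="simulation.dat", Body=bytes(data))  # upload everything
```

On a petabyte-scale dataset, the second pattern is obviously a non-starter, which is exactly the problem described above.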
  The second shortcoming is the limited access control in object storage, which makes it difficult to secure these large datasets at a granular level in the cloud. Controlling access for individual users at the object level becomes extremely challenging.
  The final limitation is the unpredictable performance of object-based storage. The location of an object affects both the speed of retrieval and the refresh rate of the object, making it impossible to treat the object as a shared memory space for HPC applications; there is no guarantee that an update to an object is stored before a neighboring process reads it, which causes major problems in code performance.
    There are global experts working on solutions to these and other problems relating to HPC workloads in the cloud, but I feel we are still a few years away from seeing mainstream HPC use of cloud architectures.

Tuesday, March 17, 2020

Happy late pi day!

In honor of the never-ending number pi, we have one of the few internationally celebrated math holidays. Pi Day was first celebrated on March 14, 1988, a date which also happens to coincide with Einstein’s birthday, March 14, 1879. It began as part of an Exploratorium staff retreat in Monterey, Calif. In March 2009, the U.S. House of Representatives officially recognized March 14 as National Pi Day.

As part of my recognition of Pi day, I would like to explore the history of the number, who first discovered it, how it was originally estimated, and simple ways you can estimate it yourself. For starters, Pi is the ratio of a circle’s circumference to its diameter, or the length all the way around a circle divided by the distance directly across the circle. No matter how large or small a circle is, its circumference is always Pi times its diameter. Pi = 3.14159265358979323846… (the digits go on forever, never repeating, and so far no one has found a repeating pattern in over 4,000 years of trying.)

One of the most ancient manuscripts from Egypt, a collection of math puzzles, shows pi to be 3.1. About a thousand years later, the book of 1 Kings in the Bible implies that pi equals 3 (1 Kings 7:23), and around 250 B.C. the greatest ancient mathematician, Archimedes, estimated pi at around 3.141. How did Archimedes attempt to calculate pi? He did it through a series of extremely accurate geometric drawings, sandwiching a circle between two straight-edged regular polygons and measuring the polygons. He simply added more and more sides and measured pi-like ratios until he could not draw any more sides to get closer to an actual circle.

Hundreds of years later, Gottfried Leibniz proved through his new process of integration that pi/4 is exactly equal to 1 – 1/3 + 1/5 – 1/7 + 1/9 – …, going on forever, with each added term bringing the sum closer to the true value. The big problem with this method is that to get just 10 correct digits of pi, you have to follow the sequence for about 5 billion fractions.
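You can watch just how slowly the Leibniz series converges with a few lines of Python; the digits noted in the comments are approximate.

```python
def leibniz_pi(terms):
    """Approximate pi by summing the first `terms` terms of the Leibniz series."""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)   # 1 - 1/3 + 1/5 - 1/7 + ...
    return 4 * total

print(leibniz_pi(1_000))      # roughly 3.1406 -- only a few digits correct
print(leibniz_pi(1_000_000))  # roughly 3.1415917 -- six digits after a million terms
```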

It was not until the early 1900s that Srinivasa Ramanujan discovered a very complex formula for calculating pi, and his method adds eight correct digits for each term in the sum. Starting in 1949, calculating pi became a job for computers, and ENIAC, one of the first electronic computers in the U.S., was used to calculate pi to over 2,000 digits, nearly doubling the pre-computer record.

In the 1990s the first Beowulf-style “homebrew” supercomputers came on the scene, and one of their early showcase problems was calculating pi and other irrational numbers to as much accuracy as possible. Some of these systems ran for several years to reach 4 billion digits. Using the same techniques over the years, we are currently at 22 trillion digits. This is a little overkill considering that, using only 15 digits of pi, you can calculate the circumference of the Milky Way galaxy to within an error of less than the size of a proton. So why do it? President John F. Kennedy said we do things like this, “not because they are easy, but because they are hard; because that goal will serve to organize and measure the best of our energies and skills.”

Attempting to calculate pi to such high accuracy drove the supercomputing industry, and as a result we have the likes of Google’s search engine that indexes trillions of webpages every day, computers that can replace physics research labs by simulating the real world, and artificial intelligence systems that can beat the world’s best chess players. Where would we be today without the history of this number?

Now, as I promised, there is a way you can estimate pi with very simple math, by playing a simple game called “Pi Toss.” You will need a sheet of paper, a pencil and a bunch of toothpicks; the more toothpicks, the closer your estimate will be. Step 1: Turn the paper to landscape orientation and draw two vertical lines on the paper, top to bottom, exactly twice the length of a toothpick apart. Step 2: Randomly toss toothpicks, one at a time, onto the lined paper, counting them as you toss. Keep tossing until you are out of toothpicks, but don’t count any that miss the paper or stick off its edge. Step 3: Count all the toothpicks that touch or cross one of your lines. Step 4: Divide the number of toothpicks you tossed by the number that touched a line, and the result will be approximately equal to pi. How close did you come? To find out how this works, read more about Pi Toss at https://www.exploratorium.edu/snacks/pi-toss.
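If you would rather let the computer do the tossing, here is a small Monte Carlo sketch of the same game in Python. The spacing of twice the toothpick length matches the instructions above; with that spacing, the fraction of toothpicks touching a line works out to 1/pi.

```python
import math
import random

def pi_toss(num_toothpicks, length=1.0):
    """Simulate the Pi Toss game: lines are spaced two toothpick-lengths apart."""
    spacing = 2 * length
    touching = 0
    for _ in range(num_toothpicks):
        center = random.uniform(0, spacing)       # distance of the center from the left line
        angle = random.uniform(0, math.pi)        # orientation relative to the lines
        reach = (length / 2) * math.sin(angle)    # how far the toothpick extends toward a line
        if center <= reach or spacing - center <= reach:
            touching += 1
    return num_toothpicks / touching              # tossed / touching approximates pi

print(pi_toss(1_000_000))  # typically lands near 3.14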

Cloud and HPC workloads part 2

    The second reason HPC workloads are slow to migrate to the cloud is related to cloud networks. A majority of cloud service providers designed their network infrastructure for the individual compute resources common in IT services data centers. For example, your corporate file servers do not need a high speed network between them as long as they each have good connectivity to the client systems. The same goes for web servers, database servers and most other IT workloads.
    HPC systems rely heavily on high speed, low latency network connections between the individual servers for optimal performance. This is because of how they share memory resources across processors in other systems. They use a library called MPI (Message Passing Interface) to share information between processes; the faster this information can be shared, the higher the performance of the overall system.
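For a sense of what that message passing looks like, here is a minimal sketch using the mpi4py Python bindings, in which one rank ships a small piece of data to its neighbor. The data contents and the file name are made up; the round-trip time of exactly this kind of exchange is what a low-latency interconnect buys you.

```python
# Run with: mpirun -n 2 python ping.py   (assumes the mpi4py package and an MPI runtime)
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank = comm.Get_rank()

if rank == 0:
    data = {"step": 1, "boundary_values": [0.1, 0.2, 0.3]}
    comm.send(data, dest=1, tag=11)       # rank 0 ships its boundary data to rank 1
    print("rank 0 sent boundary data")
elif rank == 1:
    data = comm.recv(source=0, tag=11)    # rank 1 blocks until the message arrives
    print("rank 1 received", data)
```

A real simulation repeats exchanges like this thousands of times per second between hundreds of ranks, which is why every microsecond of network latency matters.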
   HPC systems use networks that are non-blocking, meaning that every system has 100 percent of the available network bandwidth to every other system in the network. They also use extremely low latency networks, reducing the delay for a packet traveling from one system to another to as little as possible.
    In cloud-based systems there is usually a high blocking factor between racks and a low one within a rack, resulting in a very unbalanced network with increased latency for high-performance workloads, sometimes so poor that some HPC applications will not run to completion. In recent months some cloud providers have made efforts to redesign their network infrastructure to support HPC applications, but there is more work to be done.

Monday, March 16, 2020

Cloud and HPC workloads

    Companies across industry are making rapid migrations to cloud-based resources, except for a lag in one area: high performance computing. There are a few reasons for this lag. First, most companies run their HPC systems at near 90 percent utilization, which leaves much less cost savings in a migration. Heavily utilized systems are usually less expensive to run in your own data center than at the premium charged for cloud resources. The exception is high speed storage. Cloud storage turns out to be fairly performant and much less costly than on-premises storage, but the latency between it and on-premises compute makes that combination unusable.
    Tomorrow I will talk about the second reason HPC migrations to the cloud are lagging behind conventional datacenter migrations.

Wednesday, March 11, 2020

A brief history of computer encryption

     Encryption has existed longer than computers have been around. In short, encryption is a secure method of communication between two parties. They both must know some “secret” that allows them to share messages no one else can read. The simplest form of encryption is letter substitution, for example, shifting letters: A becomes D and Z becomes C, each letter replaced by the one three positions ahead, starting over at A when you run past Z. The secret in this case is the number three. The sender and receiver both know that the letters were shifted three characters to the right, allowing them to communicate without someone else easily reading the message.
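That shift-by-three scheme, the classic Caesar cipher, fits in a few lines of Python; shifting by the negative of the secret undoes it.

```python
def caesar(message, shift):
    """Shift each letter `shift` places ahead in the alphabet, wrapping around at Z."""
    result = []
    for ch in message:
        if ch.isalpha():
            base = ord("A") if ch.isupper() else ord("a")
            result.append(chr((ord(ch) - base + shift) % 26 + base))
        else:
            result.append(ch)            # leave spaces and punctuation alone
    return "".join(result)

secret = caesar("attack at dawn", 3)     # 'dwwdfn dw gdzq'
print(secret)
print(caesar(secret, -3))                # shifting back by 3 recovers the message
```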
     In June 1944, Bailey Whitfield Diffie was born. Diffie was always very independent; he did not learn to read until age 10. He didn’t have any disability; he just preferred that his parents read to him. They followed his wishes and patiently waited for him to learn. In the fifth grade he started reading, above grade level. Mary Collins, his teacher at P.S. 178, spent an afternoon with Diffie explaining the basics of cryptography. He never forgot the lessons learned that day.
     Diffie loved cryptography and took an interest in learning more about the topic. He learned that those with the secret keys practice decryption, and those who don’t have the secret key but try to access the secret information are practicing cryptanalysis. In order to avoid the draft, Diffie took up computer programming and went to work at the Mitre Corporation. He shifted to working with the MIT AI lab in 1966 and began the first discussions on using cryptography to protect computer software and digital information.
     Diffie’s research contradicted the National Security Agency and the work being done by IBM in conjunction with the National Bureau of Standards to institute the Data Encryption Standard (DES). Diffie and his Stanford colleague, Marty Hellman, regarded DES as tainted and potentially fraudulent due to the possibility of an NSA trapdoor, which would allow the NSA, and conceivably IBM, to decrypt messages without knowing the secret. This brought about further research into the difficult problem of allowing two people or devices that had never communicated before to communicate securely. They could not exchange secret keys if they had never communicated, so how could they share those keys in a secure way? How do you create a system where all conversations can be protected with cryptography? How can you get a message from someone you have never met and be sure they were the sender and that no one else could read the message? This is the conundrum of secure computer communications.
     This is where our current public key encryption infrastructure was born. Keeping keys secret was difficult; the very thing needed to read the protected communications had to be passed unencrypted between two people, increasing the chances of compromise. Diffie came up with the idea of using a key pair instead of a single key. It took him more than half a decade to perfect the technology, but he eventually solved the issue. Here is how it works.
     Let’s say Alice wants to send a secret message to Bob. She simply asks Bob for his public key, or looks it up in a “phone directory” of public keys. Alice then uses Bob’s public key to scramble the message; now only Bob’s private key can decrypt the message. Let’s say George intercepts the message; without Bob’s private key, George only gets a scrambled mash of data. Bob can read the message because he is the only person in the world with both halves of the key (public and private). Alice can also encrypt a small part of the message with her private key that can only be decrypted with her public key, so Bob can know for certain the message came from Alice. This is the key to all modern secure communication, including secure phone conversations, and was the result of the research of one key individual, Whit Diffie. 
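The math that makes the public/private split possible can be shown with a toy example. The sketch below is “textbook RSA” with deliberately tiny numbers, just to illustrate the Alice-and-Bob story above; real keys use primes hundreds of digits long and add padding, so this is not how you should ever encrypt anything in practice. (The modular inverse call requires Python 3.8 or newer.)

```python
# Toy "textbook RSA" with tiny numbers, purely to illustrate the public/private key idea.
p, q = 61, 53                 # two small primes (real keys use primes hundreds of digits long)
n = p * q                     # 3233, part of both keys
phi = (p - 1) * (q - 1)       # 3120
e = 17                        # public exponent; (e, n) is Bob's public key
d = pow(e, -1, phi)           # 2753, private exponent; (d, n) is Bob's private key

message = 65                  # a message encoded as a number smaller than n

ciphertext = pow(message, e, n)    # Alice encrypts with Bob's PUBLIC key
recovered = pow(ciphertext, d, n)  # only Bob's PRIVATE key can undo it

print(ciphertext)   # 2790 -- gibberish to anyone who intercepts it
print(recovered)    # 65   -- Bob gets the original message back
```

Signing works the same way in reverse: Alice raises a digest of her message to her private exponent, and anyone holding her public key can verify it, which is how Bob knows the message really came from Alice.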

Gresik official release.

GNOME 3.36 "Gresik" was officially released today after six months of development effort. The part that excites me the most is the massive performance improvements included in this release. In the past I have felt that GNOME was a resource drain on overall system performance, and I cannot wait to test this release out.
I'll be posting results of the tests in the next few days. For more details please visit http://www.gnome.org/news

Monday, March 9, 2020

1-23-2020 Signals Around Us

You may not realize just how many electromagnetic signals pass through your body every day. It is not really known whether there are health effects from these signals, but I find it very interesting to see how much they have increased in the last decade. Just looking at cellular towers, in 2010 there were 250,000 in the U.S. and today there are 350,000, and this number will increase to over one million in the coming few years to support 5G technology.

So, what do these signals look like to our body? The human body has an electrical frequency that it likes to operate within and this matches the natural frequency of the earth. The earth and its atmosphere vibrate at the fundamental frequency of 7.83Hz. If you encounter signals at harmonic frequencies with the earth, it amplifies the impact of the signal.

What is a harmonic frequency? It is a signal whose peaks and valleys overlap the peaks and valleys of the fundamental frequency. The harmonic frequencies of the earth and the human body are known as the Schumann resonances and are 14.3, 20.8, 27.3 and 33.8 Hz. Our electric grid in the U.S. operates at a frequency of 60 Hz, which falls outside the natural harmonic range.

We create signals every day that impact the natural resonance of earth and this is a good thing. We would not want to match the resonant frequencies, as it would cause some dangerous situations. Let me give you an example. Have you ever seen videos where an opera singer breaks a glass with their voice? This happens because their voice matches the resonant frequency of the glass, causing vibrations in the glass to build on each other and eventually vibrating the glass apart. Imagine what might happen if we produced enough vibrations at resonant frequencies of the earth. Would it be possible to shatter the planet? Let’s hope we never find out.

So, is it a bad thing to be impacted by non-resonant frequencies as well? There is another frequency interaction called damping. A damping frequency can cause vibrations to decrease in amplitude; we use damping techniques to create things like noise-canceling headphones and engine mufflers. If you can create a damping frequency at the same amplitude as the resonant frequency of an object, you can completely stop its natural vibrations.

This means that it is possible to stop or amplify the natural frequency vibration of your body by exposure to radio waves. We know that increasing this vibration by applying resonant frequency harmonics can cause damage, but we don’t really know if stopping the vibration can cause damage or not.

It just so happens that our cellular phones operate in the 800-megahertz range, which just happens to be very close to the 800.8-megahertz harmonic frequency of our bodies. So every time our cell phone sends a message, we make a call or we upload a video or picture, we amplify the natural vibrations of our body. Granted it is by a tiny amount, but is there an impact to our health? There are a few studies that indicate extreme exposure to these frequencies can cause mental health issues.

Although most of these studies have been dismissed as having no basis in science, there is still a question of how these magnetic fields impact our well-being, and much research is continuing to better understand the impacts. If you want to see the radio signals around you, there is an application for your cell phone that monitors electromagnetic signals. There are several out there; if you search for EMF Meter, you will find a wide range of them. Some claim to be “ghost” hunters, but really they just measure the amount of electrical energy in the air around you. My favorite is the EMF Radiation Meter by IS Solution, though it has quite a few ads. There is also a paid application for $2.99 that provides a map of where the radio signal is coming from called “Architecture of Radio.” If you are interested in studying the radio signals, it is worth the price.

Friday, March 6, 2020

Katherine Johnson, human computer

Katherine Johnson was instrumental in making the lunar landing mission a success through her work as a mathematician with NASA. She died on Feb. 24, 2020, at the age of 101. It is very fitting to me that we honor her, not only because of the loss of a great mathematician, but also for her contributions to society in general. As Black History Month came to a close over the weekend, it was another reminder of her contributions and those of others.
Johnson was born in White Sulphur Springs, W.Va. in 1918. Her “intense curiosity and brilliance with numbers” allowed her to skip several grade levels in school and she attended high-school on the campus of the historically black West Virginia State College. In 1937, she graduated with the highest honors and degrees in mathematics and French. She was among the first three students of color who were offered admission to West Virginia University’s graduate program in 1939; she never completed her graduate studies but went on to become a wife and mother. 
In 1952, after her children were grown, she and her husband moved to Newport News, Va., where she pursued a position in the all-black West Area Computing section at NASA. Last July the NASA Independent Verification and Validation Facility in Fairmont, W.Va., was renamed the Katherine Johnson Independent Verification and Validation Facility in her honor. I find it interesting that this very facility is where I began my career in computing.
Johnson not only drove our nation’s space program to new frontiers, but she also blazed the trail for women of color to enter scientific fields dominated by men. It is unfortunate that Johnson, as well as the women who worked alongside her, Dorothy Vaughan and Mary Jackson to name a couple, were relatively unknown until the release of the movie “Hidden Figures” in 2016. Jackson and Vaughan did not live to see the well-deserved film honoring their work at NASA; Jackson died in 2005 and Vaughan in 2008.
Johnson stated that her greatest contribution to space exploration was “the calculations that helped synchronize Project Apollo’s Lunar Lander with the moon-orbiting Command and Service Module." Her work was instrumental in putting men on the moon in 1969. More accurately, she helped get men back safely from the moon, as docking with the Command and Service module was required for a safe flight back home.
Ted Skopinski, along with other male lead scientists at NASA, would have normally taken full credit for the work of the “computers,” but Skopinski shared the credit with Johnson, making her the first female to receive credit as an author on a research report detailing the equations describing an orbital space flight.
Her first major work in orbital space flight was running the trajectory analysis for Alan Shepard’s 1961 Freedom 7 mission, the first American manned spaceflight. She also contributed to John Glenn’s first American orbital space flight. In 1962, space flight trajectory tracking required the construction of a “worldwide communications network” linking computers around the world back to NASA mission control in Washington, D.C., Cape Canaveral and Bermuda.
Electronic computers were new to the scene and there was not much trust in their accuracy or reliability, so in the case of Glenn’s space flight, he refused to fly until Johnson ran the calculations by hand. Throughout her career at NASA, Johnson authored or co-authored 25 research reports contributing to NASA programs as recent as the Space Shuttle and Earth Resource Satellites. She retired from NASA after more than three decades of work in 1986.

Thursday, March 5, 2020

1-16-2020 The Hype of 5G

  Cellular network providers have been touting their new technology for over a year now, including promoting the fact that they are first to provide 5G service in the country. The question is, how much of what is being said about 5G is accurate and how much of it is marketing hype? I plan to address the facts of 5G in this week’s article and let the reader decide.

First of all, there are three different types of 5G being built in the U.S.: low-band, mid-band and high-band mmWave implementations. The term mmWave refers to the radio frequency band between 30 GHz and 300 GHz, at the upper end of the microwave range. This band has primarily been used for things like satellite links and other short-distance, line-of-sight communications. It provides a solid carrier signal for high speed internet communications, but the frequency is too high to carry audio signals.

The first technology, which provides the fastest service, is the high-band 5G adopted primarily by AT&T and Verizon, with a few market areas from T-Mobile. This technology is about ten times faster than the current 4G technology in widespread use today. It also has very low latency, which means the message gets sent nearly instantaneously, but the downfall is that for the maximum speed out of the network, you have to be standing very near a cellular tower. In the best case scenario, you could download a standard definition full length movie in 32 seconds, compared to over five minutes on today’s networks. However, you would have to be within 80 feet of a tower to achieve those transfer speeds.
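To see where figures like “32 seconds versus over five minutes” come from, here is a quick back-of-the-envelope calculation in Python. The movie size and the two throughput numbers are my own illustrative assumptions, not measured speeds.

```python
def download_seconds(file_gigabytes, speed_megabits_per_second):
    """How long a file takes to download at a given sustained throughput."""
    file_megabits = file_gigabytes * 8 * 1000      # GB -> megabits
    return file_megabits / speed_megabits_per_second

MOVIE_GB = 1.0          # assumed size of a standard-definition movie
print(download_seconds(MOVIE_GB, 250))   # ~32 s at an assumed 250 Mbps mmWave connection
print(download_seconds(MOVIE_GB, 25))    # ~320 s (over five minutes) at an assumed 25 Mbps 4G connection
```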

The mid-band technology in use by Sprint is about six times faster than 4G; it has a longer range than high-band 5G, but its range is still much shorter than 4G’s. What this means is that nearly twice as many towers will need to be installed to provide 5G service to all the areas that receive 4G today, increasing the overall power consumption of cellular providers.

The low-band 5G in use by T-Mobile and AT&T achieves only a 20 percent performance increase over 4G technologies. The low-band solution has nearly the same coverage area per tower as 4G, making low-band 5G much less expensive to roll out. This is likely the type of 5G network we will see in our area.

Secondly, you cannot purchase a phone today that will support all three technologies, so your awesome new 5G cellular phone is likely to only work on your provider’s towers at 5G speeds and prevent data roaming due to incompatibilities in the technologies. This turns out to be a problem not only for you as an end user of the technology, but also for the providers of the technology. The only way to keep compatibility for roaming is to keep 4G transmitters and receivers in operation, increasing the cost of both the provider gear and consumer cellular phones.

Lastly, every provider has their own approach to providing 5G services, using a mix of technologies. This creates problems both for companies in regards to data roaming and for end users in regards to being locked in to not only a provider, but also a geographical area.

T-Mobile has a nationwide low-band 5G network and a smaller but much faster high-band 5G network in six major U.S. cities. There is currently only one phone that works on the high-band network, and it will not work on the low-band network. The two phones they released for low-band likewise will not work on the high-band network, so their service is localized based on the 5G phone model that you own.

Sprint is in the process of building their mid-band network in parts of nine U.S. cities. You are stuck with a limited choice of four devices that will operate on their network, one data device and three phones.

AT&T has low-band networks in 20 markets for “consumers” and high-band networks in small areas of 35 markets focused primarily on providing service to businesses. AT&T currently sells only two phone models and a single wifi hotspot device that can utilize this new network. AT&T is claiming to offer 5G today in all markets, but is actually just upgrading its existing 4G networks.

Verizon is working on building out the largest high-band 5G network. It is currently providing service in 40 markets, but you need to be within 80 feet of a tower to get a 5G connection, and they charge extra for 5G access.

I guess ultimately what I am saying is that 5G is currently a mess for consumers, leaving us somewhat in the dark as to the best choices for future cellular phones and plans. Both Samsung and Apple have releases planned as early as February 11, 2020, that are expected to support all three network types and configurations. These new 5G phones will solve a lot of the consumer-side issues with 5G, and we can expect wireless network speeds to improve drastically in the coming months.

Monday, March 2, 2020

12-26-19 Tracking Santa

 
Wikimedia by user Bukvoed, used under Creative Commons CC0 License. 
The type of radar used by NORAD to track Santa rotates 
steadily, sweeping the sky with a narrow beam searching 
for aircraft.
Every year NORAD brings up their official Santa tracking radar. You can track Santa’s location for yourself at www.noradsanta.org. In light of the Christmas season, I wanted to share a few facts about Santa’s travels around the globe and the technology behind NORAD.
Santa travels about 56 million miles to reach every home in the world. Not counting the time he is in your house, he would have to travel at 560 miles a second to make his global trek. This is 3000 times faster than the speed of sound, but still 300 times slower than light. This is a good thing for NORAD since they use radio waves that travel at the speed of light to track moving objects. This also means that you might see Santa fly overhead, but you will never hear him coming.
So how does NORAD work? NORAD uses a network of satellites, ground-based radar, airborne radar and fighter jets to detect, intercept and, if necessary, engage any threat to Canada and the United States. Lucky for Santa, he is too fast to intercept. The fastest man-made vehicle, the Ulysses space probe, tops out at only 27.4 miles per second, a speed it could never reach in the earth’s atmosphere, and that is still more than ten times slower than Santa’s sleigh.
NORAD can track Santa because of how radar systems work. In the simplest of explanations, you can think of a radar station as throwing pulses of radio waves at the sky in a known pattern. If a pulse hits something, it bounces straight back to the radar unit. The radar unit knows how long it takes a pulse, traveling at the speed of light, to cover a given distance and can then determine where the object was when the pulse hit it.
The crazy part is that they can only track where Santa was, and maybe guess where he is going, but not where he is right now. The speed at which he moves means that if he is 3,000 miles from the radar unit, the pulse takes about 0.03 seconds to make the round trip off his sleigh, and in that time he will have traveled roughly 18 miles. This means that we never know if Santa is in town, only that he was here, because by the time we detect him on radar he is already gone.
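The arithmetic behind that is just distance, the speed of light, and Santa’s estimated speed from above; a tiny Python sketch makes it easy to try other distances.

```python
SPEED_OF_LIGHT_MPS = 186_282    # miles per second
SANTA_SPEED_MPS = 560           # miles per second, from the estimate above

def santa_lag(distance_miles):
    """How far Santa moves while a radar pulse makes the round trip to his sleigh."""
    round_trip_seconds = 2 * distance_miles / SPEED_OF_LIGHT_MPS
    return round_trip_seconds, SANTA_SPEED_MPS * round_trip_seconds

seconds, miles_moved = santa_lag(3000)
print(f"{seconds:.3f} s round trip, Santa has moved about {miles_moved:.0f} miles")
```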
So can we tell exactly where Santa is with satellite imagery? Actually, it gets much worse with satellites. Since the satellites orbit at around 22,000 miles above the surface, by the time a satellite sees Santa the light has taken about 0.12 seconds to reach it, and he has already cleared a distance of roughly 67 miles. In this case the satellite can tell that he has been in the county, but not until long after he has left.
So kids, let’s just say your chances of catching Santa in person on Christmas night are very poor. You might catch him on a very high-speed camera as a blur, but even with a camera a few feet away, in the fractions of a second it takes the camera to capture the image, he will have moved across the room.

Friday, February 28, 2020

12-12-19 Airbags

A couple of times over the last month I have been asked about airbags. Questions have usually been along the lines of, “I hit a deer at 65 miles per hour and my airbag didn’t deploy, but I hit a curb at 15 mph and my airbags deployed. Why?” It has everything to do with how the airbag sensors work. This week’s article explains the sensor, what it does and why sometimes it sets off the airbags with low impact and not with high impact accidents.
The first thing you need to know is that airbag sensors are not impact sensors, but rather shock sensors. The difference is that impact sensors work like a push button switch; when an impact moves the sensor far enough, it turns on a switch. A shock sensor, on the other hand, is based on a change in momentum rather than the movement of a mechanism. In other words, your bumper can move several inches, and as long as it is moving at a steady speed, the sensor will not detect an accident, even if your bumper is completely crushed.
Momentum defines the motion of a moving body and is calculated by multiplying the object’s mass by its velocity. The faster you are moving, the more momentum you have. The heavier you are, the more momentum you have. The airbag sensor measures momentum and triggers the airbag if there is a large enough change in momentum.
The sensor does not exactly measure the momentum of the car, but rather the momentum of the sensor itself. It works by using a suspended mass and some impact sensors. The suspended mass will not touch the impact sensors unless a large change in momentum occurs, causing it to hit one of the impact sensors. You can think of it as a marble in a shallow dip in the center of a box. If the box stays level and moves steadily, the marble stays in the middle; if you bump the box, the marble will roll to one of the sides. If the marble touches the side of the box, the airbag deploys.
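In software terms, the trigger logic boils down to asking whether velocity changed too quickly. Here is a toy sketch of that check; the threshold, the time interval and the unit conversion are illustrative assumptions, not values from any real airbag controller.

```python
DEPLOY_THRESHOLD_G = 20.0        # assumed: deploy only on a very sudden change in speed
G_IN_MPH_PER_SECOND = 21.9       # 1 g (9.8 m/s^2) expressed in mph per second

def should_deploy(speed_before_mph, speed_after_mph, interval_seconds):
    """Deploy if the change in velocity (momentum per unit mass) is abrupt enough."""
    deceleration_g = abs(speed_before_mph - speed_after_mph) / interval_seconds / G_IN_MPH_PER_SECOND
    return deceleration_g > DEPLOY_THRESHOLD_G

print(should_deploy(65, 65, 0.10))   # False: hit the deer, speed never changed
print(should_deploy(20, 5, 0.02))    # True: curb strike, 15 mph lost in a fiftieth of a second
```

This matches the two scenarios below: a crushed bumper at constant speed never trips the check, while a sudden stop against a curb does.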
Now let’s look at the two scenarios. First, you hit a deer at 65 mph and the airbags do not deploy. Imagine the marble-in-the-box sensor sitting on your dashboard. You strike the deer but the car does not slow down or swerve. The marble and the box stay firmly on the dashboard. This is because your momentum did not change. Remember, to change momentum, your speed must change. You struck the deer and maintained your speed. The car took major damage, but the airbag never deployed because your momentum didn’t change.
Now for the second case, you curb the tire and the airbag deploys, causing you to lose control of the car. How did this happen and why? Think again about the marble in the box on your dashboard. You are taking the corner, and you hit the curb. The car bounces and instantly slows down from 20 miles per hour to five miles per hour. The marble rolls forward and hits the front of the box. This causes the airbags to deploy. The change in momentum because of the rapid slowing sets off the airbag.
This is both good news and bad news. The good news is that the change in momentum is what causes your injuries in the event of an accident, and the airbag usually deploys when there is danger of your being tossed around or thrown from the car. The bad news is that sometimes a rapid slowdown from striking a pot hole or curb, or even just hitting the sensor while the engine is running, can cause the airbags to deploy, causing more damage to the car than the incident itself.

Wednesday, February 26, 2020

12-5-19 Supercomputing: a year in review

This year has been an exciting year in the supercomputing industry as there have been many new developments. Two weeks after the world’s largest conference in the industry, SC19, all the announcements have made it to the internet and opinions have been shared, which makes it the perfect time to highlight the developments of the year.
In my opinion, the top two developments in supercomputing have been two new players in the processor market finally taking a significant share in the industry. The past few years have been dominated by Intel and Nvidia. However, this year two new players began to take some of the glory. AMD and ARM have both designed new processors and seem to be taking a market share from Intel with computing systems on the TOP500 list, a list of the top 500 most powerful computers in the world.
AMD brought products to market competing with both Intel and Nvidia. The AMD EPYC processors offer higher memory bandwidth than Intel processors and push the performance envelope, averaging 30 percent higher performance on supercomputing workloads than Intel. The AMD Radeon Instinct beat Nvidia to the punch by releasing the world’s first PCIe 4.0-based graphics processing units. The increased communication bandwidth, coupled with an optimized software stack and virtualization technologies, drives higher efficiency and datacenter optimization than Nvidia’s current technologies. I would say the biggest obstacle for AMD is the fear of adopting new technologies in research-based workloads.
ARM made its entrance into supercomputing in 2011 as part of the European Mont-Blanc project. ARM had already dominated the mobile market; chances are you have an ARM processor in your pocket right now powering your smart phone. However, making the leap from mobile to the datacenter was a challenging proposition. This year they made a big splash with the introduction, in partnership with Nvidia and various hardware vendors, of GPU-accelerated Arm-based servers. These servers hit the strongest area of the HPC market, as a majority of the TOP500 supercomputers are powered primarily by Nvidia GPUs. Coupling the energy efficiency of GPUs and Arm CPUs results in some of the most energy efficient computing platforms available today.
The other big change in supercomputing is the string of new breakthroughs in machine learning and artificial intelligence. These breakthroughs are redefining scientific methods and enabling exciting opportunities for new architectures. The floodgates are open for new technologies that accelerate machine learning methods, which are less computationally intensive and more memory intensive. Driven by these breakthroughs, ARM and AMD are both bringing higher memory performance to the market than we have ever seen before.
The final technology breakthrough in supercomputing in 2019 was the proof of quantum supremacy, which we have talked about before. It does not mean a lot right now and will probably not steal the show until around 2030, as the adoption of new technologies in supercomputing seems to move slowly.
Why is any of this important to you? Today’s supercomputers are used for weather forecasting, new product development, drug research and scientific modeling. Supercomputers have replaced the laboratory in many industries, reducing the total cost of product development thus reducing the cost of the product. As supercomputing power increases, the cost of consumer goods should decrease and the level of public safety should increase.

Tuesday, February 25, 2020

11-28-19 How to find a meteorite

  In last week’s article I wrote about using hints from social media to track meteors. In that article there was a mention of a reward for finding parts of the meteor that flew over Missouri. I thought maybe someone wanted to hunt for it and did not know where to start, so here are some tips to meteorite hunting.
First I want to clear up the difference between a meteor and a meteorite. It is a meteor while it is flying overhead, burning from entering the atmosphere. It becomes a meteorite once it hits the ground and cools. So you don’t go hunting meteors; you would never catch one.
Earth is under constant attack by space rocks, most of which either burn up completely or get lost to the depths of the oceans. However, there have been more than 40,000 catalogued discoveries of meteorites on the surface, and countless more are out there waiting to be found. Space rocks can be as valuable as $1,000 a gram, which is about the weight of the average paperclip. However, this treasure hunt requires hard work and dedication.
As always when searching for something, especially on public land or land owned by someone else, ask for permission. For example, space rocks found in national parks belong to the federal government and cannot be legally kept, but it varies from park to park. The federal land managed by the Bureau of Land Management consists of 264 million acres and is different. It is a pretty safe bet that what you find there is yours to keep, but you should still ask. Meteorites, like other artifacts, belong to the land owner, so even searching on private land requires that you ask permission to keep what you find.
The second step is to pick a good spot to hunt. “Meteorites fall anywhere, but they are easiest to spot where there are few terrestrial rocks,” says Alan Rubin, a geochemist at the University of California, Los Angeles. Most meteorites are dark, so white sand deserts, icy regions and plains are ideal. Closer to home, the plains of Kansas are a great place to look because there is so little native rock; any new rocks farmers dig up have a good chance of being meteorites. More than one meteorite has been found in a farmer’s rock pile, or even propping open a screen door.
It is also a good idea to search where the newest arrivals came down. Bill Cooke, head of NASA’s Meteoroid Environment Office, suggests searching the ground below a meteor’s “dark flight,” the final part of the fall where the rock slows to below about 6,000 miles per hour and stops burning. When there is an accurate trajectory for a meteor, these dark-flight calculations are posted on various sites across the Internet.
You can also hunt for magnetic objects, since most meteorites are magnetic, though not all. Don’t rely too heavily on metal detectors either; most meteorites are discovered by sight rather than with special equipment. Finally, if you do find a meteorite, please share your find with the world.

Monday, February 24, 2020

11.21.2019 Tracking Meteors


Last week I reviewed three social networks, and I want to mention a fourth. You might ask, what does this have to do with meteors? It might seem strange at first, but there is actually a lot linking the two. We will get to that after I talk about Reddit.
Reddit, formed in 2005, is the self-proclaimed “front page of the internet.” Registered users (redditors) submit content on a variety of subjects: news, science, music, books, video games, technology and more. Redditors vote the content up or down, and Reddit sorts it based on those votes; content that receives enough up votes is displayed on the site’s front page. There is little need for censorship by the site owners because the voting system can raise posts to full visibility or make them disappear completely. The weakness of the system is that people can be paid to manipulate a post, falsely making it more popular or burying it altogether. Recently Reddit moderators have been accused of censorship, just like Facebook, but for the moment there is slightly more freedom on Reddit.
Now for the link to meteors. On Nov. 11, amid the Mercury transit and the Taurid meteor shower, a rogue meteor stole the show over Missouri. This lone meteor was not a member of the Taurid meteor shower; the brightness of the fireball and the direction of its orbit indicated it was a fragment of an asteroid.
The space rock was about the size of a basketball and weighed around 200 pounds, according to NASA Meteor Watch. It traveled northwest at 33,500 miles per hour and broke apart about 12 miles above Bridgeton, Mo. More than 300 people reported to the American Meteor Society (AMS) that they had seen it. This is the link between social media and meteors: the AMS can gather more information on meteors, their paths, speed, size and potential landing sites from posts on social media than was ever possible before. It combines information from photos of the meteor with data from satellite and ground-based measurement systems to piece together the full story. Every photo or video of the event adds data points for triangulating the meteor’s trajectory, raising the probability of locating any debris that reached the ground. So if you happen to get a picture of a meteor, please share it online to help the search for debris. There is a $25,000 cash reward for finding a fragment of this meteor that is at least one kilogram in mass.
Determining the landing site of a meteorite comes down to a series of calculations using angles and known distances to locate an object. If you have the exact locations of at least three photos of the meteor, you can use trigonometry to pin down a point on its path. Do this for at least two points and you have a rough flight path; the more pictures with location information you have, the better the model of the path becomes. Next week I will show you where to find dark-flight paths for meteors and share some tips on meteorite hunting. The plains of Missouri and Kansas are prime hunting grounds for meteorites.
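To give an idea of the trigonometry involved, here is a minimal sketch in Python. It assumes a flat-ground approximation and two photos taken from known locations, each giving a compass bearing toward the point directly under the fireball; the observer coordinates and bearings below are made up purely for illustration, and real reconstructions also use the meteor's altitude in each frame.

import math

def intersect_bearings(p1, brg1_deg, p2, brg2_deg):
    """Estimate where two bearing lines cross on a flat x/y plane.

    p1, p2             -- (east, north) observer positions, e.g. in kilometers
    brg1_deg, brg2_deg -- compass bearings in degrees (0 = north, 90 = east)
    Returns the (east, north) intersection point, or None if the lines are parallel.
    """
    # Convert compass bearings into unit direction vectors (east, north).
    d1 = (math.sin(math.radians(brg1_deg)), math.cos(math.radians(brg1_deg)))
    d2 = (math.sin(math.radians(brg2_deg)), math.cos(math.radians(brg2_deg)))

    # Solve p1 + t*d1 = p2 + s*d2 for t using a 2x2 determinant.
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-9:
        return None  # bearings are (nearly) parallel, so no useful fix
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])

# Two hypothetical observers 40 km apart photographing the same fireball.
print(intersect_bearings((0.0, 0.0), 45.0, (40.0, 0.0), 315.0))  # roughly (20.0, 20.0)

Repeat the calculation for a second moment in the fireball's flight and you have the two points that define the rough flight path described above.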

Sunday, February 23, 2020

2-6-2020 Impacting technologies of the 2010s

I always find it interesting to review technology developments at the end of a decade. This week I plan not only to list, but to talk a little bit about, the top new technological developments of the last decade. The years between 2010 and 2020 brought about some of the most amazing technology in history.

For the first time in history we have a generation of people that never knew life without a computer or the internet. Everyone under the age of thirty has probably had access to a computer of some kind for their entire lives. Over the last decade the computer has not only been a device that most people have access to, but one that most of us depend on for everyday life.

The decade brought surprising advancements in artificial intelligence, autonomous vehicles, robotic lawyers and real-time language translation, just to name a few. The technologies in this article are the ones I feel we will still be using and talking about well into the next decade.

In the 2000s Facebook started the social media trend and held the top spot among social networks until the early 2010s. In 2010, Instagram launched and became the most popular network among Gen-Zers, largely because of the influx of the older generation onto Facebook, which brought with it mainstream media, marketing and politics. Instagram became the preferred place for sharing personal lives on social networks. Facebook purchased Instagram in 2012, and the network has since grown to over a billion registered users.

In 2011, Spotify took the market by storm, offering the largest collection of music available for streaming on demand. This brought about a massive decline in online music piracy: with free, ad-supported services streaming all your favorite music and paying the music producers out of advertising revenue, the need to pirate music dropped tremendously.

In 2012 came the introduction of the Tesla Model S electric car. That seemed like the major release in transportation technology, but the most impactful change wasn’t a new car; it was ride sharing through Uber. Uber rolled out its ride-sharing service across major U.S. cities and around the world, fundamentally changing how people get around. Lyft also launched its ride-sharing service in 2012, making this the year of shared transportation.

In 2013, Chromecast came on the scene, allowing people to stream video services to any device with an HDMI port. Chromecast is not all that useful today, since the technology is now integrated into almost every device with a screen, but it was a top-selling product in 2013.

2014 was the year of the smartwatch, with the announcement of the Apple Watch, which was in effect an iPhone shrunk into a tiny watch form factor. The first model had all kinds of issues, but as Apple has worked through them it has become the best smartwatch on the market today.

Amazon Echo started the smart speaker trend in 2015 as online voice assistants moved from the phone into the home. The device combined Bluetooth technology with machine learning and speech recognition. The Echo dominated the smart speaker market until Google’s competing device, Google Home, began cutting into its lead later in the decade.

AirPods came on the scene in 2016, freeing us from wired earbuds and letting us move away from our audio devices. There was worry about them being easily lost when they were first released, but the freedom they give the user has largely outweighed that fear, and they are now nearly as popular as wired earbuds.

The Nintendo Switch changed the gaming console forever, working both as a stationary system tied to a large-screen TV and as a portable you can take on the road. The console itself includes a screen, with detachable controllers, so game play can continue away from home. The release of the Switch in 2017 brought a new approach to gaming hardware.

2018 was the year of the shareable electric scooter, which seems to have become a permanent fixture in many major cities. The scooters have forced changes to city ordinances and have been banned outright in some places. As a result, the technology has lost some of its staying power, but vehicle sharing has spread to electric cars offered in a similar manner across several U.S. cities.

Last, but not least, is TikTok, which took off in 2019. As Gen Z kids break into adulthood, this is the platform most likely to become the main method of communication among their peers. The short-video service has become one of the top contributors to internet content storage, accounting by some estimates for close to 15 percent of all the data on the internet today, and some projections have it growing well beyond that within the next couple of years.

1-30-2020 Server Processors in 2020

Every time you log into Facebook or search for something on Google, you are accessing servers. Servers are remote computer systems that “serve” information or services to other computer systems. Years ago these servers used completely different technology than the computer on your desk, but like all things, technology has changed.

Servers used to be room-sized computers running specialized processors for rapid data access. These mainframes used parallel memory access and linked multiple processors in a single system so the server could talk to many computer terminals at once. As technology has advanced, even the processor in your cellular phone has the capabilities of a mainframe from 30 years ago. Yes, your phone is a computer.

What this means is that today every computer can run the same type of processor. You might ask how this affects the companies that design and build servers and processors. Interestingly, it keeps the competition very lively; in the last couple of years the landscape of the server technology business has changed dramatically.

The big players like IBM and Intel are, of course, still in the game and still control most server platforms, but there are a couple of lesser-known giants as well. Among them is AMD, which over the last two years has made a major comeback to claim roughly 40 percent of the server processor market. A year ago it held only about 12 percent, and two years ago less than five percent.

How does a smaller company like AMD take on giants like IBM and Intel and create a landslide in just a year? Three factors play a major role in selecting a server processor: price, performance and availability. Two years ago, AMD released a new processor that performed about 40 percent better and cost about 20 percent less than anything Intel had available. Demand quickly began to outpace supply, and AMD’s market share suffered from the shortage. The windfall sales of 2018, however, allowed AMD to ramp up production and, as a result, take over a large portion of the processor market.

I mentioned three factors above, and there is another player whose designs end up in more processors than all the other designers’ combined. That player is ARM, which develops processor designs that other companies license. What makes ARM unique is that any manufacturer that licenses an ARM design can extend it, embed its own components and build its own unique processors. ARM-based processors dominate the overall processor market, with well over 100 billion chips shipped to date.

ARM processors consume little power, use a simpler instruction set and offer computing power similar to lower-end Intel and AMD processors. They are primarily used in cell phones, tablets, watches, calculators and other small electronic devices. Recently, however, there has been a strong push to build ARM-based servers for general computing. The price of ARM processors is lower, their power consumption is lower and their performance approaches that of the top-selling Intel and AMD parts, but their different instruction set means existing software has to be ported or recompiled, which causes some headaches for software vendors adopting the technology.
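
As a small aside, you can see which family your own machine belongs to, and therefore which builds of software it needs, with a few lines of Python; the exact strings vary by operating system, but this is the kind of check installers perform before downloading a package.

import platform

arch = platform.machine()  # e.g. 'x86_64' or 'AMD64' on Intel/AMD, 'aarch64' or 'arm64' on ARM
if arch.lower() in ("aarch64", "arm64"):
    print("ARM system: needs software built for the ARM instruction set")
elif arch.lower() in ("x86_64", "amd64"):
    print("Intel/AMD system: needs software built for the x86-64 instruction set")
else:
    print("Other architecture:", arch)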

Many market analysts say ARM is the future of server processor technology, and I share their belief, especially given the latest announcements from Amazon. Amazon has not only made new ARM processors available in its cloud service, but is shifting toward its own ARM chips as the default for the service. Amazon’s Graviton processors now run a large share of its web services workloads at a fraction of the cost, and Amazon is passing the savings on to its customers. ARM has all three factors working for it, price, performance and availability, to become the top server processor of the coming decade.

1-9-2020 Computer Images

Have you ever wondered how computers store, print and display images? That’s the topic of this week’s article. Computers are not all that smart when it comes to storing image information; they really only store numbers, as sequences of ones and zeros. So how do they keep a photo?

A photo, as far as a computer is concerned, is a grid of numbers that represent the colors in the image. Each spot on the grid holds four numbers between 0 and 255, one for each of four color channels. If the image is meant for viewing on a screen, it usually uses Red, Green, Blue and Alpha (RGBA); the higher the number, the brighter that color channel. Alpha is the transparency: since images can be overlaid on a computer screen, it is necessary to say what happens where they overlap. If the image is meant to be printed, it is usually stored as Cyan, Magenta, Yellow and Black (CMYK), where higher numbers mean more ink and therefore darker color. This has to do with the difference between how colors of light mix and how colors of ink mix.
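
As a minimal sketch of that idea, here is a hypothetical two-by-two screen image written out as plain Python lists, one [red, green, blue, alpha] set of numbers per pixel. Real image files add compression and metadata on top, but underneath it is just numbers like these.

# A 2x2 RGBA image: each pixel is [red, green, blue, alpha], each value 0-255.
tiny_image = [
    [[255, 0, 0, 255], [0, 255, 0, 255]],        # top row: solid red, solid green
    [[0, 0, 255, 255], [255, 255, 255, 128]],    # bottom row: solid blue, half-transparent white
]

red, green, blue, alpha = tiny_image[1][0]
print(red, green, blue, alpha)  # 0 0 255 255 -- the blue pixel in the bottom-left corner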

Two things tell the computer the proper size to display an image. The first is the physical size; for example, a standard photo print is four by six inches. The second is how many pixels, or rows and columns of numbers, are stored to represent the image. This can be defined either as a total width and height in pixels or as a ratio of pixels per inch. Many modern digital cameras capture images of roughly eight megapixels, or about eight million pixels; that is a grid of 3266 by 2450 pixels, which works out to 8,001,700 pixels. Notice that the megapixel figure says nothing about size in inches, so the pixels per inch can be changed to make the printed image much larger or much smaller.

How big can you print the photo? As big as you want, but if there are not enough pixels it will start looking like a mosaic of little squares instead of a smooth image. The general rule is no fewer than 300 pixels per inch. For an eight-megapixel image, that is 3266/300 (about 10.89) inches by 2450/300 (about 8.17) inches. Each pixel is a box of color; the more boxes you have per inch, the clearer the image. This is true both in print and on your screen.

This is how many websites make it nearly impossible to print their photos at good quality. How can a picture on a website look great on the screen and bad in print? Because it has too few pixels. Most websites display images less than five inches wide at about 100 pixels per inch, which makes the image around 500 pixels wide. From the math above, the largest print you can make cleanly is only 500/300 (about 1.67) inches wide. If you try to print an eight-by-ten photo from that web image, you get only about 62 pixels per inch, which means you will easily see the square pixels and have a very poor print.
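
The arithmetic in the last two paragraphs is easy to check for any photo. Here is a small Python sketch, with function names of my own choosing, that applies the 300-pixels-per-inch rule to the eight-megapixel camera image and to a 500-pixel-wide web image.

def max_print_size(width_px, height_px, ppi=300):
    """Largest print, in inches, that keeps at least `ppi` pixels per inch."""
    return width_px / ppi, height_px / ppi

def effective_ppi(width_px, print_width_in):
    """Pixels per inch you actually get when printing at a chosen width."""
    return width_px / print_width_in

print(max_print_size(3266, 2450))   # about (10.89, 8.17) inches for the 8-megapixel photo
print(max_print_size(500, 500))     # about (1.67, 1.67) inches for a 500-pixel web image
print(effective_ppi(500, 8))        # 62.5 pixels per inch if forced onto an 8x10 print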

You can sometimes rescue a low-resolution image with photo editing software such as Photoshop by using its resize options. You can usually double the size of an image by re-sampling it at a higher resolution before the loss of quality becomes obvious. Basically, the computer creates new pixels between the existing ones: depending on the method, it either copies the color of the nearest pixel or blends the colors of the neighboring pixels. Either way it magnifies any flaws in the original photo, so you cannot go much bigger.
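
Here is a rough sketch of those two approaches in plain Python, working on a single row of grayscale values to keep it short: nearest-neighbor enlargement simply repeats each pixel, while a simple blend inserts the average of each pair of neighbors. Real photo editors use more sophisticated resampling, but the principle is the same, which is why enlargement can never add real detail.

def upscale_nearest(row):
    """Double a row of pixel values by repeating each pixel."""
    doubled = []
    for value in row:
        doubled.extend([value, value])
    return doubled

def upscale_blend(row):
    """Double a row by inserting the average of each pair of neighbors."""
    doubled = []
    for i, value in enumerate(row):
        doubled.append(value)
        nxt = row[i + 1] if i + 1 < len(row) else value
        doubled.append((value + nxt) // 2)
    return doubled

original = [10, 200, 30]           # three grayscale pixel values
print(upscale_nearest(original))   # [10, 10, 200, 200, 30, 30]
print(upscale_blend(original))     # [10, 105, 200, 115, 30, 30]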

That is a little bit about how computers store, display and print images. If you see a picture in The Licking News that looks pixelated, you know the publisher likely started with a low-resolution image and did their best to bring it up to print quality. Submissions are often fine for the internet but too small for good printing; screen images are commonly prepared at around 72 pixels per inch, while newspaper print needs about 300. What looks fine on your phone may look bad on paper, and now you know why.