If you are looking for the fairly random ramblings of a rather average guy who happens to be a Grandfather, Soccer Dad, Pastor, and Expert in Hardware and Firmware, you have found the right blog. This blog will have a variety of posts, from how my kids performed in their soccer games, ballet, basketball, or acting, to how I fixed a problem on a server, repaired my car, played with my kids, or put together my sermon notes.
Wednesday, May 6, 2020
Digital Censorship
Thursday, April 23, 2020
Which is more important to you, health or privacy?
In light of the “new normal” brought about by the COVID-19 virus, there are some technically possible, but ethically questionable, ideas on how to track and prevent the virus. I am talking about using the Bluetooth chip in your cell phone to track other people, or rather the other cell phones that have come within a given radius of yours.
The technology has existed for a long time and many of us use it daily. Bluetooth is a short-distance, high-frequency radio transmitter and receiver pair found in the majority of cell phones, radios, computers, televisions and vehicles manufactured in the last 10 years. The technology was introduced by Ericsson in 2001 in their T36 phone. It allows devices to communicate with each other across distances of 33 feet or less. The crazy part is that the latency of the Bluetooth signal, which is the time it takes a signal to make a round trip between two devices, can be used to determine the distance between the devices very accurately.
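For the curious, the round-trip math is simple enough to try yourself. Here is a minimal Python sketch of the idea, assuming an idealized, noise-free measurement; in practice, real devices estimate distance mostly from signal strength, because clock resolution and processing delays swamp the raw flight time:

```python
# Toy round-trip-time to distance estimate. This shows only the
# time-of-flight arithmetic; real Bluetooth ranging leans on received
# signal strength (RSSI) rather than raw timing.

SPEED_OF_LIGHT = 299_792_458.0  # meters per second

def distance_from_rtt(rtt_seconds):
    """Distance between two radios given the signal's round-trip time."""
    return SPEED_OF_LIGHT * rtt_seconds / 2.0

# A round trip of about 67 nanoseconds corresponds to roughly 10 meters.
print(round(distance_from_rtt(66.7e-9), 1))
```

The divide-by-two is the whole trick: the signal travels out and back, so only half the round trip counts toward the separation between the devices.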
Both Apple and Google are working together to engineer a supposedly anonymous method of using this signal to determine when two devices come in close proximity to each other. Currently they are engineering around a distance of six feet to match the CDC's social distancing recommendations. The plan is to send tracking information back to Google and Apple to a centralized, cloud-based database; Google is likely to host the data for both parties. A COVID-19 patient would use their device to scan a QR code supplied by their doctor. This device then becomes a trigger mechanism and notifies every device that has been within six feet of it that its owner may have come in contact with the virus and should be tested.
I believe it is a technologically feasible use case for Bluetooth and a great way to notify people of possible contact with the virus, but what about the privacy aspects? There are claims that they will use randomized Bluetooth IDs that change hourly on every device participating in the program. These random IDs will be stored in a central database, but never cross-referenced to the previous ID. Each device will keep a list of every ID it has broadcast for a period of 14 days, based on the incubation period of the disease. If a device owner is infected, the list of all its IDs is cross-referenced in the database to notify the devices it contacted. The big problem is the extremely large data transfer required to preserve that privacy.
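To make the scheme a little more concrete, here is a toy Python sketch of how rotating random IDs and local matching could work. This is my own simplification for illustration, not Apple and Google's actual protocol:

```python
import secrets

class Device:
    """Toy model of the rotating-ID idea described above (my own
    simplification, not the real Apple/Google protocol)."""
    def __init__(self):
        self.broadcast_ids = []   # every ID this device has announced
        self.heard_ids = set()    # IDs heard from nearby devices

    def rotate_id(self):
        new_id = secrets.token_hex(16)  # fresh random ID each hour
        self.broadcast_ids.append(new_id)
        return new_id

    def record_contact(self, other_id):
        self.heard_ids.add(other_id)

    def check_exposure(self, infected_ids):
        # The server publishes only the infected device's own IDs;
        # each phone matches against them locally, which is the part
        # of the design meant to preserve privacy.
        return bool(self.heard_ids & set(infected_ids))

alice, bob = Device(), Device()
bob.record_contact(alice.rotate_id())  # Bob's phone hears Alice's beacon
print(bob.check_exposure(alice.broadcast_ids))  # True: Bob gets notified
```

Notice that the matching happens on the phone, not the server, which is exactly why the whole 14-day list of IDs has to be shipped to every participating device, and why the data transfer grows so large.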
Another major problem is, as I have mentioned in previous articles on internet safety, once something is on the internet, it is there forever. This includes the private random IDs and every other random ID they have contacted. If you believe for a minute that data processing giant Google does not have a market plan for this data, you are fooling yourself. They claim they will keep the information completely private and that only authorized health care organizations would be able to access the tracking data. I take issue with them gathering the data in the first place. Even knowing how many devices I come in contact with in a given two-week period is a big enough invasion of privacy to give me concern.
I also worry that if Google and Apple are able to create this level of tracking, they are not the only ones capable of creating such a system. Your private life is no longer private if you carry a cell phone. They will be able to track where you are, when you got there and who is there with you. So my question to you is, how much privacy are you willing to give up for protection from an illness? I know the question is going to be asked of us all soon. You can read full details about the plans and technology on The Verge: https://www.theverge.com/2020/4/10/21216484/google-apple-coronavirus-contract-tracing-bluetooth-location-tracking-data-app.
Tuesday, April 21, 2020
Drones using sonar for sight
One of the most interesting innovations in robotics in the last year was the development of a sonar system for robots that utilized a simple system of a single speaker and two microphones to mimic the sonar of bats. These “Robats” were developed at Tel Aviv University in Israel early last year.
You might ask what the advantage is of using sonar for robotics when we already have excellent computer vision techniques and LiDaR. There are two main factors to take into consideration. The first is that LiDaR units are both heavy and expensive. LiDaR does achieve a more detailed model of the environment, with measurements accurate to about one centimeter versus the five-centimeter resolution of sonar, but the weight and cost are both problems for developing flight-worthy drones.
A typical LiDaR unit bounces a laser beam off of materials in the environment several times a second, resulting in a lot of data to process in order to map the environment. The sonar unit developed in Tel Aviv sends three pulses every 30 seconds and processes them into an accurate map for navigation with much less data.
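Sonar ranging itself is the same time-of-flight idea as radar or LiDaR, only with sound instead of light. A quick Python sketch, assuming the standard speed of sound in room-temperature air:

```python
SPEED_OF_SOUND = 343.0  # m/s in air at about 20 degrees C

def echo_distance(delay_seconds):
    """Distance to an obstacle from the time between a chirp and its echo.
    The sound travels out and back, hence the divide-by-two."""
    return SPEED_OF_SOUND * delay_seconds / 2.0

print(round(echo_distance(0.02), 2))  # a 20 ms echo means an obstacle ~3.43 m away
```

Because sound travels nearly a million times slower than light, the echo delays are long enough to measure with cheap microphones and an ordinary microcontroller, which is a big part of why the Robat parts list is so inexpensive.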
A typical low-end LiDaR unit with a resolution similar to the sonar used in the Robat costs around $150. The students at Tel Aviv replaced this unit with about $15 in parts, an overall cost reduction of 90 percent. They also reduced the weight of the mapping unit substantially, from around five pounds to around five ounces.
Just last month they modified the system from using a single speaker and two microphones to using two speakers and four microphones to allow for 3-D mapping from a flying drone, versus the walking Robat introduced in January 2019.
I took great interest in this project as I have a lot of junk electronic components lying around and wondered if it would be possible to reproduce their results with simple electronics found around the house. I discovered that not only is it possible, but it is basically how they started the project.
If you would like to learn how to do something like this, you can follow a project on instructables.com to build a spinning SODAR (a blend of sonar and LiDaR, named to keep it from being confused with a real radar or sonar system). I plan on building one as soon as I can gather all the components. You can find the project at https://www.instructables.com/id/DIY-360-Degree-SODAR-Device/.
I foresee a lot of people tinkering with robotics as we are ordered to stay at home. I recommend picking up an Arduino kit that comes with a fairly large number of electronic components and taking some time to learn how to build your own robot. Who knows? You might make the next big robotics discovery in your own home.
Friday, April 10, 2020
Supercomputing and COVID-19
You might think that computers and the COVID-19 virus have very little to do with one another. However, there are several things that have happened in the computing industry as a result of COVID-19. The first was the massive increase in working from home, which caused an unexpected spike in the amount of video streaming traffic on the internet. The second was in the supercomputing industry.
The largest supercomputers in the world, usually used by government agencies for defense and energy research, have been repurposed for medical research. The Summit supercomputer at Oak Ridge National Lab had a primary purpose of designing next-generation batteries, nuclear power reactors and nuclear weapons systems; it has been completely retooled in recent weeks to search for drug treatments.
How exactly does a supercomputer search for drugs that can help fight viral infections? It is a very interesting science. Every virus has a unique chemical shape, as does every drug. Through a history of clinical trials and laboratory testing, scientists now know how to determine if drugs will help to treat viruses by looking at how the “shapes” of the chemical in the drug and virus fit together. It’s like trying to put together a trillion-piece puzzle.
As it turns out, drugs that fit tightly to the surface of the virus close it off from the body and cause it to die from lack of interaction. They also block the virus from causing symptoms in the carrier. Since we have a very large database of the chemical compounds in every manufactured drug and have the chemical profile of the virus, it is just a matter of finding the compounds that will be effective in sealing off the virus.
Finding these compounds requires a very deep search through millions of drugs that can be combined in more than several trillion possible combinations to react with the virus in different ways. These computational reactions require an extreme amount of computing power, and most molecular biologists do not have access to the computing power necessary to simulate models of this size. They normally run these models over the course of several months to come up with a weighted list of possible treatments, then have to manually test these compounds in a lab before coming to a useful conclusion. The process of drug matching and design for each new disease normally takes a year or more.
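The overall search pattern, stripped of all the real chemistry, looks something like this Python sketch. The scoring function and the compound data here are entirely made up; in a real screen, the score would come from an expensive docking simulation run for each compound, which is the step a supercomputer parallelizes across millions of candidates at once:

```python
import heapq

def screen(compounds, score, top_n=5):
    """Rank candidate compounds by how well they 'fit' the virus.
    `score` stands in for a real docking simulation; here it just
    reads a precomputed number."""
    return heapq.nlargest(top_n, compounds, key=score)

# Hypothetical data: each compound is (name, fit), where fit is a
# made-up 0-to-1 docking score, not real chemistry.
library = [("drug_a", 0.31), ("drug_b", 0.92), ("drug_c", 0.77)]
print(screen(library, score=lambda c: c[1], top_n=2))
# the best-fitting candidates come out first
```

The output of a real screen is exactly this kind of weighted shortlist; the lab work then confirms or rejects the top candidates, which is why the computing step alone cannot produce a treatment.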
Summit is not the only supercomputer being used to research drug treatments for COVID-19. In fact, if the virus is under control before November, I expect a large number of research papers based around COVID-19 research to be presented at this year's supercomputing conferences. The US has opened 30 supercomputers, or at least portions of them, to global researchers for use in finding a treatment for COVID-19. Europe, Russia, China and Japan are all doing the same.
If you want to know more about supercomputing and drug research, just hit Google and you will find countless articles on supercomputers by IBM, HPE, Hitachi, Dell, Atos and Penguin Computing, to name a few, being utilized for drug research. If only the power of quantum computing were ready for real computational chemistry and biology research, we might have a cure already.
Stay safe and use the time away from friends and work to learn something new. I highly recommend picking up a programming language. Python is fairly easy to learn and there are a lot of free tutorials for all ages.
Saturday, April 4, 2020
Livestreaming for churches
By Scott Hamilton
Last week I wrote about the impact of COVID-19 on internet service providers. I had promised an article this week regarding the impact on local churches and their need to livestream their services. Many churches may already be using Facebook Live to keep in touch. That seems to be the easiest way for people to gain access to video content, and it is also quite simple for the churches to use. However, if you want a more professional look for your online service, I have some recommended tools.
The first software I would recommend is called Open Broadcaster Software, or OBS for short. This is a piece of software that allows you to connect to any IP-based web camera, including the one integrated into most cell phones. This will allow you to use multiple camera angles as well as stream content directly from your computer to give your service a more personal feel. The best part about OBS is that it is free, open source software. You can safely download it for MacOS, Windows or Linux at https://obsproject.com.
I downloaded OBS last week and began to play with it myself. I discovered it is as simple to use as PowerPoint. You simply set up your cameras, screen capture and audio devices, then drag and drop any of the devices from the list to a display window. You can display one at a time or overlay them with each other. It is a drag-and-drop, “what you see is what you get” interface for video broadcast. You can stream videos to Facebook Live, YouTube, Twitter or any number of free online streaming services.
The second thing I would recommend is a high-quality HDMI capture card. This will allow you to use a secondary computer to display lyrics during worship, Bible verses during teaching, or any other text or image content, just like you use on your current projector in the sanctuary. This greatly simplifies the process of overlaying visual content with live video streams of the speaker or the worship team.
The third recommendation is the proper cable for connecting your soundboard directly to the computer. This will create higher quality sound than relying on the microphone on your cell phone, camera or computer. It is probably best to feed the sound signal from a separate channel on your existing soundboard; this way you can adjust the audio levels for high quality broadcast. Otherwise you may pick up background noise, echoes and other acoustical problems arising from the shape, size and acoustical properties of your sanctuary.
Lastly, I would recommend one of the top sites for both supporting and streaming Christian content for churches. Life Church, at http://www.lifechurch.tv, offers free websites and streaming services for churches regardless of denomination; they also offer technical support via e-mail or chat to help you get things going. Head on over and check them out.
I would like to thank all the area churches for the efforts they have put forward to continue the positive message of Christ during the crisis of COVID-19. I especially applaud the churches in the area with very little technical expertise. They have made the effort to learn technology in order to continue reaching their community. In conclusion, I am willing to help if you are stuck with a technological question. Feel free to email me at 3kforme@gmail.com. Enjoy your week and stay safe.
Friday, March 27, 2020
Covid-19 impact on the internet
By Scott Hamilton
I would like to open by stating that this week's article is a lot more opinion and observation than pure fact about the impacts of Covid-19 on the internet. There have been clear impacts to network infrastructure as a result of the social distancing and work-from-home orders across the country, but there are just not enough published facts to put a real number on the impact.
Over the past week there have been numerous articles published around the world questioning whether the current internet infrastructure could handle the extra traffic from everyone being forced to work from home. The Wall Street Journal predicted that the infrastructure was not ready for such an extra workload and that we could look forward to websites failing to load and online meeting platforms overloading. The New York Times reported exactly the opposite, that we would see minimal impact from the extra load. As a work-from-home high performance computing engineer and cloud architect, I had some questions myself over whether or not the infrastructure could handle the extra load. As a precaution to prevent failures, the European Union requested that Netflix stop streaming high definition videos to reduce the network load across Europe during the outbreak.
We are over a week into a new society, at least for a short period of time, where a majority of us are working from home. The impact has been minor from my experience as a user of rural internet service split between a mix of cellular network tethering, satellite service, and microwave-based internet. In fact, I have found performance increasing since the onset, which came as a surprise. Here's the deal: my providers all lifted their imposed bandwidth limits during the outbreak.
I wonder if they will reimpose the limits following the outbreak. If so, it will lead to a lot of questions from customers. If you ask the reason for the limits, they will say it is because their network cannot handle the full load of all the users. The fact that the limits have been lifted proves that their networks will handle the load of all the users. Now is the perfect time to stream all your favorite television shows, download all your favorite books and use the internet to your fullest ability, because the limits will come back.
The real impact this virus has had on technology is that the imbalance of network connectivity among students has come into full light. There are students all over Texas and the surrounding counties who do not have the necessary bandwidth to stream online classes during the school-from-home period. This prompted many providers, not only in our area, to lift limits on educational sites and cloud-based services.
Among the companies to lift the bandwidth restrictions are AT&T, Verizon, T-Mobile, Sprint, Hughesnet, U.S. Cellular and Comcast, to name a few. Many others are offering steep discounts on new service installations, taking advantage of our need for speed. I would say that now is a great time to look into getting high speed internet service if you do not already have it, mainly because these deals will likely never come around again.
There are also several 30-day free trials for streaming and education services to keep us entertained and educated during the social distancing period. I challenge you to take a look around for special offers and enjoy trying some new things online. Hey, if you can’t explore the community, you may as well get out there and explore the virtual world.
Next week I plan on doing an article on live-streaming and give details as to how many of our local churches are beginning to offer online worship services and Bible teaching. I will provide pointers to those that are looking for ways to continue their services online. I have also seen local dance studios, fitness trainers and others offer online classes in lieu of face-to-face training. Take a moment to enjoy the flexibility technology has brought during this time of crisis, and who knows, maybe you will find something new to enjoy even after this is all over.
Wednesday, March 18, 2020
Cloud and HPC workloads part 3
Tuesday, March 17, 2020
Happy late pi day!
In honor of the never-ending number pi, we have one of a few international holiday celebrations. Pi Day was first celebrated on March 14, 1988, which also happens to coincide with Einstein's birthday, March 14, 1879. Pi Day began as part of an Exploratorium staff retreat in Monterey, Calif. In March 2009, the U.S. House of Representatives passed a resolution officially recognizing Pi Day.
As part of my recognition of Pi day, I would like to explore the history of the number, who first discovered it, how it was originally estimated, and simple ways you can estimate it yourself. For starters, Pi is the ratio of a circle’s circumference to its diameter, or the length all the way around a circle divided by the distance directly across the circle. No matter how large or small a circle is, its circumference is always Pi times its diameter. Pi = 3.14159265358979323846… (the digits go on forever, never repeating, and so far no one has found a repeating pattern in over 4,000 years of trying.)
One of the most ancient manuscripts from Egypt, an ancient collection of math puzzles, shows pi to be 3.1. About a thousand years later, the book of 1 Kings in the Bible implies that pi equals 3 (1 Kings 7:23), and around 250 B.C. the greatest ancient mathematician, Archimedes, estimated pi at around 3.141. How did Archimedes attempt to calculate pi? He did it with a series of extremely accurate geometric drawings, sandwiching a circle between two straight-edged regular polygons and measuring the polygons. He simply added more and more sides, measuring pi-like ratios, until he could not draw any more sides to get closer to an actual circle.
Hundreds of years later, Gottfried Leibniz proved through his new process of integration that pi/4 was exactly equal to 1 – 1/3 + 1/5 – 1/7 + 1/9 – ..., going on forever, with each added term bringing the sum closer to pi/4. The big problem with this method is that to get just 10 correct digits of pi, you have to follow the sequence for about 5 billion fractions.
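Leibniz's series is easy to try in a few lines of Python, and it demonstrates just how slowly the sum converges:

```python
def leibniz_pi(terms):
    """Approximate pi with Leibniz's series: pi/4 = 1 - 1/3 + 1/5 - 1/7 + ..."""
    total = 0.0
    for k in range(terms):
        total += (-1) ** k / (2 * k + 1)  # alternating odd-denominator fractions
    return 4.0 * total

# Convergence is painfully slow: half a million terms gets only about six digits.
print(leibniz_pi(500_000))  # 3.14159...
```

Each extra correct digit costs roughly ten times as many terms as the last, which is why this elegant formula is useless for record-setting calculations.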
It was not until the early 1900s that Srinivasa Ramanujan discovered a very complex formula for calculating pi; his method adds eight correct digits for each term in his sum. Starting in 1949, calculating pi became a problem for computers, and ENIAC, one of the first electronic computers in the U.S., was used to calculate pi to over 2,000 digits, nearly doubling the pre-computer record.
In the 1990s the first Beowulf-style “homebrew” supercomputers came on the scene, and the technology was quickly put to use calculating pi and other irrational numbers to as much accuracy as possible. Some of these systems ran for several years to reach 4 billion digits. Using the same techniques over the years, we are currently at 22 trillion digits. This is a little overkill considering that, using just 40 digits of pi, you could calculate the circumference of the entire visible universe to within an error of less than the size of a hydrogen atom. So why do it? President John F. Kennedy said we do things like this, “not because they are easy, but because they are hard; because that goal will serve to organize and measure the best of our energies and skills.”
Attempting to calculate pi to such high accuracy drove the supercomputing industry, and as a result we have the likes of Google's search engine that indexes trillions of webpages every day, computers that can replace physics research labs by simulating the real world, and artificial intelligence systems that can beat the world's best chess players. Where would we be today without the history of this number?
Now, as I promised, there is a way you can estimate pi with very simple math, by playing a simple game called “Pi Toss.” You will need a sheet of paper, a pencil and a bunch of toothpicks; the more toothpicks, the closer your estimate will be. Step 1: Turn the paper to landscape orientation and draw two vertical lines on the paper, top to bottom, exactly twice the length of a toothpick apart. Step 2: Randomly toss toothpicks, one at a time, onto the lined paper, counting them as you toss. Keep tossing until you are out of toothpicks, but don't count any that miss the paper or stick off its edge. Step 3: Count all the toothpicks that touch or cross one of your lines. Step 4: Divide the number of toothpicks you tossed by the number that touched a line, and the result will be approximately equal to pi. How close did you come? To find out how this works, read more about Pi Toss at https://www.exploratorium.edu/snacks/pi-toss.
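If you would rather let the computer do the tossing, here is a short Python simulation of the same game, using toothpicks of length 1 and lines spaced 2 apart, just as Step 1 describes:

```python
import math
import random

def pi_toss(tosses, seed=1):
    """Simulate Pi Toss: toothpicks of length 1 tossed onto vertical
    lines spaced 2 apart (twice the toothpick length)."""
    rng = random.Random(seed)
    crossings = 0
    for _ in range(tosses):
        center = rng.uniform(0.0, 1.0)       # distance from toothpick center to nearest line
        angle = rng.uniform(0.0, math.pi)    # toothpick orientation
        if center <= 0.5 * math.sin(angle):  # does the half-length reach a line?
            crossings += 1
    return tosses / crossings  # Step 4: tossed divided by touching

print(round(pi_toss(200_000), 2))  # hovers near 3.14
```

This is the classic Buffon's needle experiment: with the lines spaced at twice the toothpick length, the chance of a crossing works out to exactly 1/pi, so the ratio of tosses to crossings estimates pi itself.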
Cloud and HPC workloads part 2
HPC systems rely heavily on high-speed, low-latency network connections between the individual servers for optimal performance. This is because of how they share memory resources across processors in other systems. They utilize a library called MPI (message passing interface) to share information between processes. The faster this information can be shared, the higher the performance of the overall system.
HPC systems use networks that are non-blocking, meaning that every system has 100% of the available network bandwidth to every other system in the network. They also use extremely low-latency networks, reducing the delay for a packet sent from one system to another to as low as possible.
In cloud-based systems there is usually a high blocking factor between racks and a low one within a rack, resulting in a very unbalanced network with increased latency for high-performance workloads, so poor that some HPC applications will not execute to completion. In recent months some cloud providers have made efforts to redesign their network infrastructure to support HPC applications, but there is more work to be done.
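The effect of a blocking factor on each server's share of the network is simple to work out. Here is a small Python sketch using made-up but typical numbers, not any particular vendor's specifications:

```python
def per_node_bandwidth(link_gbps, nodes_per_rack, uplinks_per_rack):
    """Worst-case inter-rack bandwidth per node when every node talks
    across the rack boundary at once. The blocking factor is the ratio
    of nodes to uplinks; a non-blocking fabric has a factor of 1."""
    blocking_factor = nodes_per_rack / uplinks_per_rack
    return link_gbps / blocking_factor

# Non-blocking HPC fabric: one uplink per node, full bandwidth everywhere.
print(per_node_bandwidth(100, nodes_per_rack=40, uplinks_per_rack=40))  # 100.0
# A hypothetical cloud rack with 4:1 oversubscription: a quarter of the bandwidth.
print(per_node_bandwidth(100, nodes_per_rack=40, uplinks_per_rack=10))  # 25.0
```

For an MPI job whose processes constantly exchange data across racks, that 4:1 cut in bandwidth (plus the added switch hops and latency) is exactly the imbalance that can keep a tightly coupled HPC application from finishing.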
Monday, March 16, 2020
Cloud and HPC workloads
Wednesday, March 11, 2020
A brief history of computer encryption
Gresik official release.
Monday, March 9, 2020
1-23-2020 Signals Around Us
You may not realize just how many electromagnetic signals pass through your body every day. It is not really known whether there are health effects from these signals, but I find it very interesting to see how much they have increased in the last decade. Just looking at cellular towers: in 2010 there were 250,000 towers in the U.S.; today there are 350,000, and this number will increase to over one million in the coming few years to support 5G technology.
So, what do these signals look like to our body? The human body has an electrical frequency that it likes to operate within, and it matches the natural frequency of the earth. The earth and its atmosphere vibrate at a fundamental frequency of 7.83 Hz. If you encounter signals at frequencies harmonic with the earth's, the impact of the signal is amplified.
What is a harmonic frequency? It is a signal whose peaks and valleys overlap the peaks and valleys of the fundamental frequency. The harmonic frequencies of the earth and the human body are known as the Schumann resonances and are 14.3, 20.8, 27.3 and 33.8 Hz. Our electric grid in the U.S. operates at a frequency of 60 Hz, which falls outside the natural harmonic range.
We create signals every day that impact the natural resonance of earth and this is a good thing. We would not want to match the resonant frequencies, as it would cause some dangerous situations. Let me give you an example. Have you ever seen videos where an opera singer breaks a glass with their voice? This happens because their voice matches the resonant frequency of the glass, causing vibrations in the glass to build on each other and eventually vibrating the glass apart. Imagine what might happen if we produced enough vibrations at resonant frequencies of the earth. Would it be possible to shatter the planet? Let’s hope we never find out.
So, is it a bad thing to be impacted by non-resonant frequencies as well? There is another frequency interaction called the damping frequency. This is a frequency that can cause vibrations to decrease in amplitude. We use these damping techniques to create things like noise-canceling headphones and engine mufflers. If you can create a damping frequency at the same amplitude as the resonance frequency of an object, you can completely stop all natural vibrations.
This means that it is possible to stop or amplify the natural frequency vibration of your body by exposure to radio waves. We know that increasing this vibration by applying resonant frequency harmonics can cause damage, but we don’t really know if stopping the vibration can cause damage or not.
It just so happens that our cellular phones operate in the 800-megahertz range, which just happens to be very close to the 800.8-megahertz harmonic frequency of our bodies. So every time our cell phone sends a message, we make a call or we upload a video or picture, we amplify the natural vibrations of our body. Granted it is by a tiny amount, but is there an impact to our health? There are a few studies that indicate extreme exposure to these frequencies can cause mental health issues.
Although most of these studies have been dismissed as having no basis in science, there is still a question of how these magnetic fields impact our well-being, and much research is continuing to better understand the impacts. If you want to see the radio signals around you, there is an application for your cell phone that monitors electromagnetic signals. There are several out there; if you search for EMF Meter, you will find a wide range of them. Some claim to be “ghost” hunters, but really they just measure the amount of electrical energy in the air around you. My favorite is the EMF Radiation Meter by IS Solution, though it has quite a few ads. There is also a paid application for $2.99 that provides a map of where the radio signal is coming from called “Architecture of Radio.” If you are interested in studying the radio signals, it is worth the price.
Friday, March 6, 2020
Katherine Johnson, human computer
Thursday, March 5, 2020
1-16-2020 The Hype of 5G
Cellular network providers have been touting their new technology for over a year now, including promoting the fact that they are first to provide 5G service in the country. The question is, how much of what is being said about 5G is accurate and how much of it is marketing hype? I plan to address the facts of 5G in this week’s article and let the reader decide.
First of all, there are three different types of 5G being built in the U.S.: low-band, mid-band, and high-band mmWave implementations. The term mmWave refers to the radio frequency band between 30 GHz and 300 GHz, at the upper end of the microwave range. This band has primarily been used for satellite communications and other short-distance links. It provides a solid carrier for high-speed internet communications, but signals at these frequencies do not travel far or penetrate obstacles well.
The first technology, which provides the fastest service, is the high-band 5G adopted primarily by AT&T and Verizon, with a few market areas from T-Mobile. This technology is about ten times faster than the 4G technology in widespread use today. It also has very low latency, which means a message gets sent nearly instantaneously, but the downside is that to get the maximum speed out of the network, you have to be standing very near a cellular tower. In the best-case scenario, you could download a standard-definition full-length movie in 32 seconds, compared to over five minutes on today's networks. However, you would have to be within 80 feet of a tower to achieve those transfer speeds.
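The download math is easy to check for yourself. Here is a quick Python sketch; the 2 GB figure for a standard-definition movie is my own assumption:

```python
def download_seconds(size_gigabytes, speed_mbps):
    """Transfer time for a file: total bits to move divided by link speed.
    The 2 GB movie size used below is an assumed figure, not a standard."""
    bits = size_gigabytes * 8e9          # gigabytes to bits
    return bits / (speed_mbps * 1e6)     # megabits/s to bits/s

print(round(download_seconds(2, 500)))  # high-band 5G near a tower: ~32 seconds
print(round(download_seconds(2, 50)))   # a typical 4G connection: ~320 seconds
```

Working backward from the article's numbers, a 32-second download of a 2 GB movie implies roughly a 500-megabit-per-second connection, with 4G delivering about a tenth of that.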
The mid-band technology in use by Sprint is about six times faster than 4G; it has a longer range than high-band 5G, but still much shorter than 4G. What this means is that nearly twice as many towers will need to be installed to provide 5G service to all the same areas that receive 4G today, increasing the overall power consumption of cellular providers.
The low-band 5G in use by T-Mobile and AT&T achieves only a 20 percent performance increase over 4G technologies. The low-band solution has nearly the same coverage area per tower as 4G, making the rollout of low-band 5G much less expensive. This is likely the type of 5G network we will see in our area.
Secondly, you cannot purchase a phone today that supports all three technologies, so your awesome new 5G cellular phone is likely to work at 5G speeds only on your provider's towers, with data roaming prevented by incompatibilities in the technologies. This turns out to be a problem not only for you as an end user of the technology, but also for the providers. The only way to keep compatibility for roaming is to keep 4G transmitters and receivers in operation, increasing the cost of both the provider gear and consumer cellular phones.
Lastly, every provider has its own approach to providing 5G services, using a mix of technologies. This creates problems both for companies in regard to data roaming and for end users in regard to being locked in to not only a provider, but also a geographical area.
T-Mobile has a nationwide low-band 5G network and a smaller but much faster high-band 5G network in six major U.S. cities. There currently is only one phone that works on the high-band network, and it will not work on the low-band network. The two phones they released for the low-band network also will not work on the high-band network, so their service is localized based on the 5G phone model that you own.
Sprint is in the process of building their mid-band network in parts of nine U.S. cities. You are stuck with a limited choice of four devices that will operate on their network, one data device and three phones.
AT&T has low-band networks in 20 markets for "consumers" and high-band networks in small areas of 35 markets focused primarily on providing service to businesses. AT&T currently sells only two phone models and a single Wi-Fi hotspot device that can utilize this new network. AT&T claims to offer 5G today in all markets, but is actually just upgrading its existing 4G networks.
Verizon is working on building out the largest high-band 5G network. It is currently providing service in 40 markets, but you need to be within 80 feet of a tower to get a 5G connection, and they charge extra for 5G access.
I guess ultimately what I am saying is that 5G is currently a mess for consumers, leaving us somewhat in the dark as to the best choices for future cellular phones and plans. Both Samsung and Apple have planned releases as early as February 11, 2020, that are expected to support all three network types and configurations. These new 5G phones will solve a lot of the consumer-based issues with 5G, and we can expect wireless network speeds to improve drastically in the coming months.
Monday, March 2, 2020
12-26-19 Tracking Santa
Wikimedia by user Bukvoed, used under Creative Commons CC0 License.
The type of radar used by NORAD to track Santa rotates steadily, sweeping the sky with a narrow beam searching for aircraft.
Friday, February 28, 2020
12-12-19 Airbags
Wednesday, February 26, 2020
12-5-19 Supercomputing: a year in review
Tuesday, February 25, 2020
11-28-19 How to find a meteorite
Monday, February 24, 2020
11.21.2019 Tracking Meteors
Sunday, February 23, 2020
2-6-2020 Impacting technologies of the 2010’s
I always find it interesting to review technology developments at the end of a decade. This week I plan on not only listing, but talking a little bit about the top new technological developments in the last decade. The years between 2010 and 2020 brought about some of the most amazing technology in history.
For the first time in history we have a generation of people that never knew life without a computer or the internet. Everyone under the age of thirty has probably had access to a computer of some kind for their entire lives. Over the last decade the computer has become not only a device that most people have access to, but a device that most of us depend on for everyday life.
The new technology has been surprising with advancements in artificial intelligence, autonomous vehicles, robotic lawyers and real time language translation, just to name a few. The technologies in this article are ones that I feel we will still be using and talking about well into the next decade.
In the 2000s Facebook started the social media trend and held the top spot among social networks until the early 2010s. In 2010 Instagram became the most popular network among Gen-Zers, mainly due to the large influx of the older generation onto Facebook, bringing with them new media, marketing and politics. Instagram became the preferred method for sharing personal lives on social networks. In 2012 Facebook purchased Instagram, and the network has since grown to over a billion registered users.
In 2011, Spotify took the market by storm, offering the largest collection of music for streaming on demand. This brought about a massive decline in online music piracy. With free services that stream all your favorite music and collect revenue through advertising to pay the music producers, the need to pirate music dropped tremendously.
In 2012, there was the introduction of the Tesla Model S electric car. This seemed like a major release in transportation technology, but the most impactful change wasn’t a new car. It was car sharing through Uber. Uber rolled out its ride sharing service across every major U.S. city and around the world, fundamentally changing how people get around the city. Lyft also launched their ride-sharing service in 2012, making this the year of shared transportation.
In 2013, Chromecast came to the scene, allowing people to stream video services to any device with an HDMI port. Chromecast is not really all that useful today as the technology is integrated into almost every device with a screen, but it was a top selling product in 2013.
2014 was the year of the smartwatch, with the announcement of the Apple Watch, which in most respects was a tiny computer in a watch form factor. The first model had all kinds of issues, but as Apple has worked to resolve them it has become the best smartwatch on the market today.
Amazon Echo started the smart speaker trend in 2015 as online voice assistants moved from the phone to the home. This device incorporated Bluetooth technology as well as machine learning and speech recognition. The Echo held the market share in smart speakers until nearly the end of the decade when Google released a competing device, Google Home.
AirPods came on the scene in 2016, releasing us from wired earbuds and allowing freedom to move away from our audio devices. There was worry of them getting lost easily when they were first released, but the freedom they give the user has greatly decreased that fear, and they are now nearly as popular as wired earbuds.
The Nintendo Switch changed the gaming console forever, with the ability to use the system both as a stationary game system tied to a large-screen TV and as a portable you can take along on the road. The console itself includes a screen, with detachable controllers, so game play can continue anywhere. The release of the Switch in 2017 brought a new approach to gaming hardware.
2018 was the year of the shareable electric scooters that have seemed to become a permanent fixture in many major cities. They have had an impact on city ordinances and have been removed by law in some cities. As a result of this legislation, the technology has lost some of its staying power, but the tech behind vehicle sharing has spread to the sharing of electric cars in a similar manner across several U.S. cities.
Last, but not least, is the release of TikTok in 2019. As Gen Z kids break into adulthood, this is the platform most likely to become the main method of communication among their peers. This short-video sharing service is currently the top contributor to internet content storage and accounts for close to 15 percent of all the data on the internet today. It is expected to grow beyond 50 percent of all online data within the next couple of years.
1-30-2020 Server Processors in 2020
Every time you log into Facebook or search for something on Google, you access servers. Servers are remote computer systems that "serve" information or services to other computer systems. Years ago these servers used completely different technology than the computer on your desk. However, like all things, technology has changed.
Servers used to be room-sized computers running specialized processors for rapid data access. These mainframes used a parallel memory access system and contained multiple linked processors in a single system to allow the server to talk to many computer terminals at the same time. As technology has advanced even the processor in your cellular phone has the same capabilities as the mainframes from 30 years ago. Yes, your phone is a computer.
What this means is that it is possible for every computer to run the same type of processor today. You might ask how this affects the companies that design and build both servers and processors. Interestingly, it keeps the competition very exciting. In the last couple of years the landscape of server technology business has changed dramatically.
The big players like IBM and Intel are, of course, still in the game and still control most server platforms, but there are a couple of lesser known giants in the game. Among them is AMD, which in the last two years made a major comeback to control 40 percent of the server processor market. Merely a year ago they only controlled 12 percent, and two years ago it was less than five percent.
How does a smaller company like AMD take on giants like IBM and Intel to create a landslide victory in just a year? There are three factors that play a major role in selecting a server processor: price, performance, and availability. Two years ago, AMD released a new processor that performed about 40 percent better and cost 20 percent less than anything Intel had available. The demand for this new processor quickly began to outpace the supply, and AMD's market share suffered from the shortage. The windfall sales of 2018 allowed AMD to ramp up production and, as a result, take over a large portion of the processor market.
I mentioned three factors above, and there is another player in the market that sells more processors than all the other designers combined. This player is the developer of an open standard for microprocessors called ARM. What makes ARM unique is that any manufacturer can take an ARM design, extend it to embed their own components, and build their own unique processors. Today ARM processors completely dominate the overall processor market, with well over 100 billion processors produced and sold to date.
ARM designs are low-power processors with a simpler instruction set and computing power similar to that of lower-end Intel and AMD processors. They are primarily used in cell phones, tablets, watches, calculators, and other small electronic devices. However, there has recently been a strong push to build ARM-based servers for general computing. The price of ARM processors is lower, power consumption is lower, and performance is similar to the top-selling Intel and AMD processors; the difference is that their simpler instruction sets limit their capabilities, which causes some headaches for software vendors adopting the technology.
There are many market analysts who say ARM is the future of server processor technology, and I share their belief, especially given the latest announcement from Amazon. Amazon has announced not only the availability of new ARM processors in its cloud service, but a shift to using its own ARM processor as the default for the service. Amazon's Graviton processors now run a majority of their web services workloads at a fraction of the cost, and Amazon is passing the savings on to its customers. ARM has all three factors in its favor: price, performance and availability, positioning it to become the top processor of the coming decade.
1-9-2020 Computer Images
Have you ever wondered how computers store, print, and display images? Well that’s the topic for this week’s article. Computers are actually not all that smart when it comes to storing image information. They really only store numbers, as a sequence of ones and zeros. So how do they keep a photo?
A photo, as far as a computer is concerned, is a grid of numbers that represent the colors in the image. Each spot on the grid holds four numbers between 0 and 255, one for each of four color channels. If the image is meant for viewing on a screen, it usually uses Red, Green, Blue, and Alpha (RGBA); the higher the number, the brighter that color component. Alpha is the transparency of the image; since images can be overlaid on a computer screen, it is necessary to tell what happens when they overlap. If the image is meant to be printed, then it is usually stored using Cyan, Magenta, Yellow, and Black (CMYK), where higher numbers mean more ink and therefore darker color. This has to do with the differences between how colors of light mix versus how colors of ink mix.
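The overlap rule that alpha exists for can be sketched in a few lines. This is a minimal illustration of the standard "over" compositing formula, with made-up pixel values: a foreground pixel's alpha decides how much of it shows through against whatever is behind it.

```python
# A pixel is just four channel numbers, each 0-255.
# Alpha blending with the standard "over" operator: the foreground
# pixel is mixed with the background according to its alpha value.

def blend_over(fg, bg):
    """Composite an RGBA foreground pixel over an opaque RGB background."""
    r, g, b, a = fg
    alpha = a / 255.0  # normalize alpha to the range 0.0-1.0
    return tuple(
        round(alpha * f + (1.0 - alpha) * back)
        for f, back in zip((r, g, b), bg)
    )

# A half-transparent red pixel over a white background comes out pink.
print(blend_over((255, 0, 0, 128), (255, 255, 255)))  # -> (255, 127, 127)
```

A fully opaque pixel (alpha 255) simply replaces the background, and a fully transparent one (alpha 0) leaves it untouched.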
There are two things needed to tell the computer the proper size to display an image. The first is the physical size of the image, for example, a standard photo is four by six inches. The second number tells how many pixels, or rows and columns of numbers, to store to represent the image. This can be defined as either the total width and height in pixels, or as a ratio of pixels per inch. Many common modern digital cameras capture images in the rough size of eight megapixels, or about eight million pixels. This is a grid that is 3266 by 2450 pixels, which gives you 8,001,700 pixels. Notice that this megapixel definition does not provide a size in inches, so the pixels per inch can be changed to make an image much larger or much smaller.
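The megapixel arithmetic above is easy to check yourself. This quick sketch reproduces the numbers from the paragraph:

```python
# An 8-megapixel image stored as a 3266 x 2450 grid of pixels.
width_px, height_px = 3266, 2450
total_pixels = width_px * height_px  # one RGBA entry per grid spot

print(f"{total_pixels:,} pixels = {total_pixels / 1_000_000:.1f} megapixels")
# -> 8,001,700 pixels = 8.0 megapixels
```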
How big can you print the photo? As big as you want, but if there are not enough pixels it will start looking like a mosaic of little squares instead of a smooth image. The general rule is no fewer than 300 pixels per inch. So in the case of an eight-megapixel image, this is 3266/300 (or 10.89) inches by 2450/300 (or 8.17) inches. You see, each pixel is a box of color; the more boxes you have per inch, the clearer the image. This is true both in print and on your screen.
Low resolution is used by many websites to make it nearly impossible to print their photos cleanly. How can a picture on a website look great on the screen and bad when it is printed? Because it has too few pixels. Most websites display images less than five inches wide, at 100 pixels per inch. This makes an image 500 pixels wide. From the math above, the largest image you can print clearly is only 500/300 (or 1.6) inches wide. If you try to print an eight by ten photo from this web image, you will only get 62 pixels per inch, which means you will easily see the square shapes of the pixels and have a very poor print.
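The 300 pixels-per-inch rule of thumb turns into two simple formulas: divide pixels by 300 to get the largest clean print size, or divide pixels by a chosen print size to see the resolution you actually get. Here is the math from the last two paragraphs as a small sketch:

```python
PRINT_PPI = 300  # general rule of thumb for a clean print

def max_print_inches(pixels):
    """Largest dimension (in inches) that still prints cleanly."""
    return pixels / PRINT_PPI

def effective_ppi(pixels, print_inches):
    """Resolution you actually get when printing at a chosen size."""
    return pixels / print_inches

# An 8-megapixel photo (3266 x 2450) prints cleanly up to ~10.89 x 8.17 in.
print(round(max_print_inches(3266), 2), round(max_print_inches(2450), 2))
# A 500-pixel-wide web image stretched to 8 inches drops to ~62 ppi.
print(round(effective_ppi(500, 8)))
```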
You can sometimes fix low-resolution images with photo editing software like Photoshop, using its resize options. You can usually double the size of an image by re-sampling it at the higher resolution before it starts losing noticeable quality. Basically, the computer creates new pixels between the existing ones: depending on the method, it copies the nearest pixel's color or blends the colors of the neighboring pixels. This also magnifies any flaws in the original photo, so you cannot go much bigger.
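The simplest version of that re-sampling is "make a new pixel for each pixel." This toy sketch doubles a tiny grayscale image by pure duplication (the nearest-neighbor approach); real editors like Photoshop would blend neighboring values instead of copying them, which is what smooths the result.

```python
# Minimal 2x upscale: every source pixel becomes a 2x2 block.
# Pure duplication keeps hard edges; blending neighbors would smooth them.

def upscale_2x(image):
    """Double a grayscale image (a list of rows) by duplicating pixels."""
    out = []
    for row in image:
        doubled = [p for p in row for _ in (0, 1)]  # duplicate each column
        out.append(doubled)
        out.append(list(doubled))                   # duplicate the row too
    return out

tiny = [[10, 200],
        [90,  30]]
for row in upscale_2x(tiny):
    print(row)
```

Each of the four source pixels becomes a 2x2 square of the same value, which is exactly why over-enlarged images look like a mosaic of little boxes.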
That is a little bit about how computers store, display and print images. If you see a picture in The Licking News that looks pixelated, you know that the publisher likely started with a low-resolution image and did their best to bring it to print quality. Submissions are often fine quality for the internet but too small for good printing. Most phone screens display around 72 pixels per inch, while newspaper print needs 300 pixels per inch. What looks fine on your phone may look bad on paper, and now you know why.