Friday, February 28, 2020

12-12-19 Airbags

A couple of times over the last month I have been asked about airbags. Questions have usually been along the lines of, “I hit a deer at 65 miles per hour and my airbag didn’t deploy, but I hit a curb at 15 mph and my airbags deployed. Why?” It has everything to do with how the airbag sensors work. This week’s article explains the sensor, what it does and why it sometimes sets off the airbags in low-impact accidents but not in high-impact ones.
The first thing you need to know is that airbag sensors are not impact sensors, but rather shock sensors. The difference is that impact sensors work like a push button switch; when an impact moves the sensor far enough, it turns on a switch. A shock sensor, on the other hand, is based on a change in momentum rather than the movement of a mechanism. In other words, your bumper can move several inches, and as long as it is moving at a steady speed, the sensor will not detect an accident, even if your bumper is completely crushed.
Momentum defines the motion of a moving body and is calculated by multiplying the object’s mass by its velocity. The faster you are moving, the more momentum you have. The heavier you are, the more momentum you have. The airbag sensor measures momentum and triggers the airbag if there is a large enough change in momentum.
The sensor does not exactly measure the momentum of the car, but rather the momentum of the sensor itself. It works by using a suspended mass and some impact sensors. The suspended mass will not touch the impact sensors unless a large change in momentum occurs, causing it to hit one of the impact sensors. You can think of it as a marble in a shallow dip in the center of a box. If the box stays level and moves steadily, the marble stays in the middle; if you bump the box, the marble will roll to one of the sides. If the marble touches the side of the box, the airbag deploys.
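To make that threshold idea concrete, here is a minimal sketch in Python of the kind of decision rule a shock sensor implements. The time window and g-force threshold are made-up illustrative numbers, not any manufacturer’s real calibration.

    # Hypothetical shock-sensor rule: trigger on a large change in speed
    # over a very short time window (a change in momentum per unit mass).
    def should_deploy(speed_before_mph, speed_after_mph,
                      window_s=0.05, threshold_g=10.0):
        mph_to_mps = 0.44704
        delta_v = (speed_before_mph - speed_after_mph) * mph_to_mps
        decel_g = (delta_v / window_s) / 9.81   # average deceleration in g's
        return decel_g > threshold_g

    print(should_deploy(65, 65))   # deer strike, speed unchanged -> False
    print(should_deploy(20, 5))    # curb strike, 20 mph to 5 mph -> True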
Now let’s look at the two scenarios. First you hit a deer at 65 mph and the airbags do not deploy. Imagine the marble in the box sitting on your dashboard. You strike the deer but the car does not slow down or swerve. The marble and the box stay firmly on the dashboard. This is because your momentum did not change. Remember, to change momentum, your speed must change. You struck the deer and maintained your speed. The car took major damage but the airbag never deployed because your momentum didn’t change.
Now for the second case, you curb the tire and the airbag deploys, causing you to lose control of the car. How did this happen and why? Think again about the marble in the box on your dashboard. You are taking the corner, and you hit the curb. The car bounces and instantly slows down from 20 miles per hour to five miles per hour. The marble rolls forward and hits the front of the box. This causes the airbags to deploy. The change in momentum because of the rapid slowing sets off the airbag.
This is both good news and bad news. The good news is that the change in momentum is what causes your injuries in an accident, and the airbag usually deploys when there is danger of your being tossed around or out of the car. The bad news is that a rapid slowdown from striking a pothole or curb, or even just a hard knock to the sensor while the engine is running, can set off the airbags, sometimes causing more damage to the car than the incident itself.

Wednesday, February 26, 2020

12-5-19 Supercomputing: a year in review

This has been an exciting year in the supercomputing industry, with many new developments. Two weeks after SC19, the industry’s largest conference, all the announcements have made it to the internet and opinions have been shared, which makes it the perfect time to highlight the developments of the year.
In my opinion, the top development in supercomputing has been two new players in the processor market finally taking a significant share of the industry. The past few years have been dominated by Intel and Nvidia. However, this year two challengers began to take some of the glory. AMD and ARM have both designed new processors and are taking market share from Intel among systems on the TOP500 list, a ranking of the 500 most powerful computers in the world.
AMD brought products to the market competing with both Intel and Nvidia. The AMD EPYC processors offer higher memory bandwidth than Intel processors and push the envelope of performance, averaging 30 percent higher performance on supercomputing workloads than Intel. The AMD Radeon Instinct beat Nvidia to the punch as the world’s first PCIe 4.0-based graphics processing unit. The increased communication bandwidth, coupled with an optimized software stack and virtualization technologies, drives higher efficiency and datacenter optimization than Nvidia’s current technologies. I would say the biggest obstacle for AMD is the fear of adopting new technologies in research-based workloads.
ARM made its entrance into supercomputing in 2011 as part of the European Mont-Blanc project. ARM had already dominated the mobile market; chances are you have an ARM processor in your pocket right now powering your smartphone. However, making the leap from mobile to the datacenter was a challenging proposition. This year they made a big splash with the introduction of a product in partnership with Nvidia and various hardware vendors to build GPU-accelerated ARM-based servers. These servers hit the strongest area of the HPC market, as a majority of TOP500 supercomputers are powered primarily by Nvidia GPUs. Coupling the energy efficiency of GPUs and ARM CPUs results in some of the most energy-efficient computing platforms available today.
The other big change in supercomputing is the string of new breakthroughs in machine learning and artificial intelligence. These breakthroughs are redefining scientific methods and enabling exciting opportunities for new architectures. The floodgates are open for new technologies that accelerate machine learning methods, which are less computationally intensive and more memory intensive. Driven by these breakthroughs, ARM and AMD are both bringing higher memory performance to the market than we have ever seen before.
The final technology breakthrough in supercomputing in 2019 was the demonstration of quantum supremacy, which we have talked about before. It does not mean a lot right now and will probably steal the show around 2030, as the adoption of new technologies in supercomputing seems to move slowly.
Why is any of this important to you? Today’s supercomputers are used for weather forecasting, new product development, drug research and scientific modeling. Supercomputers have replaced the laboratory in many industries, reducing the total cost of product development thus reducing the cost of the product. As supercomputing power increases, the cost of consumer goods should decrease and the level of public safety should increase.

Tuesday, February 25, 2020

11-28-19 How to find a meteorite

  In last week’s article I wrote about using hints from social media to track meteors. In that article there was a mention of a reward for finding parts of the meteor that flew over Missouri. I thought someone might want to hunt for it and not know where to start, so here are some tips for meteorite hunting.
First I want to clear up the difference between a meteor and a meteorite. It is a meteor when it is flying overhead, burning from entering the atmosphere. It becomes a meteorite once it hits the ground and cools. So you don’t go hunting meteors; you would never catch one.
Earth is under constant attack by space rocks, most of which either burn up completely or get lost to the depths of the oceans. However, there have been more than 40,000 catalogued discoveries of meteorites on the surface, and countless more are out there waiting to be found. Space rocks can be as valuable as $1,000 a gram, which is about the weight of the average paperclip. However, this treasure hunt requires hard work and dedication.
As always when searching for something, especially on public land or land owned by someone else, ask for permission. For example, space rocks found in national parks belong to the federal government and cannot be legally kept, but it varies from park to park. The federal land managed by the Bureau of Land Management consists of 264 million acres and is different. It is a pretty safe bet that what you find there is yours to keep, but you should still ask. Meteorites, like other artifacts, belong to the land owner, so even searching on private land requires that you ask permission to keep what you find.
The second step is to pick a good spot to hunt. “Meteorites fall anywhere, but they are easiest to spot where there are few terrestrial rocks,” says Alan Rubin, a geochemist at the University of California, Los Angeles. Most meteorites are dark, so white sand deserts, icy regions and plains are ideal. In our area in particular, the plains of Kansas are a great place to look because of the limited terrestrial rock in the area. Any new rocks farmers dig up have a very good chance of being a meteorite. More than one meteorite has been found in a farmer’s rock pile, or even propping open a screen door.
It is also a good idea to search for the new arrivals. Bill Cooke, head of NASA’s Meteoroid Environment Office, suggests finding the ground below a meteor’s “dark flight.” This is the part of the flight where it slows to below 6,000 miles per hour and stops burning. When there is an accurate trajectory for a meteor, these “dark flight” calculations are posted across various sites on the internet.
You can also search for magnetic objects as meteorites are usually magnetic, but not always. Don’t rely too much on metal detectors either, as most meteorites are actually discovered by sight and not by the use of special equipment. Finally, if you do happen to find a meteorite, please share your find with the world.

Monday, February 24, 2020

11.21.2019 Tracking Meteors


Last week I reviewed three social networks and wanted to mention a fourth. You might ask, what does this have to do with meteors? It might seem strange at first, but there is actually a lot that links the two together. We will get into that after I talk about Reddit.
Reddit is the self-proclaimed “front page of the internet;” it was formed in 2005. Registered users (redditors) submit content in a variety of subjects: news, science, music, books, video games, technology, etc. Redditors vote the content up or down, and Reddit sorts the content based on these votes. Content that receives enough up votes gets displayed on the site’s front page. There is little need for censorship by the site owners because the up-or-down voting system can raise posts to full visibility or make them disappear completely. The only problem with the system is that people can be paid to “troll” a post, falsely making it more popular or burying it altogether. Recently Reddit moderators have been accused of censorship, just like Facebook, but for the moment there is slightly more freedom on Reddit.
Now for the link to meteors. On Nov. 15, amid the Mercury transit and the Taurid meteor shower, a rogue meteor stole the show over Missouri. This lone meteor was not a member of the Taurid meteor shower. The brightness of the fireball and direction of its orbit indicated that it was a fragment of an asteroid. 
The space rock was about the size of a basketball and weighed about 200 pounds, according to NASA Meteor Watch. The meteor traveled northwest at 33,500 miles per hour and broke apart about 12 miles above Bridgeport, Mo. More than 300 people reported to the American Meteor Society (AMS) that they had seen it. This is the link between social media and meteors. The AMS can gather more information on meteors, their paths, speed, size, and potential landing sites from information posted on social media than was ever before possible. The AMS can take information from photos of the meteor, along with data gathered from satellite and ground-based measurement systems, to piece together the full story. Every photo or video of the event adds more data points to triangulate the trajectory of the meteor, giving a higher probability of locating any debris that reached the ground. So if you happen to get a picture of a meteor, please share it online to help the effort to locate debris. There happens to be a $25,000 cash reward for finding a fragment of this meteor that is at least one kilogram in mass.
Determining where a meteorite landed is a triangulation problem: a series of calculations using angles and known distances to pin down the location of an object. In the case of a meteor, if you have the exact locations of at least three photos of the meteor, you can use trigonometry to determine its precise position in the sky. Do this at least two times and you have a rough flight path. The more pictures with location information you have available, the better the model of the path you can build. Next week I will show you where to get dark flight paths for meteors and share some tips on meteorite hunting. The plains of Missouri and Kansas are prime hunting grounds for meteorites.
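As a hedged illustration of that trigonometry, here is a small Python sketch that intersects two sight lines on flat ground. The observer coordinates and compass bearings are invented for the example; a real solution works in three dimensions with many more observations.

    import math

    def triangulate(p1, bearing1_deg, p2, bearing2_deg):
        # Each p is (x, y) in km; bearings are degrees clockwise from north.
        b1, b2 = math.radians(bearing1_deg), math.radians(bearing2_deg)
        d1 = (math.sin(b1), math.cos(b1))   # unit direction of sight line 1
        d2 = (math.sin(b2), math.cos(b2))
        cross = d1[0] * d2[1] - d1[1] * d2[0]
        if abs(cross) < 1e-9:
            raise ValueError("Sight lines are parallel; add a third observer.")
        dx, dy = p2[0] - p1[0], p2[1] - p1[1]
        t = (dx * d2[1] - dy * d2[0]) / cross
        return (p1[0] + t * d1[0], p1[1] + t * d1[1])

    # Two observers 10 km apart photograph the same fireball:
    print(triangulate((0, 0), 45, (10, 0), 315))   # about (5.0, 5.0)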

Sunday, February 23, 2020

2-6-2020 Impacting technologies of the 2010’s

I always find it interesting to review technology developments at the end of a decade. This week I plan on not only listing, but talking a little bit about the top new technological developments in the last decade. The years between 2010 and 2020 brought about some of the most amazing technology in history.

For the first time in history we have a generation of people that never knew life without a computer or the internet. Everyone under the age of thirty has probably had access to a computer of some kind for their entire lives. Over the last decade the computer has become not only a device that most people have access to, but a device that most of us depend on for everyday life.

The new technology has been surprising with advancements in artificial intelligence, autonomous vehicles, robotic lawyers and real time language translation, just to name a few. The technologies in this article are ones that I feel we will still be using and talking about well into the next decade.

In the 2000s Facebook started the social media trend and held the top spot among social networks until the early 2010s. In 2010 Instagram launched and became the most popular network among Gen-Zers, mainly due to the large influx of the older generation onto Facebook, bringing with them new media, marketing and politics. Instagram became the preferred method for sharing personal lives on social networks. In 2012 Facebook purchased Instagram, and the network has since grown to over a billion registered users.

In 2011, Spotify took the market by storm, offering the largest collection of music for streaming on demand. This brought about a massive decline in online music piracy. With free services that stream all your favorite music and collect revenue through advertising to pay the music producers, the need to pirate music dropped tremendously.

In 2012, there was the introduction of the Tesla Model S electric car. This seemed like a major release in transportation technology, but the most impactful change wasn’t a new car. It was ride sharing through Uber. Uber rolled out its ride-sharing service across every major U.S. city and around the world, fundamentally changing how people get around. Lyft also launched its ride-sharing service in 2012, making this the year of shared transportation.

In 2013, Chromecast came to the scene, allowing people to stream video services to any device with an HDMI port. Chromecast is not really all that useful today as the technology is integrated into almost every device with a screen, but it was a top selling product in 2013.

2014 was the year of the smartwatch, with Apple’s announcement of the Apple Watch, which in all respects was an iPad in a tiny watch form factor. The first model had all kinds of issues, but as Apple has worked to resolve them it has become the best smartwatch on the market today.

Amazon Echo started the smart speaker trend in 2015 as online voice assistants moved from the phone to the home. The device incorporated Bluetooth technology as well as machine learning and speech recognition. The Echo held the lead in smart speaker market share until late in the decade, as Google’s competing device, the Google Home, gained ground.

AirPods came on the scene in 2016, freeing us from wired earbuds and allowing us to move away from our audio devices. There was worry about them getting lost easily when they were first released, but the freedom they give the user has greatly reduced that fear, and they are now nearly as popular as wired earbuds.

The Nintendo Switch changed the gaming console forever, with the ability to use the system both as a stationary game system tied to a large-screen TV and as a portable to take along on the road. The portable unit includes its own screen, allowing game play to continue away from home. The release of the Switch in 2017 brought a new approach to gaming hardware.

2018 was the year of the shareable electric scooter, which seems to have become a permanent fixture in many major cities. Scooters have had an impact on city ordinances and have been banned in some cities. As a result of these regulations, the technology has lost some of its staying power, but the tech in vehicle sharing has spread to sharing electric cars in a similar manner across several U.S. cities.

Last, but not least, is the rise of TikTok in 2019. As Gen Z kids break into adulthood, this is the platform most likely to become the main method of communication among their peers. This short-video sharing service is currently a top contributor to internet content storage, accounting for close to 15 percent of all the data on the internet today by some estimates, and it is expected to grow beyond 50 percent of all online data within the next couple of years.

1-30-2020 Server Processors in 2020

Every time you log into Facebook or search for something on Google, you access servers. Servers are remote computer systems that “serve” information or services to other computer systems. Years ago these servers used completely different technology than the computer on your desk. However, like all things, technology has changed.

Servers used to be room-sized computers running specialized processors for rapid data access. These mainframes used a parallel memory access system and contained multiple linked processors in a single system, allowing the server to talk to many computer terminals at the same time. As technology has advanced, even the processor in your cellular phone has the capabilities of a mainframe from 30 years ago. Yes, your phone is a computer.

What this means is that it is possible for every computer to run the same type of processor today. You might ask how this affects the companies that design and build both servers and processors. Interestingly, it keeps the competition very exciting. In the last couple of years the landscape of server technology business has changed dramatically.

The big players like IBM and Intel are, of course, still in the game and still control most server platforms, but there are a couple of lesser known giants in the game. Among them is AMD, which in the last two years made a major comeback to control 40 percent of the server processor market. Merely a year ago they only controlled 12 percent, and two years ago it was less than five percent.

How does a smaller company like AMD take on giants like IBM and Intel to create a landslide victory in just a year? There are three factors that play a major role in selecting a server processor: price, performance, and availability. Two years ago, AMD released a new processor that performed about 40 percent better and cost 20 percent less than anything Intel had available. Demand for the new processor quickly began to outpace supply, and AMD’s market share suffered from the shortage. The windfall sales of 2018 allowed AMD to ramp up production and, as a result, take over a large portion of the processor market.

I mentioned three factors above, and there is another player in the market that sells more processors than all the other designers combined. This player is ARM, which develops processor designs and licenses them rather than manufacturing chips itself. What makes ARM unique is that any manufacturer can license an ARM design, extend it with their own components and build their own unique processors. Today ARM processors dominate the overall processor market, with well over 100 billion chips shipped to date.

ARM processors consume little power and use a simpler instruction set, with computing power similar to that of lower-end Intel and AMD processors. They are primarily used in cell phones, tablets, watches, calculators, and other small electronic devices. However, there has recently been a strong push to build ARM-based servers for general computing. The price of ARM processors is lower, power consumption is lower, and performance is similar to the top-selling Intel and AMD processors; the difference is that their instruction sets limit their capabilities, which causes some headaches for software vendors in adopting the technology.

Many market analysts say ARM is the future of server processor technologies, and I share their belief, especially with the latest announcement from Amazon. Amazon has announced not only the availability of new ARM processors in its cloud service, but a shift to using its own ARM processor as the default for the service. Amazon’s Graviton processors now run a majority of its web services workloads at a fraction of the cost, and Amazon is passing the savings on to customers. ARM has all three factors, price, performance and availability, rooting for it to become the top processor of the coming decade.

1-9-2020 Computer Images

  Have you ever wondered how computers store, print, and display images? Well that’s the topic for this week’s article. Computers are actually not all that smart when it comes to storing image information. They really only store numbers, as a sequence of ones and zeros. So how do they keep a photo?

A photo, as far as a computer is concerned, is a grid of numbers that represent the colors in the image. Each spot on the grid holds four numbers from 0 to 255, one for each of four color channels. If the image is meant for viewing on a screen, it usually uses Red, Green, Blue, and Alpha (RGBA). The higher the number, the more intense that color. Alpha is the transparency of the image; since images can be overlaid on a computer screen, it is necessary to tell what happens when they overlap. If the image is meant to be printed, then it is usually stored using Cyan, Magenta, Yellow, and Black (CMYK). This has to do with the differences between how colors of light mix versus how colors of ink mix.
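
To picture the grid, here is a tiny sketch in Python, assuming 8-bit channels where 0 is none of that color and 255 is full intensity:

    # A toy 2x2 "image" as a grid of RGBA values (illustrative only).
    red_opaque  = (255, 0, 0, 255)   # full red, fully opaque
    red_seethru = (255, 0, 0, 128)   # the same red at about half opacity
    image = [[red_opaque, red_seethru],
             [red_seethru, red_opaque]]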

There are two things needed to tell the computer the proper size to display an image. The first is the physical size of the image; for example, a standard photo is four by six inches. The second tells how many pixels, or rows and columns of numbers, are stored to represent the image. This can be defined as either the total width and height in pixels, or as a ratio of pixels per inch. Many common modern digital cameras capture images at roughly eight megapixels, or about eight million pixels. That is a grid of 3266 by 2450 pixels, which gives you 8,001,700 pixels. Notice that this megapixel figure does not provide a size in inches, so the pixels per inch can be changed to make an image much larger or much smaller.

How big can you print the photo? As big as you want, but if there are not enough pixels it will start looking like a mosaic of little squares instead of a smooth image. The general rule is not fewer than 300 pixels per inch. So in the case of an eight megapixel image, this is 3266/300 (or 10.89) inches by 2450/300 (or 8.17) inches. You see each pixel is a box of color; the more boxes you have per inch, the clearer the image. This is true in both print and on your screen.
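
That rule of thumb is easy to turn into a quick calculation; here is a short sketch in Python:

    # Largest clean print size under the 300-pixels-per-inch rule of thumb.
    def max_print_inches(width_px, height_px, ppi=300):
        return width_px / ppi, height_px / ppi

    print(max_print_inches(3266, 2450))   # about (10.89, 8.17) inches
    print(max_print_inches(500, 333))     # a web image: about 1.67 inches wide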

Low resolution is used deliberately by many websites to make it nearly impossible to print their photos. How can a picture on a website look great on the screen and bad when it is printed? Because it has too few pixels. Most websites display images less than five inches wide, at 100 pixels per inch. That makes an image 500 pixels wide. From the math above, the largest print you can make cleanly from it is only 500/300 (or 1.67) inches wide. If you try to print an eight-by-ten photo from this web image, you will only get 62 pixels per inch, which means you will easily see the square shapes of the pixels and get a very poor print.

You can sometimes fix low-resolution images with photo editing software like Photoshop by using its resize options. You can usually double the size of the image by re-sampling it at the higher resolution before it starts to noticeably lose quality. Basically, the computer makes a new pixel for each existing pixel. Based on the color of the copied pixel, it will match the color, blend the color with that of the neighboring pixel, or match the neighboring pixel’s color. This magnifies any flaws in the original photo, so you cannot go much bigger.
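
For those who prefer code to menus, here is a hedged sketch of the same resampling step using the free Pillow library for Python; the file names are placeholders:

    # Double the image size with bicubic resampling, which blends each new
    # pixel from the colors of its neighbors, as described above.
    from PIL import Image

    img = Image.open("photo.jpg")
    bigger = img.resize((img.width * 2, img.height * 2), Image.BICUBIC)
    bigger.save("photo_2x.jpg")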

That is a little bit about how computers store, display and print images. If you see a picture in The Licking News that looks pixelated, you know that the publisher likely started with a low-resolution image and did their best to bring it to print quality. Submissions are often fine for the internet but too small for good printing. Screen resolution is about 72 pixels per inch on most phones, versus 300 pixels per inch in newspaper print. What looks fine on your phone may look bad on paper, and now you know why.

1-2-2020 Drone regulations

Around this time last year I ran a column talking about the federal government regulations on drones. Tracking of online sales of technology indicates that this year was another big year for drone sales. I am guessing that some readers likely either received or bought a drone for Christmas so I wanted to mention the new regulations on operating a drone.

First, there is a requirement to register your drone with the Federal Aviation Administration (FAA). If the drone weighs between 0.55 and 55 lbs., it can be registered online; over 55 lbs. requires registration by paper form. Failure to register a drone can result in up to $250,000 in fines and up to three years in prison. This requirement stands for both commercial and private use.

If you are just flying the drone on your own property as a hobby, you still have to register the drone. However, if you plan to fly the drone to film events at your church, school, or place of business, even as an individual, the rules change. This is considered a commercial use of the drone and requires a separate registration under FAA Part 107 rules.

You must also be at least 13 years old to legally register a drone. You can fly one at a younger age, but the registered owner is responsible for any damage or rule violations. You can use a single registration for multiple drones, provided you own them all and store them at the same physical address. You will receive a unique-to-you registration number, with which you must label the drone prior to its first outdoor flight.

You can register online in two different ways: directly with the FAA at https://www.faa.gov/uas/registration/ for a $5 fee, where you receive only a registration certificate, or at https://www.federaldroneregistration.com for $24.99, where you receive a packet of labels/stickers for your drone and automatic renewal options for the registration, as well as other additional services.

The Part 107 provisions require pilot training and certification before operating the drone. You must be 16 years old to become an official remote pilot. The certification consists of a written exam called the initial aeronautical knowledge test, which is offered only at certified testing facilities and covers various topics related to general FAA knowledge. The nearest test facility is at State Technical College of Missouri in Linn. Once you have completed the exam successfully, you will receive a 17-digit Knowledge Test Exam ID. This ID is then used to register with the FAA through the Integrated Airman Certificate and/or Rating Application System (IACRA). You will then be recognized as an official remote pilot.

As a professional photographer, if I were to purchase a large drone, I would go through the pilot registration process. It costs a little more but gives you the flexibility to use the drone for things like a DIY aerial photography business. It also allows you to fly your drone in more places; there are places where you are allowed to fly a commercial drone but cannot fly a private one.

If you were one of the lucky recipients of a drone this year, please follow the rules and enjoy your new hobby or business. Fly safe.

Friday, February 21, 2020

Online Security

Did you know that every time you visit a website, read an e-mail, watch a YouTube video or submit a Google search, more than one person is watching you? This week I want to let you know who is watching your internet activity, a little bit about why, and some ways you can limit what they are able to see.
Who is watching you on the internet? There are three main groups that watch your internet activity. The first is your Internet Service Provider (ISP); the second is your search engine (Google for example) and the third is the government.  
Your ISP watches activity on the internet for two main reasons. The first is the security of their network; they want to ensure that what you are doing will not impact other customers. This is a very good reason, and we should all be glad they monitor for these types of activities. Without this level of monitoring, anyone with the knowledge could take over the data stream and block your ability to use the internet. The second reason is money. How do they make money off monitoring your activity? They share the information with advertisers for a fee.
How much information can your ISP gather? Surprisingly they get a lot of information from watching simple things, but the good news is that they cannot get anything you do not share or anything you share over a secure connection, at least not without breaking the law. Any site that uses the https address instead of the http address means that only the owner of the website can see the information shared on the site. This means that your online banking is safe from your ISP, except for the fact that they can tell where you bank because they know you visited that particular banking site.
Even if you have the best firewall, anti-spyware software and tight local computer security, your ISP still knows every website you visit, how long you were on it, and how many different websites you visit in a given day. They know if you shop at Amazon, Walmart, Target or E-bay. They know all this because of one single service they provide, called the Domain Name System (DNS). DNS is the telephone book for the internet: it takes the name you recognize and turns it into the numbers a computer understands to connect your computer to the website.
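You can watch DNS do its job with a couple of lines of Python; the exact address printed will vary:

    # Ask DNS to turn a name into the numeric address your computer uses.
    import socket
    print(socket.gethostbyname("www.example.com"))   # e.g. 93.184.216.34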
They can get even more information if you do things like “turn on location services,” “share your relationship status on Facebook,” “post your phone number online” or “use social media sites, like Facebook.” Sometimes we accidentally share more information than we intend by failing to read the “Terms of Service” for things like Facebook. If you took the time to read the fine print, you probably would not want to continue using Facebook, or the internet at all; most of us just click “I Agree” so we can move on with our day.
Google, the leader in online search, is able to offer their services to you for free because they are really one of the world’s largest targeted advertising agencies. They gather personal information about you based on your search history. For example, if you search for home remedies for the common cold, you will very soon begin seeing advertisements for cold medications and herbal supplements at the side of your search screen. If you search for a part for your car, you will start seeing advertisements for auto parts stores and possibly car dealerships for your particular manufacturer. Using this information to provide targeted advertising is in the terms of service from Google.
Finally, the government tracks internet activity at all levels: state, federal and even local governments monitor internet activity to “serve and protect.” They gather all the same information your ISP gathers, and usually rely on the ISP to provide the information. Your computer is definitely not a “private” device, especially once you connect it to the internet.
There are a couple of simple things you can do to protect yourself. First, never share any information on a site that is not running on https. Second, you can sign up for a Virtual Private Network (VPN) service. A VPN encrypts all the requests coming from your computer and heading to the internet, so your ISP cannot tell anything about your connection. However, if you really want to make sure your information stays secure, don’t share it online in the first place. The only secure computing system is the one that is turned off and locked in a safe.

11.14.2019 Social Networks Review


This week I came across some alternatives to Facebook. As I am sure you all know, Facebook is the number one social network today. Being number one does not mean it is without problems. There have been several security issues with Facebook over the last few months, resulting in personal information from Facebook being used by major corporations to make money. Information stolen from Facebook has fueled several phishing schemes and cost Facebook users millions of dollars.
Not only has data been stolen from Facebook, but Facebook has also been caught selling user data to every corporation and organization that comes with an offer, including nefarious ones like the CIA and Cambridge Analytica. Facebook also owns two of the other most popular social media sites in the world, Whatsapp and Instagram.
As a result of these issues, many people have threatened to leave Facebook and begin looking for alternatives. Looking at historic trends, they do leave Facebook for a while, but for some reason always come back. If you are looking for some alternatives then read on, but I will give you fair warning that it will be difficult to transition from Facebook, and the top options lack many of the features you have come to depend on.  
The top of the list is MeWe, “Like Facebook, but with privacy.” MeWe was engineered with privacy and freedom of speech as its core values. They built a revolutionary service where people can be their real, uncensored selves. MeWe is free of advertising and spyware, has no political biases or agendas, and comes with a promise to never sell member data as part of its unique “Privacy Bill of Rights.” MeWe does not gather facial recognition data or manipulate content or the newsfeed on your page in any way. MeWe members can also enjoy fully encrypted online chat, live video and voice calls, voice messaging and private cloud storage. MeWe displays every post, chat, comment, etc. made by your personal network in true timeline order with no interference.
Next on the list is Ello, “The Creators Network.” Ello is probably more like Pinterest than Facebook and was formed in 2014 for those who create or enjoy viewing artwork. Your feed on Ello is focused around your interests rather than around a friends’ list, and all your posts are public. You can comment, like, repost, follow, buy, and sell content on Ello. Just like MeWe, Ello refuses to sell your data to advertisers or third parties and is ad-free. They also do not force you to use your real name, so you can remain untraceable.
The last one I will talk about this week is Diaspora. Diaspora is unique in the realm of social media platforms. It consists of many different networks, called pods, hosted on decentralized systems run by individuals in the open source community. It is also ad-free and focused on freedom and privacy. Like Facebook, you can post status updates, share content, and leave comments on posts. Diaspora encourages users to report offensive content, but content is only censored with the approval of the pod administrators. The cool part about Diaspora is that, since it is decentralized, it cannot be owned by any one individual or corporation. You can even operate your own pod, which acts as a server in the network and allows you to keep your data completely private. You really do own your own data and control who can see it. You can also remove your pod completely, leaving nothing behind but the content you shared publicly.

Wednesday, February 19, 2020

11.07.2019 Woodie Flowers, America’s engineer


Photo by Jake Ingram: Dr. Woodie Flowers gives his signature thumbs up at the 2006 FIRST Championship in Atlanta, Ga. Used under Creative Commons Attribution-Share Alike 2.5 license, https://creativecommons.org/licenses/by-sa/2.5/deed.en
  This week I would like to take the time to honor a “hero” in engineering. Dr. Woodie Flowers, Professor of Mechanical Engineering and founder of the FIRST Robotics Competition, died October 12, 2019, following complications from aorta surgery. Flowers is one of the most well-known Mechanical Engineering professors in the world due to his unique approach to instruction. 
Flowers began his career as an assistant professor at MIT, working with Herb Richardson on the “Introduction to Design and Manufacturing” class. The class featured a design competition where Flowers would give teams in the class a box of random parts and a goal of creating something useful. In 1974, when Flowers took over as lead professor of the course, it rapidly became the most popular course on the campus.
  Flowers updated the competition each year, providing different components and different challenges. The challenges became increasingly difficult over the years, but students always rose to meet them. The competition became so exciting that PBS began broadcasting it on “Discover the World of Science,” and it was jokingly referred to as MIT’s true homecoming game. In 1987 Flowers handed the class over to Harry West to move on to a more public role.
  In 1990, “Discover the World of Science” changed its name to “Scientific American Frontiers” with Flowers as the host. He hosted the show until 1993, all the while working with Dean Kamen to form For Inspiration and Recognition of Science and Technology (FIRST), a project to inspire a culture celebrating science and technology.
  Flowers introduced the phrase “gracious professionalism” to FIRST in 1992, and it has driven the culture behind the movement ever since. Flowers served as the National Advisor to FIRST from its inception and was inducted into the STEM Hall of Fame during the 2017 VEX Robotics World Championship.
  Flowers was best known for his passion in sharing his expertise through experience and competition with students of all ages. The world needs more engineers with a passion for teaching the trade. His methods of hands-on experiential learning changed the course of engineering instruction over the last three decades.

Tuesday, February 18, 2020

10.24.2019 Quantum Machine Learning


Last week we discussed machine learning, and I promised to talk about ways to improve machine learning techniques. The latest of these is the application of quantum computing to machine learning algorithms. I know what you are thinking: “this article is supposed to be about new technologies.” But neither of these ideas is as new as you might think.

Machine learning was first conceived before the modern computer. It was proposed in 1949 by Donald Hebb, based on interactions within the human brain. Hebb wrote, “When one cell repeatedly assists in firing another, the axon of the first cell develops synaptic knobs (or enlarges them if they already exist) in contact with the soma of the second cell.” Translating Hebb’s concept to machines, we narrow it down to a weighting mechanism between artificial neurons and their relationships with each other.
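
Here is a minimal sketch of Hebb’s idea in Python, with a made-up learning rate and firing history; the connection strengthens only when both cells fire together:

    # Hebbian update: strengthen a link when both neurons are active.
    def hebbian_update(weight, pre_active, post_active, lr=0.1):
        return weight + lr * pre_active * post_active

    w = 0.0
    for pre, post in [(1, 1), (1, 1), (1, 0), (0, 1)]:   # firing history
        w = hebbian_update(w, pre, post)
    print(w)   # 0.2, grown only by the two joint firings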

In neural networks today we base the learning outcome on a scoring mechanism, where the score determines the decision. For example, in a self-driving car, a high score might mean, “hit the brakes, there is a child in the road,” and a low score means, “keep moving, everything is fine.” We base this scoring on a series of artificial neurons that add together all the various inputs of the system to arrive at a score.
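
In code, one artificial neuron is little more than a weighted sum; the inputs, weights and threshold below are invented for illustration:

    # One neuron: add up weighted inputs, let the score drive the decision.
    def neuron_score(inputs, weights):
        return sum(x * w for x, w in zip(inputs, weights))

    sensors = [0.9, 0.2, 0.7]    # e.g. camera, radar and speed features
    weights = [0.8, 0.1, 0.5]    # learned importance of each input
    score = neuron_score(sensors, weights)   # 1.09
    print("hit the brakes" if score > 1.0 else "keep moving")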

The neural network is trained by adjusting the weights of the links between the neurons: a known set of training data is fed in, and the weights are adjusted until the expected score is output correctly for every item in the training set. The training requires adjusting every neural link many times and processing the entire training set after each change; as a result, training can take weeks. This is where quantum computing comes into play.
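
As a hedged sketch of that loop, here is a classic perceptron-style trainer in Python, learning the logical OR of two inputs from a tiny labeled set:

    # Nudge each weight after every wrong answer; stop when the whole
    # training set scores correctly (or we run out of passes).
    def train(data, weights, lr=0.1, epochs=100):
        for _ in range(epochs):
            all_correct = True
            for inputs, target in data:
                score = sum(x * w for x, w in zip(inputs, weights))
                output = 1 if score > 0 else 0
                if output != target:
                    all_correct = False
                    for i, x in enumerate(inputs):
                        weights[i] += lr * (target - output) * x
            if all_correct:
                return weights
        return weights

    # Each input carries a constant 1 as a bias term; the label is the
    # logical OR of the first two values.
    data = [([0, 0, 1], 0), ([0, 1, 1], 1), ([1, 0, 1], 1), ([1, 1, 1], 1)]
    print(train(data, [0.0, 0.0, 0.0]))   # e.g. [0.1, 0.1, 0.0]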

Quantum computing is also not so new. It was first conceptualized in the early 1980s by Paul Benioff, and shortly after, Richard Feynman and Yuri Manin suggested that a quantum mechanical computer like the one Benioff proposed could perform calculations that are out of reach for classical computers. That power comes from the ability to examine the results of multiple input scenarios simultaneously.

Quantum computers can determine the weighting of the links between the artificial neurons in a fraction of the time because of their ability to test all weights simultaneously, or all inputs simultaneously, depending on the exact learning method employed. The first method uses the quantum bits (qubits) to represent the neuron weights, allowing you to test all the input data in sequence and pick the best weights for the training data. The second method represents the data with the qubits and tests all the possible weights in sequence. The faster method depends on the size of the neural network and the training data: if your quantum system is large enough, you want to represent the larger of the two using qubits.

To make it a little easier to understand, let us think about a very simple test case with four neurons and a training data set with 32 values. Checking every combination of four binary weights requires 16 tests, repeated for each of the 32 input values, for a total of 512 tests using classic machine learning. Because qubits can hold all possible values at once, if we represent the four weights with four qubits we run all weight combinations at once and only perform 32 tests. If we represent the training data with the qubits, we can run all inputs at once and only need 16 tests. Bring these values up to current training model sizes and you begin to see the power. Modern quantum computers are limited to around 52 qubits, which allows two to the 52nd power weight combinations, about 4.5 quadrillion, to be tested simultaneously, reducing the total number of steps for training a 52-neuron network by roughly that same factor.
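
The bookkeeping in that example is simple enough to check in a few lines of Python, assuming binary weights as above:

    neurons, samples = 4, 32
    weight_settings = 2 ** neurons       # 16 ways to set four binary weights
    print(weight_settings * samples)     # 512 tests the classical way
    print(samples)                       # 32 tests if qubits hold the weights
    print(weight_settings)               # 16 tests if qubits hold the data
    print(2 ** 52)                       # about 4.5 quadrillion at 52 qubits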

Monday, February 17, 2020

10.17.2019 Machine Learning


A new buzzword in computer science over the last decade is machine learning. What is machine learning? Put as briefly as possible, it is a computer algorithm designed to learn by recognizing patterns in data, much like a child learns to speak a language by hearing it over and over, recognizing patterns, and learning proper grammar over time.
Ironically the first major use of machine learning was in natural language translation. Algorithms were developed to take large samples of several languages and learn how to put translated words in the correct order. It might seem like the computer is able to think for itself about grammar and languages, but this is not the case at all. It is simply able to copy from examples. 
There are two stages of machine learning. The first stage is the training phase, during which an extremely large dataset is manually labeled by a person so the computer can learn. For example, in language translation, a person has to label each word in all the training data as a part of speech, i.e., noun, pronoun, verb, etc. This labeling allows the algorithm to recognize the order of the parts of speech in a sentence so it can correctly order the translated words. Training is a long and tedious process, and any word that is not classified will confuse the algorithm.
The second stage is making predictions. This is the part of machine learning you get exposed to every time you ask Google to translate a phrase, or see a product recommendation while shopping online.  The better you train the model, the smarter the predictions get. Google Translate retrains their translation models using feedback from users of the tool every few days, so it is constantly improving.
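Here is a hedged sketch of the two stages using the scikit-learn library; the four-phrase “dataset” and its labels are invented for illustration:

    # Stage 1: train on a tiny, manually labeled dataset (invented here).
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    texts = ["great product", "terrible service", "love it", "awful quality"]
    labels = [1, 0, 1, 0]                    # 1 = positive, 0 = negative

    vectorizer = CountVectorizer()
    X = vectorizer.fit_transform(texts)      # turn words into counts
    model = MultinomialNB().fit(X, labels)   # the training stage

    # Stage 2: make a prediction on new, unlabeled input.
    new = vectorizer.transform(["love this product"])
    print(model.predict(new))                # -> [1], a positive prediction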
To train machine learning models requires large amounts of computing power and memory. The process takes, in the case of Google Translate, years to accurately train the model. There is a need for much faster training processes before machine learning starts to look anything like human learning, which is the goal of artificial intelligence. Next week I will write about some of the new training methods that are forthcoming.

Friday, February 14, 2020

10.10.2019 Artificial Intelligence


What is artificial intelligence (AI)? It is usually used to describe computers, machines, or software that mimic “cognitive” functions that humans associate with the human mind. Among them are “learning” and “problem solving.” The definition of AI has changed dramatically over the years.
As computers have become faster and more capable of simulating these cognitive functions, many of these functions have been removed from the definition of AI. A prime example of this is optical character recognition, which just a few years ago was considered an intelligent function for computers, mainly because the task was difficult.
Optical character recognition is able to take a picture of a page of text and turn it into text that can be edited in a word processing program or analyzed for content. Dr. Larry Tesler created a theorem of AI that is commonly quoted as “Artificial Intelligence is whatever hasn’t been done yet.” What he said was: “intelligence is whatever machines haven’t done yet.” In Tesler’s definition AI will constantly change because if it ever reaches a point that it can do anything we can do, it would then just be intelligence with nothing artificial about it.
The current technologies referred to as AI include human speech recognition, strategic game systems, autonomous vehicles, intelligent content delivery networks, and military training simulations. All of these will eventually become such mainstream technology that something new will define AI. 
There are currently three main fields of study in AI: data analytics, human-inspired intelligence, and humanized artificial intelligence. The most advanced of these three fields, in my opinion, is data analytics. Analytics AI learns from past experiences to inform future decisions, using cognitive intelligence to represent the world, much like a toddler learning to talk. Among tasks that fall into this category are image recognition, “Is this a picture of a cat or a dog?”; text content classification, “Is this article for or against a flat tax?”; intelligent routing, “Which route should I take to get around the accident ahead?”; and product recommendation, “Since you bought the new iPhone, you might be interested in a case.” We have all experienced AI in our everyday lives, probably without even noticing it.
The area of human-inspired intelligence makes the assumption that human intelligence “can be so precisely described that a machine can be made to simulate it.” This particular field brings about all kinds of ethical issues. Should we be attempting to create artificial beings endowed with human-like intelligence? There are several science fiction books and movies based on AI becoming a danger to humanity, and this is the area of AI most of those stories address. I believe the much larger risk of AI is the creation of mass unemployment, as machines become more and more capable of performing everyday human tasks.
The third area is humanized artificial intelligence. This is using AI to replace human characters in real life and in stories. Prime examples are the soldiers in the popular video game series “Call of Duty.” These soldiers make “intelligent” decisions based on the environment, including the interaction with human players. These humanized beings are also used in telephone support to remove the first level of contact. If you call a help desk, bank, or credit card company and a computer answers with a human-like voice asking, “What can I help you with today?” and gives the ability to answer in complete sentences, it is using humanized AI.
Is AI something we should be worried about, or something we should embrace? I leave that up to the reader.

Thursday, February 13, 2020

Safe Social Networking


Almost everyone is using social media, and most of us use more than one social network. I am personally on three: Facebook, LinkedIn, and Twitter. All three have their own advantages and disadvantages, and all three lack good automatic protection of your privacy and security. This week I want to provide some advice on how to safely use these networks and still protect your personal data.
First and most important: always remember, even if you believe you are on an anonymous site, that once something is posted or messaged, it cannot be removed. Once something is on the internet, it is always there and never goes away. For example, http://web.archive.org/ still holds copies of personal websites from May 12, 1996, hosted on infoseek.com. Infoseek was bought by Disney and dissolved in 1999, but all that personal information is still online at archive.org. That being said, make sure you don’t post anything to social media that will hurt your reputation, as it will never go away. Your online reputation can be a good thing or a bad thing; it is your choice, depending on what you post and how you act online.
Second, make sure you keep your personal information personal. The more information you post about yourself, the easier it is for someone to steal your identity, access your data, or commit other crimes using your online profile. My favorite example is a Facebook post of a family photo at Christmas; in the background is a whiteboard with the family’s Amazon password written in large letters. Many people list their e-mail address as public on their Facebook profile, so it is just a matter of guessing who manages the Amazon account before someone can log on as that user and make Amazon purchases.
Third, make sure you know your online friends. It is very easy to create a fake profile for a person that does not exist and gather personal information on others. Or worse yet, someone can create a profile using a real name and photo to trick people into sharing personal information. Never accept a friend request from a stranger, or from someone you believe you are already friends with, without confirming by some other means that they have created a new account. Even the local police force uses fake identities on Facebook to gather information about criminal activities, so be careful who you friend and what you share.
The rest of my advice has to do with protecting your accounts and your computer from malicious people and software. Number one: keep your security software up to date. It does no good to install antivirus software and never apply the updates; new viruses are released nearly every day. It is important to protect your computer because, with the right software, all your passwords and personal information can be recovered from it.
Number two: protect your accounts with good passwords. Just to give you an idea, a six-character password can be guessed by a hacker using simple software in under 15 minutes, an 11-character password with the same software will take 10 years, and anything longer than 12 characters will take over 200 years to guess. The strongest passwords that are easy to remember are sentences or phrases. I personally like to use Bible verses, for example, John 3:16, taking the first letter of each word to come up with FGsltwthghobStwbihsnpbhelJn3:16. It is easy to remember, fairly easy to type, but nearly impossible to guess.
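The trick is simple enough to automate; here is a small Python sketch of the idea, with the verse text quoted from the King James Version:

    # Build a password from the first letter of each word, plus a suffix.
    def phrase_to_password(phrase, suffix=""):
        return "".join(word[0] for word in phrase.split()) + suffix

    verse = ("For God so loved the world that he gave his only begotten Son "
             "that whosoever believeth in him should not perish but have "
             "everlasting life")
    print(phrase_to_password(verse, "Jn3:16"))  # FGsltwthghobStwbihsnpbhelJn3:16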
Number three: never use the same username and password combination on more than one website. For example, don’t use the same login information on both Twitter and Facebook, or use your Gmail password for Facebook. It is highly likely that if one of your accounts gets hacked, the hacker will attempt to use the same email and password on all the other social media sites. Also, I do not recommend keeping passwords written in a notebook unless you have a safe and keep it locked up all the time. It is better to install password keeper software on your computer or smartphone and keep your passwords encrypted there with a very strong master password. You can get a nice free password safe at http://keepass.info. Until next week, stay safe online.

Wednesday, February 12, 2020

10.05.2019 Photocopiers

Artwork courtesy of U.S. Patent and Trademark Office: Chester Carlson’s original photocopier from his patent granted on Sept. 12, 1944.
This week we had some issues with one of the copiers at the news office and it brought up some questions about how copiers work, and why sometimes they don’t. The photocopier has been around for longer than you might believe. Chester Carlson used his kitchen for “electrophotography” experiments, and in 1938, applied to patent the process. He used a zinc plate covered with sulfur to produce the first photocopy.
Carlson attempted to sell his patent to 20 different companies from 1939-1944, before Battelle Memorial Institute assisted him in refining the process. In 1947, Haloid Corporation obtained the first license to develop, produce, and market a copying machine based on this technology. Haloid felt the word “electrophotography” was too complicated and hard to remember, so they changed the term to xerography, which is Greek for “dry writing.” The new machines were called “Xerox Machines,” and in 1948, the word “Xerox” was trademarked. Haloid eventually changed its name to Xerox Corporation.
Enough about the history; now let’s get into how this thing works. Xerography works on basic principles of light and electricity. The first thing you must understand to grasp how photocopiers work is that light is a kind of electricity: light is just an electromagnetic wave traveling through space. Inside a photocopier is a device that allows current to flow through it when light shines on it. This device is called a photoconductor, and it is used to capture the pattern of the light as a pattern of static electricity. The photoconductor starts out charged; areas struck by light lose their charge while dark areas keep it, making an electrical copy of the page.
The numbers in this section refer to the numbers in the drawing. Suppose you are trying to copy a page of this newspaper. If you shine an extremely bright light (52) at the paper (90), at the perfect angle, you will create a shadow of the paper on another object (41). In the case of a photocopier, that shadow is created on a photoconductive drum (41). The drum is then sprinkled with powdered ink which sticks to the charged areas on the drum. A piece of blank paper is pressed against the drum, transferring the ink powder to the paper (76). The paper now has a copy of the original document.
There is one more step in the process: the powdered ink must be bonded to the paper. This happens with combined heat and pressure in the fuser unit (72) of the copier. Sometimes the still-warm copy comes out of the copier with enough static charge left on it that it will stick to your shirt or the wall. A very interesting thing happens when a fuser unit fails on a copier: you will see a perfect copy come out to the paper tray, and as soon as you pick it up, all the ink falls off onto the floor as powder.
If you are looking for a full explanation of how the process works, you can read Carlson’s full patent; though technical in nature, it is fairly easy to understand. You can find his patent at https://patents.google.com/patent/US2297691.

Tuesday, February 11, 2020

09.26.2019 E=mc^2 celebrates 114 years


Sept. 27, 1905, marked the day the physics journal Annalen der Physik received Albert Einstein's paper, "Does the Inertia of a Body Depend Upon Its Energy Content?" where E=mc^2 was first introduced: energy is equal to the mass of a particle times the square of the speed of light. One of the simplest equations to write carries some of the most profound meanings. In honor of that paper, I want to point out five lessons we can take from this simple equation.

The first lesson is that "mass is not conserved." We often make the mistake of thinking that mass never changes. For example, if you take a block of iron and chop it up into a bunch of iron atoms, you fully expect that the mass of all the atoms will equal the mass of the block. That assumption seems obviously true, but it holds only if mass is conserved, and according to this equation, mass is not conserved at all. If you take an iron atom, containing 26 protons, 30 neutrons, and 26 electrons, and place it on a scale, you'll find something disturbing. An iron atom weighs slightly less than the sum of its separate parts, and the nucleus alone weighs significantly less than the 26 protons and 30 neutrons that compose it. This is because mass is just another form of energy, and the binding energy that holds the nucleus together reduces the mass of its parts.
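You can check the nuclear bookkeeping yourself. The sketch below uses rounded textbook masses in atomic mass units (u), where one u of mass is worth about 931.5 MeV of energy; treat the decimals as approximate.

    # Back-of-the-envelope mass defect of an iron-56 nucleus,
    # using rounded textbook values in atomic mass units (u).
    m_proton  = 1.007276    # u
    m_neutron = 1.008665    # u
    m_fe56    = 55.9207     # u, bare iron-56 nucleus (approximate)

    parts  = 26 * m_proton + 30 * m_neutron
    defect = parts - m_fe56            # the "missing" mass
    print(defect)                      # about 0.53 u
    print(defect * 931.5)              # about 490 MeV of binding energy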

The second lesson is the law of conservation of energy: energy is conserved, but only if you account for the changing masses. Imagine the Earth orbiting the Sun at 30 kilometers per second, about 150 million kilometers out, roughly the speed it needs to hold that orbit. If you could weigh the Earth and the Sun separately, you would find their combined weight is slightly greater than the weight of the bound, orbiting pair. The difference is the gravitational energy holding them in orbit: the more tightly bound the system, the lower the mass of the combined system. Protons and neutrons bind together in the nucleus of an atom the same way, only far more strongly, producing a nucleus lighter than its parts and emitting high-energy photons in the process. This nuclear fusion process can release extreme amounts of energy.
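To get a feel for the size of the effect, here is a rough Python estimate of the mass the Earth-Sun system "loses" to its gravitational binding energy; the constants are standard values, and the answer is only an order-of-magnitude sketch.

    # Rough estimate: gravitational binding energy GMm/r, divided by c^2,
    # gives the mass "missing" from the bound Earth-Sun system.
    G       = 6.674e-11     # m^3 kg^-1 s^-2
    M_sun   = 1.989e30      # kg
    M_earth = 5.972e24      # kg
    r       = 1.496e11      # m, Earth-Sun distance
    c       = 2.998e8       # m/s

    E_binding = G * M_sun * M_earth / r
    print(E_binding / c**2)   # about 6e16 kg -- tiny next to Earth's 6e24 kg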

Third, Einstein’s E=mc^2 describes why the Sun and stars shine. Inside the core of the Sun, temperatures rise to over four million degrees Kelvin, allowing nuclear reactions that power the sun to take place. Protons are fused together forming a deuteron and emitting a positron and a neutrino to preserve energy. This process eventually creates helium-4 which only weighs in at 99.3 percent of the mass of the four protons used to create it. The process also releases nearly 28 million volts of electrical energy. Over the lifetime of the sun, it has lost approximately the mass of Saturn due to the nuclear bonding in its core.

Fourth, the conversion of mass to energy is the most energy-efficient process in the universe: one hundred percent of the mass is converted to energy. Looking at the equation, you can see that it tells you exactly how much energy you will get out of the system. For every kilogram of mass you convert, you get about 9 x 10^16 joules of energy, the energy of a 21-megaton bomb.
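The arithmetic is a one-liner; here it is as a quick Python check, taking 4.184 x 10^15 joules as the definition of one megaton of TNT.

    # E = m c^2 for one kilogram of mass, converted to megatons of TNT.
    m = 1.0                  # kg
    c = 2.998e8              # m/s
    E = m * c**2
    print(E)                 # about 9e16 joules
    print(E / 4.184e15)      # about 21.5 megatons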

Lastly, you can create massive particles out of nothing but pure energy. This is probably the most profound discovery and the hardest to explain. If you take two rubber balls and smash them into each other, you expect to get one result: two rubber balls. With particles like electrons, though, things change. If you smash two electrons together, you will still get two electrons, but if you smash them together hard enough, you can also get a brand-new matter-antimatter pair, an extra electron and a positron, created from the energy of the collision. Mass can be converted to energy and back again.
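The price of admission for this trick is easy to estimate: in the collision's center-of-momentum frame, you need at least two electron rest energies of spare kinetic energy to mint the new pair.

    # Minimum collision energy that can become one electron-positron pair:
    # each new particle costs its rest energy, m_e * c^2, about 0.511 MeV.
    electron_rest_energy = 0.511      # MeV
    print(2 * electron_rest_energy)   # 1.022 MeV turned into brand-new mass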

Monday, February 10, 2020

New Posts Coming

Hello everyone, it has been a while since I posted my tech column on the blog, so over the next several days you will see a new post each day that starts with a date. The date is the run date of the original article in The Licking News. I want to make them available to you, but felt flooding you with three months of posts in one day would be overwhelming. The post without a date on Thursday is the one being published in print that day.
Thanks for reading.
Scott

9.23.2019 Quantum Supremacy


  If you follow technology news at all, you will have noticed many articles over the weekend talking about Google reaching Quantum Supremacy. This week I would like to talk a little about what that means. First you need to understand a little bit about Quantum Computing and Classic Computing.
Classic Computing is what runs your smart phone, The Licking News website, Google’s Search, and the word processor I am using to write this article. Classic computing works on the concept of a bit, which for lack of a better example is like a light switch; it is either on or off. Everything a computer does is based on the conditions of millions of these simple on/off switches.

  Quantum Computing is based on a Quantum Bit, or Qubit. A Qubit is a strange thing that exists in a state somewhere between on and off, called superposition. You could think of it as a dimmer switch that lets you adjust the brightness of the light in your room. Qubits can also be entangled, which is like wrapping a rubber band around two dimmer switches so that both lights act together. These two properties make it possible to solve extremely large computational problems very fast. A quantum computer with 40 qubits can hold a state that would take a classic computer trillions of bits to represent.
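One way to get a feel for those numbers: simulating an n-qubit state on a classic computer means tracking 2^n complex amplitudes. The sketch below assumes 128 bits (two 64-bit floats) per amplitude, which is a common choice, not a fixed rule.

    # Memory needed to store an n-qubit state vector on a classic computer,
    # assuming 16 bytes (two 64-bit floats) per complex amplitude.
    for n in (10, 20, 30, 40):
        amplitudes = 2 ** n
        gib = amplitudes * 16 / 2**30       # bytes converted to GiB
        print(n, "qubits:", amplitudes, "amplitudes,", round(gib, 6), "GiB")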
Quantum Supremacy is loosely defined as the point in time when a quantum device solves a problem that cannot be solved using a classic computer of any size. A problem exists with this definition: because classic computers keep getting faster, the target of Quantum Supremacy is always moving. Three years ago it was thought that a 40-qubit system would reach supremacy; now that estimate has climbed to over 50 qubits.

  There has been very little information released about the "problem" Google solved late last week with its quantum processor, the feat behind its claim of being the first to reach quantum supremacy. A paper, titled "Quantum Supremacy Using a Programmable Superconducting Processor," was briefly published to NASA's website but was taken down within a few hours. The paper described how Google's processor completed this unknown task in 200 seconds, a computation it estimated would take a state-of-the-art supercomputer 10,000 years to perform.

  “This dramatic speedup relative to all known classical algorithms provides an experimental realization of quantum supremacy on a computational task and heralds the advent of a much-anticipated computing paradigm,” the paper states. “To our knowledge, the experiment marks the first computation that can only be performed on a quantum processor.”

  Mainstream technology press organizations have been reaching out to Google for the last several days asking for comment on the supposedly leaked paper, but as of yet there is none. I, for one, am interested to see the final outcome and whether quantum computers have truly surpassed our current technology.

Thursday, February 6, 2020

Impacting Technologies of the 2010’s


I always find it interesting to review technology developments at the end of a decade. This week I plan not only to list, but to talk a little about, the top new technological developments of the last decade. The years between 2010 and 2020 brought about some of the most amazing technology in history.
For the first time in history we have a generation of people that never knew life without a computer or the internet. Everyone under the age of thirty has probably had access to a computer of some kind for their entire lives. Over the last decade the computer has become not only a device that most people have access to, but a device that most of us depend on for everyday life.
The new technology has been surprising, with advancements in artificial intelligence, autonomous vehicles, robotic lawyers, and real-time language translation, just to name a few. The technologies in this article are ones that I feel we will still be using and talking about well into the next decade.
In the 2000’s Facebook started the social media  trend and held the top of the social networks until the early 2010’s.  in 2010 Instagram became the most popular among the Gen-Zers, mainly due to the large influx of the older generation onto Facebook, bringing with them new media, marketing and politics.  Instagram became the preferred method for sharing personal lives on social networks. In 2012 Facebook purchased Instragram and the network has grown to over a billion registered users.
In 2011, Spotify took the market by storm, offering the largest collection of music for streaming on demand. This brought about a massive decline in online music piracy: with free services that stream all your favorite music and collect revenue through advertising to pay the music producers, the need to pirate music dropped tremendously.
In 2012 came the introduction of the Tesla Model S electric car. This seemed like a major release in transportation technology, but the most impactful change wasn't a new car; it was ride sharing through Uber. Uber rolled out its ride-sharing service across every major U.S. city and around the world, fundamentally changing how people get around. Lyft also launched its ride-sharing service in 2012, making this the year of shared transportation.
In 2013, Chromecast came on the scene, allowing people to stream video services to any device with an HDMI port. Chromecast is not all that useful today, as the technology is integrated into almost every device with a screen, but it was a top-selling product in 2013.
2014 was the year of the smartwatch, with the announcement of the Apple Watch, which in all respects was an iPad in a tiny watch form factor. The first model had all kinds of issues, but as Apple worked to resolve them, it has become the best smartwatch on the market today.
Amazon Echo started the smart speaker trend in 2015, as online voice assistants moved from the phone to the home. The device incorporated Bluetooth technology as well as machine learning and speech recognition. The Echo held the market lead in smart speakers until Google's competing device, Google Home, gained ground late in the decade.
AirPods came on the scene in 2016, releasing us from wired earbuds and giving us the freedom to move away from our audio devices. There was worry about them getting lost easily when they were first released, but the freedom they give the user has greatly eased that fear, and they are now nearly as popular as wired earbuds.
The Nintendo Switch changed the gaming console forever, with the ability to use the system both as a stationary game system tied to a large-screen TV and as a portable you can take on the road. The main unit includes its own screen, so game play can continue while traveling. The release of the Switch in 2017 brought a new approach to gaming hardware.
2018 was the year of the shareable electric scooter, which at first seemed set to become a permanent fixture in many major cities. The scooters have run afoul of city ordinances, however, and have been removed by law in many places. As a result of this legislation, the technology has lost much of its staying power, but the tech behind vehicle sharing has spread to the sharing of electric cars in a similar manner across several U.S. cities.
Last, but not least, is the rise of TikTok in 2019. As Gen Z kids break into adulthood, this is the platform most likely to become the main method of communication among their peers. The short-video sharing service is currently a top contributor to internet content storage, accounting for close to 15 percent of all the data on the internet today, and it is expected to grow beyond 50 percent of all online data within the next couple of years.