Monday, February 24, 2020

11.21.2019 Tracking Meteors


Last week I reviewed three social networks and wanted to mention a fourth. You might ask, what does this have to do with meteors? It might seem strange at first, but there is actually a lot that links the two together. We will get into that after I talk about Reddit.
Reddit, the self-proclaimed “front page of the internet,” was formed in 2005. Registered users (redditors) submit content on a variety of subjects: news, science, music, books, video games, technology, etc. Redditors vote the content up or down, and Reddit sorts the content based on these votes. Content that receives enough up votes gets displayed on the site’s front page. There is little need for censorship by the site owners because the up-or-down voting system can raise posts to full visibility or make them disappear completely. The only problem with the system is that people can be paid to “troll” a post, falsely making it more popular or burying it altogether. Recently Reddit moderators have been accused of censorship, just like Facebook, but there is slightly more freedom on Reddit for the moment.
Now for the link to meteors. On Nov. 15, amid the Mercury transit and the Taurid meteor shower, a rogue meteor stole the show over Missouri. This lone meteor was not a member of the Taurid meteor shower. The brightness of the fireball and direction of its orbit indicated that it was a fragment of an asteroid. 
The space rock was about the size of a basketball and weighed about 200 pounds, according to NASA Meteor Watch. The meteor traveled northwest at 33,500 miles per hour and broke apart about 12 miles above Bridgeport, Mo. More than 300 people reported to the American Meteor Society (AMS) that they had seen the meteor. This is the link between social media and meteors. The AMS can now gather more information on meteors, their paths, speeds, sizes, and potential landing sites from information posted on social media than was ever before possible. The AMS combines information from photos of the meteor with data gathered from satellite and ground-based measurement systems to piece together the full story. Every photo or video of the event adds more data points to triangulate the trajectory of the meteor, giving a higher probability of locating any debris that reached the ground. So if you happen to get a picture of a meteor, please share it online to help the effort of locating debris. There happens to be a $25,000 cash reward for finding a fragment of this meteor that is at least one kilogram in mass.
Determining the landing site of a meteorite comes down to a series of calculations that use angles and known distances to fix the exact location of an object. If you have the precise locations of at least three photos of the meteor, you can use trigonometry to determine exactly where it was at that moment. Do this at least two times and you have a rough flight path. The more pictures with location information you have available, the better the model of the path you can build. Next week I will show you where to get dark flight paths for meteors and share some tips on meteorite hunting. The plains of Missouri and Kansas are prime hunting grounds for meteorites.
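To make the trigonometry concrete, here is a minimal sketch in Python of the two-photo case. The observer positions and bearings are made up for illustration; they are not measurements from the Missouri fireball.

    import math

    def triangulate(p1, bearing1, p2, bearing2):
        # p1, p2 are observer positions (east, north) in miles;
        # bearings are degrees clockwise from north toward the meteor.
        d1 = (math.sin(math.radians(bearing1)), math.cos(math.radians(bearing1)))
        d2 = (math.sin(math.radians(bearing2)), math.cos(math.radians(bearing2)))
        # Solve p1 + t*d1 == p2 + s*d2 for t (Cramer's rule).
        denom = d1[0] * d2[1] - d1[1] * d2[0]
        if abs(denom) < 1e-12:
            raise ValueError("Bearings are parallel; no unique crossing point")
        t = ((p2[0] - p1[0]) * d2[1] - (p2[1] - p1[1]) * d2[0]) / denom
        return (p1[0] + t * d1[0], p1[1] + t * d1[1])

    # Two hypothetical observers five miles apart, each reporting a bearing.
    print(triangulate((0.0, 0.0), 45.0, (5.0, 0.0), 315.0))  # about (2.5, 2.5)

Solve for the crossing point at two different instants and the pair of points sketches out the rough flight path described above.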

Sunday, February 23, 2020

2-6-2020 Impactful technologies of the 2010s

I always find it interesting to review technology developments at the end of a decade. This week I plan not only to list, but also to talk a little about, the top new technological developments of the last decade. The years between 2010 and 2020 brought about some of the most amazing technology in history.

For the first time in history we have a generation of people that never knew life without a computer or the internet. Everyone under the age of thirty has probably had access to a computer of some kind for their entire lives. Over the last decade the computer has become not only a device that most people have access to, but a device that most of us depend on for everyday life.

The decade brought surprising advancements in artificial intelligence, autonomous vehicles, robotic lawyers and real-time language translation, just to name a few. The technologies in this article are ones that I feel we will still be using and talking about well into the next decade.

In the 2000s Facebook started the social media trend and held the top spot among social networks until the early 2010s. Instagram launched in 2010 and became the most popular network among Gen-Zers, mainly due to the large influx of the older generation onto Facebook, bringing with them new media, marketing and politics. Instagram became the preferred method for sharing personal lives on social networks. In 2012 Facebook purchased Instagram, and the network has grown to over a billion registered users.

In 2011, Spotify took the market by storm, offering the largest collection of music for streaming on demand. This brought about a massive decline in online music piracy. With a free service that streams all your favorite music and collects revenue through advertising to pay the music producers, the need to pirate music dropped tremendously.

In 2012 came the introduction of the Tesla Model S electric car. This seemed like a major release in transportation technology, but the most impactful change wasn’t a new car. It was ride sharing through Uber. Uber rolled out its ride-sharing service across every major U.S. city and around the world, fundamentally changing how people get around a city. Lyft also launched its ride-sharing service in 2012, making this the year of shared transportation.

In 2013, Chromecast came on the scene, allowing people to stream video services to any device with an HDMI port. Chromecast is not all that useful today, as the technology is integrated into almost every device with a screen, but it was a top-selling product in 2013.

2014 was the year of the smartwatch, with the introduction of the Apple Watch (it shipped the following spring), which was in all respects an iPad in a tiny watch form factor. The first model had all kinds of issues, but as Apple has worked to resolve them it has become the best smartwatch on the market today.

Amazon Echo started the smart speaker trend in 2015 as online voice assistants moved from the phone to the home. The device incorporated Bluetooth technology as well as machine learning and speech recognition. The Echo held the lion’s share of the smart speaker market even after Google released a competing device, Google Home, in 2016.

AirPods came on the scene in 2016, releasing us from wired earbuds and allowing freedom to move away from our audio devices. There was worry about losing them easily when they were first released, but the freedom they give the user has greatly outweighed that fear, and they are now nearly as popular as wired earbuds.

The Nintendo Switch changed the gaming console forever, with the ability to use the system both as a stationary game system tied to a large-screen TV and as a handheld to take along on the road; the console itself includes a screen, with detachable controllers, so game play can continue anywhere. The release of the Switch in 2017 brought a new approach to gaming hardware.

2018 was the year of the shareable electric scooters that seem to have become a permanent fixture in many major cities. They have had an impact on city ordinances and been banned outright in some cities. As a result of this legislation, the technology has lost some of its staying power, but the tech in vehicle sharing has spread to the sharing of electric cars in a similar manner across several U.S. cities.

Last, but not least, is the rise of TikTok in 2019. As Gen Z kids break into adulthood, this is the platform most likely to become the main method of communication among their peers. This short-video sharing service is currently a top contributor to internet content storage, accounting for close to 15 percent of all the data on the internet today by some estimates, and some projections have it growing beyond 50 percent of all online data within the next couple of years.

1-30-2020 Server Processors in 2020

Every time you log into Facebook or search for something on Google, you access servers. Servers are remote computer systems that “serve” information or services to other computer systems. Years ago these servers used completely different technology than the computer on your desk. However, like all things, technology has changed.

Servers used to be room-sized computers running specialized processors for rapid data access. These mainframes used a parallel memory access system and contained multiple linked processors in a single system, allowing the server to talk to many computer terminals at the same time. As technology has advanced, even the processor in your cellular phone has the same capabilities as the mainframes from 30 years ago. Yes, your phone is a computer.

What this means is that it is possible for every computer to run the same type of processor today. You might ask how this affects the companies that design and build both servers and processors. Interestingly, it keeps the competition very exciting. In the last couple of years the landscape of server technology business has changed dramatically.

The big players like IBM and Intel are, of course, still in the game and still control most server platforms, but there are a couple of lesser-known giants in the game. Among them is AMD, which in the last two years made a major comeback to control 40 percent of the server processor market. Merely a year ago it controlled only 12 percent, and two years ago it was less than five percent.

How does a smaller company like AMD take on giants like IBM and Intel and score a landslide victory in just a year? Three factors play a major role in selecting a server processor: price, performance, and availability. Two years ago, AMD released a new processor that performed about 40 percent better and cost 20 percent less than anything Intel had available. Demand for the new processor quickly outpaced supply, and AMD’s market share was held back by that shortage. The windfall sales of 2018 allowed AMD to ramp up production and, as a result, take over a large portion of the processor market.

I mentioned three factors above, and there is another player in the market that sells more processors than all the other designers combined. This player is the developer of a widely licensed processor architecture called ARM. What makes ARM unique is that any manufacturer can license an ARM design, extend it with their own components and build their own unique processors. Today ARM processors completely dominate the overall processor market, with well over 100 billion produced and sold to date.

ARM processors consume little power, use a simpler instruction set, and deliver computing power similar to lower-end Intel and AMD processors. They are primarily used in cell phones, tablets, watches, calculators, and other small electronic devices. However, there has recently been a strong push to build ARM-based servers for general computing. The price of ARM processors is lower, power consumption is lower, and performance is similar to the top-selling Intel and AMD processors; the difference is that their distinct instruction set means existing software must be ported, which causes some headaches for software vendors in adopting the technology.

Many market analysts say ARM is the future of server processor technologies, and I share their belief, especially with the latest announcement from Amazon. Amazon has recently announced not only the availability of new ARM processors in its cloud service, but a shift to using its own ARM processor as the default for the service. Amazon’s Graviton processors now run a majority of its web services workloads at a fraction of the cost, and Amazon is passing the savings on to customers. ARM has all three factors working for it, price, performance and availability, positioning it to become the top processor of the coming decade.

1-9-2020 Computer Images

Have you ever wondered how computers store, print, and display images? Well, that’s the topic for this week’s article. Computers are actually not all that smart when it comes to storing image information. They really only store numbers, as sequences of ones and zeros. So how do they keep a photo?

A photo, as far as a computer is concerned, is a grid of numbers that represent the colors in the image. Each spot on the grid holds four numbers between 0 and 255, one for each of four colors. If the image is meant for viewing on a screen, it usually uses Red, Green, Blue, and Alpha (RGBa). The higher the number, the brighter that color. Alpha is the transparency of the image; since images can be overlaid on a computer screen, it is necessary to tell what happens when they overlap. If the image is meant to be printed, then it is usually stored using Cyan, Magenta, Yellow, and Black (CMYK). This has to do with the differences between how colors of light mix versus how colors of ink mix.
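As a toy illustration (a hand-built two-by-two picture, not any real file format), here is what that grid of numbers looks like in Python:

    # Each pixel is four numbers from 0 to 255: Red, Green, Blue, Alpha.
    red   = (255, 0, 0, 255)      # fully opaque red
    green = (0, 255, 0, 255)      # fully opaque green
    blue  = (0, 0, 255, 255)      # fully opaque blue
    ghost = (255, 255, 255, 128)  # white, about half transparent

    image = [
        [red, green],   # top row of pixels
        [blue, ghost],  # bottom row of pixels
    ]

    r, g, b, a = image[1][1]
    print("bottom-right pixel:", r, g, b, "alpha", a)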

There are two things needed to tell the computer the proper size to display an image. The first is the physical size of the image; for example, a standard photo is four by six inches. The second tells how many pixels, or rows and columns of numbers, are stored to represent the image. This can be defined either as the total width and height in pixels, or as a ratio of pixels per inch. Many common digital cameras capture images at roughly eight megapixels, or about eight million pixels. This is a grid of 3266 by 2450 pixels, which gives you 8,001,700 pixels. Notice that this megapixel figure does not provide a size in inches, so the pixels per inch can be changed to make an image much larger or much smaller.

How big can you print the photo? As big as you want, but if there are not enough pixels it will start looking like a mosaic of little squares instead of a smooth image. The general rule is no fewer than 300 pixels per inch. So in the case of an eight-megapixel image, that is 3266/300 (or 10.89) inches by 2450/300 (or 8.17) inches. You see, each pixel is a box of color; the more boxes you have per inch, the clearer the image. This is true both in print and on your screen.

Many websites rely on pixelation to make it nearly impossible to print their photos. How can a picture on a website look great on the screen and bad when it is printed? Because it has too few pixels. Most websites display images less than five inches wide, at 100 pixels per inch, making an image 500 pixels wide. From the math above, the largest print you can make clearly is only 500/300 (or about 1.67) inches wide. If you try to print an eight-by-ten photo from this web image, you will get only about 62 pixels per inch, which means you will easily see the square shapes of the pixels and have a very poor print.
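All of that math is simple enough to check yourself; here it is as a short Python sketch using the numbers from the paragraphs above:

    PRINT_PPI = 300  # rule of thumb: at least 300 pixels per inch in print

    width_px, height_px = 3266, 2450       # a roughly 8-megapixel camera image
    print(width_px * height_px)            # 8,001,700 pixels
    print(width_px / PRINT_PPI)            # about 10.89 inches of clean width
    print(height_px / PRINT_PPI)           # about 8.17 inches of clean height

    web_px = 500                           # a typical web image width
    print(web_px / PRINT_PPI)              # about 1.67 inches, widest clean print
    print(web_px / 8)                      # about 62 ppi if stretched to 8 inches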

You can sometimes fix low-resolution images with photo editing software like Photoshop by using its resize options. You can usually double the size of an image by re-sampling it at the higher resolution before it starts to visibly lose quality. Basically, the computer makes new pixels from each existing pixel: based on the color of the copied pixel, it will match that color, blend it with a neighboring pixel’s color, or match the neighboring pixel’s color. This magnifies any flaws in the original photo, so you cannot go much bigger.
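As one example of that kind of resize, here is a minimal sketch using the free Pillow imaging library for Python; photo.jpg is a placeholder name for your own image file.

    from PIL import Image

    img = Image.open("photo.jpg")
    w, h = img.size

    # LANCZOS blends neighboring pixels when inventing new ones;
    # NEAREST would simply copy each pixel and keep the blocky look.
    bigger = img.resize((w * 2, h * 2), resample=Image.LANCZOS)
    bigger.save("photo_2x.jpg")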

That is a little bit about how computers store, display and print images. If you see a picture in The Licking News that looks pixelated, you know that the publisher likely started with a low-resolution image and did their best to bring it to print quality. Submissions are often fine quality for the internet but too small for good printing. Screen resolution is about 72 pixels per inch on most phones, while newspaper print requires 300 pixels per inch. What looks fine on your phone may look bad on paper, and now you know why.

1-2-2020 Drone regulations

Around this time last year I ran a column about the federal government’s regulations on drones. Tracking of online sales of technology indicates that this year was another big year for drone sales. I am guessing some readers either received or bought a drone for Christmas, so I wanted to mention the new regulations for operating one.

First, there is a requirement to register your drone with the Federal Aviation Administration (FAA). If the drone weighs between 0.55 and 55 lbs., it can be registered online; over 55 lbs. requires paper-form registration. Failure to register a drone can result in up to $250,000 in fines and up to three years in prison. This requirement stands for both commercial and private use.

If you are just flying the drone on your own property as a hobby, you still have to register the drone. However, if you plan to fly the drone to film events at your church, school, or place of business, even as an individual, the rules change. This is considered a commercial use of the drone and requires a separate registration under FAA Part 107 rules.

You must also be at least 13 years old to legally register a drone. You can fly one at a younger age, but the registered owner is responsible for any damage or rule violations. You can use a single registration for multiple drones, provided you own them all and store them at the same physical address. You will receive a unique-to-you registration number, with which you must label the drone prior to its first outdoor flight.

You can complete the registration online in two different ways: directly with the FAA at https://www.faa.gov/uas/registration/ for a $5 fee, where you receive only a registration certificate, or at https://www.federaldroneregistration.com for $24.99, where you receive a packet of labels/stickers for your drone and automatic renewal options for the registration, as well as other additional services.

The Part 107 provisions require pilot training and certification before operating the drone. You must be 16 years old to become an official remote pilot. The certification consists of a written exam called the initial aeronautical knowledge test, which is offered only at certified testing facilities and covers various topics related to general FAA knowledge. The nearest test facility is at State Technical College of Missouri in Linn. Once you have passed the exam, you will receive a 17-digit Knowledge Test Exam ID. This ID is then used to register with the FAA through the Integrated Airman Certificate and/or Rating Application System (IACRA). You will then be recognized as an official remote pilot.

As a professional photographer, if I were to purchase a large drone, I would go through the pilot registration process. It costs a little more but gives you the flexibility to use the drone for things like a DIY aerial photography business. It also allows you to fly your drone in more places; there are places you are allowed to fly a commercial drone that are off-limits to a private one.

If you were one of the lucky recipients of a drone this year, please follow the rules and enjoy your new hobby or business. Fly safe.

Friday, February 21, 2020

Online Security

Did you know that every time you visit a website, read an e-mail, watch a YouTube video or submit a Google search, more than one person is watching you? This week I want to let you know who is watching your internet activity, a little bit of why, and some ways you can limit what they are able to see.
Who is watching you on the internet? There are three main groups that watch your internet activity. The first is your Internet Service Provider (ISP); the second is your search engine (Google for example) and the third is the government.  
Your ISP watches your activity on the internet for two main reasons. The first is the security of their network; they want to ensure that what you are doing will not impact other customers. This is a very good reason, and we should all be glad they monitor for these types of activities. Without this level of monitoring, anyone with the knowledge could take over the data stream and block your ability to use the internet. The second reason is money. How do they make money off monitoring your activity? They share the information with advertisers for a fee.
How much information can your ISP gather? Surprisingly, they get a lot of information from watching simple things, but the good news is that they cannot get anything you do not share, or anything you share over a secure connection, at least not without breaking the law. On any site whose address begins with https instead of http, only the owner of the website can see the information shared on the site. This means your online banking is safe from your ISP, except for the fact that they can tell where you bank because they know you visited that particular banking site.
Even if you have the best firewall, anti-spyware software and tight local computer security, your ISP still knows every website you visit, how long you were on it, and how many different websites you visit in a given day. They know if you shop at Amazon, Walmart, Target or eBay. They know all this because of one single service they provide, called the Domain Name System (DNS). DNS is the telephone book for the internet: it takes the name you recognize and turns it into the numbers a computer understands to connect your computer to the website.
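You can watch DNS do its job with two lines of Python; this is the same name-to-number lookup your ISP performs, and can log, for every site you visit:

    import socket

    # Turn a name you recognize into the number a computer connects to.
    print(socket.gethostbyname("www.example.com"))  # e.g. 93.184.216.34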
They can get even more information if you do things like “turn on location services,” “share your relationship status on Facebook,” “post your phone number online” or “use social media sites, like Facebook.” Sometimes we accidentally share more information than we intend by failing to read the “Terms of Service” for things like Facebook. If you took the time to read the fine print, you probably would not want to continue using Facebook, or the internet at all; most of us just click “I Agree” so we can move on with our day.
Google, the leader in online search, is able to offer their services to you for free because they are really one of the world’s largest targeted advertising agencies. They gather personal information about you based on your search history. For example, if you search for home remedies for the common cold, you will very soon begin seeing advertisements for cold medications and herbal supplements at the side of your search screen. If you search for a part for your car, you will start seeing advertisements for auto parts stores and possibly car dealerships for your particular manufacturer. Using this information to provide targeted advertising is in the terms of service from Google.
Finally, the government tracks internet activity at all levels: state, federal and even local governments monitor internet activity to “serve and protect.” They gather all the same information your ISP gathers, and usually rely on the ISP to provide the information. Your computer is definitely not a “private” device, especially once you connect it to the internet.
There are a couple of simple things you can do to protect yourself. First, never share any information on a site that is not running on https. Second, you can sign up for a Virtual Private Network (VPN) service. A VPN encrypts all the requests coming from your computer and heading to the internet, so your ISP cannot tell anything about your connection. However, if you really want to make sure your information stays secure, don’t share it online in the first place. The only secure computing system is the one that is turned off and locked in a safe.

11.14.2019 Social Networks Review


This week I came across some alternatives to Facebook. As I am sure you all know, Facebook is the number one social network today. Being number one does not mean it is without problems. There have been several security issues with Facebook over the last few months, resulting in personal information from Facebook being used by major corporations to make money. Information stolen from Facebook has fueled several phishing schemes and the loss of millions of dollars by its users.
Not only has data been stolen from Facebook, but Facebook has also been caught selling user data to every corporation and organization that comes with an offer, including nefarious ones like the CIA and Cambridge Analytica. Facebook also owns two of the other most popular social media sites in the world, WhatsApp and Instagram.
As a result of these issues, many people have threatened to leave Facebook and have begun looking for alternatives. Looking at historic trends, they do leave Facebook for a while, but for some reason always come back. If you are looking for alternatives, read on, but fair warning: it will be difficult to transition from Facebook, and the top options lack many of the features you have come to depend on.
The top of the list is MeWe, “Like Facebook, but with privacy.” MeWe was engineered with privacy and freedom of speech as its core values. They built a revolutionary service where people can be their real, uncensored selves. MeWe is free of advertising and spyware, has no political biases or agendas, and comes with a promise, as part of its unique “Privacy Bill of Rights,” never to sell member data. MeWe does not gather facial recognition data or manipulate content or the newsfeed on your page in any way. MeWe members can also enjoy fully encrypted online chat, live video and voice calls, voice messaging and private cloud storage. MeWe displays every post, chat, comment, etc. made by your personal network in true timeline order with no interference.
Next on the list is Ello, “The Creators Network.” Ello is probably more like Pinterest than Facebook and was formed in 2014 for those who create or enjoy viewing artwork. Your feed on Ello is built around your interests rather than a friends list, and all your posts are public. You can comment on, like, repost, follow, buy, and sell content on Ello. Just like MeWe, Ello refuses to sell your data to advertisers or third parties and is ad-free. They also do not force you to use your real name, so you can remain untraceable.
The last one I will talk about this week is Diaspora. Diaspora is unique in the realm of social media platforms: it consists of many different networks, called pods, hosted on decentralized systems deployed by individuals in the open source community. It is also ad-free and focused on freedom and privacy. Like Facebook, you can post status updates, share content, and leave comments on posts. Diaspora encourages users to report offensive content, but content is only censored with the approval of the pod administrators. The cool part about Diaspora is that, since it is decentralized, it cannot be owned by any one individual or corporation. You can even operate your own pod, which acts as a server in the network and allows you to keep your data completely private. You can truly own your own data and control who can see it. You can also completely remove your own pod, leaving no trace of your content behind.

Wednesday, February 19, 2020

11.07.2019 Woodie Flowers, America’s engineer


Photo by Jake Ingram: Dr. Woodie Flowers gives his signature thumbs up at the 2006 FIRST Championship in Atlanta, Ga. Used under Creative Commons Attribution-Share Alike 2.5 license: https://creativecommons.org/licenses/by-sa/2.5/deed.en
This week I would like to take the time to honor a “hero” in engineering. Dr. Woodie Flowers, Professor of Mechanical Engineering at MIT and founder of the FIRST Robotics Competition, died October 12, 2019, following complications from aorta surgery. Flowers was one of the most well-known mechanical engineering professors in the world due to his unique approach to instruction.
Flowers began his career as an assistant professor at MIT, working with Herb Richardson on the “Introduction to Design and Manufacturing” class. The class featured a design competition where Flowers would give teams in the class a box of random parts and a goal of creating something useful. In 1974, when Flowers took over as lead professor of the course, it rapidly became the most popular course on the campus.
Flowers updated the competition each year, providing different components and different challenges. The challenges became increasingly difficult over the years, but students always rose to meet them. The competition became so exciting that PBS began broadcasting it on “Discover the World of Science,” and it was jokingly referred to as MIT’s true homecoming game. In 1987 Flowers handed the class over to Harry West and moved on to a more public role.
In 1990, “Discover the World of Science” changed its name to “Scientific American Frontiers” with Flowers as the host. He hosted the show until 1993, all the while working with Dean Kamen to form For Inspiration and Recognition of Science and Technology (FIRST), a project to inspire a culture celebrating science and technology.
Flowers introduced the phrase “gracious professionalism” to FIRST in 1992, and it has driven the culture behind the movement ever since. Flowers served as the National Advisor to FIRST from its inception and was inducted into the STEM Hall of Fame during the 2017 VEX Robotics World Championship.
Flowers was best known for his passion for sharing his expertise through experience and competition with students of all ages. The world needs more engineers with a passion for teaching the trade. His methods of hands-on, experiential learning changed the course of engineering instruction over the last three decades.

Tuesday, February 18, 2020

10.24.2019 Quantum Machine Learning


Last week we discussed machine learning and I promised to talk about ways to improve machine learning techniques. The latest of these is the application of quantum computing to machine learning algorithms. I know what you are thinking: “This article is about new technologies.” But you would be wrong.

Machine learning was first conceived before the modern computer. It was proposed in 1949 by Donald Hebb and based on interactions within the human brain. Hebb wrote, “When one cell repeatedly assists in firing another, the axon of the first cell develops synaptic knobs (or enlarges them if they already exist) in contact with the soma of the second cell.” Translating Hebb’s concept to machines, we narrow it down to a weighting mechanism between artificial neurons based on their relationships with each other.
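Hebb’s rule is simple enough to sketch in a few lines of Python. This toy version, my own illustration rather than anything from Hebb, strengthens the link between two artificial neurons whenever they fire together:

    def hebbian_update(weight, pre_active, post_active, rate=0.1):
        # "Cells that fire together wire together": grow the link
        # only when both neurons are active at the same time.
        if pre_active and post_active:
            weight += rate
        return weight

    w = 0.0
    for pre, post in [(1, 1), (1, 0), (1, 1), (0, 1), (1, 1)]:
        w = hebbian_update(w, pre, post)
    print(round(w, 2))  # 0.3 -- the link grew on the three co-firings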

In neural networks today we base the learning outcome on a scoring mechanism, where the score determines the decision. For example, in a self-driving car, a high score might mean “hit the brakes, there is a child in the road,” and a low score means “keep moving, everything is fine.” This scoring comes from a series of artificial neurons that add together the various weighted inputs of the system to arrive at a score.

The neural network is trained by feeding in a known set of training data and adjusting the weights of the links between the neurons until the expected score is output correctly for every item in the training set. The training requires adjusting every neural link many times and re-processing the entire training set after each change; as a result, training can take weeks. This is where quantum computing comes into play.
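Here is a toy sketch of that training loop for a single artificial neuron in Python. The data, weights and learning rate are invented for illustration; real networks repeat this same adjust-and-recheck cycle across millions of links, which is why training takes weeks.

    # Each item: (inputs, expected score). A high score means "hit the brakes."
    training_set = [
        ((1.0, 1.0), 1.0),
        ((0.0, 0.0), 0.0),
        ((1.0, 0.0), 1.0),
        ((0.0, 1.0), 0.0),
    ]

    weights = [0.0, 0.0]
    rate = 0.1

    for _ in range(100):  # re-process the whole set after every change
        for inputs, expected in training_set:
            score = sum(w * x for w, x in zip(weights, inputs))
            error = expected - score
            # Nudge each weight a little toward the expected score.
            weights = [w + rate * error * x for w, x in zip(weights, inputs)]

    print([round(w, 2) for w in weights])  # settles close to [1.0, 0.0]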

Quantum computing is also not so new. It was first conceptualized in the early 1980s by Paul Benioff, and very shortly after, Richard Feynman and Yuri Manin suggested that a quantum mechanical computer like the one Benioff proposed could perform calculations that are out of reach for classical computers. This power comes from a quantum computer’s ability to examine the results of multiple input scenarios simultaneously.

Quantum computers can determine the weighting of the links between the artificial neurons in a fraction of the time because of their ability to test all weights simultaneously, or all inputs simultaneously, depending on the exact learning method employed. The first method uses quantum bits (qubits) to represent the neuron weights, allowing you to test all the input data in sequence and pick the best weights for the training data. The second method represents the data with the qubits and tests all the possible weights in sequence. Which is faster depends on the size of the neural network and the training data: if your quantum system is large enough, you want to represent the larger of the two using qubits.

To make it a little easier to understand, let us think about a very simple test case with four neurons and a training data set with 32 values. With simple on/off weights, checking the four links means 16 (two to the fourth power) possible weight settings; testing each against all 32 input values makes 512 tests using classic machine learning. Because qubits can take all possible values at once, representing the four weights with four qubits lets us test every weight setting simultaneously and perform only 32 tests. If we instead represent the training data with the qubits, we can run all inputs at once and need only 16 tests. Bring these values up to current training model sizes and you begin to see the power: a modern quantum computer with 52 qubits can test two to the 52nd power (about 4.5 quadrillion) weight settings simultaneously, cutting the number of training steps for a 52-neuron network by that same enormous factor.
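The counting in that example is easy to verify in Python, assuming the simple on/off weights described above:

    from itertools import product

    n_links, n_inputs = 4, 32
    weight_settings = list(product([0, 1], repeat=n_links))

    print(len(weight_settings))             # 16 possible weight settings
    print(len(weight_settings) * n_inputs)  # 512 classical tests
    print(n_inputs)                         # 32 tests with weights on qubits
    print(len(weight_settings))             # 16 tests with inputs on qubits
    print(2 ** 52)                          # settings 52 qubits span at once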

Monday, February 17, 2020

10.17.2019 Machine Learning


A new buzzword in computer science in the last decade is machine learning. What is machine learning? Put as briefly as possible, it is a computer algorithm designed to learn by recognizing patterns in data, much like a child who hears a language over and over, recognizes its patterns, and learns to speak with proper grammar over time.
Ironically, one of the first major uses of machine learning was in natural language translation. Algorithms were developed to take large samples of several languages and learn how to put translated words in the correct order. It might seem like the computer is able to think for itself about grammar and languages, but this is not the case at all. It is simply able to copy from examples.
There are two stages of machine learning. The first stage is the training phase, during which an extremely large dataset is manually labeled by a person so the computer can learn. For example, in language translation, a person has to label each word in all the training data as a part of speech, i.e., noun, pronoun, verb, etc. This labeling allows the algorithm to recognize the order of the parts of speech in a sentence and correctly order the translated words. Training is a long and tedious process, and any word that is not classified will confuse the algorithm.
The second stage is making predictions. This is the part of machine learning you get exposed to every time you ask Google to translate a phrase, or see a product recommendation while shopping online.  The better you train the model, the smarter the predictions get. Google Translate retrains their translation models using feedback from users of the tool every few days, so it is constantly improving.
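Here is a minimal sketch of both stages using the scikit-learn library for Python; the tiny labeled dataset is invented for illustration, a stand-in for the enormous hand-labeled sets described above.

    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    # Stage one: train on examples a person has labeled by hand.
    texts = ["great product", "terrible service", "love it", "waste of money"]
    labels = ["positive", "negative", "positive", "negative"]

    vectorizer = CountVectorizer()
    model = MultinomialNB()
    model.fit(vectorizer.fit_transform(texts), labels)

    # Stage two: predict on text the model has never seen.
    print(model.predict(vectorizer.transform(["great service"])))  # ['positive']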
Training machine learning models requires large amounts of computing power and memory. In the case of Google Translate, the process has taken years to train the model accurately. Much faster training processes are needed before machine learning starts to look anything like human learning, which is the goal of artificial intelligence. Next week I will write about some of the new training methods that are forthcoming.