Friday, February 14, 2020

10.10.2019 Artificial Intelligence


What is artificial intelligence (AI)? The term is usually used to describe computers, machines, or software that mimic “cognitive” functions that humans associate with the human mind, among them “learning” and “problem solving.” The definition of AI has changed dramatically over the years.
As computers have become faster and more capable of simulating these cognitive functions, many of these functions have been removed from the definition of AI. A prime example of this is optical character recognition, which just a few years ago was considered an intelligent function for computers, mainly because the task was difficult.
Optical character recognition is able to take a picture of a page of text and turn it into text that can be edited in a word processing program or analyzed for content. Dr. Larry Tesler created a theorem of AI that is commonly quoted as “Artificial Intelligence is whatever hasn’t been done yet.” What he said was: “intelligence is whatever machines haven’t done yet.” In Tesler’s definition AI will constantly change because if it ever reaches a point that it can do anything we can do, it would then just be intelligence with nothing artificial about it.
The current technologies referred to as AI include human speech recognition, strategic game systems, autonomous vehicles, intelligent content delivery networks, and military training simulations. All of these will eventually become such mainstream technology that something new will define AI. 
There are currently three main fields of study in AI: data analytics, human-inspired intelligence, and humanized artificial intelligence. The most advanced of these three, in my opinion, is data analytics. Analytics AI learns from past experiences to inform future decisions, using cognitive intelligence to represent the world, much like a toddler learning to talk. Tasks that fall into this category include image recognition, “Is this a picture of a cat or a dog?”; text content classification, “Is this article for or against a flat tax?”; intelligent routing, “Which route should I take to get around the accident ahead?”; and product recommendation, “Since you bought the new iPhone, you might be interested in a case.” We have all experienced AI in our everyday lives, probably without even noticing it.
The area of human inspired intelligence makes an assumption that human intelligence “can be so precisely described that a machine can be made to simulate it.” This particular field brings about all kinds of ethical issues. Should we be attempting to create artificial beings endowed with human-like intelligence? There are several science fiction books and movies based on AI becoming a danger to humanity. This is the area of AI that is addressed in most of these stories. I believe the much larger risk of AI is the creation of mass unemployment, as machines are more and more capable of performing everyday human tasks.
The third area is humanized artificial intelligence. This is using AI to replace human characters in real life and in stories. Prime examples are the soldiers in the popular video game series “Call of Duty.” These soldiers make “intelligent” decisions based on the environment, including the interaction with human players. These humanized beings are also used in telephone support to remove the first level of contact. If you call a help desk, bank, or credit card company and a computer answers with a human-like voice asking, “What can I help you with today?” and gives the ability to answer in complete sentences, it is using humanized AI.
Is AI something we should be worried about, or something we should embrace? I leave that up to the reader.

Thursday, February 13, 2020

Safe Social Networking


Almost everyone is using social media, and most of us use more than one social network. I am personally on three: Facebook, LinkedIn, and Twitter. All three have their own advantages and disadvantages, and all three lack good automatic protection of your privacy and security. This week I want to provide some advice on how to safely use these networks and still protect your personal data.
First and most important: always remember, even if you believe you are on an anonymous site, that once something is posted or messaged, it cannot be removed. Once something is on the internet, it is always there and never goes away. For example, http://web.archive.org/ still holds personal websites archived as far back as May 12, 1996, including pages from infoseek.com, an early web portal that hosted personal pages. Infoseek was bought by Disney and dissolved in 1999, but all that personal information is still online at archive.org. That being said, make sure you don’t post anything to social media that could hurt your reputation, because it will never go away. Your online reputation can be a good thing or a bad thing; it is your choice, depending on what you post and how you act online.
Second, make sure you keep your personal information personal. The more information you post about yourself, the easier it is for someone to steal your identity, access your data, or commit other crimes using your online profile. My favorite example is a Facebook post of a family photo at Christmas; in the background is a whiteboard with the family’s Amazon password written in large letters. If the family’s e-mail addresses are listed as public on their Facebook profiles, it is just a matter of guessing who manages the Amazon account, and an attacker can log on as that user and make Amazon purchases.
Third, make sure you know your online friends. It is very easy to create a fake profile for a person who does not exist and use it to gather personal information on others, or worse yet, to create a profile using a real name and photo to trick someone into sharing personal information. Never accept a friend request from a stranger, or from someone you believe you are already friends with, without confirming by some other means that they have created a new account. Even the local police force uses fake identities on Facebook to gather information about criminal activities, so be careful who you friend and what you share.
The rest of my advice has to do with protecting your accounts and your computer from malicious people and software. Number one: keep your security software up to date. It does no good to install antivirus software and never apply the updates; new viruses are released nearly every day. It is important to protect your computer because, with the right software, all your passwords and personal information can be recovered from it.
Number two: protect your accounts with good passwords. Just to give you an idea, a six-character password can be guessed by a hacker using simple software in under 15 minutes, an 11-character password with the same software will take about 10 years, and anything longer than 12 characters will take over 200 years to guess. The strongest passwords that are easy to remember are sentences or phrases. I personally like to use Bible verses. For example, John 3:16 can become a password by taking the first letter of each word: FGsltwthghobstwbihsnpbhelJn3:16. It is easy to remember, fairly easy to type, but nearly impossible to guess.
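As a small illustration of the first-letter trick, here is a Python sketch; the function name and the optional suffix argument are my own, purely for illustration:

```python
def phrase_to_password(phrase, suffix=""):
    """Take the first letter of each word, then append an optional suffix."""
    letters = "".join(word[0] for word in phrase.split())
    return letters + suffix

# A short phrase becomes an unguessable-looking string:
print(phrase_to_password("For God so loved the world", "Jn3:16"))  # FGsltwJn3:16
```

The exact capitalization of the result depends on the wording of the verse in your Bible translation.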
Number three: never use the same username and password combination on more than one website. For example, don’t use the same login information on both Twitter and Facebook, or use your Gmail password for Facebook. It is highly likely that if one of your accounts gets hacked, the hacker will attempt the same email and password on all the other social media sites. Also, I do not recommend keeping passwords written in a notebook, unless you have a safe and keep it locked up all the time. It is better to install password keeper software on your computer or smartphone and keep your passwords encrypted there behind a very strong master password. You can get nice, free password safe software at http://keepass.info. Until next week, stay safe online.




Wednesday, February 12, 2020

10.05.2019 Photocopiers

Artwork courtesy of U.S. Patent and Trademark Office: Chester Carlson’s 
original photocopier from his patent granted on Sept. 12, 1944.
This week we had some issues with one of the copiers at the news office, and it brought up some questions about how copiers work, and why sometimes they don’t. The photocopier has been around longer than you might believe. Chester Carlson used his kitchen for “electrophotography” experiments and, in 1938, applied for a patent on the process. He used a zinc plate covered with sulfur to produce the first photocopy.
Carlson attempted to sell his patent to 20 different companies from 1939 to 1944, before Battelle Memorial Institute assisted him in refining the process. In 1947, Haloid Corporation obtained the first license to develop, produce, and market a copying machine based on this technology. Haloid felt the word “electrophotography” was too complicated and hard to remember, so they changed the term to xerography, which is Greek for “dry writing.” The new machines were called “Xerox Machines,” and in 1948, the word “Xerox” was trademarked. Haloid eventually changed its name to Xerox Corporation.
Enough about the history; now let’s get into how this thing works. Xerography works off basic principles of light and electricity. The first thing you must understand to grasp how photocopiers work is, “Light is a kind of electricity.” Light is just an electromagnetic wave traveling through space. Inside a photocopier is a device that allows current to flow through it when light shines on it. This device is called a photoconductor, and it is used to capture the pattern of the light as a pattern of static electricity. Areas kept dark hold their electrical charge, while areas struck by light lose it, making an electrical copy of the page.
The numbers in this section refer to the numbers in the drawing. Suppose you are trying to copy a page of this newspaper. If you shine an extremely bright light (52) at the paper (90), at the perfect angle, you will create a shadow of the paper on another object (41). In the case of a photocopier, that shadow is created on a photoconductive drum (41). The drum is then sprinkled with powdered ink which sticks to the charged areas on the drum. A piece of blank paper is pressed against the drum, transferring the ink powder to the paper (76). The paper now has a copy of the original document.
There is one more step in the process. The powdered ink must be bonded to the paper. This happens with combined heat and pressure in the fuser unit (72) of the copier. Sometimes the still warm copy comes out of the copier with enough static electric charge still on it that it will stick to your shirt, or the wall.  A very interesting thing happens when a fuser unit fails on a copier. You will see a perfect copy come out of the paper tray, and as soon as you pick it up, all the ink falls off onto the floor as powder. 
If you are looking for a full explanation of how the process works, you can read Carlson’s full patent; though technical in nature, it is fairly easy to understand. You can find his patent at https://patents.google.com/patent/US2297691.

Tuesday, February 11, 2020

09.26.2019 E=mc^2 celebrates 114 years


Sept. 27, 1905, marked the publication date of the world’s most famous equation. The physics journal Annalen der Physik published Albert Einstein’s paper, “Does the Inertia of a Body Depend Upon Its Energy Content?” where E=mc^2 was first introduced. Energy is equal to the mass of a particle times the square of the speed of light. One of the simplest equations to write has some of the most profound meanings. In honor of that publication, I want to point out five lessons we can take from this simple equation.

The first lesson is that “mass is not conserved.” We often make the mistake of thinking that mass never changes. For example, if you take a block of iron and chop it up into a bunch of iron atoms, you fully expect that the mass of all the atoms will equal the mass of the block. That assumption seems obviously true, but it holds only if mass is conserved. According to this equation, mass is not conserved at all. If you take an iron atom, containing 26 protons, 30 neutrons, and 26 electrons, and place it on a scale, you’ll find something disturbing. The atom weighs slightly less than the sum of its parts; in particular, the nucleus weighs noticeably less than the 26 protons and 30 neutrons that compose it. This is true because mass is just another form of energy, and the energy required to hold the nucleus together reduces the mass of the parts.
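To put rough numbers on the iron example, the sketch below compares 26 free protons and 30 free neutrons with the iron-56 nucleus; the constants are approximate textbook values in atomic mass units, rounded for illustration:

```python
PROTON_U = 1.007276          # proton mass, atomic mass units (u)
NEUTRON_U = 1.008665         # neutron mass, u
FE56_NUCLEUS_U = 55.9206     # iron-56 nuclear mass, u (atomic mass minus electrons)
U_TO_MEV = 931.494           # energy equivalent of 1 u, in MeV

parts = 26 * PROTON_U + 30 * NEUTRON_U     # mass of the separate pieces
defect_u = parts - FE56_NUCLEUS_U          # the "missing" mass
binding_mev = defect_u * U_TO_MEV          # energy holding the nucleus together
print(f"mass defect: {defect_u:.3f} u, binding energy: ~{binding_mev:.0f} MeV")
```

Roughly half a percent of the mass of the parts disappears into the binding energy of the nucleus.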

The second lesson is the law of conservation of energy: energy is conserved, but only if you account for the changing masses. Imagine the Earth orbiting the Sun at 30 kilometers per second, roughly the speed it needs to maintain its orbit 150 million kilometers from the Sun. If you could weigh the Earth and then weigh the Sun, you would find the total weight of the Earth and the Sun measured separately is very slightly greater than the weight of the bound Earth and Sun in orbital motion. This is because the gravitational energy holding them in orbit reduces the mass of the system. The tighter the orbit, the greater the binding energy and the lower the mass of the combined system. Protons and neutrons bind together in the nucleus of an atom far more tightly, producing a noticeably lighter nucleus and emitting a lot of high-energy photons in the process. This nuclear fusion process can create extreme amounts of energy.

Third, Einstein’s E=mc^2 describes why the Sun and stars shine. Inside the core of the Sun, temperatures rise to roughly 15 million kelvins, allowing the nuclear reactions that power the Sun to take place. Protons are fused together, forming a deuteron and emitting a positron and a neutrino to conserve charge and energy. This process eventually creates helium-4, which weighs in at only 99.3 percent of the mass of the four protons used to create it. Each completed chain releases about 27 million electron volts (MeV) of energy. Over its lifetime, the Sun has lost approximately the mass of Saturn to the nuclear fusion in its core.
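The 99.3 percent figure can be checked with approximate textbook values for the particle masses (the constants here are assumptions, rounded for illustration):

```python
PROTON_U = 1.007276        # proton mass, atomic mass units (u)
HE4_NUCLEUS_U = 4.001506   # helium-4 nuclear mass, u

# Fraction of the original four protons' mass that survives as helium-4:
ratio = HE4_NUCLEUS_U / (4 * PROTON_U)
print(f"helium-4 is {ratio * 100:.1f}% of the mass of four protons")  # 99.3%
```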

Fourth, the conversion of mass to energy is the most energy-efficient process in the universe: one hundred percent of the mass is converted directly to energy. Looking closely at the equation, you can see that it tells you exactly how much energy you will get out of the system. For every kilogram of mass you convert, you get about 9 x 10^16 joules of energy, roughly the yield of a 21-megaton bomb.
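The arithmetic behind that claim is a one-liner; the megaton figure assumes the conventional 4.184 x 10^15 joules per megaton of TNT:

```python
C = 299_792_458  # speed of light in meters per second

def mass_to_energy_joules(mass_kg):
    """E = mc^2: energy released by converting the given mass entirely."""
    return mass_kg * C ** 2

energy = mass_to_energy_joules(1.0)
megatons = energy / 4.184e15   # joules per megaton of TNT
print(f"1 kg -> {energy:.2e} J, about {megatons:.0f} megatons")
```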

Lastly, you can create massive particles out of nothing but pure energy. This is probably the most profound discovery and the hardest to explain. If you take two rubber balls and smash them into each other, you expect one result: two rubber balls. With particles like electrons, though, things change. If you smash two electrons together, you will still get two electrons, but if you smash them together hard enough, you can also create a new matter-antimatter pair of particles, effectively creating new mass from the energy involved in the collision. Mass can be converted to energy and back again.

Monday, February 10, 2020

New Posts Coming

Hello everyone, it has been a while since I posted my tech column on the blog, so over the next several days you will see a new post each day that starts with a date. The date is the run date of the original article in The Licking News. I want to make them available to you, but felt flooding you with three months of posts in a day would be overwhelming. The post without a date on Thursday is the one being published in print that Thursday.
Thanks for reading.
Scott

9.23.2019 Quantum Supremacy


  If you follow technology news at all, you will have noticed over the weekend many articles talking about Google reaching Quantum Supremacy.  This week I would like to talk a little about what that means. First you need to understand a little bit about Quantum Computing and Classic Computing.
Classic Computing is what runs your smart phone, The Licking News website, Google’s Search, and the word processor I am using to write this article. Classic computing works on the concept of a bit, which for lack of a better example is like a light switch; it is either on or off. Everything a computer does is based on the conditions of millions of these simple on/off switches.

  Quantum Computing is based on a Quantum Bit, or Qubit.  A Qubit is a strange thing that exists in a state somewhere between on and off, called superposition. You could think of it as a dimmer switch that lets you adjust the brightness of the light in your room. Qubits can also be entangled, which is like wrapping a rubber band around two dimmer switches so that both lights act together. These two properties make it possible to solve extremely large computational problems very fast. A quantum computer with just 40 qubits can exist in a superposition of 2^40, more than a trillion, states at once, far more than a classic computer can track bit for bit.
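The comparison between qubits and classic bits comes from the fact that fully describing an n-qubit superposition takes 2^n amplitudes, one for each possible on/off pattern of the bits; a quick calculation:

```python
def basis_states(qubits):
    """Number of on/off patterns an n-qubit register can hold in superposition."""
    return 2 ** qubits

for n in (1, 10, 40):
    print(f"{n:>2} qubits -> {basis_states(n):,} states")
# 40 qubits -> 1,099,511,627,776 states (over a trillion)
```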
Quantum Supremacy is loosely defined as the point in time when a quantum device solves a problem that cannot be solved in any practical amount of time by a classic computer of any size. A problem exists with this definition: because computers continue to get faster, the target of Quantum Supremacy is always moving. Three years ago it was thought that a 40-qubit system would reach supremacy; now that has climbed to over 50 qubits.

  There has been very little information released on what the “problem” was that Google solved late last week with their Quantum processor, resulting in the claim of being the first to reach quantum supremacy. A paper, titled “Quantum Supremacy Using a Programmable Superconducting Processor,” was briefly published to NASA’s website but was taken down within a few hours. The paper described how Google’s processor completed this unknown task in 200 seconds, a task it estimated would take a state-of-the-art classical supercomputer 10,000 years to perform.

  “This dramatic speedup relative to all known classical algorithms provides an experimental realization of quantum supremacy on a computational task and heralds the advent of a much-anticipated computing paradigm,” the paper states. “To our knowledge, the experiment marks the first computation that can only be performed on a quantum processor.”

  Mainstream technology press organizations have been reaching out to Google for the last several days asking for comments on the supposedly leaked paper, but as of yet there is no comment from Google. I for one am interested to see the final outcome and if quantum computers have truly surpassed our current technology.

Thursday, February 6, 2020

Impacting Technologies of the 2010’s


I always find it interesting to review technology developments at the end of a decade.  This week I plan on not only listing, but talking a little bit about the top new technological developments in the last decade. The years between 2010 and 2020 brought about some of the most amazing technology in history.
For the first time in history we have a generation of people that never knew life without a computer or the internet.  Everyone under the age of thirty has probably had access to a computer of some kind for their entire lives.  Over the last decade the computer has become not only a device that most people have access to, but a device that most of us depend on for everyday life.
The new technology has been surprising, with advancements in artificial intelligence, autonomous vehicles, robotic lawyers, and real-time language translation, just to name a few.  The technologies in this article are ones that I feel we will still be using and talking about well into the next decade.
In the 2000s, Facebook started the social media trend and held the top of the social networks until the early 2010s.  In 2010, Instagram launched and became the most popular network among Gen-Zers, mainly due to the large influx of the older generation onto Facebook, bringing with them new media, marketing, and politics.  Instagram became the preferred method for sharing personal lives on social networks. In 2012, Facebook purchased Instagram, and the network has grown to over a billion registered users.
In 2011, Spotify took the market by storm offering the largest collection of music for streaming on demand. This brought about a massive decline in online music piracy.  With free services that stream all your favorite music and collect revenue through advertising to pay the music producers, the need to pirate music dropped tremendously.
In 2012 came the introduction of the Tesla Model S electric car.  This seemed like a major release in transportation technology, but the most impactful change wasn’t a new car; it was ride sharing through Uber.  Uber rolled out its ride-sharing service across every major U.S. city and around the world, fundamentally changing how people get around. Lyft also launched its ride-sharing service in 2012, making this the year of shared transportation.
In 2013, Chromecast came on the scene, allowing people to stream video services to any device with an HDMI port.  Chromecast is not all that useful today, as the technology is integrated into almost every device with a screen, but it was a top-selling product in 2013.
2014 was the year of the smartwatch, with Apple’s announcement of the Apple Watch (released in early 2015), which in all respects was an iPad in a tiny watch form factor.  The first model had all kinds of issues, but as Apple worked to resolve them, it has become the best smartwatch on the market today.
Amazon Echo started the smart speaker trend in 2015 as online voice assistants moved from the phone to the home. This device incorporated Bluetooth technology as well as machine learning and speech recognition.  The Echo held the market share in smart speakers until Google released a competing device, Google Home.
AirPods came on the scene in 2016, releasing us from wired earbuds and allowing freedom to move away from our audio devices. There was worry about them getting lost easily when they were first released, but the freedom they give the user has greatly decreased that fear, and they are now nearly as popular as wired earbuds.
The Nintendo Switch changed the gaming console forever, with the ability to be used both as a stationary game system tied to a large-screen TV and as a portable system you can take along on the road.  The main unit includes its own screen that allows game play to continue away from the TV. The release of the Switch in 2017 brought a new approach to gaming hardware.
2018 was the year of the shareable electric scooter, which has seemed to become a permanent fixture in many major cities.  The scooters have had an impact on city ordinances and have been banned in some cities.  Where that legislation passed, the technology lost its staying power, but the tech behind vehicle sharing has spread to the sharing of electric cars in a similar manner across several U.S. cities.
Last, but not least, is the rise of TikTok in 2019. As Gen Z kids break into adulthood, this is the platform most likely to become the main method of communication among their peers.  This short-video sharing service is currently a top contributor to internet content storage, by some estimates accounting for close to 15 percent of all the data on the internet today, and it is expected to grow well beyond that within the next couple of years.

Thursday, September 19, 2019

High Performance Computing


High Performance Computing (HPC) consists of two main types of computing platforms. There are shared memory platforms that run a single operating system and act as a single computer, where each processor in the system has access to all of the memory. The largest of these available on the market today are Atos’ BullSequana S1600 and HPE’s Superdome, which max out at 16 processor sockets and 24TB of memory. Coming out later this year, the BullSequana S3200 will supply 32 processor sockets and 48TB of memory.

The other type of HPC is called a distributed memory system and it links multiple computers together by a software stack that allows the separate systems to pass memory from one to another, utilizing a message passing library. These systems first came about to replace the expensive shared memory systems with commodity computer hardware, like your laptop or desktop computer. Standards for how to share the memory through message passing were first developed about three decades ago and formed a new computing industry. These systems made a shift from commodity hardware to specialized platforms about twenty years ago with companies like Cray, Bull (now a part of Atos), and SGI leading the pack.
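As a toy illustration of the message-passing idea, the sketch below uses Python threads and queues standing in for compute nodes and messages; real distributed-memory systems use a message passing library such as MPI, and none of the names here come from that standard:

```python
import threading
import queue

def worker(inbox, outbox):
    """A stand-in for a compute node: receive messages, send results back."""
    while True:
        msg = inbox.get()
        if msg is None:          # shutdown signal
            break
        outbox.put(msg * msg)    # do some "work" on the message

def scatter_and_gather(values):
    """Send each value to the worker and collect the results in order."""
    inbox, outbox = queue.Queue(), queue.Queue()
    node = threading.Thread(target=worker, args=(inbox, outbox))
    node.start()
    for v in values:
        inbox.put(v)
    results = [outbox.get() for _ in values]
    inbox.put(None)
    node.join()
    return results

print(scatter_and_gather([1, 2, 3, 4]))  # [1, 4, 9, 16]
```

In a real MPI program the "nodes" are separate machines with separate memory, and the send and receive calls move data over a high-speed network instead of an in-process queue.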

Today the main specialized hardware manufacturers are Atos, with their Direct Liquid Cooled Sequana XH2000, and HPE, with a conglomerate of technologies from the acquisition of both SGI and Cray in the last few years. It is unclear in the industry which product lines will be kept through the mergers. HPC used to be purely a scientific research platform used by the likes of NASA and university research programs, but in recent years it has made a shift to being used in nearly every industry, from movies, fashion and toiletries to planes, trains, and automobiles.

The newest use cases for HPC today are in data analytics, machine learning, and artificial intelligence. However, I would say the leading use case for HPC worldwide is still in the fields of computational chemistry and computational fluid dynamics. Computational fluid dynamics studies how fluids or materials move or flow through mechanical systems. It is used to model things like how a detergent pours out of the bottle, how a diaper absorbs liquids, and how air flows through a jet turbine for optimal performance. Computational chemistry uses computers to simulate molecular structures and chemical reaction processes, effectively replacing large chemical testing laboratories with large computing platforms.

The two latest innovations causing a shift in HPC are cloud computing (Google Cloud Services, Amazon Web Services, Microsoft Azure) and quantum computing, the next generation of computers, which is still under development and not likely to be readily available for ten years or more.

If you are interested in learning more about HPC, there are a couple of great places to start. The University of Oklahoma hosts a conference that is free and open to the public every September. This year the event is being held on Sept. 24 and 25; more information about the event can be found at http://www.oscer.ou.edu/Symposium2019/agenda.html. There is also professional training available from the Linux Cluster Institute (http://www.linuxclustersinstitute.org). Scott Hamilton, Senior Expert in HPC at Atos, has also published a book on the Message Passing Interface designed for beginners in the field. It is available on Amazon through his author page (http://amazon.com/author/techshepherd). There are also several free resources online; search for HPC or MPI.

Thursday, September 12, 2019

Big Data and data structures


There is a recent term in computer science, “Big Data,” which has a very loose definition and causes a lot of confusion within the industry. Big Data has been in existence since before computers existed. A perfect example of Big Data is ancient history that was recorded on scrolls. A scroll could only hold so much information before it was full and could hold no more. A single scroll was not too much to handle and carry around, but the amount of recorded data quickly expanded to hundreds and thousands of scrolls. This is what we refer to as Big Data, and we are still trying to come up with a solution to handle data that grows too large to be managed easily.

Most companies have an entire division dedicated to the management of data, and they have a big issue to face, as we now produce and gather more data in a single day than was collected for all of history prior to the computer age. Big Data is not a problem that will just go away, and one of the ways we have begun to manage this landslide of data is to form tighter data structures.

I know, I have now introduced another new term to talk about an old problem. Don’t worry, data structures are easy. Remember the scroll: it had a linear data structure; things were recorded on the scroll in the order that they happened and stored as characters of a written language. This is a very loose structure, usually referred to as unstructured data, because you can write anything on a scroll. To have real data structure, you need a set format for recording the data. A great example of a data structure you have all seen is your federal income tax return form. It provides a set number of blocks to record your information, and the form is rejected if you go outside of the boundaries.  This is a data structure in paper format.

So how do data structures help to manage Big Data? The biggest way is by keeping the data in a known order, with a known size and known fields. For example, you might want to keep an address book; it would have all your friends’ names, addresses, phone numbers, and birthdays. What if you just started writing your friends’ information on a blank sheet of paper in a random order?

Bill, 9/1/73, 123 Main Street, Smith, MO, Licking, 65462, John Licking, Stevens, MO, 4/23/85, 573-414-5555, 65462, 573-341-5565, 123 Cedar Street. 

It would become quickly impossible to find anyone’s contact information in your address book, and even with the two friends in my example, you already have a Big Data problem; we don’t know what information belongs together.

If we take the same two people and provide a structure for the data, it suddenly becomes much more usable. 

Bill Smith, 123 Main Street, Licking, MO 65462, 573-414-5555, 9/1/73; John Stevens, 123 Cedar Street, Licking, MO 65462, 573-341-5565, 4/23/85. 

It is still not easily readable by a computer: there is a known order and a field separator, the comma, but there is no known length, which complicates things for computer software. A computer likes to store data structures of a known length, so you need to define a size for each data field and a character to represent empty space. In my example we will use 15 characters for every field, and ^ will represent an empty space.

Our address book now looks like this:

Bill Smith^^^^^
123 Main Street
Licking,_MO^^^^
65462^^^^^^^^^^
573-414-5555^^^
09/01/1973^^^^^
John Stevens^^^
123 Cedar Stree
Licking,_MO^^^^
65462^^^^^^^^^^
573-341-5565^^^
04/23/1985^^^^^
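Here is a sketch of how software might pack and unpack records in this fixed-width layout; the field names, the 15-character width, and the ^ padding character follow the example above, while the function names are my own:

```python
FIELD_WIDTH = 15
FIELDS = ["name", "street", "city", "zip", "phone", "birthday"]
PAD = "^"

def pack(record):
    """Truncate or pad each field to exactly FIELD_WIDTH characters."""
    return "".join(record[f][:FIELD_WIDTH].ljust(FIELD_WIDTH, PAD) for f in FIELDS)

def unpack(raw):
    """Slice the fixed-width string back into named fields."""
    chunks = [raw[i * FIELD_WIDTH:(i + 1) * FIELD_WIDTH] for i in range(len(FIELDS))]
    return dict(zip(FIELDS, (c.rstrip(PAD) for c in chunks)))

bill = {"name": "Bill Smith", "street": "123 Main Street", "city": "Licking, MO",
        "zip": "65462", "phone": "573-414-5555", "birthday": "09/01/1973"}
print(unpack(pack(bill))["phone"])  # 573-414-5555
```

Notice in the example above that “123 Cedar Street” is one character too long for its field; fixed-width formats silently truncate, which is one of their trade-offs.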


Thursday, September 5, 2019

Blockchain outstanding questions


Last week we left a few unanswered questions while talking about blockchain technologies. I would like to address those unanswered questions this week. The first one that comes to mind is the problem of double spending. Double spending can occur with a digital currency or any digital payment processing system, unless all payments are authorized by a single, central authority. Blockchain does not have a single central authority so in the early stages of development, the main problem that had to be addressed was the ability for the same currency to be used in two transactions simultaneously, resulting in a double spend.

The double spend problem was solved by only allowing a single path in the chain, and making each link depend on the hash code of the prior link. If two efforts to spend on the same chain occurred at exactly the same time, only one transaction would be processed. The other would have an invalid hash code linking to the previous transaction and be ruled invalid. This solution raised the second question, “What is a hash?”

A hash is created by a computational function, called a hash function. A hash function maps data of any size to a fixed size value. There are three basic rules to a hash function. The first is that each time you encode, or hash, the data you get the same results. The second is that small changes, even a single character change in the data must result in a different hash. The third is that a hash function cannot be reversed, meaning that you cannot use the hash to recreate the original data. Two pieces of data with a large difference can result in the same hash. An example of a trivial hash function is a function that maps names to a two-digit number. John Smith is 02, Lisa Smith is 01, Sam Doe is 04, and Sandra Dee is also 02. We won’t get into exactly how the mapping takes place, because it is a very advanced topic. All we need to know to understand how a hash function works is that it maps input data to a given set of specific values, like the example maps names to numbers between 00 and 99.
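The first two rules are easy to demonstrate with SHA-256, a real hash function available in Python’s standard library:

```python
import hashlib

def h(text):
    """SHA-256 hash of a string, as a hex digest."""
    return hashlib.sha256(text.encode()).hexdigest()

print(h("hello") == h("hello"))  # True: same input always gives the same hash
print(h("hello") == h("Hello"))  # False: one changed character alters the whole hash
print(h("hello")[:16])           # first 16 hex digits of the 64-digit digest
```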

We mentioned that the hash function is used to tie the links in the blockchain together. The links form a Merkle tree. In cryptography and computer science, a Merkle tree is a data structure that links data in a single direction, from leaf to parent. Each leaf node is labeled with the hash of the data it contains, and every parent node is labeled with the cryptographic hash of the labels of its child nodes. Merkle trees allow for efficient and secure verification of the contents of large data structures, like the transactional databases used in blockchains.
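A minimal sketch of computing a Merkle root over a list of transactions; duplicating the last node on odd-sized levels is one common convention, not the only one:

```python
import hashlib

def h(data):
    return hashlib.sha256(data.encode()).hexdigest()

def merkle_root(leaves):
    """Hash the leaves, then repeatedly hash pairs together until one root remains."""
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:                 # odd number of nodes:
            level.append(level[-1])        # duplicate the last one
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

txs = ["A pays B 1", "B pays C 2", "C pays D 3", "D pays A 4"]
# Changing any single leaf changes the root, so one hash verifies the whole set:
print(merkle_root(txs) != merkle_root(["A pays B 100"] + txs[1:]))  # True
```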

Merkle trees are named for Ralph Merkle, who patented the technology in 1979. They are used to verify any stored data that is handled and transferred in and between computers. They ensure that data blocks are not changed by other peers in a peer-to-peer network, either by accidental corruption or by fake blocks created by malicious systems on the network. This makes it difficult, but not impossible, to introduce fake data into a blockchain, as it requires creating a data block that matches the hash of the block you are replacing, in effect corrupting the tree. However, generating the fake data block is a time-consuming process, unlikely to be completed before the next real block is generated, which makes injecting the change practically impossible.
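The hash-linking described above can be sketched in a few lines; this toy chain (the field names are my own, not from any real blockchain implementation) shows how tampering with one transaction breaks the links:

```python
import hashlib

def make_block(prev_hash, data):
    """A block stores its data, the previous block's hash, and its own hash."""
    digest = hashlib.sha256((prev_hash + data).encode()).hexdigest()
    return {"prev": prev_hash, "data": data, "hash": digest}

def chain_is_valid(chain):
    """Recompute every hash and check each block points at its predecessor."""
    for prev, block in zip(chain, chain[1:]):
        expected = hashlib.sha256((block["prev"] + block["data"]).encode()).hexdigest()
        if block["prev"] != prev["hash"] or block["hash"] != expected:
            return False
    return True

genesis = make_block("0" * 64, "genesis")
chain = [genesis, make_block(genesis["hash"], "Alice pays Bob 5")]
print(chain_is_valid(chain))             # True
chain[1]["data"] = "Alice pays Bob 500"  # tamper with a recorded transaction
print(chain_is_valid(chain))             # False: stored hash no longer matches
```

In a real blockchain, recomputing a valid replacement hash is made expensive on purpose, which is what makes tampering impractical in the time available.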

Join me again next week for an overview of data-structures and their applications.