Thursday, May 16, 2019

Celebrating Numbers


  We like to use numbers to represent things in the natural world. I believe this is because we can manipulate, modify, and understand things once they are put into numbers. We grow up learning to count in order to share toys, candy, and turns in games. We use numbers every day. As the character Charlie Eppes in the American crime drama television series NUMB3RS so eloquently states, “Math is nature’s language: its method of communicating directly with us. Everything is numbers.”
  Many of you have probably heard of Pi Day, March 14, because the date 3.14 matches the constant known as pi, the ratio between the circumference and diameter of a circle. This week I thought about how other mathematical constants in our world get ignored. I figured, why not talk about some of the lesser-known constants and pick one to celebrate?
  For our new math celebration, I would like to recommend Bohr Day, May 29. I know, now you are all wondering: what is a Bohr? Well, for those who are not into nuclear physics or computational chemistry, it’s probably a real bore. However, to me it is fascinating.
Illustration by Stephen Lower, https://chem.libretexts.org

  Niels Henrik David Bohr, a Danish physicist, made foundational contributions to our understanding of atomic structure. He received the Nobel Prize in Physics in 1922 for his work. He was best known for the Bohr model of the atom, in which he proposed that the energy levels of electrons are discrete, confining them to stable orbits around the atomic nucleus while still allowing them to jump from one orbit (or energy level) to another.
  Bohr’s model is no longer the accepted atomic model, but many of its principles remain valid, including his theoretical value of the Bohr radius (5.29 x 10^-11 meters), which describes the distance of hydrogen’s electron from the nucleus. The Bohr model, proposed in 1913, states that electrons orbit only at set distances from the nucleus, depending on their energy. In the simplest atom, hydrogen, a single electron orbits the nucleus, and its smallest possible orbit, the one with the lowest energy, has a radius almost equal to the Bohr radius.
  Although Bohr’s model is no longer in use, the Bohr radius remains very useful in atomic physics calculations, mostly because of its simple relationship to other fundamental constants. It is the unit of length in atomic units; just as we use inches, feet, and miles to measure length in daily life, at the atomic level the Bohr radius is like the inch.
  The Bohr radius is one of three units of length used in atomic physics, the other two being the Compton wavelength of the electron and the classical electron radius. The Bohr radius is calculated from the electron mass, Planck’s constant, and the electron charge. The Compton wavelength is built from the electron mass, Planck’s constant, and the speed of light. The classical electron radius is built from the electron mass, the speed of light, and the electron charge. Any of the three can be converted to the others using the fine-structure constant. Interestingly, the Compton wavelength is about 20 times smaller than the Bohr radius, and the classical electron radius is about 1,000 times smaller than the Compton wavelength.
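For readers who want to check these relationships themselves, here is a small Python sketch that computes all three lengths from the fundamental constants named above. The constant values are standard CODATA figures, rounded:

```python
# Computing the three atomic length scales from fundamental constants.
import math

h_bar = 1.054571817e-34   # reduced Planck constant, J*s
m_e   = 9.1093837015e-31  # electron mass, kg
e     = 1.602176634e-19   # elementary charge, C
c     = 2.99792458e8      # speed of light, m/s
eps0  = 8.8541878128e-12  # vacuum permittivity, F/m

# Bohr radius: a0 = 4*pi*eps0*hbar^2 / (m_e * e^2)
a0 = 4 * math.pi * eps0 * h_bar**2 / (m_e * e**2)

# Compton wavelength: lam_C = h / (m_e * c)
lam_C = (2 * math.pi * h_bar) / (m_e * c)

# Classical electron radius: r_e = e^2 / (4*pi*eps0 * m_e * c^2)
r_e = e**2 / (4 * math.pi * eps0 * m_e * c**2)

print(f"Bohr radius:               {a0:.4e} m")    # ~5.29e-11 m
print(f"Compton wavelength:        {lam_C:.4e} m") # ~2.43e-12 m
print(f"Classical electron radius: {r_e:.4e} m")   # ~2.82e-15 m

# The fine-structure constant (~1/137) links the three scales:
# lam_C / (2*pi) = alpha * a0, and r_e = alpha^2 * a0.
alpha = e**2 / (4 * math.pi * eps0 * h_bar * c)
print(f"alpha = {alpha:.6f}")
```

Running it confirms the ratios above: the Compton wavelength comes out roughly 20 times smaller than the Bohr radius, and the classical electron radius roughly 1,000 times smaller than the Compton wavelength.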

Thursday, May 9, 2019

Batteries Past, Present, and Future

    As we learned over the last couple of weeks, what we see as new technology in electric generation is not that new. Guess what? The battery is even older than the means of generating electricity. We think of a battery today as a way of storing excess electrical power from things like solar panels and wind turbines, but the battery used to be the only source of electrical power. 
Photo By Ironie - Own work,
CC BY-SA 2.5, https://commons.wikimedia.org/
w/index.php?curid=2091669
 Illustration of the
Baghdad Battery, earliest discovery of a battery
in history.
    The first possible batteries, the “Baghdad batteries,” were discovered during an archeological dig just outside present-day Baghdad, Iraq. They were clay jars about five inches long, containing an iron rod encased in copper. There was evidence of acidic substances having been stored in the jars, leading Wilhelm Konig, who discovered the jars, to believe they were batteries. Since the discovery, replicas have been made and have proven able to generate electricity. The jars have been dated to around 200 B.C. We do not really know what they were used for, but other discoveries suggest they may have been used for electroplating, a method of using electrical current to coat one type of metal with another.
    What I find surprising is that batteries did not reappear in history until 1799, around 2,000 years after these first batteries would have been created. Setting the ancient jars aside, Alessandro Volta created the first battery in 1799 by stacking layers of zinc, cloth, and silver in a brine solution. This was not the first chemical device to generate electricity, but it was the first to emit a steady, lasting current. Volta’s voltaic pile had limitations: as it grew larger, the weight of the plates squeezed the brine out of the cloth, causing the battery to fail. The discs also corroded quickly, making for a short-lived battery. Despite these shortcomings, the standard unit of electric potential is called the volt in his honor. Volta’s battery made many new experiments possible, including the first electrolysis of water by William Nicholson and Anthony Carlisle, who used Volta’s battery to separate water into hydrogen and oxygen for the first time.
Photo By I, GuidoB, CC BY-SA 3.0,
 https://commons.wikimedia.org/w/
index.php?curid=2249821

A voltaic pile, the first chemical battery.
    The next major improvement to the battery came from John Frederic Daniell in the form of the Daniell cell in 1836. He found a way to solve the biggest issue with Volta’s battery: the build-up of hydrogen bubbles on the copper plates. He solved it by utilizing a copper sulfate solution separated by an earthenware barrier from a zinc bar submerged in sulfuric acid. The barrier kept the liquids from mixing but allowed the chemical ionization to occur, and because the copper plate never came into direct contact with the acid, hydrogen bubbles did not build up on the plate. This gave his battery a much longer life expectancy.
    In 1860, a Frenchman named Callaud invented the gravity cell, a simplified Daniell cell that eliminated the earthenware barrier, reducing the internal resistance of the system and increasing the current the battery yielded. His battery was the battery of choice for the American and British telegraph networks and was used until the early 1950s.
    Our most recent batteries are still based on the concept of chemical reactions between metals, accelerated by acidic compounds, creating an electrical current. We have learned a lot about how the process works, and our greater knowledge of chemical processes has allowed us to create better, longer-lasting batteries. The three most common types of rechargeable batteries today are nickel-metal hydride, lithium-ion, and lead-acid.
    This leads me to ask, what is the battery of the future? Batteries are the largest limiting factor in modern technology, from robotics, computers and cellphones to electric cars. We have been searching for decades for a more efficient, smaller, lighter battery that stores or produces larger amounts of power. Among the most efficient batteries to date are lithium polymer batteries, first released by Sony in 1997. These batteries hold their electrolyte in a polymer gel rather than a liquid, making it possible to form them into different sizes and shapes.
    For now we can only speculate that newer chemical compounds and manufacturing techniques will make smaller, lighter, and safer batteries in the future. If you are interested in battery technology, a great documentary on the topic was released in 2017 by PBS and is available on DVD. The film is titled “Search for the Super Battery: Discover the Powerful World of Batteries.”

Thursday, May 2, 2019

The power of the Sun

     You might find it hard to believe that people have been converting the power of the Sun to electricity for 180 years. In 1839 Alexandre-Edmond Becquerel demonstrated the photovoltaic effect, the ability to convert sunlight into electricity. It was about four decades later, in 1883, that Charles Fritts installed the world’s first rooftop solar array in New York. This was a year after Thomas Edison opened the world’s first commercial coal power plant, and four years before the first wind power plant was installed in Scotland in 1887.
     Fritts used glass panels coated with selenium to produce a very weak electric current, but he did not really understand why it worked. It was not until 1905 that Albert Einstein published a paper explaining the photoelectric effect. Between Becquerel and Einstein, the basis of all solar technology development was formed.
Photo by Scott Hamilton, A local 6000-watt solar array
designed to power an off-grid home.
     The photovoltaic effect occurs when a material such as selenium absorbs light: the atoms in the material become excited, causing them to shed electrons. Each freed electron gets passed to a neighboring atom, creating a voltage difference between the two atoms. A second effect is at work in solar panels as well, created by the heat from the absorbed light. As the panel absorbs light it heats up, creating a temperature difference between the top and bottom of the panel. That temperature difference, acting across the mixture of materials in the panel, creates a voltage through the Seebeck effect.
     The Seebeck effect occurs when two different metals or semiconductor materials touch and a temperature difference exists between them. This causes electrons to move from the hot side of the contact point to the cold side, creating a voltage difference between the hot and cold sides. This effect is used in modern thermostats to measure and control the temperature in most buildings today.
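To make the Seebeck relationship concrete, here is a minimal Python sketch. The linear model and the coefficient value (roughly 41 microvolts per degree, a commonly quoted figure for a type K thermocouple) are illustrative assumptions, not measurements of any particular panel or thermostat:

```python
# Sketch of the Seebeck effect: the open-circuit voltage across a
# junction of two dissimilar metals is roughly V = S * dT, where S
# is the Seebeck coefficient of the metal pair.
S = 41e-6  # Seebeck coefficient, volts per kelvin (type K, approx.)

def seebeck_voltage(t_hot, t_cold):
    """Approximate open-circuit voltage for a temperature difference."""
    return S * (t_hot - t_cold)

# A sensor seeing a 25-degree difference produces only about 1 mV:
v = seebeck_voltage(100.0, 75.0)
print(f"{v * 1000:.3f} mV")  # 1.025 mV
```

The tiny output is why thermostat electronics must amplify the signal before using it to measure temperature.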
     Bell Labs developed the modern photovoltaic cell in 1954, and the technology was quickly adopted by the U.S. Naval Research Laboratory for use on the first spacecraft to utilize solar panels. The Vanguard I was launched in 1958, and by 1964 NASA had launched Nimbus I, the first satellite equipped with panels that automatically tracked the Sun. It was not until the energy crisis that prompted the Emergency Petroleum Allocation Act of 1973 that solar power became publicly available.
     The “Solar Heating and Cooling Demonstration Act of 1974” turned several federal buildings into billboards for solar energy. Around the same time, additional legislation mobilized federal agencies to research how to make solar technology more affordable. The goal of the coordinated federal effort was to make solar viable and affordable for the public. There have been several waves of federal money turned into incentives for solar energy, and yet today solar holds only about 1% of the total electric generation market.
     Solar panels are not the only method of extracting energy from the Sun, and they are not the most efficient use of solar power either. If you have ever opened your car door on a sunny afternoon to find the temperature inside well over 100 degrees, you have experienced the most efficient use of solar energy: direct heating. There are experiments all over the internet that let you observe the extreme heating power of concentrated sunlight. One is called “Burning Stuff with 2000-degree Solar Power” by The King of Random, in which he melts concrete, pennies, glass, and steel with a four-foot magnifying lens he took from a rear-projection TV.

Thursday, April 25, 2019

Power from the wind

     In recent years you may have noticed a lot of talk about the use of wind turbines to generate electricity.  It has even been in the news recently that President Donald Trump believes the “noise” from wind turbines causes cancer. It is not the first time government officials have used scare tactics to stop the use of renewable energy resources, but that’s not really the topic of this week’s article. There is a lot more history behind the use of wind energy than you might believe.
     People have been harnessing the power of the wind since around 1000 B.C. The first known use of wind power was in sailing ships, which led to the development of the earliest sail-type windmills. The first windmills were used to grind, or mill, grain, thus the name windmill. The earliest known windmills were built in Persia around 500 A.D. and were used to process grain and pump water. These early windmills were vertical-axis designs, meaning their blades rotated parallel to the ground.
     In the 1300s the first windmills began to appear in Europe; these were the earliest horizontal-axis designs. No one really knows why there was a sudden shift from vertical- to horizontal-axis windmills. It probably has a lot to do with the fact that wind can strike only half the blades of a vertical-axis windmill, transferring half the power of a horizontal-axis design, whose blades all face the wind.
     The horizontal mills did add complexity to the design, as they required gearing mechanisms to transfer the rotation of the horizontal main shaft to the vertical shaft that turned the mill stones. The early mills borrowed the gear mechanisms of horizontal water-wheel-driven mills. The Dutch were the first to advance the design from the early post mills to tower mills. The primary difference between the two was the tower mill’s additional floors, with the equipment a post-style mill would power sitting atop the tower. The top floor of the tower mill could be rotated manually to face the blades into the wind, and the speed of the mill could be controlled by adjusting the angle of the blades in the wind. The sails were removable to protect the mill from strong winds during the stormy season, and the windsmith usually resided in the mill itself.
     Over the course of about 500 years, these mills were incrementally improved, dramatically increasing their efficiency. By the time this process was complete, in the 1870s, the mills had most of the features of modern wind turbines. These mills were the “electric motors” of pre-industrial Europe. Their applications ranged from pumping well water, irrigation, and drainage to grinding grain; sawing timber; processing spices, cocoa, paints, dyes, and tobacco; and even operating sewing machines. In the early 1900s, large steam engines began to replace the windmills.
     During the 120 years between 1850 and 1970, over six million mostly small (one horsepower or less) mechanical-output windmills were installed in the U.S. alone. These were primarily used across the Midwest to pump water from wells and ponds for watering livestock. The larger mills were used to pump water into towers for early steam trains, which provided the primary source of commercial transportation in areas without large rivers. The first windmills to generate electricity began appearing in the late 1880s, roughly a decade after the invention of the electric light.
     The first large wind turbine to generate electricity was built in Cleveland, Ohio, in 1888 by Charles F. Brush. It was a post mill with a multi-blade “picket-fence” rotor that was 56 feet in diameter and featured a large “tail” used to turn the rotor out of the wind. It was the first known windmill to use a gearbox to increase the rotational speed of the mill to the 500 RPM needed to properly turn the generator. The mill was in operation for 20 years and produced 12 kilowatts of power. It was not nearly as efficient as newer turbine designs of equal size, which are capable of producing 70 kilowatts, but it goes to show that wind power is not the new kid on the block.
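As a rough illustration of why a rotor that size leaves so much room for improvement: the power available in wind is P = ½ρAv³, and the Betz limit caps what any rotor can extract at about 59.3% of that. The wind speed and efficiency figures in this Python sketch are assumptions for illustration, not measurements of the Brush mill:

```python
# Rough sketch: power in the wind passing through a 56-ft rotor.
import math

rho = 1.225                # air density, kg/m^3 (sea level)
diameter_m = 56 * 0.3048   # 56-ft rotor diameter, in meters
area = math.pi * (diameter_m / 2) ** 2  # swept area, m^2

def wind_power_watts(v, efficiency):
    """Electrical power extracted at wind speed v (m/s)."""
    return 0.5 * rho * area * v**3 * efficiency

v = 10.0  # an assumed steady wind speed, m/s
print(f"Available in the wind: {wind_power_watts(v, 1.0)/1000:.0f} kW")
print(f"Betz limit (59.3%):    {wind_power_watts(v, 0.593)/1000:.0f} kW")
print(f"At ~9% efficiency:     {wind_power_watts(v, 0.09)/1000:.0f} kW")
```

At that assumed wind speed, roughly 140 kW passes through the rotor, so a machine producing 12 kW is capturing under a tenth of it, while a modern 70 kW turbine of the same size captures nearly the Betz limit.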

Thursday, April 18, 2019

Rock, Paper, Scissors

     This week we will begin a very short series on programming and let you get started with your new Raspberry Pi, that is, if you found last week’s article of any interest. First of all, if you do not have a Raspberry Pi, don’t worry; you can skip the first steps and just use any computer you have available for the actual programming part.
     If you want a very detailed, step-by-step process for setting up your Raspberry Pi, head over to http://projects.raspberrypi.org and look for setting up your Raspberry Pi. There are all kinds of projects on that site to keep you busy for a very long time, but I want to make sure you are able to get started without a computer. This will require a MicroSD card of at least 6GB in size. A MicroSD card is the little card you can get just about anywhere, usually used to store photos and video on your phone or camera. You will also need access to a computer to download an image to the SD card. If you don’t have a computer, or access to one, you can order an SD card with Raspbian pre-installed from Amazon. Hopefully you have a TV or computer monitor with an HDMI port, an HDMI cable, and a MicroUSB phone charger. Most TVs with an HDMI port also have a USB port that supplies enough power to run the Raspberry Pi, so you should not need a separate power supply.
     You can get Raspbian, or any of several other operating systems for the Pi, by downloading a utility called NOOBS from http://www.raspberrypi.org/downloads. You will then want to format your SD card using the SD card formatter downloaded from https://www.sdcard.org/downloads/formatter_4/index.html, then unzip the NOOBS archive to your SD card. Next, connect everything to your Raspberry Pi and turn it on. The photo shows where to connect everything. Connect the power cord last, and if things go well, you will see a red light in the top left corner of your Pi. As it starts up, also called booting, you will see raspberries appear across the top of the screen. The steps are fairly straightforward, following prompts on the screen, and after a few minutes of software downloads, the Raspbian desktop will appear. It looks very similar to a Windows desktop except that the menu is at the top of the screen.
     If you made it this far, congratulations, you have a working computer. If you plan to use a different computer for the rest of the steps, you will need to install Python 3 by following the instructions at http://www.python.org. The Raspbian operating system has Python installed by default, so you are ready to begin learning to program as soon as it successfully boots. You can also use an online version of Python for learning at http://trinket.io, which allows you to write and test Python code from the internet without installing anything.
     For this week’s project we are going to write a simple game of Rock, Paper, Scissors and play against the computer. The rules are simple, just like playing the game against another person. Both you and the computer pick rock, paper, or scissors, and the winner is decided by the following rules: Rock breaks the scissors, so rock wins. Paper covers the rock, so paper wins. Scissors cut the paper, so scissors wins. It might sound hard to get a computer to play a simple game, but it isn’t that difficult, and I’m sure you can do it.
     First you will need to teach the computer to pick rock, paper, or scissors randomly. For this you will need a random value. Python has a library called random that does just what we need, but to use a library you have to import it, which takes a single line of computer code: "from random import randint". (Only type the stuff between the quotes, not the quotes themselves.)
     We will let the player go first, which might not seem fair, but I promise the computer does not know what you pick when it picks, unless of course you allow it to cheat. (Letting the computer always pick the winning move is a more advanced step you can add.) To let the player pick, we have to let the computer read what we type. This is done using the following line of code: "player = input('rock (r), paper (p), or scissors (s)?')".
     Next, just to make sure the computer read it, we will print what the player types: "print(player, 'vs')".
     Now for the computer’s turn. It only understands numbers for now, so it will pick one, two, or three: "chosen = randint(1,3)".
     We teach it what the numbers mean using a choice function called "if"; for "if" to work, Python depends on spaces, so this will be a set of two lines, the second of which starts with two spaces: "if chosen == 1:" and "  computer = 'r'". We then only want to check whether the computer picked two if it did not pick one, so we use a command called "else if", which gets shortened to "elif": "elif chosen == 2:" and "  computer = 'p'". Finally, we know that if it was not one or two it must be three, so we just use "else" in the next two lines: "else:" and "  computer = 's'".
     Now we can print what the computer chose: "print(computer)". You can now run your code and decide for yourself who won. It takes 14 more lines of code, using nothing more than you have already learned, to let the computer tell you who won. To make it a little easier to follow, I will show you the whole program. You can download the sample code from our website at https://www.thelickingnews.com/p/code-samples.html.
from random import randint
player = input('rock (r), paper (p) or scissors (s)?')
print(player, 'vs')
chosen = randint(1,3)
if chosen == 1:
  computer = 'r'
elif chosen == 2:
  computer = 'p'
else:
  computer = 's'
print(computer)
if player == computer:
  print('DRAW!')
elif player == 'r' and computer == 's':
  print('Player wins!')
elif player == 'r' and computer == 'p':
  print('Computer wins!')
elif player == 'p' and computer == 'r':
  print('Player wins!')
elif player == 'p' and computer == 's':
  print('Computer wins!')
elif player == 's' and computer == 'r':
  print('Computer wins!')
elif player == 's' and computer == 'p':
  print('Player wins!')

Thursday, April 11, 2019

Flexible and Affordable Computers

     Have you always wanted a computer to learn programming, surf the internet, check e-mail, or just for fun, but can’t find the extra money? In this week’s Tech Talk I will introduce you to an affordable computer that anyone with a flat screen TV can afford. It is called a Raspberry Pi and can be found online for as little as $35.
Raspberry Pi single board computer from the 
Raspberry Pi Foundation.
     I know what you are thinking: a $35 computer can’t be that great. I personally own three of them, and they work great for small everyday things like checking your e-mail, watching YouTube videos, viewing most websites, and even word processing. It also makes a great media center device connected to your TV for watching videos and listening to online music and radio.
     First let me give you a little background on the Raspberry Pi Foundation. The foundation was formed in the UK as a charity that focused on educating people in computing and creating easier access to computers primarily for educational purposes. In 2012, the foundation launched the first version of the Raspberry Pi for $35 and found very quickly that they could not keep up with the demand. They have partnered now with several manufacturers and can do production runs of 4,000 computers a day.
     They are now in production with the latest Model 3, which is roughly four times more powerful than the original system, and they have a goal of keeping the price the same, at $35 or less, for every new release. They even have the Raspberry Pi Zero, which is only $5. I would say they have achieved their goal, as people all over the world use the Raspberry Pi to learn programming, build projects, automate their homes, and even run industrial applications.
     The nicest feature of the Raspberry Pi is the integrated GPIO (general purpose input/output) pins and controller that let you connect and control external electronic components. This is not possible with your laptop or PC without spending several hundred dollars for additional equipment. One other thing to note is that all the software that runs on the Raspberry Pi is free.
     The Raspberry Pi 3 B+ is a single-board computer with an ARM Cortex-A53 1.4GHz processor, 1GB of memory, a 300Mbps wired network connection, a WiFi controller, and Bluetooth. It runs the Raspbian operating system, based on the popular Debian Linux operating system.
     Over the next several weeks I will cover some Raspberry Pi projects and give you ideas on how you can learn computing with the Raspberry Pi. If you want to follow along and complete the projects, I will provide a list of things you need for the next week. For next week, you will need a Raspberry Pi with a power supply, keyboard, mouse, HDMI cable and a TV with an HDMI port. If you have questions about what you need, you can come by The Licking News office or email me at Publisher@TheLickingNews.com.
     Next week’s lesson can also be done on your computer. Just download the Python 3.7 software from Python.org and follow along as I teach you how to get started with some basics of computer programming using Python. I must give a word of warning about the columns over the next few weeks: programming can become very addictive; once you learn how to control the computer, you may never want to stop. Computers, even inexpensive ones like the Raspberry Pi, can do some pretty amazing things. With a little instruction and practice, you might create the next big thing in technology.

Thursday, April 4, 2019

Impacts of the Global Positioning System


     The Global Positioning System (GPS), in operation since January 6, 1980, was designed to broadcast date and satellite identification information used to determine the position of a receiver on the ground. The position is extrapolated from the timestamps of the signals received from each of the satellites within view of the receiver. Once the receiver knows its exact distance from at least four satellites, it can use geometry to determine its location on Earth in three dimensions. As you can see, the time stamp of the signal from the satellite provides critical information for determining your location.
By United States Government
A GPS Block IIR(M) satellite.
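The distance calculation described above comes down to multiplying signal travel time by the speed of light. Here is a minimal Python sketch; the travel time used is an illustrative assumption (GPS satellites orbit at about 20,200 km, so signals take on the order of 67 milliseconds to arrive):

```python
# Sketch: range to one satellite from the signal's travel time.
C = 299_792_458  # speed of light, m/s

def range_from_travel_time(seconds):
    """Distance to a satellite given the signal's one-way travel time."""
    return C * seconds

d = range_from_travel_time(0.067)  # an assumed 67 ms travel time
print(f"{d / 1000:.0f} km")  # 20086 km
```

With four such ranges, the receiver can solve for latitude, longitude, altitude, and its own clock error, which is why at least four satellites are needed.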
     There is a reason an article about GPS, which many of us use regularly, is necessary at this time. In the original design of the system, a week number was included as part of the time stamp to reduce the size of the information packet from the satellite. This was done to improve the overall performance of the system. The week number was set up as a 10-bit binary number, forcing it to roll over back to zero at the end of 1,024 weeks. This happened the first time on Aug. 21, 1999, and there was minimal impact to the system. This roll-over will happen again on April 6, 2019.
     Most GPS receiver manufacturers understand what the roll-over means and have software verification code in place to ensure that the date remains correct and the equipment operates without issues. However, if equipment is not running a proper version of the software when the week rolls over on April 6, unpatched GPS receivers will roll back in time to August 21, 1999. This is not expected to be a problem for newer GPS receivers, but older units may behave very unexpectedly.
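The rollover itself is simple modular arithmetic. This Python sketch shows what a hypothetical receiver that trusts the 10-bit week number as-is would compute:

```python
# Sketch of the 10-bit GPS week rollover: satellites broadcast only
# (week mod 1024), so a receiver that simply adds the broadcast week
# to the GPS epoch jumps back 1,024 weeks when the counter wraps.
from datetime import datetime, timedelta

GPS_EPOCH = datetime(1980, 1, 6)

def naive_decode(true_week):
    """Date computed by a receiver that trusts the 10-bit week as-is."""
    broadcast = true_week % 1024          # what the satellite sends
    return GPS_EPOCH + timedelta(weeks=broadcast)

print(naive_decode(1023))  # last week before the 1999 wrap: still 1999
print(naive_decode(1024))  # counter wraps to 0: back to 1980-01-06
print(naive_decode(2048))  # the April 2019 wrap: 1980 again, not 2019
```

A patched receiver avoids this by assuming the decoded date can never be earlier than its firmware's build date, adding 1,024 weeks as needed.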
     You might be wondering what the impact of a failed time stamp roll-over would be. There is a slight possibility of major impacts to the electrical grid as a result of the inaccurate time information coming from the satellites. How does GPS affect our stationary electric grid? It may seem odd, but the grid depends heavily on GPS for time synchronization of all the control systems in order to keep all the critical components of the system in sync. Currently these systems rely entirely on the timestamp signals from the GPS system that identifies the current week and second within the week. The signal is then converted to a proper date by the receiver. 
     Essentially, what is going to happen on April 6 is a reset that will cause the satellites to broadcast week 0 again. An unpatched receiver will read that as the week beginning August 21, 1999, instead of the current week in April 2019. The receivers are responsible for making the adjustment. The electric grid uses these time signals in its Phasor Measurement Units, and the North American Electric Reliability Corporation’s (NERC) requirements use these synchrophasors, along with the GPS system, to get real-time snapshots of grid performance and adjust power levels at the power plants in real time, creating a more stable electric grid. These changes in technology have greatly increased the chances of the GPS rollover event impacting our electric grid.
     Operators of these mission-critical systems have been notified of the forthcoming event and given guidelines to circumvent any problems. Notifications were sent January 25 to all power companies, airline industries, and other critical-use industries.
     If you have an older model GPS receiver, now is the time to call your manufacturer and find out if you need to perform a software update, or you might just travel back in time to 1999 on Saturday.

Thursday, March 28, 2019

Cecilia Payne, the unsung hero of astronomy

     Cecilia Payne was the first female astronomer to be granted a professorship at Harvard and the first female department chair. Her doctoral thesis was a study of stellar spectra. Stellar spectra refers to the spectrum of light emitted by stellar objects. By measuring a star’s spectrum and comparing it to the light emitted by known elements, Payne was able to determine the chemical composition of the stars.
Cecilia Helena Payne Gaposchkin (1900-1979),
astrophysicist at Harvard College Observatory,
 known for her research on stellar spectra.
 Photo from Smithsonian Institution Archives
(Acc. 90-105 – Science Service, Records, 1920s-1970s)
     Payne’s research was in direct conflict with the pre-eminent American physicists of her day. Geochemist Frank Wigglesworth Clarke had written a book comparing the strong spectral lines of the Sun with his comprehensive sampling of minerals from the Earth’s crust. Henry Norris Russell and Henry Rowland believed that the elemental abundances on the Earth and in the Sun were nearly identical. Rowland reasoned that because the spectra of the stars and the Sun were similar, the relative abundance of elements in the universe must be like that in the Earth’s crust.
     Payne had a stronger knowledge of atomic spectra than most astronomers of her time and disagreed with Rowland. She applied research by Meghnad Saha showing that temperature has a large effect on atomic spectra. Payne used Saha’s equations to show that only about one in 200 million of the hydrogen atoms in the Sun exists in the excited state that gives off hydrogen’s signature spectral lines. As a result, she went on to show, the Sun and the stars are formed primarily of hydrogen and helium. The currently accepted values for elemental abundance in the Milky Way Galaxy (74% hydrogen, 24% helium, and 2% everything else) completely support her results.
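A simplified back-of-the-envelope version of that "one in 200 million" figure can be checked with the Boltzmann factor alone. Payne's actual analysis used the full Saha equation; the temperature and energy values below are standard textbook figures, used here only as an illustrative sanity check:

```python
# Fraction of hydrogen atoms in the first excited state (n=2, which
# produces hydrogen's visible spectral lines) at a Sun-like surface
# temperature, estimated with the Boltzmann distribution.
import math

k_eV = 8.617e-5   # Boltzmann constant, eV per kelvin
E = 10.2          # energy of hydrogen's n=2 level above n=1, eV
T = 5800.0        # approximate solar surface temperature, K
g_ratio = 4       # ratio of statistical weights, g2/g1 = 8/2

fraction = g_ratio * math.exp(-E / (k_eV * T))
print(f"excited fraction: {fraction:.1e}")  # ~5e-9, about 1 in 200 million
```

Even this crude estimate lands right at roughly one excited atom in 200 million, which is why the Sun's hydrogen lines look deceptively weak despite hydrogen dominating its composition.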
     Payne’s discovery brought a new view of the universe, much like the giants Copernicus, Newton, and Einstein. However, Payne was relatively unknown in the field. Many attribute this to the fact that she was a woman in the field at a time when women were denied many opportunities. Her obituary read in part, “Cecilia Helena Payne-Gaposchkin, a pioneering astrophysicist and probably the most eminent woman astronomer of all time, died in Cambridge, Massachusetts, on December 7, 1979. In the 1920s she derived the cosmic abundance of the elements from stellar spectra and demonstrated for the first time the chemical homogeneity of the universe.”

Thursday, March 21, 2019

Happy Birthday World Wide Web

     It was March, 1989, when Sir Tim Berners-Lee, a researcher at CERN in Geneva, Switzerland, submitted a paper titled “Information Management: A Proposal,” which outlined a method of interconnecting related documents for sharing research. His supervisor, Mike Sendall, famously noted “Vague but exciting” on the proposal, which was not approved for funding. However, Sendall gave Tim permission to work on the project unofficially.
Sir Tim, the creator of the World Wide Web, arriving at Guildhall
to receive the Honorary Freedom of the City of London.
Used under the Creative Commons Attribution-Share Alike 4.0
 International license. (https://en.wikipedia.org/wiki/Creative_Commons)
     Tim began work on the project in September 1990 and wrote three fundamental technologies that remain the foundation of the World Wide Web. The first is HTML, the HyperText Markup Language, which consists of “tags” that allow one document on the web to link to other documents, or even to other sections of the same document. The second is the URI, or Uniform Resource Identifier, which you can think of as an address for locating a file on the web; it is also commonly referred to as a URL (Uniform Resource Locator). The third is HTTP, the HyperText Transfer Protocol, the method by which documents are shared across a network, allowing files on computers throughout the world to link to files on other computers.
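The way HTML tags carry URIs can be sketched in a few lines of code. The snippet below uses Python’s standard `html.parser` module to pull the link address out of a tiny made-up page (the URL shown is illustrative, not a real document at CERN):

```python
from html.parser import HTMLParser

# A minimal sketch (not Tim's original code) of the core idea of HTML:
# "tags" in one document point at other documents through a URI/URL.
page = """
<html>
  <body>
    <p>Read the <a href="http://info.cern.ch/Proposal.html">original proposal</a>.</p>
  </body>
</html>
"""

class LinkFinder(HTMLParser):
    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        # The <a> tag is HTML's hyperlink; its href attribute holds the URI.
        if tag == "a":
            self.links.extend(value for name, value in attrs if name == "href")

finder = LinkFinder()
finder.feed(page)
print(finder.links)   # the URIs this page links to
```

A web browser does essentially this on every page it loads: it finds the tags, reads out the URIs, and uses HTTP to fetch whatever they point to.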
     Tim also wrote the world’s first web browser and web page editor, as well as the first web server. The web server is the computer where the shared files are stored; it runs the HTTP services so that you can read the files. The web browser is the application on your computer, smartphone, game system, or television that lets you view the files, or “pages,” stored on the server. Tim felt that it would not be right for a single entity to control the code and processes that make the web available, and he convinced CERN to release the tools on a royalty-free basis, forever. This made the web’s core technology freely available to everyone, in the spirit of open-source software. The decision to announce the public availability of the web was made in April, 1993, and sparked a wave of creativity.
     Prior to this, files were shared via Bulletin Board Systems (BBS): files stored on personal computers connected to phone lines. To share files, there were “known” but not well-advertised phone numbers that you could dial with your computer to get lists of other computers, files, and still more phone numbers. Basically, you had to make several phone calls to get files from what is now called the web. Tim’s system simplified the process of finding files by giving every file a universal address that any document could link to. In 2003, companies banded together to create a new standards committee that kept the web royalty free, and by 2014, two in five people globally were connected and using the web.
     This month we celebrate 30 years of the web. None of us involved with the web in the early 1990s ever dreamed that it would be used for online interactive gaming, teleconferences with live video, and even holographic-like augmented reality systems. We were just happy we didn’t have to dial multiple phone numbers and figure out which files we wanted based on people naming things the right way. Tim’s work allowed us to separate the eight-character file name from the title or content of the file, so we could link documents without caring what the file was named. In the early days, files on the web were often called file001, file002, etc. Without HTML and the URI, we would never have been able to figure out what a file contained without reading it.
   
In reference to sharing research information, Tim said, “In those days, there was different information on different computers, but you had to log on to different computers to get at it. Also, sometimes you had to learn a different program on each computer. Often it was easier to go and ask people when they were having coffee….” Just imagine having to pick up the phone and call someone every time you needed a piece of information instead of saying, “Okay Google.”

Thursday, March 7, 2019

Tiny radios in your pocket


     Did you ever wonder how that hotel room key card works, when it never enters a slot and doesn’t even have to leave the cardboard sleeve? It uses a technology called a Radio Frequency Identification (RFID) tag. An RFID tag is really a tiny radio that can both transmit and receive data stored in a memory device on the card.
Photo by Scott Hamilton: RFID tag from a laser printer toner
cartridge, used to track cartridge life.
     The RFID tag itself is less than a hundredth of an inch across, and usually has an attached antenna around a tenth of an inch across. Sometimes the antenna is larger to make it possible to read the card from a longer distance. RFID tags can store digital information much like a USB drive or computer hard drive. The real differences are that they are much smaller, cannot store very large amounts of information, and can be read and written without a physical connection to the reader/writer.
     Most modern hotels use RFID cards to secure their rooms because they can modify the code on both the card and the lock randomly for every visitor to the hotel, greatly increasing the security of the rooms. Other uses of RFID tags are product tracking and security, inventory, and theft prevention. 
     The technology used in these products was originally created to assist large cattle ranches in tracking and identifying cattle. It is now used to track products in every industry and may eventually replace the bar code seen on products today. The main advantage of RFID is that a system can read multiple tags simultaneously without physically making contact with or seeing the tag. 
     RFID tags can make the future of grocery store checkout as simple as walking through a gate with your cart full of groceries and your debit card. The gate is a large RFID reader that will seemingly instantaneously read the tags in all the products in your cart and the tag in your bank card. The system will charge your card, e-mail your receipt, and you are on your way.
     RFID technology has been around since the 1970s, but only recently has it become inexpensive enough to produce for wide use. The early technology used inductive coupling, which basically means it used complicated metal coils that reacted in a specific way with a magnetic field, creating a specific current, or radio signal. This technology was difficult to manufacture, and every tag had a unique shape and design.
     The inductive designs were replaced by capacitively coupled tags, which used conductive carbon ink to create disposable tags that could be printed on demand. This new technology used a microchip to store just 96 bits of information. It was not widely adopted, and the company that developed it shut down in 2001.
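Ninety-six bits sounds tiny, and it is: just 12 bytes, far too small for a document. But it is plenty for a unique serial number. The sketch below (an illustration of the general idea, not the actual EPC standard layout) shows how a reader might treat those 12 bytes as an ID and look it up in an inventory:

```python
# A 96-bit tag holds only 12 bytes, yet that allows an enormous number of
# unique codes -- illustrative sketch, not a real tag-data standard.
ID_BITS = 96
unique_ids = 2 ** ID_BITS
print(f"{unique_ids:.2e} possible IDs")   # on the order of 10**28

# A reader sees the ID as 12 raw bytes and typically renders it as hex.
tag_bytes = bytes(range(12))              # stand-in for bytes read off a tag
tag_id = tag_bytes.hex()
print(tag_id)

# Inventory matching: look the scanned ID up in a table of known items.
inventory = {tag_id: "toner cartridge, bay 3"}
print(inventory.get(tag_id, "unknown tag"))
```

With roughly 10^28 possible codes, every product ever manufactured could carry its own distinct number, which is why so little memory goes such a long way for tracking.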
     The latest innovations in RFID technology have combined the two methods to create a robust tag system that can be constantly powered by an integrated battery (active), powered on demand by an integrated battery (semi-active), or powered by proximity to the reader (passive).
     Active and semi-active tags are the most expensive and are used to track expensive equipment like railroad cars and truckloads of inventory. They can be read from more than twenty feet away and are not considered disposable. When a product connected to an active RFID tag, like a railway car, is retired, the tag is moved to another product.
     Passive RFID tags are used in everything from your toll road pass sticker on your car windshield and your hotel room key to the bottle of shampoo you bought last week at the store. The tags can either be write-once-read-many, or read-write tags. You can get applications for your smartphone that can read, store, and simulate RFID tags. If you want to play around with RFID technology, old hotel room keys can usually be rewritten with card writer applications. You can then use them to automatically start applications on your phone, like turning on Pandora when it detects the tag in your car. You can also copy the RFID tag from your room key to your cell phone and use your phone as a room key. RFID technologies are coming en masse to our lives and I leave it to you to decide if this is good or bad.