Thursday, May 30, 2019

Additive Manufacturing


By 3DPrinthuset (Denmark), CC BY-SA 4.0. The Building On Demand (BOD) printer developed by COBOD International (formerly known as 3DPrinthuset, now its sister company).
  This week we are going to review some advanced manufacturing techniques. A majority of parts manufacturing is done with a process known as subtractive manufacturing. This is the process of taking a piece of metal, plastic, wood or other material and removing portions of it in a controlled fashion, resulting in a finished part. This method is used to make parts like gears, screws, engine blocks and crankshafts.
  There has been a lot of talk in recent years of additive manufacturing techniques. This is where you start with nothing and build a part by adding material in a controlled fashion. The most popular additive manufacturing process is called 3-D printing.
  The very first 3-D printers used chemical compounds that harden when exposed to ultraviolet (UV) radiation, the invisible light that causes sunburns. It was in 1987 that stereolithography (SL) techniques were first used commercially to create acrylic components. A UV laser is aimed at the surface of a vat of reactive resin, which instantly hardens where the beam strikes. The build platform is then lowered into the liquid layer by layer, and the final part is pulled from the vat. This technique produces the highest level of detail in the parts, but is the most complex process.
  The second method of additive manufacturing is called jetting and is similar to how an inkjet printer works. The same reactive resin is sprayed from a nozzle onto a surface and exposed to UV light, hardening it, before another layer is sprayed. There are also similar methods that spread a powder and spray an adhesive binder to create the layers. Jetting is heavily used in industrial manufacturing facilities.
By Bre Pettis (Flickr), CC BY 2.0. A MakerBot three-dimensional printer using PLA extrusion methods.
  A very similar method to jetting and SL is Selective Laser Sintering (SLS). A powdered material that can be rapidly fused by laser heat, such as polyamides and thermoplastic elastomers, is placed in thin layers on a metal surface. A powerful laser then sinters (fuses without fully melting) the powder into layers, forming a very durable object. SLS has been used in the manufacturing of custom hearing aids molded to fit an individual’s ear canal perfectly.
The final method among readily available 3-D printing technologies is the cheapest and the most popular among home users: extrusion printing. This method runs strands of PLA or ABS plastic through a temperature-controlled nozzle, which melts the plastic and builds the object layer by layer. Extrusion has been used not only with plastics, but with concrete, metal, ceramics, and even food, such as chocolate.
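Because extrusion printers build parts layer by layer from simple motion commands, the idea is easy to sketch in code. The G-code commands, dimensions, and feed rates below are illustrative assumptions, not output from any real slicer:

```python
# Illustrative sketch of how a slicer turns a solid into layered G-code:
# raise the nozzle one layer height, then trace the layer's perimeter
# while extruding. All dimensions and feed rates here are made up.

def square_layer_gcode(z, size=20.0, feed=1200):
    """Return G-code moves tracing one square perimeter at height z (mm)."""
    corners = [(0, 0), (size, 0), (size, size), (0, size), (0, 0)]
    lines = [f"G1 Z{z:.2f} F{feed}"]          # raise nozzle to this layer
    for x, y in corners:
        lines.append(f"G1 X{x:.2f} Y{y:.2f} E0.5 F{feed}")  # extrude along edge
    return lines

def slice_cube(height=10.0, layer_height=0.2):
    """Stack square layers from the print bed up to the requested height."""
    gcode = []
    for n in range(1, round(height / layer_height) + 1):
        gcode.extend(square_layer_gcode(n * layer_height))
    return gcode

program = slice_cube()
print(len(program), "G-code lines")
```

A real slicer also handles infill, travel moves, and temperature commands; the point here is only the layer-by-layer structure.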
  Over the last two decades Missouri University of Science and Technology has been involved in bleeding-edge research in the field of additive manufacturing. Among their claims to fame is the Freeze Form Extrusion machine. This machine uses extremely low temperatures (-16 to -40 degrees Celsius) to freeze ceramic pastes consisting of boron and aluminum tri-oxide into ceramic components capable of withstanding extreme heat greater than 2400 degrees Celsius. It is a two-step process: the part is first frozen together, then baked at a very high temperature to fuse the ceramics. Their latest research extends the process to fabricate titanium alloy components.
Additive manufacturing can be used today to make everything from keychains to houses and even human skin grafts and organs. We have come a long way in manufacturing technologies over the last 30 years.


Thursday, May 23, 2019

The slow demise of an industry

It is a sad week in the High Performance Computing (HPC) industry as one of the best-known giants in the field is acquired. Hewlett Packard Enterprise (HPE) is in the process of closing a $1.3 billion deal to purchase supercomputer maker Cray. This is the second supercomputing company HPE has purchased within three years; it bought Silicon Graphics (SGI) in 2016.
This acquisition brings the global total of true supercomputing manufacturers selling product in the U.S. down to three. There are other companies offering HPC platforms, but only HPE, IBM and Atos remain as manufacturers of truly integrated HPC platforms.
I know this raises a question: what is an integrated HPC platform? Let me start with a little HPC history. In 1995, the biggest computers were very expensive mainframe systems manufactured by only a couple of companies. They could do the equivalent work of 10-12 normal computers, but cost two to three hundred times more. Out of the need for more powerful computing systems at a lower cost, a group of engineers developed a programming method that allowed computers to pass messages between one another. The Message Passing Interface (MPI) was born. I was among the team members that developed the standards behind that interface.
MPI brought about a whole new realm of computing called distributed computing. It is the backbone of almost every HPC system, including some aspects of modern cloud computing. MPI allowed smaller research organizations with a low budget to take even their used desktop computer systems that were being pulled out of service and link them together to create massive computing platforms. They were termed Beowulf clusters, named after the first one created by Thomas Sterling and Donald Becker at NASA in 1994.
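The scatter-compute-gather pattern MPI standardized can be sketched in a few lines. Real MPI ranks are separate processes, often on separate machines; as a stand-in for illustration, this sketch uses Python threads and queues rather than an actual MPI library:

```python
import threading
import queue

NUM_RANKS = 4
inboxes = [queue.Queue() for _ in range(NUM_RANKS)]  # one mailbox per rank
results = queue.Queue()

def worker(rank):
    """Each rank blocks on a receive, computes, then sends its answer back."""
    chunk = inboxes[rank].get()        # blocking receive, like MPI_Recv
    results.put((rank, sum(chunk)))    # send partial result, like MPI_Send

threads = [threading.Thread(target=worker, args=(r,)) for r in range(NUM_RANKS)]
for t in threads:
    t.start()

# "Rank 0" scatters the work: each rank sums one slice of 0..99.
data = list(range(100))
for r in range(NUM_RANKS):
    inboxes[r].put(data[r * 25:(r + 1) * 25])
for t in threads:
    t.join()

# Gather and reduce the partial sums, like MPI_Reduce.
total = sum(partial for _, partial in (results.get() for _ in range(NUM_RANKS)))
print(total)  # 4950, the sum of 0 through 99
```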
This research sparked a brand new industry, and only a few years ago there were hundreds of HPC manufacturing companies. Over the years a few big users of HPC felt the need for systems more robust than the hodge-podge, stick-it-together-with-duct-tape-and-network-switches variety. This birthed companies like Cray and SGI. IBM and Atos/Bull had been manufacturing the expensive mainframes before entering the HPC market.
What sets the likes of HPE, IBM, SGI, Cray and Atos apart is that they build these Beowulf-style clusters as integrated racks, including customized computer motherboards, cooling methods, and interconnects, in order to shrink the size and power utilization, effectively reducing the total cost of ownership of these building-sized computing platforms. The other manufacturers stuck to racking commodity server hardware and manually cabling the networks and infrastructure. The initial cost of those systems is lower, but they cost more to maintain over time.
It saddens me to see us down to three companies working on the bleeding edge of HPC hardware because, much like any other industry, as the competition dies off or merges, the prices increase and the innovation decreases. I miss the days when every researcher in computer science was building their own HPC systems and we all worked together to solve the problems and innovate new solutions. Quantum Computing will probably save the day with innovation, but that’s a topic for another day.

Thursday, May 16, 2019

Celebrating Numbers


  We like to use numbers to represent things in the natural world. I believe this is because we can manipulate, modify, and understand things once they are put into numbers. We grow up learning to count things in order to share toys, candy and turns in games. We use numbers every day. As the character Charlie Eppes in the American crime drama television series NUMB3RS so eloquently states, “Math is nature’s language: its method of communicating directly with us. Everything is numbers.”
  Many of you have probably heard of Pi day, March 14, because the date, 3/14, matches the first digits of the constant known as Pi, the ratio between the circumference and diameter of a circle. This week I thought about how the other mathematical constants in our world get ignored. I figured, why not talk about some of the lesser-known constants and pick one to celebrate.
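Pi can even be computed the way it is defined, as perimeter over diameter. Here is a short sketch of Archimedes' classic method, repeatedly doubling the sides of a polygon inscribed in a unit circle:

```python
import math

def pi_by_doubling(doublings):
    """Approximate pi as an inscribed polygon's perimeter over its diameter."""
    n, side = 6, 1.0  # a hexagon inscribed in a unit circle has side = radius
    for _ in range(doublings):
        # halve each arc: side of the 2n-gon from the side of the n-gon
        side = math.sqrt(2 - math.sqrt(4 - side * side))
        n *= 2
    return n * side / 2  # perimeter / diameter

print(pi_by_doubling(10))  # close to 3.141592, using a 6144-sided polygon
```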
  For our new math celebration, I would like to recommend Bohr day, May 29. I know, now you are all wondering what is a Bohr? Well for those who are not into nuclear physics, or computational chemistry, it’s probably a real bore. However, to me it is fascinating.
Illustration by Stephen Lower, https://chem.libretexts.org

  Niels Henrik David Bohr, a Danish physicist, made foundational contributions to our understanding of atomic structure, and he received the Nobel Prize in Physics in 1922 for his work. He was best known for the Bohr model of the atom, in which he proposed that the energy levels of electrons are discrete, causing them to occupy stable orbits around the atomic nucleus, though they can jump from one orbit (or energy level) to another.
  Bohr’s model is no longer the accepted atomic model, but its principles remain valid, including his theoretical value of the Bohr radius (5.29 x 10^-11 meters), the average distance of the electron from the nucleus in hydrogen. The Bohr model, proposed in 1913, states that electrons orbit only at set distances from the nucleus, depending on their energy. In the simplest atom, hydrogen, a single electron orbits the nucleus, and its smallest possible orbit, with the lowest energy, has a radius almost equal to the Bohr radius.
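That value falls straight out of fundamental constants. A quick check using the standard formula a0 = 4·pi·eps0·hbar^2 / (m_e·e^2) with published CODATA values:

```python
import math

eps0 = 8.8541878128e-12   # vacuum permittivity, F/m
hbar = 1.054571817e-34    # reduced Planck constant, J*s
m_e  = 9.1093837015e-31   # electron mass, kg
e    = 1.602176634e-19    # elementary charge, C

# Bohr radius: a0 = 4 * pi * eps0 * hbar^2 / (m_e * e^2)
a0 = 4 * math.pi * eps0 * hbar**2 / (m_e * e**2)
print(f"{a0:.3e} m")  # 5.292e-11 m, matching Bohr's value
```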
  Although Bohr’s model is no longer in use, the Bohr radius remains very useful in atomic physics calculations, mostly because of its simple relationship to other fundamental constants. It is the unit of length in atomic units: just as we use inches, feet, and miles to measure length, at the atomic level the Bohr radius is like the inch.
  The Bohr radius is one of three units of length used in atomic physics, the other two being the Compton wavelength of the electron and the classical electron radius. The Bohr radius is calculated from the electron mass, Planck’s constant, and the electron charge. The Compton wavelength is built from the electron mass, Planck’s constant, and the speed of light. The classical electron radius is built from the electron mass, the speed of light, and the electron charge. Any of the three can be converted to the others using the fine structure constant. Interestingly, the Compton wavelength is about 20 times smaller than the Bohr radius, and the classical electron radius is about 1000 times smaller than the Compton wavelength.
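Those conversions are short enough to check numerically. With the fine structure constant alpha (about 1/137), the Compton wavelength is 2·pi·alpha·a0 and the classical electron radius is alpha^2·a0, so the ratios quoted above follow directly:

```python
import math

alpha = 7.2973525693e-3        # fine structure constant, about 1/137
a0    = 5.29177210903e-11      # Bohr radius, m

lam_c = 2 * math.pi * alpha * a0   # Compton wavelength of the electron, m
r_e   = alpha**2 * a0              # classical electron radius, m

print(a0 / lam_c)    # about 21.8: Compton wavelength roughly 20x smaller
print(lam_c / r_e)   # about 861: classical radius smaller by roughly 1000x
```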

Thursday, May 9, 2019

Batteries Past, Present, and Future

    As we learned over the last couple of weeks, what we see as new technology in electric generation is not that new. Guess what? The battery is even older than the means of generating electricity. We think of a battery today as a way of storing excess electrical power from things like solar panels and wind turbines, but the battery used to be the only source of electrical power. 
Photo by Ironie (own work), CC BY-SA 2.5, https://commons.wikimedia.org/w/index.php?curid=2091669. Illustration of the Baghdad Battery, the earliest known battery in history.
    The first possible batteries, the “Baghdad batteries,” were discovered during an archeological dig just outside present-day Baghdad, Iraq. They were clay jars about five inches long, each containing an iron rod encased in copper. There was evidence of acidic substances having been stored in the jars, leading Wilhelm Konig, who discovered them, to believe they were batteries. Since the discovery, replicas have been made and shown to generate electricity. These batteries were dated to around 200 B.C. We do not really know what they were used for, but other discoveries indicate that they may have been used for electroplating, a method of using electrical current to coat one type of metal with another.
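Electroplating obeys a simple rule, Faraday's law of electrolysis: the mass of metal deposited is proportional to the charge passed. The numbers below sketch a hypothetical modern copper-plating run, not measurements from the Baghdad jars:

```python
FARADAY = 96485.0     # charge of one mole of electrons, C/mol
M_COPPER = 63.546     # molar mass of copper, g/mol
Z_COPPER = 2          # electrons transferred per copper ion

def plated_mass(current_a, seconds, molar_mass, valence):
    """Grams of metal deposited by a steady plating current (Faraday's law)."""
    moles_of_electrons = current_a * seconds / FARADAY
    return moles_of_electrons * molar_mass / valence

# Half an amp for one hour deposits roughly 0.59 g of copper.
print(round(plated_mass(0.5, 3600, M_COPPER, Z_COPPER), 3))  # 0.593
```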
    What I find surprising is that batteries did not reappear in history until 1799, around 2000 years after these first batteries would have been created. Setting the ancient jars aside, Alessandro Volta created the first battery in 1799 by stacking layers of zinc, cloth, and silver soaked in a brine solution. This was not the first chemical device to generate electricity, but it was the first to emit a steady, lasting current. Volta’s voltaic pile had limitations: as it grew larger, the weight of the plates squeezed the brine out of the cloth, causing the battery to fail, and the discs corroded quickly, making for a short-lived battery. Despite these shortcomings, the standard unit of electric potential is called the volt in his honor. Volta’s battery made many new experiments possible, including the first electrolysis of water, performed by William Nicholson and Anthony Carlisle, who used Volta’s battery to separate water into hydrogen and oxygen for the first time.
Photo by GuidoB, CC BY-SA 3.0, https://commons.wikimedia.org/w/index.php?curid=2249821. A voltaic pile, the first chemical battery.
    The next major improvement to the battery came from John Frederic Daniell in the form of the Daniell Cell in 1836. He found a way to solve the biggest issue with Volta’s battery, the build-up of hydrogen bubbles on the copper plates. He did so by using a copper sulfate solution separated by an earthenware barrier from a zinc bar submerged in sulfuric acid. The barrier kept the liquids from mixing but allowed ions to pass, and with the copper plate no longer in direct contact with the acid, hydrogen bubbles did not build up on it. This gave his battery a much longer life expectancy.
    In 1860, a Frenchman named Callaud invented the gravity cell, a simplified Daniell Cell that eliminated the earthenware barrier, reducing the internal resistance of the system and increasing the current the battery yielded. It became the battery of choice for the American and British telegraph networks and was used until the early 1950s.
    Our most recent batteries are still based on the concept of chemical reactions between metals, accelerated by acid or alkaline compounds, creating an electrical current. We have learned a lot about how the process works, and our greater knowledge of chemical processes has allowed us to create better, longer-lasting batteries. The most common types today are non-rechargeable alkaline batteries and rechargeable nickel-metal hydride, lithium-ion, and lead-acid batteries.
    This leads me to ask, what is the battery of the future? Batteries are the largest limiting factor in modern technology, from robotics, computers and cellphones to electric cars. We have been searching for decades for a more efficient, smaller, lighter battery that stores or produces larger amounts of power. Among the most efficient batteries to date are the lithium polymer batteries released by Sony in 1997. These batteries hold their electrolytes in a solid polymer form instead of a liquid, making it possible to form them into different sizes and shapes.
    For now we can only speculate that newer chemical compounds and manufacturing techniques will make smaller, lighter and safer batteries in the future. If you are interested in battery technology, a great documentary on the topic was released in 2017 by PBS and is available on DVD. The film is titled “Search for the Super Battery: Discover the Powerful World of Batteries.”

Thursday, May 2, 2019

The power of the Sun

     You might find it hard to believe that people have been converting the power of the Sun to electricity for 180 years. In 1839 Alexandre-Edmond Becquerel demonstrated the photovoltaic effect, the ability to convert sunlight into electricity. It was about four decades later, in 1883, that Charles Fritts installed the world’s first rooftop solar array in New York. This was a year after Thomas Edison opened the world’s first commercial coal power plant, and four years before the first wind power plant was installed in Scotland in 1887.
     Fritts used glass panels coated with selenium to produce a very weak electric current, but he did not really understand why it worked. It was not until 1905 that Albert Einstein published a paper explaining the photoelectric effect. Between Becquerel and Einstein, the basis of all solar technology development was formed.
Photo by Scott Hamilton. A local 6,000-watt solar array designed to power an off-grid home.
     The photovoltaic effect occurs when a material, such as selenium, absorbs light; the atoms in the material become excited and shed electrons, which are passed to neighboring atoms, creating a voltage difference between them. There is a second effect at work in solar panels as well, created by the heat from the absorbed light. As the panel absorbs light it heats up, creating a temperature difference between the top and bottom of the panel. The mixture of materials in the panel, along with the temperature difference, creates a voltage through the Seebeck effect.
     The Seebeck effect occurs when two different metals or semiconductor materials touch and a temperature difference is created between them. This causes electrons to move from the hot side of the contact point to the cold side, creating a voltage difference between the two sides. This effect is used in modern thermostats to measure and control the temperature in most buildings today.
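To first order, the Seebeck voltage is just a coefficient times the temperature difference: V = S × (T_hot − T_cold). The coefficient below is a typical value for a type-K thermocouple, used here purely as an illustrative assumption:

```python
SEEBECK_TYPE_K = 41e-6   # volts per degree of temperature difference, approx.

def seebeck_voltage(t_hot, t_cold, coefficient=SEEBECK_TYPE_K):
    """Open-circuit voltage (V) across a junction with a temperature gradient."""
    return coefficient * (t_hot - t_cold)

# A 100-degree difference produces only about 4.1 millivolts, which is why
# thermocouples are used for sensing temperature rather than generating power.
print(seebeck_voltage(125.0, 25.0) * 1000, "mV")
```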
     Bell Labs developed the modern photovoltaic cell in 1954, and the technology was quickly adopted by the U.S. Naval Research Laboratory for use on the first spacecraft to utilize solar panels, Vanguard I, launched in 1958. By 1964 NASA had launched Nimbus I, the first satellite equipped with panels that automatically tracked the Sun. It was not until the energy crisis surrounding the Emergency Petroleum Allocation Act of 1973 that solar power became publicly available.
     The “Solar Heating and Cooling Demonstration Act of 1974” turned several federal buildings into billboards for solar energy. Around the same time, additional legislation mobilized federal agencies to research how to make solar technology more affordable. The goal of the coordinated federal effort was to make solar viable and affordable to the public. There have been several waves of federal money turned into incentives for solar energy, and yet today it holds only about 1% of the total electric generation market.
     Solar panels are not the only method of extracting energy from the Sun, and they are also not the most efficient use of solar power. If you have ever opened your car door on a sunny afternoon to find the temperature inside well over 100 degrees, you have experienced direct solar heating, the most efficient use of solar energy. There are experiments all over the internet that let you observe the extreme heating power of concentrated sunlight. One is called “Burning Stuff with 2000-degree solar power” by The King of Random, in which he melts concrete, pennies, glass, and steel with a four-foot magnifying lens taken from a rear-projection TV.