Truth and Dare at the TED 2015 Conference


 

If you live up north, the heavy snows of this year’s record-breaking winter may not yet be over, but in central Texas we are celebrating as springtime comes into bloom. It’s a time to get outdoors, spot bluebonnets growing along the highway and make an unspoken wish that please, please, please, maybe just this one time, springtime would last all the way until August, holding back the inevitable brunt of the brutal Texas summer sun.

Here in Austin, this season of possibility is marked by the annual SXSW Conference. Among the many technological innovations unveiled at SXSW, Meerkat looks like the technorati’s favorite. Thanks to Meerkat, all of us can now live stream video to our Twitter followers. If you’re limited to 140 characters on Twitter and a picture is worth 1,000 words, what does live streaming video add up to? We’ll have to work out the math on that and get back to you.

Meanwhile, Austin’s SXSW conference is not the only event of its kind in March. Out in Vancouver, the 2015 TED Conference (TED stands for Technology, Entertainment and Design) brought together an outstanding ensemble of inspirational speakers in a program called Truth and Dare. The event theme asks whether it’s possible to be optimistic in a world where many assume that:

  • It’s too late to prevent a climate crisis.
  • Robots will destroy more jobs than they create.
  • We’ve lost the battle against big brother.
  • Our kids will be worse off than we are.
  • Technology is no fun anymore.

 

We’ve Selected Three Speakers from the TED 2015 Conference Who Dare to Say No!

First up is Joseph DeSimone, a chemistry professor and entrepreneur from the University of North Carolina at Chapel Hill. He’s turning the world of 3D printing upside down — literally. Conceptually, DeSimone’s jump in technology is like moving from a typewriter to a laser printer. In a conventional 3D printer, some combination of mechanical arms, belts and stepper motors moves a print head to the place where the heated plastic polymer needs to be deposited. This is inherently a slow process, and it leaves behind telltale marks on the printed surfaces as each layer is added. DeSimone’s concept is different. Rather than adding material at the top, 3D printed objects emerge from a liquid pool of plastic polymer located at the bottom. The resulting printing process is not unlike the special effects in the movie Terminator 2, where the T-1000 emerges from a pool of molten metal.

How does it actually work? First you have to consider that light can convert liquid polymer resins into solids. OK, so far so good. The light shines from underneath through a window, and, acting like a digital print head, the light ‘prints’ or hardens the area which needs to be solid. Great! But if the light is coming from underneath, why wouldn’t all the liquid polymer in the well stick together in a gooey mess? In other words, how do you control which areas get solidified and which remain liquid during the printing process? The answer is that the window underneath the bath is permeable to oxygen, which inhibits polymerization, a.k.a. the hardening process.

In other words, thanks to the oxygen passing through the permeable window underneath, a thin layer of resin at the very bottom never hardens, so the part being printed won’t stick to the floor of the well. DeSimone thinks this approach could speed up printing dramatically; ultimately it might be up to 1,000 times faster than conventional 3D printing. Another major breakthrough: it doesn’t produce the telltale lines on the printed surface either. To learn more, watch the video above; you can also take a look at the paper that DeSimone co-authored this month in the journal Science: Continuous Liquid Interface Production of 3D Objects, or CLIP for short.
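To make the oxygen “dead zone” idea concrete, here is a toy model of the curing rule (our own illustration, not taken from DeSimone’s paper; the 0.03 mm dead-zone thickness and the sample heights are invented numbers): resin solidifies only where the UV image is on and oxygen from the permeable window has not diffused in.

```python
# Toy model of CLIP's oxygen-inhibited "dead zone" (illustrative only;
# the threshold value below is made up, not from the CLIP paper).

def cures(uv_on: bool, height_mm: float, o2_depth_mm: float = 0.03) -> bool:
    """Resin solidifies only where the UV light hits AND oxygen has not
    diffused in from the permeable window below; heights inside the
    oxygen-rich layer stay liquid, forming the dead zone."""
    return uv_on and height_mm > o2_depth_mm

# Sample a few heights above the window: the bottom-most resin never
# cures, so the part can be drawn upward continuously without sticking.
for h in (0.01, 0.02, 0.05, 0.10):
    state = "solid" if cures(True, h) else "liquid (dead zone)"
    print(f"{h:.2f} mm above window -> {state}")
```

The key point the sketch captures is that the uncured layer is a property of the chemistry (oxygen inhibition), not of any moving part, which is what lets the process run continuously instead of layer by layer.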

 

Joseph DeSimone, CEO of innovative 3D printing company Carbon3D, speculates on what will happen when 3D printing becomes 1,000 times faster than today.

 

Can Our Human Brains Adapt to Work with New Sensory Inputs? Yes!

David Eagleman is well known for his work studying the brain, including time perception, brain plasticity and neurolaw. Perhaps you have seen his PBS series “The Brain”. In Eagleman’s TED talk, he explains how the brain is not unlike a general-purpose computer: it’s able to make sense of many different inputs, even ones that our physical bodies did not originally provide. Eagleman has been experimenting with a concept called sensory substitution.

The idea is that if we can pass information from sensors directly to the brain, the brain can figure out what to do with the new information. Watch Eagleman’s TED talk to see how this works in practice. In one example, the sound of a spoken word is transmitted over a Bluetooth wireless connection to a specially constructed vest equipped with vibratory motors sewn into the garment. When the garment receives a signal, it vibrates like a mobile phone, except the vibration patterns correspond to the original spoken words. A deaf person who wears the specially equipped garment can learn to interpret these vibrations as speech. As Spock would say: It’s fascinating.
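One simple way a sound-to-vibration mapping like this could work is to split each audio frame’s spectrum into frequency bands and drive one motor per band. The sketch below is our own illustration of that general idea, not Eagleman’s actual design; the eight-motor count, band layout and sample rate are invented for the example.

```python
# Illustrative sensory-substitution sketch: map an audio frame's
# frequency content onto per-motor vibration intensities.
import math

NUM_MOTORS = 8  # hypothetical motor count, not the real vest's

def band_energies(samples, num_bands=NUM_MOTORS):
    """Crude DFT: magnitude per frequency bin, summed into num_bands bands."""
    n = len(samples)
    half = n // 2
    mags = []
    for k in range(half):
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mags.append(math.hypot(re, im))
    width = half // num_bands
    return [sum(mags[b * width:(b + 1) * width]) for b in range(num_bands)]

def motor_levels(energies, max_level=255):
    """Normalize band energies into 0-255 PWM-style motor intensities."""
    peak = max(energies) or 1.0
    return [round(max_level * e / peak) for e in energies]

# A pure 1 kHz tone sampled at 8 kHz concentrates energy in one band,
# so mostly one motor buzzes.
tone = [math.sin(2 * math.pi * 1000 * i / 8000) for i in range(64)]
print(motor_levels(band_energies(tone)))
```

The interesting part is on the other side of the vest: the mapping can be almost arbitrary, because (as Eagleman argues) it’s the wearer’s brain that learns to decode whatever consistent pattern the motors produce.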

 

You may recognize David Eagleman from his six part PBS series “The Brain”. At TED Eagleman demonstrates a vibrating vest, which can help deaf people interpret sound.

 

Can a Computer Program be as Good as Your Brain in Interpreting Visual Data?

This is a question that Fei-Fei Li, Director of Stanford University’s Artificial Intelligence Lab and Vision Lab, has been asking. In her TED talk, Li recounts how she and her team used a crowd-sourced database of 15 million photographs to teach a computer program to function like the human brain: to not only correctly identify objects in a picture, but also interpret their context and meaning. The need for this is growing: more and more, we live in a world that relies on computer software to make important decisions on our behalf, and the demand keeps growing as technologies ranging from facial recognition systems to autonomously driven cars become part of our everyday lives.

But in the end, on the Internet it’s all about cat videos. Just kidding. But you’ll have to watch the “tail end” of Li’s talk to hear the computer interpret a photo of a cat lying on a bed. It probably won’t be too much longer before your home robot starts using Meerkat to live stream videos of your cat, complete with non-stop artificial-intelligence color commentary. For fans of Mystery Science Theater 3000, perhaps that far-off future will seem like a familiar place.

 

Fei-Fei Li is Director of Stanford’s Artificial Intelligence Lab. In her TED talk, Li presents ways that they are teaching computers to understand pictures and make interpretations of what they see.

 

Formaspace Dares You to Change the Future

We invite you to join the roster of satisfied Formaspace technical, manufacturing and laboratory furniture clients — including Apple, Boeing, Dell, Eli Lilly, ExxonMobil, Ford, General Electric, Intel, Lockheed Martin, Medtronic, NASA, Novartis, Stanford University, Toyota and more.

Give us a call today at 800.251.1505 to find out more about the Formaspace line of built-to-order computer workstations, industrial workbenches, laboratory furniture, lab benches and dry lab/wet labs — as well as our design and furniture consulting services. Like all Formaspace furniture, everything is backed by our famous 12-year, three-shift guarantee.
