It’s science fiction authors imagining how robots could improve our lives, and inspiring real-life scientists, engineers and roboticists to realize their vision.
It’s the last half-century of innovation in computer hardware and software meeting advancements in artificial intelligence and robotics.
Scroll down to discover some of the moments that made helper robots possible.
In the post-war era, technological progress accelerates at an unprecedented rate. Ideas about robots and artificial intelligence begin to take shape.
University of Pennsylvania professors John Mauchly and J. Presper Eckert build the ‘grandfather’ of digital computers, the Electronic Numerical Integrator and Computer (ENIAC). It fills a 20 x 40-ft room and has 18,000 vacuum tubes.
Researchers William Shockley, John Bardeen and Walter Brattain at Bell Laboratories invent the transistor. Its ruggedness, small size and low power consumption kickstart a wave of miniaturization. The possibilities of computing are revolutionized.
Alan Turing introduces ‘The Turing Test’ — a test of a machine's ability to exhibit intelligent behavior equivalent to, or indistinguishable from, that of a human.
Grace Hopper helps develop COBOL, one of the first widely used high-level programming languages. FORTRAN, developed a few years earlier by a team of IBM programmers led by John Backus, had already shown what such languages could do.
IBM’s chairman and CEO, Thomas J. Watson Jr., bets the company’s future on the IBM System/360 — the largest privately financed commercial project in history. The risk pays off, changing the computer industry forever. Work is revolutionized, productivity is enhanced and countless new tasks become possible.
Mainframes are big, expensive machines that can only be operated by scientists. They are powerful, but inaccessible. Then, the microprocessor is invented, kickstarting the personal computer revolution.
Intel introduces the first commercial microprocessor, the Intel 4004, designed by Ted Hoff, Federico Faggin and Stanley Mazor. Costing $200 (about $1,350 today), it contains 2,300 transistors, performs 60,000 operations per second and holds 640 bytes of memory.
Intel co-founder Gordon Moore theorizes that computing will dramatically increase in power, and decrease in relative cost, at an exponential pace. The insight, known as Moore’s Law, becomes the golden rule for the electronics industry, and a springboard for innovation.
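Moore’s observation compounds quickly. A rough, hypothetical illustration of how, assuming the classic doubling of transistor counts every two years and starting from the Intel 4004’s 2,300 transistors in 1971 (the starting figures come from the timeline above; the doubling period is the textbook simplification, not a claim from this page):

```python
# Rough illustration of Moore's Law: transistor counts doubling every
# two years, starting from the Intel 4004 (2,300 transistors, 1971).
def projected_transistors(year, base_year=1971, base_count=2300, doubling_years=2):
    """Project a transistor count assuming a fixed doubling period."""
    doublings = (year - base_year) / doubling_years
    return base_count * 2 ** doublings

for year in (1971, 1981, 1991, 2001, 2021):
    print(year, f"{projected_transistors(year):,.0f}")
```

By 2021 the naive projection lands around 77 billion transistors — the same order of magnitude as the largest real chips of that era, which is why the “law” held up as a planning rule for so long.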
Steve Wozniak and Steve Jobs release the Apple I. Thanks to its built-in computer terminal circuitry, it requires only a keyboard and a television to use. The power of computing is placed in the hands of everyday people for the very first time.
A year later, Apple releases the Apple II, the world’s first highly successful mass-produced personal computer. Software like VisiCalc, the original electronic spreadsheet, turns the computer from a hobby for enthusiasts into a serious business tool.
From one revolution to the next, the internet transforms the way we live forever. The world’s information is at our fingertips. Meanwhile, artificial intelligence leaps forward, starting its journey towards becoming, well, intelligent.
Tim Berners-Lee and his colleagues at CERN develop Hypertext Markup Language (HTML) and the Uniform Resource Locator (URL), giving birth to the first incarnation of the World Wide Web.
Larry Page and Sergey Brin, two computer science graduate students from Stanford University, pioneer a new way to search for and find information on the web. They call their invention ‘Google’.
Soon enough, the microprocessor becomes so advanced that it can be made much, much smaller. Now, computers fit in the palm of our hands.
From Mighty Atom in Japan to The Jetsons in America, the idea of robots assisting people has been part of our imaginations for generations. Finally, technology catches up, bringing the promise of helper robots into our reach.
The Google self-driving car project begins, which will later become Waymo LLC. Today, the Waymo Driver combines artificial intelligence with cutting-edge lidar, radar, cameras, and compute to perceive and navigate the world.
Researchers Alex Krizhevsky, Ilya Sutskever and Geoff Hinton submit a program called AlexNet to the 2012 ImageNet Large Scale Visual Recognition Challenge (ILSVRC). The program is a breakthrough for computer vision, nearly halving the existing error rate to about 16%. The idea that computers will be able to perceive our world starts to look achievable.
Born at the University of Tokyo, Schaft is the creator of a new kind of actuator that improves robots’ strength.
Industrial Perception, an imaging company spun off from the Menlo Park robotics company Willow Garage.
Meka Robotics builds research robots with series elastic actuators; it is best known for the M1 humanoid and Dreamer.
Redwood Robotics, a collaboration between Willow Garage, SRI and Meka to design a low-cost robotic arm.
Bot & Dolly uses robot arms for precise and repeatable camera control.
Holomni, creator of high-tech wheels for omnidirectional motion.
Boston Dynamics, builder of all kinds of futuristic bots, from the bipedal humanoid robot Atlas to the impossibly fast, four-legged Cheetah.
While part of Alphabet, Boston Dynamics introduces ‘Spot’, a quadruped robot capable of climbing stairs and traversing rough terrain with ease. Spot’s agility is thanks to a focus on ‘athletic intelligence’ that mimics the way humans and other animals perform functions such as walking, running and climbing.
Spot is controlled by a human operator, and can be used for a variety of missions, such as site documentation, thermal inspections, and gas leak and radiation detection.
DeepMind, an Alphabet company, creates AlphaGo — the first artificial intelligence to defeat a professional human Go player and a Go world champion.
As simple as the rules may seem, Go is profoundly complex. There are an astonishing 10 to the power of 170 possible board configurations, more than the number of atoms in the known universe. This makes the game of Go a googol times more complex than chess.
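The scale of that 10^170 figure can be sanity-checked with a naive counting argument: each of the 361 points on a 19 x 19 board is empty, black or white, giving 3^361 (about 10^172) raw arrangements; the true count of legal positions is slightly smaller because not every arrangement can arise in play. A small sketch using Python’s arbitrary-precision integers:

```python
import math

# Naive upper bound on Go positions: each of the 361 points on a
# 19x19 board is either empty, black, or white.
points = 19 * 19                   # 361 intersections
raw_configurations = 3 ** points   # exact big integer, ~10^172
atoms_in_universe = 10 ** 80       # common order-of-magnitude estimate

digits = points * math.log10(3)    # number of decimal digits, roughly
print(f"3^{points} is roughly 10^{digits:.0f}")
print("more than atoms in the universe:", raw_configurations > atoms_in_universe)
```

Even this crude bound dwarfs the estimated 10^80 atoms in the observable universe, which is why brute-force search, the approach that worked for chess, was hopeless for Go.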
Everyday Robots is born, combining hardware and software to teach robots to help us in our daily lives.
The Everyday Robot Project is born. From X HQ, we set out to build the world's first learning robot.
We start by building lots of experimental applications. Prototypes that, while playful, push our technology forward.
We carry out a mobility benchmark test with a robot that can take user requests and navigate to and from locations to transport items.
The origins of the “ML/RL” bet — we build a robot that uses learned behaviors to move along a route, collect dishes, and load them into a dishwasher.
The first 3D maps of X HQ are created by robots that autonomously navigate more than 1,000 kilometers.
We teach a robot to clear items from desks across X HQ.
No robot is totally self-reliant; they all need support from people. Our team of Robot Operators is on hand to supervise, teach and advise our robots when needed.
We demonstrate that data collection for robot grasping can be reduced from months to days by training 50,000 virtual robots in simulation using Google Cloud.
We achieve a breakthrough in reinforcement learning for robotics, in collaboration with Google Research. Our system learns closed-loop reactive behaviors, successfully grasping thousands of objects through trial and error.
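The actual system (large-scale, vision-based Q-learning) is far beyond a snippet, but the trial-and-error flavor can be caricatured with a tiny epsilon-greedy learner that discovers which of several grasp strategies succeeds most often. Everything here, from the strategy names to the success probabilities, is hypothetical, not taken from the real system:

```python
import random

# Toy trial-and-error grasp learner (epsilon-greedy bandit). The grasp
# strategies and their true success probabilities are made up.
TRUE_SUCCESS = {"top-down": 0.7, "side-on": 0.4, "pinch": 0.2}

def best_estimate(counts, successes):
    """Strategy with the highest observed success rate so far."""
    return max(counts, key=lambda s: successes[s] / max(counts[s], 1))

def learn(trials=5000, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    counts = {s: 0 for s in TRUE_SUCCESS}
    successes = {s: 0 for s in TRUE_SUCCESS}
    for _ in range(trials):
        if rng.random() < epsilon or not any(counts.values()):
            strategy = rng.choice(list(TRUE_SUCCESS))    # explore a random grasp
        else:
            strategy = best_estimate(counts, successes)  # exploit the best so far
        counts[strategy] += 1
        successes[strategy] += rng.random() < TRUE_SUCCESS[strategy]
    return best_estimate(counts, successes)

print(learn())  # with enough trials, the learner settles on "top-down"
```

The key idea carries over: no one programs the best behavior in; the robot's own successes and failures gradually shape its policy.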
We teach robots to solve a sustainability challenge in Alphabet buildings. Taking on the problem of trash sorting, robots use reinforcement learning to reduce contamination in recycling, compost and landfill streams from 20% to 3%.
This breakthrough has the potential to help solve America’s trash troubles, which see 417 Empire State Buildings’ worth of waste sent to landfills every year.
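The Empire State Building comparison holds up as an order-of-magnitude figure. A back-of-envelope check, where both numbers are outside assumptions (the building’s commonly cited weight of roughly 365,000 tons, and the US EPA’s estimate of about 146 million tons of municipal waste landfilled in 2018):

```python
# Back-of-envelope check of the landfill comparison.
# Both figures below are assumptions, not from the original text.
EMPIRE_STATE_TONS = 365_000        # commonly cited weight of the building
EPA_LANDFILL_TONS = 146_000_000    # US EPA landfilled-waste estimate, 2018

equivalent_tons = 417 * EMPIRE_STATE_TONS
print(f"417 buildings ≈ {equivalent_tons:,} tons of waste per year")
print(f"ratio to the EPA estimate: {equivalent_tons / EPA_LANDFILL_TONS:.2f}")
```

The product is about 152 million tons, within a few percent of the EPA figure, so the image is a fair one.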
In a time of global pandemic, we teach a robot to take on a number of tasks that keep offices safe and clean.
Right now, there are 100 helper robot prototypes learning, experimenting and operating as an integrated system. These robots sort waste, wipe tables and learn new tasks across our offices. Over the past year, 240 million virtual robots have practiced in simulation — that’s almost as many cars as there are in the USA. Together, our real-world and virtual fleets learn as a whole, becoming better and faster in tandem.
Developed in partnership with Google Research, PaLM-SayCan is an AI research breakthrough that unifies natural language understanding with the physical capabilities of robots. Although we’re currently only testing this technology in a lab setting, PaLM-SayCan shows it is possible for robots to fulfill complex, natural language instructions by combining the reasoning abilities of large-scale language models with learned robot actions.
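At a high level, a SayCan-style system scores each candidate robot skill by combining two signals: how useful the language model judges the skill to be for the instruction, and how likely the robot’s learned value function believes the skill is to succeed from its current state. A minimal sketch of that scoring idea, where the skills and all the numbers are hypothetical, not from the PaLM-SayCan system itself:

```python
# Toy sketch of SayCan-style skill selection: combine a language model's
# relevance score with a learned affordance (success probability) and
# pick the skill with the highest product. All numbers are hypothetical.

def select_skill(llm_scores, affordances):
    """Return the skill maximizing relevance * estimated success probability."""
    return max(llm_scores, key=lambda s: llm_scores[s] * affordances[s])

# Instruction: "I spilled my drink, can you help?"
llm_scores = {          # how relevant the LLM finds each skill (made up)
    "find a sponge": 0.50,
    "pick up the sponge": 0.30,
    "go to the kitchen": 0.15,
}
affordances = {         # robot's estimated success probability (made up)
    "find a sponge": 0.9,
    "pick up the sponge": 0.2,   # no sponge in view yet, so unlikely to work
    "go to the kitchen": 0.8,
}

print(select_skill(llm_scores, affordances))
```

The combination is what matters: the language model alone might suggest a sensible-sounding step the robot cannot actually perform, while the affordance term grounds the plan in what is physically possible right now.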
It’s a journey made possible by the ingenuity of those who came before, and one that will require the help of many others. Although we understand the difficulty that lies ahead, we are excited to discover what we’ll learn, and who we’ll meet, along the way.