by Ed Bayes (Policy Lead) and Johnathan Jenkins (General Counsel)

Oct 04, 2022


From offices to institutions of care to, eventually, our homes, our vision is for helper robots to be as transformative in the physical world as computers have been in the digital world. Even though Everyday Robots is still far from deploying robots beyond Alphabet spaces, we take the broad societal consequences of our work very seriously. That’s why we’ve developed a set of values that puts responsible innovation front and center in everything we do, and why we’ve brought together a multidisciplinary team of economists, labor leaders, ethicists (and even dancers) to help us tackle tough questions about how robots will fit into everyday life today, so we can work toward a better future tomorrow.

But we know we can’t answer such questions on our own. So we engage with experts to anticipate the unintended consequences of our technology and proactively mitigate potential issues. As part of this engagement, we were invited to attend WeRobot 2022, a conference that brings together over 150 leading scholars and practitioners to discuss legal and policy questions relating to robots. Here are some of our top takeaways and what they might mean for helper robots.


Professor Woody Hartzog introduces the “Bad Ideas Lounge,” a tongue-in-cheek panel where attendees brought their worst ideas for regulating robots, aided by constraints.

What’s top of mind in the robotics law and policy community?

Learning from steamboats: yesterday's policy for tomorrow's robots

Over three days of workshops and paper presentations, academics from the UK to Japan presented papers on cutting-edge robot policy and law theory, design, and development. For example, in one session, researchers from the University of Edinburgh presented on what policymakers today can learn about regulating breakthrough technologies by looking to the past, drawing lessons from steamboat regulation in the 1800s. In another, researchers from Colorado, Michigan, and Richmond presented on the ethical considerations involved in balancing human oversight of complex autonomous systems. Others compared how the US and the EU are approaching the question of AI regulation, and explored different models for AI governance. (If you’re interested in reading these papers or others, you can find the full program here.)

“I'm walkin' here!”: robotics and city planning

Sprinkled among the lessons, there was also plenty of fun, including tongue-in-cheek workshops where attendees tried their hand at writing fictional policies and laws to regulate robots. Attendees put themselves in the shoes of city government officials thinking through how to regulate sidewalk robots, with teams discussing how robots should share sidewalk space with pedestrians, whether robots should announce themselves to passersby, and how to ensure they don’t cause damage or injury.


Some cards from one of the interactive workshops.

Furry robots?! The best of the bad ideas lounge.

In another session, teams flipped this concept on its head, bringing their “terrible ideas for regulating robots” and then designing speculative robots to follow the rules they came up with. What’s the worst that could happen? The weird and wacky robots that followed ranged from sidewalk robots that yell at passersby to “get out of the way” to slothbots that move incredibly slowly (for safety) and are covered in fur (to build empathy) 🦥. One team even proposed requiring all robots to be made of human skin (eww!). While this session was all fun and games, there was a serious point: lawmakers and regulators have to balance enabling innovation with public-good considerations like safety, and even the most well-meaning rules can sometimes lead to unforeseen outcomes. (It’s safe to say we don’t expect any of these ideas to make their way into the real world any time soon.)


A helper robot wiping a table in a cafe.

What does this all mean for helper robots?

As Everyday Robots’ advisor Andy Stern, former President of the SEIU, puts it: “change is inevitable, but progress is optional.” Because of the radical potential of our robots to change how we live, work, and age, responsible innovation has been front and center in everything we do from day one.

Despite our early stage, we have kickstarted a number of initiatives, including a collaboration with Google’s Responsible Innovation team to integrate Google’s AI Principles into the development of our robots, with a focus on fairness, accountability, safety, and privacy. Our legal and policy leads are also working with the team to answer challenging questions raised by our technology, such as: How can we ensure our algorithms are fair? And how can we ensure our robots complement workers?

What role will Everyday Robots play in bringing about a robot-enabled future? And how will we proactively manage the potential unintended consequences of our breakthrough technologies? While there is no easy way to answer these questions with certainty today, we’re optimistic that if we continue to focus on solving problems, building intentionally, and working in partnership with others who share our positive vision, we can tackle the tough questions about how robots will fit into everyday life.

Everyday Robots thanks the organizers of WeRobot for hosting us this year, including Ryan Calo and the rest of the team at the University of Washington’s Tech Policy Lab.