Podcast
The Future Of Human-Centric Robots
In this episode of The Future Of, Jeff is joined by Jonathan Hurst, Co-founder & CTO at Agility Robotics; James Dietrich, Robotics Solutions Director at Fresh Consulting; and Grace Brown, Co-founder of Andromeda, to discuss human-centric robots and the demand for robots in today’s world.
James Dietrich: When you build a robot that looks even anything like a human, the expectation management is really challenging. All of a sudden, the consumer or the business says, “Gosh, this needs to operate like C-3PO or K-2SO,” or pick your notable humanoid robot from what science fiction has shown us. That focus on expectation management is just a challenge we still haven’t really been able to get over yet in the human-centric space.
***
Jeff Dance: Welcome to The Future Of, a podcast by Fresh Consulting, where we discuss and learn about the future of different industries, markets, and technology verticals. Together, we’ll chat with leaders and experts in the field and discuss how we can shape the future human experience. I’m your host, Jeff Dance.
***
Jeff: In this episode of The Future Of, we’re joined by Jonathan Hurst and James Dietrich to explore the future of human-centric robots. Given how fast the robotics space has been progressing and the rich, deep history behind it, we’re excited for this discussion and to learn from both of you, experts who have been in this space for quite some time. If we can start with you, Jonathan, would you care to tell the listeners a little more about yourself and about Agility, where you’re a co-founder and the CTO?
Jonathan Hurst: Thanks. Yes, I have been thinking about legged locomotion, walking and running, and physical interaction, in general, my whole career. That’s taken me through an academic path. I was a professor before this, studying biomechanics of animals, studying reduced-order models, simulations, and then if you really understand something, you should be able to reproduce it, right?
Figuring out how to capture the principles of physics, how do these things work, and making robots that capture that core physics and really demonstrate walking and running the same way you see an animal doing it. That’s my background in a nutshell. I co-founded this company because I want to make a difference with it. It’s an enabling technology if you can build robots that go where people go and can really interact with the world in many of the same ways that people can, right?
Our vision is to enable humans to be more human. It’s about having robots come into our world and our environment, do a lot of the dull, dirty, and dangerous work that we’d rather not do, and help us, really enabling people to do the things people are good at: the decision-making, the variety, the creativity, the ambition, all of those things. These robots are really useful tools, and that’s what we want to create.
Jeff: Amazing. I noticed that Agility recently raised a $150 million Series B. I’ve been reading about the company in the news, with everything you’ve accomplished, hearing about successful pilots with customers, and more. It’s impressive. You also mentioned your background in academia. Can you tell us a little more about that?
Jonathan: Sure. I came through the Robotics Institute at Carnegie Mellon. That’s the oldest robotics program in the world and, certainly at the time I was there, by far the largest. I then came over to Oregon State and helped co-found the robotics program we have there now, with over 100 graduate students earning PhDs and master’s degrees in robotics. It’s a great robotics program.
The core of my research there, again, was understanding legged locomotion, but at that point it was about the core science pieces of it. It wasn’t so much the engineering or a useful application or use case. At the time, it was, can we make a robot that really captures the core dynamics we see in animals, walk and run, and get the efficiency and the robustness?
That means walking and running outdoors over all kinds of terrain. We succeeded at that with our robot, ATRIAS; if you Google “ATRIAS robot,” you’ll find a bunch of videos of it. It doesn’t look anything like an animal. It’s a big contraption, and it has more in common with high-speed pick-and-place robot arms than something that looks like a person or a bird.
The whole point of that robot was to capture a math model from biomechanics and demonstrate that this math model really can describe the behaviors we want, and it did. Once we captured the right physics and got the controller to the point where it settled into the right dynamics, we were walking and running, accelerating to different speeds, going outdoors over grass and gravel and pavement.
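For readers curious about the underlying math: the biomechanics model in question is commonly the spring-loaded inverted pendulum (SLIP), a point mass bouncing on a massless leg spring. The equations below are the textbook form, not necessarily Agility’s exact formulation. During stance, with leg length $\ell$ measured from the foot, leg angle $\theta$ from vertical, mass $m$, spring stiffness $k$, and rest length $\ell_0$:

$$m\ddot{\ell} = m\ell\dot{\theta}^2 - mg\cos\theta + k(\ell_0 - \ell), \qquad m\ell\ddot{\theta} = -2m\dot{\ell}\dot{\theta} + mg\sin\theta$$

In flight, the mass is simply ballistic, and the two models trade off at touchdown and liftoff.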
That was really the demonstration that allowed us to spin out and start the company. I joined up with my good friend from grad school, Damion Shelton, who had started a company while I was a professor and sold it. He had gone down the entrepreneurial route and came on as CEO, along with my student Mikhail Jones, who was really responsible for the code that made ATRIAS walk and run successfully. At that point, we developed the Cassie robot. That was our first product at the company, and we started to build and grow from there.
Jeff: Awesome. I was recently reading about the Guinness World Record that was set, and I saw your name in that article. Can you tell us just briefly about that before we move over to James?
Jonathan: Sure. That’s the Cassie robot I just mentioned. Cassie was, like I said, the very first product that Agility Robotics sold. It’s a bipedal robot sold to the research market for the purpose of figuring out control algorithms for walking, running, and so on. That’s what my lab at the university has been doing in parallel as we’ve been building out the company, Agility Robotics.
One of the things that you can do at a university that you can’t do at a company is blue-sky ideas. Just try something; you’re not sure if it’s going to work, so let’s check it out. That’s National Science Foundation and DARPA kinds of work. At that point, our real focus was to say, “Hey, there’s a lot of limitations to the control methods that we’re using now.
“There’s a lot of limitations to an engineer being able to write down a model, a human-understandable model, or a human-understandable equation and describe the behavior that you want. We’re going to need to figure out something more like machine learning, more like some sort of optimization tools in order to really create this complex, nonlinear, strange behavior.”
We know a lot about the behavior. We can write down all the symptoms that we see in the behavior. We started with, say, running pretty quickly on a treadmill, and then we took it out to do our first outdoor run. It was interesting. The crown of the road had enough slant, which we hadn’t modeled or tested, that we weren’t sure it would be stable. Then we were able to run a 5K outdoors on about half a battery on Cassie.
Then that’s progressed. The robot can go up and down stairs completely blind, and recently it ran the 100-meter dash and broke the world record for the fastest 100 meters by a bipedal robot. What’s really exciting is that it’s now outperforming any of the control methods we used before. This machine-learning approach is just a really promising way forward.
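For a sense of what such a learned controller looks like structurally, here is a minimal sketch, not Agility’s or the lab’s actual system; the observation size, action size, and loop rate are assumptions for illustration:

```python
# A rough sketch of the *shape* of a learned locomotion controller.
import numpy as np

class MLPPolicy:
    """Tiny feedforward policy; real weights would come from training."""
    def __init__(self, obs_dim, act_dim, hidden=64, seed=0):
        rng = np.random.default_rng(seed)
        self.w1 = rng.normal(0.0, 0.1, (obs_dim, hidden))
        self.w2 = rng.normal(0.0, 0.1, (hidden, act_dim))

    def act(self, obs):
        h = np.tanh(obs @ self.w1)
        return np.tanh(h @ self.w2)   # normalized joint-position targets

# Cassie has 10 actuated joints; ~40 proprioceptive inputs is a guess.
policy = MLPPolicy(obs_dim=40, act_dim=10)
obs = np.zeros(40)                    # stand-in for real sensor readings
for _ in range(2000):                 # one simulated control episode
    targets = policy.act(obs)         # targets feed joint-level PD loops
    # obs = step_robot(targets)       # placeholder for robot/simulator I/O
```

In practice, the network weights would come from training, often reinforcement learning in simulation, and the outputs would feed conventional joint-level control loops.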
Jeff: Congratulations. That’s fun. It’s interesting to hear about that duality of the depth and being able to explore the unknown with academia but then also having the reality of business. James, if you can tell us more about your experience with robotics and also with human-centric humanoid robots?
James: Sure, absolutely. I come from the completely other side of the spectrum from Jonathan. I was really fortunate to stumble my way into robotics back in 2013 with a French company called Aldebaran Robotics, which was later acquired by SoftBank. The focus was more on the humanoid side of robots versus the more broadly human-centric robots we’re talking about here. The flagship product was the NAO humanoid robot. It was really focused on expanding human-robot interaction and, specifically, social interactions.
The founder of the company started it after a really awful heatwave rolled through France and took the lives of several elderly folks who needed more assistance in the home, someone to remind them to drink water, take their medications, or keep themselves safe and healthy. That prompted him to envision a future where these types of humanoid robots could serve a really valuable purpose in the home, servicing an aging population that needed that additional support.
I was more on the go-to-market and business development side. We were selling this robot to academia for, again, advanced human-robot interaction research. We realized it had a really powerful place in the STEM education world, helping to teach young kids how to write code and program robots to mimic certain types of human interactions and create the kind of robot friend and engagement they would want.
We found some interesting value in using the robots in retail and other healthcare settings, and a lot of really impressive work with the autism community. From there, I was fortunate enough to move to a company called Sarcos Robotics, which was and is doing some incredible work in advanced exoskeleton development, pairing the intelligence, judgment, and decision-making of humans with the strength, durability, and reliability of robots.
It’s neat to see human-centric robotics from two very opposite ends of the spectrum. Over the last few years, I’ve been at Fresh Consulting, where I’ve been given a fantastic opportunity to help other companies solve really challenging and complex issues within their businesses with different types of robots and automation, focusing on helping people see the value and positivity that robots can bring in helping humans operate more efficiently or safely in any number of working environments.
Jeff: Appreciate you being on the show. Jonathan, for you, how do you define human-centric robots and make some of those distinctions? Help us understand that a little bit better.
Jonathan: It’s challenging to decide what to call our robot. Agility Robotics is building a robot that does look a bit like a person, so it’s often called humanoid, but it’s really designed to do useful work in the world. In its initial applications, it’s made to pick up totes in warehouses and move them to conveyor belts, these very process-automated jobs that are extremely repetitive and ripe for automation.
The robot’s really made to operate with people and for people and to be a partner to people. What we’re not trying to do is make a robot that looks like a person. The reason it may look something like a person is that it’s made for human environments, but everything about it is entirely function-first, first-principles: how are we best going to do useful tasks in the world, and how are we best going to interface with the people who are partners to the robot?
It’s, in some ways, a nuanced distinction because you can get a human morphology by copying a person and biomimicry and saying, “Well, I’m going to make a humanoid that has the same face, the same five fingers, the same everything as a person,” or you can make something that ends up looking a little bit like a person because you’re trying to do some of the same useful kinds of physical interaction tasks.
That’s the nuance we’re trying to get at, at human-centric. We are not trying to build a robot to copy how people look, but we are trying to build a robot that does useful things and is a partner to people in human environments.
James: I would just add to that, if I could, Jeff. It’s the unique combination of several things that makes humans human, and not all human-centric robots have to incorporate all of them. Not all human-centric robots have to have legs or arms. There are a lot of different facets that, to me, occupy this category: are you focused on advanced dexterity?
Are you like Jonathan and other companies that are highly focused on maneuverability? Others are creating robots really built around the sensing, the vision, the perception, and the AI and ML associated with that. Others approach it from a durability standpoint. There are a lot of different facets of what a human-centric robot could be.
Jonathan: One more comment, though, on human-centric, on, say, biomimicry versus bio-inspired, right? Have you heard of the cargo cults? I don’t remember the exact details of the story, but I think it was World War II era. For a long time, there were supply drops coming into some of these Pacific islands, with people on the ground wearing headphones and waving to tell the airplanes where to drop.
For years after that, there were people on the islands who had no idea what the technology was, and they would fashion things that looked like headphones and wave sticks, hoping that supplies would drop. Even though it looked the part, it didn’t work. That’s like biomimicry: you’re copying some feature that actually isn’t the relevant one. Bio-inspired means really understanding deeply why something works the way it does, deciding if that’s a really useful feature for what you want to implement, and then doing that.
Where our robots end up looking a little bit like the human form, it’s because we understand why and do it for that functional reason, which I think is very different from just making the appearance similar.
Jeff: Yes, and if we’re going to have robots work with us, integrate into some of the work we do, or support us, they’re operating in our world. They’re connected to our world, so there are aspects where they need to fit in, too, right?
Jonathan: That’s true. That’s the human-robot interaction aspect of it, right? How are people going to react to these things? That matters, but that’s a functional choice.
James: Absolutely. Jonathan, you might know him, but Luis Sentis, who leads the Human Centered Robotics Lab at UT Austin, talks about two really interesting facets of robot intelligence. One is intervention: building a robot with logic that assesses a situation for optimal actions. We talked about this a while ago; your modern washing machine has this intervention-like capability, where it operates based on the load it senses inside it or how dirty it senses that load to be.
It could be a smart thermostat in the same way. That concept of intervention and intelligence in future human-centric robots is a little scary, as is this other aspect of self-efficacy: basically, robots that have the self-confidence to achieve a certain behavior or task and can grow and expand on that over time. Those are two really interesting, but also potentially polarizing, aspects.
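To make “intervention” concrete with the washing-machine example, here is a minimal sketch of sense-then-plan logic. It is purely illustrative; the sensor readings and thresholds are invented, not from any real appliance:

```python
# Intervention in Sentis's sense: the device senses its situation and
# adjusts its own plan instead of blindly running a fixed program.
def plan_cycle(load_kg: float, turbidity: float) -> dict:
    """Pick wash parameters from sensed load weight and water turbidity."""
    minutes = 30 + 5 * load_kg            # heavier load -> longer cycle
    if turbidity > 0.7:                   # very dirty water -> extra rinse
        minutes += 15
        rinses = 2
    else:
        rinses = 1
    return {"minutes": round(minutes), "rinses": rinses,
            "water_liters": round(8 * load_kg, 1)}

print(plan_cycle(load_kg=4.0, turbidity=0.8))
# {'minutes': 65, 'rinses': 2, 'water_liters': 32.0}
```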
Jeff: We see a lot of robots in our space. We’re working with lots of different robot companies. Jonathan, what makes some of the core components of human-centric robots unique compared with other robots?
Jonathan: All right. Safety is a big one, a big question. Humans, obviously, are a lot better at manipulating things than robots are in many cases, and the opposite is true in other cases. Think, for example, about a CNC machine: it can make super precise parts. But if you, as a human, hold a woodworking router and try to freehand a shape in wood, you’re not as good at that as a robot is.
There are a lot of things robots do that are really impressive, and that’s great. Then there are a lot of things that humans do that robots can’t. A lot of that has to do with force sensitivity, and that’s a big one: knowing what forces you’re applying and being able to control those forces. But it’s even more nuanced than that. It’s having a preset, a set of dynamics that are already described.
Maybe an example of that would be a physical pogo stick, right? It’s a physical spring, a physical system that wants to bounce, and so you can bounce on a pogo stick. A really sophisticated version of that is your hand: the way the tendons are arranged, the specific compliance of the tendon, the specific non-round shape of each knuckle such that you have, basically, a configuration-dependent gear ratio, and all of these structures in how the muscles attach. Call it engineered dynamics.
You’re setting up a behavior that’s a combination of the passive dynamics of the physical hardware and the software control. Figuring out how to do that with a robot is going to be a long-term, 100-years-plus kind of effort to really understand how to do great physical interaction with the world. The thing I’m excited about with our robots, robots like Digit that operate in human spaces and do useful things, is that it’s a platform.
It’s like a smartphone: look how many different things you can do on your smartphone, but it’s all based on data and information. All the stupid apps on your phone that nobody would’ve imagined when smartphones first came out, that’s the kind of thing that’s going to be useful on a Digit robot, because it’s going to physically interact with the world and not just provide data and information to you.
Jeff: James, we know Agility is a main player in the space. It’s really fun to watch their growth, and we’re excited about their future. What are examples of other organizations and companies that are in this space as well?
James: Absolutely. I think some of the leading ones that come to mind are doing really meaningful things, Agility obviously being one of the key players there. If you’re talking about companies like Agility, you clearly have to mention Boston Dynamics as well, who are doing equally impressive things with their bipedal and quadrupedal robots that can dance, do parkour, and show really impressive agility and balance.
Shadow Robot is another one that comes to mind. They’re focused on a really specific area: advanced robot dexterity and manipulation. We talk about our world being designed by humans for humans, so being able to mimic human hands, that level of dexterity and sensitivity, is equally important on the journey to bringing all of these things together.
My father is alive today because of the Intuitive Surgical robot, the da Vinci. I would still argue it falls into this broader category because of its integrated, intelligent, and intuitive capabilities. Jonathan, you mentioned Honda and the ASIMO robot earlier; everyone’s seen it, it’s widely associated with the space. Although it never really advanced to doing meaningful things in the real world, it inspired a lot of people and shed light on what was possible, or what could be possible.
Sarcos, which I mentioned, where I worked previously, is making huge advancements in robotic exoskeleton technology, pairing human and robot in a really unique way. Again, combining human intelligence, instincts, and judgment with robotic strength, endurance, and precision, those are all really big stepping stones and strides toward bringing humans and robots together. Those are a couple of companies that come to mind that are really taking this space by storm and doing really meaningful work in it.
Jonathan: It’s been a dream of humanity forever: Rosey the Robot, automatons, things like that, before people even understood the technology. In fact, the word robot was coined in a play about automatons before the technology existed, right? It’s interesting how we’re at a point where it’s starting to become possible. A lot of people are starting to realize this is possible once they see a few examples of it. A few companies start to raise money. A few companies start to produce a product. Others say, “Hey, I want to do that,” and start to get into the game. That’s exciting.
James: There is certainly going to be a lot of space for many different types of human-centric robot companies to have a field of play. I think there are still a lot of reasons why these robots are failing. At the company I worked for previously, with the NAO humanoid robot and the Pepper robot, we struggled with the state of natural language processing.
Our robots were intended to speak 20 different languages and communicate back and forth with humans. We discovered pretty early on that the robot had a really hard time speaking with little children and understanding the nuances of their speech patterns and how they pronounce and enunciate words, and similarly with the elderly. That’s just one nuanced area where that robot failed, if you will.
Right now, I think there’s still a big challenge with cost versus utility. Most human-centric robots have been prohibitively expensive with limited sets of capabilities and value they can offer. I think we’re going to be watching that curve come down over the next several years.
###
Jeff: In addition to the conversation we had with our guests on today’s episode, we asked another expert to provide their insights on the future.
Grace Brown: Hi, everyone. I’m Grace. I’m one of the co-founders of Andromeda Robotics, an early-stage robotics startup here in Melbourne where we build humanoid companion robots for Australia’s elderly and disability healthcare sector. I’m a mechatronics graduate from Melbourne University and a self-proclaimed mathematician. Engineers, storytellers, and scientists have been obsessed for hundreds of years with creating a system that eloquently simulates our behaviors and traits.
I think what would be really cool is seeing how well we can actually emotionally connect with that simulated, I guess, digital twin of us. When we engage with them, do we get the impression that they’re an independent, autonomous being? Just how compelling is that?
###
Jeff: I’m glad you touched on that, because we have a long history with humanoid robots. We talked about this with Honda and others, and, Jonathan, you mentioned where the word robot was coined. This isn’t new, but what’s changing right now that gives us a better chance of success? Jonathan, I’m interested in your thoughts on that.
Jonathan: Sure. Maybe I’ll start by pointing out that, like I mentioned earlier, there’s a lot that’s really hard for a robot to do in terms of manipulation and physical interaction with the world. That doesn’t change whether a robot is shaped like a humanoid or is a six-axis robot arm. It’s still a really hard problem. Even if you shape it like a humanoid, you’re going to have to understand those dynamics and pieces.
Even if it looks like something that should be able to do that job, looking like it can and actually being able to are very different things. Yes, the proof’s in the pudding. Now, I’ll say there are a couple of things that I’ve observed really shifting in recent years. One of them is basic science. I focus, maybe, on the legged locomotion piece because I think of legged locomotion as one of the hardest possible physical interaction tasks.
It’s not a fine manipulation task, but it’s a mobile robot that has to apply large enough forces to lift itself off the ground. It has regular impacts with the ground, over and over and over. It’s got to apply those forces throughout stance and then have a swing phase. You’ve got this model where you’re swinging your lightweight leg or limb or whatever it is, and then the model switches when you impact the ground, and now you’re trying to move the mass of the robot.
That’s a hybrid system, and it’s just incredibly complicated. I’ve heard people describe running as juggling yourself, and if I’m going to juggle a mass, that’s a hard physical interaction task. If we can really tackle the hardware and the software to do all different kinds of gaits and behaviors, we’re also going to be able to manipulate boxes and packages and things like that using a lot of the exact same tools and the same engineering understanding. On the hardware side, of course, there’s a different specialized engineering system for each, but the principles are there.
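To see why a model that “switches when you impact the ground” makes locomotion a hybrid system, here is a toy one-dimensional spring-mass hopper; the parameters are illustrative, not from any real robot:

```python
# A 1-D spring-mass hopper whose dynamics switch at touchdown and liftoff.
m, k, g, l0 = 80.0, 20_000.0, 9.81, 1.0   # mass, leg stiffness, gravity, leg rest length
y, vy, dt = 1.2, 0.0, 1e-4                # start in flight, above leg length
phase, apexes = "flight", []

for _ in range(200_000):
    if phase == "flight":
        ay = -g                           # ballistic: gravity only
        if y <= l0 and vy < 0.0:
            phase = "stance"              # touchdown: the model switches
    else:
        ay = -g + k * (l0 - y) / m        # spring force joins gravity
        if y >= l0 and vy > 0.0:
            phase = "flight"              # liftoff: switch back
    if phase == "flight" and vy > 0.0 and vy + ay * dt <= 0.0:
        apexes.append(y)                  # record each hop's apex height
    vy += ay * dt
    y += vy * dt

print([round(a, 3) for a in apexes[:5]])  # roughly constant: energy is conserved
```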
I’d say 10 or 20 years ago, even though we were familiar with seeing animals and humans do amazing physical interaction, walking, and running tasks, the actual understanding of how that works, being able to write down the equation that describes it and then build an engineered system that can reproduce it, was very, very limited, just from a basic-science standpoint.
We understand so much more now. Like I mentioned, ATRIAS was the first machine to reproduce human walking gait dynamics. That’s a big deal, right? It had never been understood before to the point where you could reproduce it. Now it has, and that will never change. It’s something that is now understood: how does walking work? You can write down an equation, you can build a system, and you can get those same dynamics and behaviors.
There’s still a ton to figure out, of course, but there are some core science pieces of that understanding, and that understanding is just broadening in the world. When I was a graduate student, the Dynamic Walking conference started with four of us at a coffee house, and it has grown into a really big regular conference. All of those people have gone out and joined companies, started companies, become professors, et cetera.
That knowledge is really starting to become much more broadly understood, so that’s some of the basic science. The other piece is, how do you coordinate, say, 26 or 30 motors and 100-plus sensors, including complex ones like cameras and lidar, but also all of the encoders and all of the thermal sensors and everything? Just think: you’ve got hundreds of sensors giving you data.
Now write an equation or an algorithm that outputs torque to 30 different motors and coordinates that to play basketball. Twenty years ago, it was much more about, can I control a six-degree-of-freedom arm using inverse dynamics, inverse kinematics, and so on? Now it’s starting to open up as people understand various learning techniques and optimization techniques, figure out control hierarchies that use reduced-order models and map those to the more complex systems, and, in real time, control that many degrees of freedom with that much data coming in from sensors. That’s also a new kind of enabling technology. Those two things coming together, that’s why you’re starting to really see some success out of robots.
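A hedged sketch of what such a control hierarchy can look like, assuming a generic structure rather than any robot’s real kinematics: a reduced-order layer treats the robot as a point mass and picks a body force, and a lower layer distributes that force across all the motors via the Jacobian transpose:

```python
import numpy as np

def reduced_order_policy(com_pos, com_vel, target):
    """High level: treat the robot as a point mass, pick a 3-D body force."""
    kp, kd = 400.0, 40.0
    return kp * (target - com_pos) - kd * com_vel

def map_to_joint_torques(f_desired, jacobian):
    """Low level: tau = J^T f distributes the body force to the joints."""
    return jacobian.T @ f_desired

rng = np.random.default_rng(0)
J = rng.normal(size=(3, 30))        # stand-in Jacobian: 3-D force -> 30 motors
f = reduced_order_policy(com_pos=np.array([0.0, 0.0, 0.9]),
                         com_vel=np.zeros(3),
                         target=np.array([0.1, 0.0, 1.0]))
tau = map_to_joint_torques(f, J)    # torque command for all 30 motors
print(tau.shape)                    # (30,)
```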
James: Just to add to that, Jonathan, if I can, we’ve talked about what makes humans innately unique: we have bones, we have skin, we have things that make us very durable and reliable in an otherwise really chaotic world. That concept of sensory skin for robots, things that make them equally durable and reliable in our world, is a new area where I’m excited to see additional advancement, allowing robots to respond to human touch within the context of the other information they’re receiving from their environment, really learning when and how to be delicate or forceful as necessary.
We’ve seen early-stage robots for manufacturing, but they were relegated to cages because it wasn’t safe for humans to be around them. Now we’re entering the time of really great cobots that work much more closely with humans, without those types of safety barriers. The next evolution is, how can we create advanced sensing, perception, and skin for robots so that they’re that much more in tune with and aware of their place in our environment?
Jonathan: You know what’s interesting, too? When you look at this bio-inspired idea, you think, “Wow, animals and humans are so amazing. Everything must be really optimal in how those sensors work, how those actuators and muscles work.” They’re not. Human muscles are really not ideal for a lot of the things that we do.
There are a whole lot of, I would call them, compromises that are made because of the limitations of the dynamics of a human muscle. That’s where a huge amount of the complexity in our muscular structure comes from: physical adaptations to deal with the fact that, hey, our actuators aren’t that great for what we’re trying to do with them.
Similarly, when you’re trying to use electric motors instead of muscle, or a hydraulic system or whatever, they have a very different set of limitations.

It’s a different set of real limitations that you then have to figure out how to engineer around to get the behavior you actually want. That’s a very deep problem. People are going to be working on it for a long time.
James: Yes. I would argue that just because a human has the biggest, strongest muscles doesn’t make them ideal and adept at all activities. Look at rock climbers: a lot of them need to be lean and a little more agile, and it’s not necessarily the person with the biggest muscles who can climb the hardest rock faces. It’s the same with robots: how strong or capable you are from a muscle-replication standpoint doesn’t necessarily mean that robot is more valuable or more capable.
Jeff: We have lots of different types of robots, AMRs, AGVs, drones, arms, crawlers, et cetera. Where do we see people buying these right now, and where are they most useful relative to, let’s say, other types of robots?
Jonathan: Right now, it’s in warehouses, in logistics. It’s such a huge industry, and it’s growing so fast. Everybody wants their packages delivered in one day. I know my family doesn’t really go to the grocery store anymore. We want groceries either delivered or, right now, we have to go pick them up, but there are businesses working on how to make that autonomous and delivered to your home in an actually cost-effective way, and that kind of thing really improves quality of life.
I know it improves ours. It gives us back time to be more human, to do what we’d like with our time rather than chores.
Our first application for Digit is just moving totes, those plastic totes that get filled at fulfillment centers with the things people order. You need to move them from that automated filling system over to a conveyor belt to be taken away, or you take the empty totes and stack them.
With the pressure and the growth on that kind of role, things are changing very rapidly, so having a robot that’s multipurpose and pretty flexible, that can do different workflows, matters in an area where it’s really, really hard to hire people for those jobs. It’s not really a fun job, typically; judgment calls aside, there’s incredibly high turnover in most of those roles, so it’s a real struggle. Having a robot that can come in, fill in, augment the human workforce, and do some of those tasks early on is really valuable.
That’s the first application. Purpose-built specialty automation is always going to be nipping at the heels of applications like that. Eventually, what are warehouses going to look like in 20 or 100 years? What do warehouses look like now? Well, if you build one from scratch, there’s an awful lot of purpose-built automation coming in. At some point, the autonomous truck backs up, things get sifted and sorted, and autonomous trucks fly out.
That’s what it’s going to look like someday, right? The forever use cases for robots like Digit, which are really focused on human spaces and human environments, are things like delivering from the autonomous vehicle to your porch. That’s always going to be a human environment. You’re always going to– We design that around ourselves. You’re going to have your front walk, little stairs, a gate to open, people in the way, or pets or children or toys.
Those kinds of applications in our spaces, in our homes, and in our workplaces, where we’ve designed a whole environment around us, those are the forever homes for these machines, in the future when they’re safe enough and smart enough to really get there.
James: Yes. I have a unique story. I had the privilege of going to the UN headquarters in New York to speak in front of some students as part of a UNESCO summer program. I had our small humanoid robot with me to demonstrate some of the work our company was doing. There was a young boy in the room who said, “Hey, I really don’t like what you’re doing. You’re putting my dad’s career at risk. You’re going to replace him with robots, and I don’t like that.”
I said, “Well, what does your dad do?” He said, “Well, he pushes buttons and moves levers at a factory.” I said, “That’s great. It sounds like he’s provided a great life for you, but let me ask you an important question: do you want to do the job that your dad does?” You could just see the light bulb go off in his head, and the answer was no.
We are coming into a time in the evolution of robots where there are generations coming up right now that shouldn’t want to, and don’t want to, do these mundane, dull, dirty, dangerous jobs that maybe their parents have done in previous years. That’s not just a light switch. You don’t just wake up tomorrow and thousands of people are displaced and out of jobs. It’s a slow drip.
For every job that’s being replaced by a robot, there are two to three new jobs being created for people who need to monitor, manage, and operate different facets of those robot capabilities. I think it’s just such an important nuance. Jonathan made a fantastic distinction there: there’s value in last-mile delivery, but then there’s this other piece of the puzzle, the last 50 feet, which could be done with a drone but more likely is going to be done with something that looks human-centric, because it’s got to open a gate.
It’s got to walk up your steps. It’s got to maybe greet your barking dog in the front yard. I see a world where the harmony and duality of wheeled autonomous-type vehicles and legged robots and drones all work harmoniously in solving tasks jointly.
Jonathan: Another point on some of these. You’re talking, James, about the jobs that have been created, or that will be created. It’s really hard to imagine them all. I have a lot of faith that they will be, but think back to things that have changed our world before. Again, the smartphone: who could have thought of, at the time, Uber, Airbnb, TikTok influencers? There are so many jobs, careers, that have all come out of that, that you couldn’t have imagined at the time.
###
Grace: There are really two evident use cases for humanoid robotics. The first would be industrial humanoid robots, which you often find around industrial manufacturing settings, taking over the extremely laborious and repetitive tasks that people are still conducting in today’s industrial environments, even after the Industrial Revolution.
The second would be the one I’m more focused on: healthcare companion robots designed to provide social and mental support for more vulnerable demographics in society today. This is the area I’m most passionate about. I think it brings a more holistic perspective to the applications of humanoid robots, because it’s not just focused on robotic dexterity but on how they, as a system, can positively impact the day-to-day lives of millions of people from an emotional perspective.
###
Jeff: These last few years have created new demand, but they’ve also created new challenges, supply chain challenges as an example. Agility seems to still be hard at work producing units. How have the last few years affected your company?
Jonathan: It’s definitely a double-edged sword. I will say that the supply chain stuff has been really challenging. We need to be iterating very quickly on our hardware designs, and the timelines for getting components and parts have just extended in ways that are very hard to deal with. We’ve been redesigning circuit boards regularly to deal with parts that are no longer available, where we have to change the design to find a part that is available, so that’s some friction.
Obviously, work-from-home and so on has been challenging as well when you have to physically build something, but we’ve managed it. We’ve made it through. Our company has been bicoastal since its inception. Damion, our CEO, lives in Pittsburgh, and we’re building out our office there, and we have our office here in Oregon. All of our processes have enabled us to continue to function pretty well through that.
On the other hand, people’s attitudes about automation have really changed as they’ve realized how much tools like this can improve their quality of life and the things they rely on, and their perception of risk, of what counts as dangerous in “dull, dirty, dangerous,” has changed. That’s really helped us. People now look at the possibilities of what a machine like Digit can do for them and how it’s really going to improve things.
That cultural shift has been great. The value isn’t new; for those of us who’ve been in robotics for a long time, the data has shown it, and those of us who’ve thought it through have examples of it. What’s new is that people’s primary perception of robots is starting to shift from movies to examples in the market, and they’re getting a much more nuanced and better understanding.
It’s a really valuable and important shift. I’ll say the movies are great because they let us explore the fears and the dangers first. Let’s make sure that we all ask those hard questions first and then start to really dig in deeper.
James: To touch on something you brought up earlier, Jonathan, I still believe there’s a stepping stone where these task-specific droids, if you will, things that don’t have a human-centric design, are paving the way for acceptance and social adoption of future human-centric robots, because, today, when you build a robot that looks even anything like a human, the expectation management is really challenging.
All of a sudden, the consumer or the business says, “Gosh, this needs to operate like C-3PO or K-2SO,” or pick your notable humanoid robot from what science fiction has shown us. That focus on expectation management is just a challenge we still haven’t really been able to get over yet in the human-centric space.
Jonathan: One of the things that’s really important, if you’re building a multipurpose robot that does a lot of things and is meant to work with and around people, is that you don’t want it to surprise people. You don’t want people to be shocked by it or have it do something they think is creepy. If its head turns 360 degrees or its elbows bend backwards, they’re going to step back and really be concerned about it, just as they would if a person did that, because that’s part of the interaction.
Judging how people react to, say, Digit when we’re walking outside on the sidewalk, maybe half of people pull out their cell phones or want to talk about it or look at it, and probably the other half just walk by as if we were out walking our dog, and that’s great. That’s exactly the kind of reaction we want. Once the novelty has worn off, you want these things to fade into the background and people to have an inherent trust in them. Having them move in a natural way and look a certain way is how you generate that kind of reaction.
Jeff: If we look into the future a little bit: you talked about warehouse use and logistics; where do we see this space going in the next 20 to 40 years? We talked about the convergence of so much technology. What do we see in the future? I’d like to get both of your thoughts on this.
Jonathan: Yes. We’ve been thinking about this a lot; it’s basically the strategic plan for our company, right? We’re not building a tote-manipulation robot. We’re building a robot that’s going to be part of everyday life, but it’s not going to be part of everyday life tomorrow or next year. It’s going to be a while. We break it down into technology eras.
The final technology era, maybe number five, is where you’re going to be able to talk to this robot. You’re going to be able to ask it to help you and do things. You’ve seen DALL-E, right, the AI-generated art? You basically talk to this thing, and it generates an image, and you can say, “No, no, erase that chair. Add this. No, not like that. Do it in this style.”
You can have, basically, a conversation with this thing to create something that you want, in this case, an image. For robots in the future, it’s going to be actions and tasks and things you want it to do and having it just be helping around the house. That is clearly the future. Everybody wants that. It’s going to be possible, so it’s completely inevitable. The question is, who’s going to do it? What are the details of exactly how it’s going to look and so on? Okay, so that’s the goal.
Step one, technology era number one, is where we are right now: make something useful to the customer today. Do something immediately that’s going to have a return on investment for the customer so that they want to buy more. That’s how you build a business, right? For us, that’s moving totes. That’s picking things up. It’s a relatively structured environment; you know what you’re picking up.
You have some choices about where it goes, but not too many. The safety is really straightforward because you can have the robot be aware of people, and the people around it can be trained. It’s not the public. That’s step one. Step two is to scale that. Now, how do you deal with fleets of thousands of robots, all of the information security, getting statistical information about safety, about what breaks, about durability, and all of these much bigger fleet-software challenges?
Okay. The next step, I think, is starting to break out into multipurpose work. We’ve been moving totes around; we’ve got two or three or four different workflows for how we move totes in different ways. We start to break out into depalletizing and manipulating cardboard boxes, then unloading tractor trailers, then doing a hundred different workflows of moving totes and boxes around warehouses and things like that.
Now the safety takes a step up, because the machines are going to be walking from task to task in warehouses where people are walking by them as well, but those people are still employees. They can still be trained. The robot is in its known, mapped environment, so it’s still totally feasible: tens of thousands of robots moving and manipulating and doing things.
The next era, I would say, is third parties, customers perhaps, programming the robots to do what they want them to do. It’s straightforward. It’s a very mature API at this point. Maybe it’s even a graphical interface, where you point and set waypoints for it. Just as you would teach a very, very inexperienced high school student to do a job, you now have an interface through which you’re teaching the robots to do that job.
That’s where these things start to become much more general-purpose: it’s not a team of engineers creating the tasks, it’s now a platform through which other people can tell the robots what to do. Then the next stage after that is the final era, the forever spot where you can talk to these things and interact with them, and it’s a platform for– it switches from being a commercial product to a consumer product.
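As a purely hypothetical illustration of what that third-party interface could feel like (none of these names are a real Agility API; the point is the abstraction level, tasks and waypoints rather than joint torques):

```python
# Hypothetical task-teaching interface: steps, not servo commands.
from dataclasses import dataclass, field

@dataclass
class Task:
    name: str
    steps: list = field(default_factory=list)

    def move_to(self, waypoint: str):
        self.steps.append(("move_to", waypoint))
        return self

    def pick(self, item: str):
        self.steps.append(("pick", item))
        return self

    def place(self, target: str):
        self.steps.append(("place", target))
        return self

# Teaching a tote workflow the way you'd walk a new hire through it:
unload = (Task("unload_totes")
          .move_to("staging_area")
          .pick("tote")
          .move_to("conveyor_3")
          .place("conveyor_belt"))
print(unload.steps)
```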
James: Just building off that last point, Jonathan, my perspective on the future, and a personal passion of mine, is robots for senior care. Right now, globally, we’re seeing an incredible shortage of hospice nurses and people who can help senior citizens continue to age gracefully, and also age in place. I believe very firmly that where human-centric robots are going to crack the egg and really show their value and place in our world is in taking care of our seniors.
To your point, taking that iPhone model and expanding on what we see today with your standard Echo or Alexa-enabled device: now it’s something that can communicate with you, remind you to be healthy and safe, help you with recipes, shopping lists, and things around the house that will allow people to age gracefully and in place.
Jonathan: When the three of us are in our 80s and 90s, I do think our kids are going to be able to buy us a robot to help us around the house.
James: I agree. Hopefully sooner.
Jonathan: Yes, hopefully, sooner. Man, it’s not easy, right?
James: No.
Jonathan: Think of it this way. When we’re building robots to move totes in a warehouse, who cares if it drops a tote? Well, it matters to productivity and so on. Who cares if the robot falls or breaks? It matters to the company’s efficiency, et cetera. But if you’re 80 years old, in your own home, and the robot falls down the stairs and lands on you, then you care, right? It can’t do that.
It has to be 99.999-whatever-it-is reliable. It has to be more reliable than another person helping in the home. It’s going to be some real work to get to that point; that safety level, for helping people who depend on that help, has to be of a very high standard.
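To put rough numbers on “five nines,” here is a back-of-the-envelope calculation; the daily task count is invented, purely to show the scale of the requirement:

```python
# How fast small per-task failure rates compound over a year of home use.
per_task_success = 0.99999            # "five nines" per task
tasks_per_day = 500                   # hypothetical workload in a home

p_clean_day = per_task_success ** tasks_per_day
print(f"chance of a failure-free day: {p_clean_day:.4f}")   # ~0.9950
print(f"expected failures per year: "
      f"{(1 - per_task_success) * tasks_per_day * 365:.1f}")  # ~1.8
```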
###
Grace: In 20 years’ time, humanoid robots will be as prevalent as dogs and cats. That’s my strong belief. I hope to see a future where humanoid robots are sufficiently developed that they can automate nearly everything a person can do, both physically and mentally. If and when we do achieve this, our society can be completely sustained by humanoid-robot and machine-generated wealth, which allows us as people to spend time on our hobbies, developing our relationships with friends and family, and just living a life of complete pleasure. I think that will be a huge turning point in society. I hope I’m here for that.
###
Jeff: What do you think is going to be critical to making that integration a reality, to where there’s more acceptance? Obviously, many people are coming from a place of fear. Seeing things helps, the movies help, but what are some of the core things that will bridge the gap and bring more harmony to that relationship?
Jonathan: It’s all about trust. It’s about having your devices meet your expectations and not surprise you with things you didn’t expect. That’s not just true of devices and robots; it’s universally true of humans, too. If you’re in a city and you see somebody acting in a way that’s not what you expect, you give them some space. People are very good at detecting things that are a little bit wrong, and I would say that’s where–
Have you heard about the uncanny valley? Is that a familiar term? Just for everybody in the audience: things that look a bit like us, like a stuffed animal, something you can identify with, people like a lot. But as you get closer and closer and more similar to a person, you reach a point where it’s a little too close: at first you think it might be a person, but then you identify things that are wrong with it, and you hate it.
People hate that. It’s totally creepy and scary, and that’s called the uncanny valley. That’s the point at which things are wrong. Another example I’ve heard: predators are really, really good at detecting when prey animals have a slight limp in their gait, right? How would you detect that? That’s so nuanced. Scientifically, how do we detect that? It’s very, very hard to identify what feature it is that’s a little bit off. Humans and animals are good at seeing when something is a little bit unexpected.
James: I would just build off the topic of the uncanny valley. Something that made the NAO robot from Aldebaran and SoftBank so successful was finding the right balance: it looks like a human, but it’s also very consciously designed not to have prominent facial features. We found this to be really impactful, especially working with children on the autism spectrum.
The robot could be highly repeatable, and its tone never changed. We derive so much information from people’s faces: hey, are you frustrated working with me? Are you tired? Are you bored? Are you mad at my inability to keep up? A robot that lacks some of those prominent facial features, we found, was really productive in helping nonverbal children on the autism spectrum feel comfortable working with a robot and growing some of those skills.
Yes, it’s really important to find that appropriate balance. To go one step further, Jeff, on your question of what bridges that gap and creates that harmony, I’ll take it back to what Luis Sentis has talked about, this concept of intervention and self-efficacy, and also draw in what Jonathan talked about, that foundation of trust. Humans are innately flawed in our decision-making.
We have good judgment. We have good instincts, but we’re not always right. What is the appropriate balance of a robot intervening and saying, “Hey, you’re about to make a decision that, computationally, I don’t think is in your best interest. I’ve run the numbers, I’ve calculated the scenario, and I’m going to intervene here because you’re about to make a bad choice”? I think that harmony is established when there’s a strong degree of trust, and when robots learn how to intervene in really tactful and appropriate ways.
Jeff: To close out, just a few more questions. Jonathan, one for you: what would you recommend for those who do want to get involved in this space? You’ve been in it for a long time; it’s obviously a deep passion, bridging academia and business. Any recommendations for those who want to get started?
Jonathan: There are a ton of dimensions to doing robotics in general, and certainly to a robotics business. The advice is: do the thing that you’re really, really good at and excited about. Whatever kind of engineering it is, whatever kind of business approach it is, if you’re excited about it, you’re going to be good at it, and there’s going to be a place for you. Big teams of people are required to do these things, so it’s about being able to work well with others and being really good at something you’re excited about.
Jeff: James, for you: you had mentioned your father, who benefited from robotic surgery, and that sounded really meaningful. What are some of the cooler things you hope to see in the future, as far as human-centric robots really helping humans?
James: Again, I think it comes back to a passion of mine, which is seeing robots help our elderly. When we talk about trust, when we talk about robots being programmed for good purposes, that’s going to lift a huge burden from so many people. Right now, we see so many people who have to quit their jobs and become full-time caregivers for their parents, because there’s no one else who can do that, do it well, and do it with a high degree of acceptance and trust from the family member who needs all of that support.
When I think of the very promising future of human-centric robots, I think it starts with entering robots into the home and doing really valuable things for our elderly and our aging population. I think that if we can do that really well, that’s going to garner a remarkable amount of trust and positivity around the space and what the future holds. I’m just excited to see that continuing to advance.
Especially right now: hospitals are not making bedside care attractive and easy for nurses, and pandemics like COVID are making it even harder. Robots in hospitals serving really valuable purposes and use cases are a layer of that. You look at autonomous tugs and things that can transfer and move items, maybe beds, within hospitals; they’ve not succeeded because they’re just large, obstructive objects in the way of the flow of a hospital. A hospital is somewhere a human-centric robot design is really going to prevail, and be required, in fact.
Jeff: I want to thank you both for your devotion to this space. I clearly hear and feel the passion that you have. I think it’s so exciting to think about where we are today, the opportunity to start moving the needle and to go faster. I want to thank you both for joining the show. It’s been great to get your insights and think about how you can continue to impact our future. Thank you.
Jonathan: Yes, thank you. This was a great conversation. I enjoyed this.
James: Yes, thank you as well. This is always a fun topic, and it’s worth having more of these conversations. When it comes to managing expectations, socializing the idea, and making people comfortable with the prospect of this future, more of these conversations are important. Thank you as well, Jeff.
Jeff: Thank you both.
###
Jeff: The Future Of podcast is brought to you by Fresh Consulting. To find out more about how we pair design and technology together to shape the future, visit us at freshconsulting.com. Make sure to search for The Future Of in Apple Podcasts, Spotify, Google Podcasts, or anywhere else podcasts are found. Make sure to click Subscribe so you don’t miss any of our future episodes. On behalf of our team here at Fresh, thank you for listening.