Hello? Is anyone there?

It has been a while since we’ve posted an update here. We’ve been heads down on development of our companion robot. The age of “personal robotics” is only just beginning, and creating such a robot is more challenging than it might seem. Building this kind of platform takes much more than building a PC, tablet, or mobile phone, in part because there are many more components, on both the hardware side and the software side. This new “personal” generation requires more autonomy and more natural interaction, and natural interaction typically means supporting spoken input. While speech technology has improved over the years, it remains one of the most challenging forms of input, far more prone to error than keyboards and mice. Even in human conversation, recognition isn’t always 100%. Advances in speech input have only set user expectations higher: we are pleased when speech oracles like Siri, Echo, or Cortana get it right, but typically very disappointed when they don’t.

Further, robotics is often associated with developments in artificial intelligence, which, while also seeing great advances, is often portrayed in the press with more hype than substance. There is no doubt that software algorithms can do amazing things, but most of this rests on what computers have long done better than we do: processing data very quickly. The result is that they can not only match but exceed human ability to take a large dataset and predict probable outcomes, faster and more accurately than we can.
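To make that concrete, here is a toy sketch (ordinary scikit-learn on a synthetic dataset, nothing from our own stack) of what “predicting probable outcomes from a large dataset” amounts to in practice:

```python
# A toy illustration: "predicting probable outcomes" from data is fast
# statistical pattern matching -- nothing more mysterious than that.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# A synthetic dataset standing in for any large table of past observations.
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# The model produces probabilities for each outcome, quickly and often more
# accurately than a person could, without "understanding" any of it.
print("held-out accuracy:", model.score(X_test, y_test))
print("probable outcomes for first 3 cases:", model.predict_proba(X_test[:3]))
```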

But this must be put into the proper perspective. Any pocket calculator can outperform a person at multi-digit division or multiplication in terms of speed, but that doesn’t mean it understands what those numbers represent. Computer programs can beat Jeopardy or Go champions, but that doesn’t equate to the same intelligence humans use to play those games. Machine learning has also yet to exhibit intuition: the ability to think beyond what the data says and to see new concepts or solutions.

However, we don’t often consider that when we speak to a machine or robot. Even with small children or our pets, we make assumptions about intelligence based on what they appear to understand from the words we say.

It doesn’t help that film and fiction have always portrayed robots (and other forms of machine intelligence) as our peers, and sometimes as our potential rivals. This has fed a current of media stories about potential dangers that may lie ahead. These would be easier to dismiss if there were more accurate reporting of where things actually stand. (I favor the views of the eminent Dr. Rod Brooks.)

It cannot be denied that advancements in technology have an impact on how we live. It was true of fire, steam power, guns, electrical power, cars, and so on. So yes, robots (and AI) will have some effect on our lives, which may include some jobs. As Brooks also notes in another interview, a person using an electric drill is going to be more productive than a person with a hand drill. Gone are the days of vast secretarial pools or ranks of telephone operators; technology has largely removed those jobs. Likewise, the wheelwright who built and repaired wooden wagon wheels is, for the most part, no longer an occupation in high demand. Yet few would feel that the replacement of those human functions is a bad thing. Instead, we tend to accept and enjoy the benefits that technological advancements have provided.

So there is little doubt that technology and automation may render some jobs obsolete, especially those that can be reduced to computational processing or repetitive activities. To avoid being a victim, it is best to understand how to employ technology to augment oneself (or check out Anthony Goldbloom’s TED Talk); those who don’t tend to become disadvantaged. If you are reading this post, would you really be willing to give up your car, your washing machine, your microwave oven, your electric lights, or the device you are reading this on?

But I somewhat digress. My point is that despite advances in AI, many expectations exceed reality. Machine learning can be helpful in certain respects, but crafting a robot that we can naturally interact with (and that is useful) still takes a lot more “human intelligence” than “machine intelligence”. If it were otherwise, we would be done with ours by now. That doesn’t mean we aren’t applying machine learning in what we are developing: the speech engine we use has been trained with neural networks, and we use similar techniques to support navigation and other aspects of how our robot works.
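For the curious, turning audio into words with an off-the-shelf neural recognizer looks roughly like the sketch below. It uses the open-source SpeechRecognition package and a hypothetical recording; our own engine and integration are different, so treat this purely as an illustration.

```python
# Illustration only: off-the-shelf speech-to-text with the open-source
# SpeechRecognition package. Our own engine and integration differ.
import speech_recognition as sr

recognizer = sr.Recognizer()

# "command.wav" is a hypothetical recording of a spoken request.
with sr.AudioFile("command.wav") as source:
    audio = recognizer.record(source)

try:
    # The service applies a neural acoustic/language model; the result is
    # text only -- meaning, intent, and social context still have to be
    # handled by the rest of the system.
    text = recognizer.recognize_google(audio)
    print("heard:", text)
except sr.UnknownValueError:
    # Like human listeners, recognition is not always 100%.
    print("could not understand the audio")
```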

However, a successful personal robot is more than its hardware and software. It also requires studying and applying principles of human psychology and social interaction. Human interface design is just as important as the technology it runs on. Conversational engagement involves more than turning audio into words, and words into behavior and actions; it requires attending to the social nuances of words and their delivery. Intonation and facial expression are an important part of how we interpret what we hear.
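As a purely illustrative sketch (the names and rules below are made up, not taken from our codebase), here is why the same words may call for different behavior once intonation and expression are taken into account:

```python
# A hypothetical sketch of why conversation is more than words -> actions:
# identical words can warrant different behavior depending on social signals
# such as intonation and facial expression.
from dataclasses import dataclass

@dataclass
class Utterance:
    words: str          # output of the speech engine
    intonation: str     # e.g. "flat", "rising", "tense"
    expression: str     # e.g. "smiling", "frowning", "neutral"

def choose_behavior(utt: Utterance) -> str:
    """Map an utterance to a robot behavior, taking social cues into account."""
    if utt.words.lower().startswith("i'm fine"):
        # The literal words say one thing; the delivery may say another.
        if utt.intonation == "tense" or utt.expression == "frowning":
            return "pause, soften voice, and ask a gentle follow-up question"
        return "acknowledge and continue the conversation"
    return "fall back to a clarifying question"

print(choose_behavior(Utterance("I'm fine.", "flat", "smiling")))
print(choose_behavior(Utterance("I'm fine.", "tense", "frowning")))
```

Real social interaction is far subtler than a couple of if-statements, of course, which is exactly why it takes so much human intelligence to get right.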

The good news is that we’ve made good progress here, but it is still too early to share specific details. We prefer to demonstrate it when it’s ready rather than create imaginative videos that may not fully represent reality. Stay tuned.