Robotics and Ethical Concerns

February 18, 2024  •   David Pring-Mill


The following text has been excerpted from the Policy2050.com analysis “Robotics Trends (2023-2025),” which is available as a free download. To support this innovative research or contribute to future papers, join our Deep Tech membership.

While the fields of AI and robotics are increasingly primed to solve complex, entrenched problems and drive economic growth, young children may be aspiring to career paths that will no longer exist, or will be radically transformed, by the time they’re adults. Additionally, the AI enabling self-driving cars may actualize a version of the trolley problem, introducing even more conundrums for the next generation, as AI policy advisor Rob McGargow suggested in a TEDx Talk.

McGargow’s discussions and interactions with his own children motivated them to scribble out a list of “Robot Rules”:

  1. Bad people shouldn’t build robots.
  2. There has to be an off switch.
  3. There shouldn’t be bombs in robots.
  4. Robots shouldn’t look like humans.

These rules hint at a responsibility to preserve the best or most sacred aspects of humanity, from a sense of agency, especially over life or death decisions, to perceived authenticity.

Robots shouldn’t be bomb carriers, according to these innocent perspectives. While that rule may not hold in practice, robots do have a life-saving history of bomb disposal. The first robot of this kind dates back to 1972, when Lieutenant-Colonel Peter Miller, who had a reputation for dreaming up unorthodox solutions, modified the chassis of an electric wheelbarrow to contend with car bombs during the Northern Ireland conflict.

Now, a new era of robots with law enforcement and military uses, led by companies such as the MIT spinoff Boston Dynamics, has introduced new methods of threat reduction as well as weaponization, including the dangerous possibility of makeshift consumer modifications in the aftermarket. Even those at the forefront of such robotics development have called for industry standards and regulations to counteract misuse.

In an interview for this analysis, Rhonda Dibachi told Policy2050.com that from her industrial vantage point, a kill switch functions as a straightforward and separate safety measure, yet it’s also important to account for distributed liabilities within a broader framework. Prior to her current role as founder and CEO of HeyScottie, a manufacturing services marketplace, Dibachi worked as a nuclear site engineer at GE, a manufacturing consultant at EY, and a manufacturing development manager for Oracle. As co-founder and CTO and later CEO of an LED lighting manufacturer, she oversaw industrial automation in the company’s SMT PCB line and metal shop.

As Dibachi elaborated: “We used robots on our floor, and they all shared one thing: a kill switch. Just cut the power! AI-enabled robots I am sure will come with that feature, too. It’s easy, it’s low-tech, and it operates completely separately from any other part of the system.”
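
To make that architectural point concrete, the sketch below is a minimal, purely illustrative model in Python, not a description of any vendor’s system: the real kill switches Dibachi describes are physical circuits, and the PowerRelay class and function names here are invented for illustration. The only idea being modeled is separation, in that the stop path never consults the controller and simply removes power.

    # Illustrative sketch only. "PowerRelay" is a hypothetical stand-in for the
    # contactor feeding a robot's motors; the stop path bypasses the controller
    # entirely and simply de-energizes it.

    import threading
    import time

    class PowerRelay:
        def __init__(self):
            self.energized = True

        def open_circuit(self):
            # Cutting power overrides whatever the controller is doing.
            self.energized = False

    def control_loop(relay):
        # Main robot controller: can only act while the relay is energized.
        while relay.energized:
            time.sleep(0.01)  # ...planning and motion commands would go here...

    def emergency_stop(relay):
        # Separate stop path: low-tech, independent, and it always wins.
        relay.open_circuit()

    if __name__ == "__main__":
        relay = PowerRelay()
        controller = threading.Thread(target=control_loop, args=(relay,), daemon=True)
        controller.start()
        time.sleep(0.1)
        emergency_stop(relay)          # the operator hits the big red button
        controller.join(timeout=1.0)
        print("Motors de-energized:", not relay.energized)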

She noted that robotic functions could be influenced by various actors, all of whom leave their imprints across hardware, software, and operations, including the robot’s designer, developer, operator, and foreman running the line. If an accident were to occur, any of these agents might bear responsibility for improper functioning. “The robot manufacturer’s responsibility was to make a robot that behaved like its user manual said, the operator’s responsibility was to operate it according to the manufacturer’s instructions, et cetera,” Dibachi stated.

Moving forward into the next generation of systems, accountability and liability must again be assigned based on specific use cases, from hazardous cleanup robots to delivery drones, since clear attributions with associated financial liabilities can help to deter irresponsible deployments. Dibachi commented, “The potential of incurring billions of dollars of liabilities will be more effective in deterring bad AI deployments than any number of AI watchdogs.” New “highly complex, loosely coupled” configurations of emerging Large Language Models (LLMs) or generative systems and robotics will call for comprehensive evaluations. Dibachi suggested, “You’ll find the potential for error to live everywhere, including the dynamic interactions between the different parts of the framework.”

In another interview for this analysis, David Reger, founder and CEO of NEURA Robotics, said that the current hype around LLMs distracts from other challenges that must be navigated to realize AI and robotics on a global scale. This begins with raising the seemingly simple question of what activities a robot should – not can, he emphasized – perform.

“Is it morally justifiable for a robot to take over the care of a senior citizen because people do not have the time or inclination to do so, or because society is not prepared to duly reward these healthcare activities?” Reger asked.

Another set of ethical questions arises when weighing the capabilities of robotic deployments in the civilian security sector as compared to human police officers who bring their own discretionary power, past experiences, and awareness of unexpected circumstances to the job. Reger continued, “How should a robot, whose decisions are based solely on the evaluation of collected data and the rational description of third-party experiences, act appropriately? A common set of ethics must be found at the international level that serve as a basis for the use of intelligent robotics.”

Reger’s own company promotes its units as “robotic assistants built to collaborate with you in a natural way.” Elaborating on the ChatGPT/LLM issue, Reger suggested, “An essential task of robotics will be to bring artificial intelligence out of the virtual space and endow it with the ability to gain its own physical experience. AI can only develop human perception of the world in the body of a robot with cognitive and sensory capabilities. It is necessary to prevent artificial intelligence from acquiring the status of a modern deity due to a lack of physical presence.”

Rhonda Dibachi noted that the development of increasingly sophisticated AI also takes an environmental toll, bluntly characterizing AI as “a pig for bandwidth” and predicting astronomical increases in server farm demands. She therefore stressed the importance of AI optimizations in the data centers themselves, as well as stricter permitting requirements tied to energy-efficiency standards such as LEED Gold.

Excerpt from “Robotics Trends (2023–2025),” published by Policy2050.com. The full analysis is available in our open-access section.
