Ethics and the Internet of Things

A new report on malicious uses of artificial intelligence gives us a glimpse of how complicated security will be for the Internet of Things. People may joke about their microwave or toaster being hacked, but the reality of integrated home systems and self-driving cars means we’re putting a lot of trust in artificial intelligence. The strength of A.I. is the speed at which it makes decisions and the ease with which it makes connections. It moves so fast that it won’t be able to wait for us to say whether it’s making the right choice. Could it be that the developers who succeed in the IoT market will be the ones who teach their A.I.s the ethics to make those decisions on their own?

Why ethics?

We’ve all gotten emails from far-off princes offering us millions to “help them by temporarily transferring money into our account.” We’re all educated in the ways of the world here, and we avoid these scams through learned cynicism. But most of us first avoided them by engaging our ethical sense:

10 Open email
20 Do I deserve millions of dollars at random?
30 If no, this is suspicious: delete
40 Goto 10

An ethical understanding of a situation often triggers extreme caution in humans. It might be a useful tool for machines too.
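
To make that loop concrete, here’s a minimal sketch in Python. Everything in it is invented for illustration; deserves_windfall() in particular is a hypothetical stand-in for the ethical judgment at line 20, a check no real spam filter exposes.

# A toy "ethical plausibility" filter, mirroring the BASIC loop above.
def deserves_windfall(sender: str) -> bool:
    # Hypothetical: we have no prior relationship with any of these senders,
    # so no random windfall is plausible.
    return False

def triage(inbox: list[dict]) -> list[dict]:
    kept = []
    for email in inbox:
        if "millions" in email["body"].lower() and not deserves_windfall(email["sender"]):
            continue  # line 30: suspicious, delete
        kept.append(email)  # otherwise keep it and move on (goto 10)
    return kept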


An old question, a coming problem

Ethical questions have always surrounded the idea of artificial intelligence. The very term robot comes from the Czech robota, forced labor, a loose slave metaphor that first appeared in Karel Čapek’s 1920 play R.U.R.

While ethical dilemmas have always been prime fodder for science fiction, we talk less often about how important ethical thinking may be in real-world programming. Bulletproofing customers’ security already consumes a massive share of technology companies’ time, and bot attacks are faster and more relentless than any human hacker could ever be.


The high stakes of IoT

Let me be alarmist and hypothetical for a moment. It’s 2025. You’re just outside Yuma on the I-8 from Phoenix to San Diego, scanning tech news while the car does the driving. Two long-haul trucks, both self-driving, pull up behind you and begin to pick up speed. A headline pops up about malicious A.I.s taking control of vehicles to cause the maximum possible damage to both civilians and the economy.

At 65 mph, can you wait for a security patch to be developed and installed? Wouldn’t you rather those transport trucks, and for that matter your car, had the same innate suspicion of malice you had when you didn’t send your bank account information to that dethroned prince?

Asimov’s Three Laws of Robotics are a good start:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
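
Read as code, the laws form an ordered veto chain: check a proposed action against each law in priority order. Here’s a minimal sketch under that reading; the Action fields are hypothetical predicates, and deciding them is the genuinely hard part.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool       # would carrying this out injure a human?
    ordered_by_human: bool  # did a human order it?
    endangers_robot: bool   # does it put the robot itself at risk?

def permitted(a: Action) -> bool:
    # First Law: never injure a human being.
    if a.harms_human:
        return False
    # Second Law: obey human orders (harmful orders were vetoed above).
    if a.ordered_by_human:
        return True
    # Third Law: otherwise, self-preservation rules out risky actions.
    return not a.endangers_robot

# e.g. a harmless but risky maneuver ordered by a human is permitted:
# permitted(Action(harms_human=False, ordered_by_human=True, endangers_robot=True)) -> True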

In the science-fiction scenario laid out above, these laws would probably act as a decent stopgap against the worst of the disaster. Asimov was no slouch, and armies of writers have had a crack at invalidating them. But an army of writers spending six months on a book is very different from limitless processes trying variables at computer speed until they find a way to convince a system that the rules are actually detrimental to humans.

Will Asimov’s laws stop hack-bots from infiltrating the system that sees, hears, and controls almost everything in your home? They sketch some basic ethical principles we use constantly, but they address only a narrow range of the issues an A.I. is likely to face.

Technology is too thoroughly integrated into our lives for us to believe we can set out numbered rules for devices to follow. We need to start developing machines that apply broad ethical principles to situations no one has yet conceived of, and that evolve new ethical standards quickly enough to fend off malicious intelligences evolving at a pace beyond human oversight.
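
What might “broad sets of ethics” look like next to numbered rules? One hypothetical sketch: treat principles as weighted signals that can be re-tuned as new attacks appear, rather than as a fixed rulebook. The principle names, weights, and threshold below are all invented for illustration.

# Hypothetical: broad principles as adjustable weights, not a fixed rulebook.
PRINCIPLES = {
    "avoid_physical_harm": 1.0,
    "respect_owner_intent": 0.6,
    "preserve_privacy": 0.5,
}

def suspicion(signals: dict[str, float]) -> float:
    # signals maps a principle to how strongly the current request
    # violates it (0.0 = not at all, 1.0 = flagrantly).
    return sum(PRINCIPLES.get(name, 0.0) * strength
               for name, strength in signals.items())

def evaluate(signals: dict[str, float], threshold: float = 0.8) -> str:
    # Unlike a numbered rule, the weights and threshold can be retrained
    # as new attack patterns emerge, without rewriting the check itself.
    return "refuse and alert" if suspicion(signals) >= threshold else "proceed"

# e.g. evaluate({"avoid_physical_harm": 0.9}) -> "refuse and alert"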

We will no longer be able to tell our devices, in the moment, which choice is right or wrong. We will need to program intelligences that can make those choices the way we should, only faster.


Conclusion

Trust is going to be difficult in the age of IoT, for B2B and B2C alike. The rewards of engaging A.I. will be revolutionary, but the risks will move faster than anything we’ve seen before. In the past, if a threat got to be too much, we could pull the plug and walk away; now we will be bound to A.I. technology.

The company that leads the new era will be the one best able to soothe the marketplace’s fears, rational or not. The best chance of creating confidence in your product may be to let customers know that you’ve got not only your best people on the job, but your best A.I. as well.

We are putting a lot of faith in these machines; we’d better be able to tell people, and ourselves, that it isn’t misplaced.


You need a marketing team that understands technology. Contact us.
