
The first chatbot arrest, but what are the implications?

The first chatbot to be “arrested,” in 2015, raises new questions.

Imagine the police arresting a bot and releasing it after months of custody and investigation. This is not a scenario from a futurist’s blog — it actually happened in Switzerland last year.

What were the charges against the globe-trotting Swiss bot and its owners? Its name gives you an idea: Random Darknet Shopper. Created by a couple who are both artists, RDS shopped in the wrong places and bought illegal goods on the dark web (also called the “darknet,” and often conflated with the broader “deep web”) through hidden marketplaces known as “darknet markets.” Worse still, the pre-programmed bot had its illicit purchases shipped back to its owners in Zurich to show off.

It wasn’t entirely the bot’s fault; its coders deliberately deployed it on the criminal dark web with $100 in Bitcoin to spend each week. Random Darknet Shopper did its job well and sent back all sorts of counterfeit and illicit (if not illegal) items. The Kunsthalle, a well-known art gallery in St. Gallen, then displayed the ill-gotten loot in a public installation, describing the campaign by Carmen Weisskopf and Domagoj Smoljo, working through their activist collective !Mediengruppe Bitnik, as “awareness raising” in the name of art.
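For the technically curious, the bot’s reported behavior boils down to a very simple loop. The following is a minimal, hypothetical Python sketch of that weekly-budget, random-purchase routine, not the artists’ actual code; the catalog, field names, and prices are invented for illustration.

    import random

    WEEKLY_BUDGET_USD = 100  # the reported weekly allowance, funded in Bitcoin

    # Hypothetical stand-in for listings a bot might scrape from a marketplace.
    listings = [
        {"title": "counterfeit sneakers", "price_usd": 65},
        {"title": "fake designer handbag", "price_usd": 90},
        {"title": "mystery item", "price_usd": 40},
    ]

    def weekly_purchase(catalog, budget):
        """Pick one random listing the bot can afford this week, or None."""
        affordable = [item for item in catalog if item["price_usd"] <= budget]
        return random.choice(affordable) if affordable else None

    order = weekly_purchase(listings, WEEKLY_BUDGET_USD)
    if order:
        # In the real project, purchases were shipped to the exhibition space.
        print(f"Ordering '{order['title']}' for ${order['price_usd']}")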

The installation was a hit, until the gallerist and artists nailed a packet of ecstasy to the wall as the latest addition to the public exhibition and the police stopped seeing the funny side. Bang! The police then nailed the bot and confiscated the drugs. The Swiss artists retreated to their studio, afraid of what would happen next.

After several months of international news coverage and national debate, it all ended happily when the Swiss authorities released the algorithm and accepted that the human couple behind it acted in the name of performance art and public education. The artists then made the statement: “Yes, the bot is fine. He even still has some Bitcoins left. And no, unfortunately this does not mean that it is now legal to consume XTC in Swiss art spaces.”

The artists achieved their goal of global media coverage, raising awareness about the ethics and liabilities of bot usage, as well as “issues of control in society.” The controversy encouraged Weisskopf and Smoljo to release Random Darknet Shopper again, but this time “he” moved to Great Britain, as did the couple.

The busy bot then went on a Christmas shopping spree on the dark web yet again. This time it had its questionable bounty sent to a London gallery running an exhibition “live” about its illicit activities.

Strangely enough, after a flurry of international press about the U.K. installation in late 2015, there has been no further coverage of Random Darknet Shopper this year.

Legal questions

The international incident raises some very good questions:

  • How do you regulate bots when the owners have a good excuse for their creations being “out of control”?
  • Are bots above the law if they are independent agents or actors?
  • What international laws cover, protect, or exempt developers in jurisdictions outside the country where they created the algorithm and the bot’s avatar/personality?
  • Are bot developers exempt from liability if ownership of the algorithm cannot be conclusively proven, as I explored in my last article comparing “compilation bots” with in-house “advanced bots”?
  • Could the developers be made liable for crimes committed by the bots if they can’t convince the authorities that the codified avatar (bot persona) was acting autonomously or in the public interest?
  • What if the developers argue the bot developed artificial intelligence outside their (original) control, such as while operating independently in open source environments and interacting live with humans who influenced its bad conduct or caused it to make bad decisions?

These situations could happen more often and sooner than we anticipate. It is best to start building the regulatory framework now.

A barrister in the U.K. I spoke with thinks there should be an “international accord” on how to handle rapidly developing, artificially intelligent bots coded by humans; governments worldwide likewise need to take a close look at increasingly independent algorithms capable of deep learning and autonomous decision-making.

Can bots be controlled?

I use the term “A.I. bots with organic memories,” as explained in my Kindle ebook on the subject, because we need to differentiate the bots that are likely to cause trouble from those that are not.

The basic chatbots won’t be a problem because they are too simplistic. The complex ones are more troublesome because they are often deployed as spambots, and it’s already hard to trace ownership of the algorithm back to the culpable humans. They might be best dealt with by bot blockers that would filter out nuisance bots, such as the cold-calling or non-opted-in chatbots on instant messaging platforms. One influential British commentator even says Facebook’s “spammy chatbots” could lead to customer boycotts and “loathing” of the brands backing the avatars deployed on Messenger to date in 2016.
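To make the bot-blocker idea concrete, here is a minimal, hypothetical sketch of an opt-in filter for a messaging inbox. The field names and the opt-in list are assumptions for illustration, not any platform’s real API.

    # Hypothetical filter: drop chatbot messages the user never opted in to.
    opted_in_bots = {"weatherbot", "newsbot"}  # bots this user chose to hear from

    def allow_message(message, opted_in):
        """Let human messages through; allow bot messages only if opted in."""
        if not message.get("is_bot"):
            return True
        return message["sender_id"] in opted_in

    inbox = [
        {"sender_id": "weatherbot", "is_bot": True, "text": "Rain later today."},
        {"sender_id": "spambot42", "is_bot": True, "text": "Buy now!!!"},
        {"sender_id": "alice", "is_bot": False, "text": "Lunch on Friday?"},
    ]

    filtered = [m for m in inbox if allow_message(m, opted_in_bots)]
    print([m["sender_id"] for m in filtered])  # ['weatherbot', 'alice']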

A.I. bots are the ones to watch out for in the future. They require immediate global regulation if we, the end users, are to build positive relationships with these new, evolving cyber creations rather than respond with the wariness, mistrust, and intense dislike already visible on social media.
