This blogger recently had an encounter with artificial intelligence bots regarding newspaper deliveries. (No names, in part because the story could apply to any customer service function.)
At-home newspaper delivery has become inconsistent over the past decade, which will surprise no one. The actual delivery services are very small enterprises: not always reliable, probably not well paid, and transient. They are also invisible to the customer. At least one nationally circulated paper has a customer service page on its site, but it’s hardly transparent. If the customer has a delivery complaint, there is a menu to choose from – no delivery, late delivery, paper was wet, etc. There’s no way to know whether the publisher failed to provide the hard copies, the local delivery service didn’t perform, or something else went wrong.
Not too long ago, the paper’s service page offered a new chat facility for these problems. The timing was fortunate, as the latest delivery problem was unusual – the papers were being left at the end of the access drive to a multi-unit building, rather than at the door, as had long been the norm. So your blogger signed up for the chat function and typed the details into the box.
A reply came back saying they’d look into it. Then the phone rang; caller ID flagged it as a robo-call. But there was a voice mail, and a certain Robert said they were responding about the papers dropped in the wrong place, that there had been a change of delivery contractors, and to feel free to call back. When I returned the call, a certain Michael answered, in the same amiably neutral inflections as Robert. Upon taking my account information, Michael said, oh yes, we see this, we have contacted the contractor, and we expect the problem will be fixed. Thank you, I said, and Michael extended an invitation to call back with any issues.
The next day the paper came to the door. The security guard happened to see the courier, who said that, yes, he had a note on his printout to make sure to deliver to the front door. Then a text from the paper company showed up on my phone, asking for a rating of the experience with the virtual assistant.
Robert and Michael were AI bots.
The whole delivery system, at least from the publisher to the delivery contractor, had now been computerized.
I felt a bit stupid – of course Michael and Robert were bots; they had the same voice – and a bit relieved that maybe deliveries would be more reliable.
But then again – why couldn’t things have been this efficient without the computerized systems, with just people? (And by the way, we will see what happens when the delivery service turns over again – or maybe, with computers replacing people, the publisher can spend more and hire a larger company to do the deliveries. Maybe?)
But the question arises – are we unable to train and organize people, with all their human foibles, to run a simple service like newspaper delivery? Why bother, if we can train a computer to do what those damn people can’t figure out in their inconsistent, irrational, inefficient indiscipline?
As AI improves, managers may have the option to dictate instructions into their phones, for seamless transmission, right down to the newspaper carrier. A lot of us can literally become dictators, and leave actual dealings with the undisciplined hoi polloi to the computer. Someone may have to hire and monitor contractors, but given time, even that might admit of automation to bypass the bother of people’s problems.
Two questions arise here. First, as noted in an Atlantic article, does the offloading of human functions (finding a date, in the article’s case) threaten to “sap the ability of human beings to do human things”? Do we risk diminishing the species and the things we value about “us”? Second, do we like being able to dictate to a bot that, for now, doesn’t have quirks, doesn’t talk back, and doesn’t lose the memo or forget?
What kind of people do we become if we follow the path that the newspaper case puts in front of us? Notice that it is the simple jobs, the ones for which we pay the lowest compensation, to which this computerized process has been applied. True, delivery people are participants in a market. True also that people are the biggest cost in an organization. But the biggest buyers of labor seem to start at the bottom when trying to make management easier or more efficient.
When, for instance, will AI bots replace hedge fund managers? Portfolio management should be a great place to use AI capabilities. There is data galore to process, and a lot of funds follow an indexing strategy, mimicking the market’s composite measures. How much would be saved for investors by cutting out the layers of highly paid managers, from the top down? What rationales are there for not making this transition?
Of course we all have the visceral fear that computers will become conscious, perhaps to “replace us,” even to act against us – “us” being humans. But we are already doing the replacing, and in some cases making other humans’ lives harder. If we value individual agency, then, can each of us pay attention to how and why we use AI, at work or at play? Our human development may depend on our all-too-human choices now. What kind of people do we want people to be?