There are legitimate questions about the ethics of employing AI in place of human workers. But what about when there’s a moral imperative to automate?
It is by now well known that artificial intelligence will augment the human worker and, in some instances, outright take over jobs once handled by humans. A 2019 report indicated that 36 million U.S. workers have “high exposure” to impending automation. For businesses, the opportunities of AI mean scrutinizing which tasks would be performed more efficiently and cost-effectively by machines than by human employees, and which should combine human and AI resources. Alongside these considerations are ethical ones: a heated public debate over the morality of job displacement can easily affect a company’s reputation and profit margins, especially if the enterprise is seen to be behaving unethically.
But the debate over the ethics of automation misses a key question that both the public and companies need to consider: When is it unethical not to replace, or augment, humans with AI? In the cost/benefit analysis of automating jobs and tasks, identifying the areas where AI should be deployed on ethical grounds should become integral to business leaders’ thinking. Based on my own experience as an AI strategist, I can identify at least three broad areas where employing AI is not only ethically sound but imperative:
1. Physically dangerous jobs
2. Health care
3. Data-driven decision-making
By Bret Greenstein