If you didn't already notice, machines are deciding what news we see, which "facts" we learn, and what music we listen to next. They are also deciding who will be denied credit and who should be detained based on a predicted risk of criminal activity (and they are doing it poorly). As algorithms grow more sophisticated and data becomes more plentiful, machines will do even more: deciding on medical procedures, educational opportunities, and how our cars drive.
In other words, machines will direct a substantial part of your life. This presents a problem: you want those decisions to be fair and right, but what does that mean? Each of us thinks we know what is right, but that is only an opinion shaped by personal perspective. People are diverse. We live in different cultural, religious, political, and economic spheres, and those spheres shape our worldviews, whether we are aware of it or not.
Morality is constructed by humans. There is nothing in nature that determines what is universally right or wrong; there is no objective measure. Thousands of years of philosophy haven't brought us any closer to a unified truth, only to constant conflict. With the rise of computing, however, we are forced for the first time to extract the decision-making model from our minds and put it somewhere else. We will have to quantify philosophical musings, and the way we do that will define our future.
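To make "quantify" concrete, here is a minimal sketch of one such measure: demographic parity, which asks whether two groups receive favorable outcomes at similar rates. It is only one of several competing (and mutually incompatible) definitions of fairness, and the function names and data below are purely illustrative.

```python
# A sketch of what "quantifying fairness" can look like in practice.
# Demographic parity difference: the gap in favorable-outcome rates
# between two groups. All data here is hypothetical.

def favorable_rate(outcomes):
    """Fraction of decisions that are favorable (True)."""
    return sum(outcomes) / len(outcomes)

def demographic_parity_difference(group_a, group_b):
    """Absolute gap in favorable-outcome rates between two groups."""
    return abs(favorable_rate(group_a) - favorable_rate(group_b))

# Hypothetical loan decisions (True = approved) for two demographic groups.
group_a = [True, True, False, True, True, False, True, True]     # 75% approved
group_b = [True, False, False, True, False, False, True, False]  # 37.5% approved

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")  # 0.375
```

A gap of zero would mean both groups are approved at the same rate; the hard part, of course, is deciding whether that is the right thing to measure in the first place.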
Two extremes define the range of possible outcomes:
- We agree on objective measures of human fairness and incorporate them into the algorithms that will run our lives.
- We do nothing and let machines pick up our biases from the data they analyze.
Both extremes sound either improbable or undesirable. Is there a middle ground that will push humanity forward without leaving anyone behind?