Context: I’ve spent around 15 months learning statistics and machine learning (ML). I’ve written a post about the journey and resources I’ve used. Here I try to share my thoughts and observations about what ML is in essence, why we experienced the recent hype, and some of the potential dangers and opportunities that await us in the future.
Machine learning is pattern recognition
If I had to summarize what ML is as practiced today, I would say pattern recognition. Many of the algorithms try to find patterns in data or build patterns from rules and the environment. Many things are a pattern in some form or another. For example, home prices usually increase with size, a disease has similar symptoms across a population, people in the same life situation buy similar things (young families buy a lot of baby stuff), similar objects have recognizable shapes and colors, traffic participants respond to changes in the environment similarly, and speech is just a collection of patterns of sound waves. There are more examples than I can think of, and that’s the beauty of ML—you can use it for so many things. It doesn’t mean it will replace everything we do today, just that it can help us in more areas than it does at the moment.
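The “home prices increase with size” pattern is the simplest case: a linear relationship that can be learned from examples. A minimal sketch, with made-up numbers purely for illustration:

```python
# Pattern recognition in its simplest form: fit a straight line to
# (size, price) pairs. The data points below are invented for illustration.
import numpy as np

sizes = np.array([50, 70, 90, 110, 130], dtype=float)      # square meters
prices = np.array([150, 200, 240, 300, 340], dtype=float)  # thousands

# Least-squares fit of: price = slope * size + intercept
slope, intercept = np.polyfit(sizes, prices, 1)

def predict_price(size):
    """Use the learned pattern to estimate the price of a new home."""
    return slope * size + intercept
```

Once the pattern is captured in `slope` and `intercept`, the model can estimate prices for sizes it has never seen. Everything more sophisticated in ML is, at heart, a fancier version of this step.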
Machine learning is not new
Fun fact: the ML techniques we’re using are old; most date from the 1950s and 60s up to the 80s and 90s. Nothing revolutionary has happened recently. And this is not the first artificial intelligence hype in human history, so why is it happening again? I think there are two factors at play.
The first factor is access to data. More than half of the world’s population owns a smartphone or a computer with an Internet connection, and all of them can take a photo, write a message, or record a short audio clip; most of us do it daily. Additional sources of data come from passive collection. For example, your GPS location and your fitness tracker, observations and measurements in scientific experiments, or traffic cameras. Most ML techniques today need a lot of data to build and maintain a model. (Aside: You can think of a model as a part of a computer program that was generated with a specific ML technique. A model is a decision maker: it determines if it’s a dog or a cat in the photo, which song is playing on the radio, or how to translate your words into Spanish.)
The second factor is computational power. Some ML techniques—like complex neural networks—require a lot of number crunching. Historically, hardware was expensive, and fast and specialized hardware was prohibitively expensive for most. Fortunately, this situation has been changing in the last couple of years.
I have a consumer-grade graphics card in my desktop computer that would have been considered a supercomputer not that long ago. I bought it to play games, but it turns out I can use it for more than that. As a comparison, I’ve trained a few neural networks on an Intel processor and on an NVIDIA graphics card. The graphics card trained the neural network 30x faster. Thirty times! Waiting one minute for results instead of half an hour transforms how you work. So depending on the situation, you can buy one graphics card and save a lot of money by not buying 30 computers with regular processors. And there are more than 20 companies today building specialized ML hardware.
Even if you don’t own specialized hardware, it’s easy to rent it today. Cloud computing gives quick and affordable access to all the computational power you need. Run a few small prototypes on your local machine and, when you’re ready, rent 20 machines with graphics cards for a few hours to do the heavy lifting. Instead of paying $100,000 to purchase and maintain hardware, pay $1,000 to rent that power only when you need it. The statement “cloud computing will become a utility” wasn’t clear to me before, but now I finally get it. We pay for electricity and water by how much we use them—we can pay for computation the same way.
Killer robots and superintelligence won’t rise to exterminate us
Killer robots gaining consciousness and turning against humans is a popular theme in media. The technology is so far out that it’s still sci-fi. The most significant danger to humans in the near future is: other humans. In the last 100 years, we’ve built nuclear, biological, and chemical weapons capable of eliminating most of life on this planet. We’ll soon add ML-augmented weapons to that list. I wish we would never build autonomous weapons. Unfortunately, nations and military groups will do it out of fear that someone else might build them first. But even then it’s critical to remember that we still have the power to decide how to use them. Only we are responsible for our future. Not gods, not robots. Only us.
Dangers of bias and unfairness are real
I’ve already written about the dangers of bias in ML—how models we build and use can have unintended consequences. After completing my basic ML education, I’m convinced even more that this is a huge problem. And I think most of the issues will arise out of the complexity of the area and the lack of critical thinking, not malicious behavior. Throughout every ML project, I repeatedly asked myself whether what I was doing was correct and whether the outcome would always be as expected. I could never be 100% sure. Some models I could interpret and explain, but many stayed black boxes.
If bias is one side of the coin, fairness is the other. What is fair and what is right is an ethical question. The tech industry is not the first one to grapple with it; philosophy traces our struggles with the question back hundreds, if not thousands, of years. That track record tells me we’re not going to find the right answer anytime soon, but we have to discuss it because the consequences are real.
Let me give you an example. Imagine someone is building an ML model to select students from tens of thousands of university applicants. What would be a fair selection rule: merit (grades, competitions, achievements) or pushing for equal opportunities (giving people from underprivileged groups a chance at higher education)? It’s not a new question; it was here before computers. But for ML to work you have to be explicit in setting your goals and objectives, and that forces you to define your values in code. If everyone on the receiving end of an ML model agrees with your values, you’re good. But chances are people will feel differently. That’s why digital platforms that serve billions of people around the world carry a huge responsibility. The platforms are imposing their values—and with them the notion of what is fair and ethical—on everyone else.
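The point that ML forces you to write your values down can be made concrete. A toy sketch (all names, fields, and weights are invented): the same applicant pool, two different notions of “fair”, two different admission lists.

```python
# A toy illustration of how a selection rule encodes values.
# Applicants, fields, and weights are all made up for this sketch.

applicants = [
    {"name": "A", "grades": 3.9, "achievements": 2, "underprivileged": False},
    {"name": "B", "grades": 3.5, "achievements": 1, "underprivileged": True},
    {"name": "C", "grades": 3.7, "achievements": 0, "underprivileged": False},
]

def merit_score(a):
    # "Fair" as pure merit: only grades and achievements count.
    return a["grades"] + 0.5 * a["achievements"]

def opportunity_score(a):
    # "Fair" as equal opportunity: boost underprivileged applicants.
    return merit_score(a) + (1.0 if a["underprivileged"] else 0.0)

def select(scoring, n=1):
    """Admit the top-n applicants under the given notion of fairness."""
    return sorted(applicants, key=scoring, reverse=True)[:n]
```

Under `merit_score` applicant A ranks first; under `opportunity_score` applicant B does. Neither rule is objectively “the fair one”—but whoever writes the scoring function decides, explicitly, for everyone the model touches.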
I expect us to reduce bias and uncertainty over time but struggle with fairness indefinitely.
Replacing and augmenting humans
In the last two centuries, we’ve invented technologies that have helped us with activities that we didn’t like or that machines could perform better than us. Heavy machinery hauls dirt and rocks, and computers multiply thousands of numbers. The same thing is happening with machine learning: some repetitive and error-prone jobs will be replaced by algorithms, others will benefit from ML techniques, and there’ll be new jobs we can’t even imagine today.
Nobody denies that the transitional period will be full of challenges and ambiguity. Many systems, like educational and legal, will need to adapt soon or else people won’t be able to keep up and might be left behind. However, I’m hopeful. Some companies, institutions, and governments have started to look at how ML can help people, not just replace them. Here are a few examples:
- Google’s People + AI Research (PAIR) tries to make “AI partnerships productive, enjoyable, and fair”
- OpenAI is a non-profit AI research company, discovering and enacting the path to safe artificial general intelligence
- UK’s Parliament wants to lead the way on ethical AI
- New York City moves to create accountability for algorithms
There are many more.
In the long term, we’ll adapt to living alongside machines that recognize patterns in the same way we live today alongside machines that fly or can take the square root of a huge number. We’ll find a way to combine the tirelessness and precision of machines with the versatility and creativity of humans to build a better future. We just need to be thoughtful about how to get there together.