AI Special #85
Hello, here’s a special edition on artificial intelligence and machine learning, a topic that I’m very interested in. I’ve put together a few things to read if you want a better perspective on where things stand. My personal opinion is that we have many problems to solve before we enter an artificial intelligence ‘singularity’, as some people call it.
If you got this from a friend and want to subscribe, here’s the link.
That’s right: with machine learning, we can learn to get desirable results without necessarily knowing how to describe the rules that get us there. It’s pure output. No—or hardly any—process in the sense that we can interrogate or clearly understand it. Maybe not even instruct it, exactly.
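To make that concrete, here’s a minimal sketch (my own illustration, not from the piece): a tiny perceptron that learns the logical AND function. Nobody writes the rule “output 1 only when both inputs are 1”; the program just nudges numbers until the behaviour appears. The training data, learning rate, and epoch count are all illustrative assumptions.

```python
# Training data for logical AND: (inputs, desired output).
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

w = [0, 0]  # weights: learned, never specified by hand
b = 0       # bias
lr = 1      # learning rate (integers keep the arithmetic exact)

def predict(x):
    # The "decision": a weighted sum and a threshold, nothing more.
    return 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0

# Classic perceptron update: nudge weights toward the desired output.
for _ in range(10):
    for x, target in data:
        error = target - predict(x)
        w[0] += lr * error * x[0]
        w[1] += lr * error * x[1]
        b += lr * error

print([predict(x) for x, _ in data])  # [0, 0, 0, 1] — the desired behaviour
print(w, b)  # the learned "rules" are just numbers, not readable logic
```

The final weights reproduce AND perfectly, yet nothing in them reads as a rule — which is the point: with deeper models and millions of weights, that opacity is what the piece is worried about.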
This piece starts off by pointing out that we don’t really understand the inner workings (or process) of many of our machine learning models. It’s a good read because it analyses the repercussions of trusting machine learning and artificial intelligence too much.
He continues: “Algorithms make decisions and we implement them in software. The easy way out is to design them in such a way as to remove the human from the loop. A perfect system. But, there is no such thing. The universe is complicated, and Things Happen. While software can deal with that (…) we can take a step back and say: that is not the outcome we want. It is not the outcome that conscious beings that experience suffering deserve. We can do better.”
All of these people are data scientists, and they programmed an artificial intelligence to run Numerai’s hedge fund. As a result, nobody really knows how the fund works, since no one person was in charge of every part.
So, to sum up: they aren’t privy to his data, and he isn’t privy to their models. And because they work from encrypted data, they can’t use their machine learning models on other data—and neither can he. But Craib believes the blind can lead the blind to a better hedge fund.
This is a piece about Doug Lenat, who believes that the blind approach to machine learning we are currently employing is incomplete. So he’s built a system called Cyc, which is essentially a vast collection of hand-written common-sense rules for a computer program.
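For contrast with the learned-weights approach, here’s a toy sketch of the hand-written-rules style that Cyc embodies: explicit facts plus if-then rules, applied repeatedly until nothing new can be derived. This is my own hypothetical illustration — it is not Cyc’s actual CycL representation, and the facts and rules are invented for the example.

```python
# Facts are (subject, relation, object) triples; rules say
# "if X has property A, conclude X has property B" — hand-written common sense.
facts = {("Socrates", "is_a", "human")}
rules = [
    (("is_a", "human"), ("is_a", "mammal")),
    (("is_a", "mammal"), ("is_a", "animal")),
    (("is_a", "animal"), ("can", "die")),
]

# Forward chaining: keep applying rules until a fixed point is reached.
changed = True
while changed:
    changed = False
    for subject, rel, obj in list(facts):
        for (if_rel, if_obj), (then_rel, then_obj) in rules:
            new_fact = (subject, then_rel, then_obj)
            if (rel, obj) == (if_rel, if_obj) and new_fact not in facts:
                facts.add(new_fact)
                changed = True

print(("Socrates", "can", "die") in facts)  # True — derived, never stated
```

Every conclusion here can be traced back to a rule a human wrote, which is exactly the interrogable process that pure machine learning lacks — and exactly why writing enough such rules by hand has taken Lenat decades.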
“AlphaGo’s victory isn’t a defeat for humans — it’s an opportunity”
A story of AlphaGo. And also an extension of “I don’t play Go, will this help me?”
We usually think of AI as faster and better versions of human brains, but if we want technology to progress by leaps and bounds, we must make AI that is like nothing else on Earth, says digital visionary Kevin Kelly.
Perhaps the most stunning overreach is in the company’s claim that Watson for Oncology, through artificial intelligence, can sift through reams of data to generate new insights and identify, as an IBM sales rep put it, “even new approaches” to cancer care. STAT found that the system doesn’t create new knowledge and is artificially intelligent only in the most rudimentary sense of the term.
The story of the ImageNet dataset, one of the most comprehensive datasets for training machine learning algorithms. It has 3.2 million labeled images organised into 5,200+ categories across 12 subtrees. Researchers then found that their algorithms worked better when trained on the ImageNet dataset. Voila: sometimes big (and better) data is more powerful than a big (and better) algorithm.