Boring machine learning is where it's at

It surprises me that when people think of "software that brings about the singularity" they think of text models, or of RL agents. But they sneer at decision tree boosting and the like as boring algorithms for boring problems.

To me, this seems counter-intuitive, and the fact that most people researching ML gravitate toward subjects like vision and language is flabbergasting. For one, getting anywhere productive in these fields is really hard; for another, their usefulness seems relatively minimal.

I've said it before and I'll say it again: human brains are very good at the stuff they've been doing for a long time. This ranges from controlling a human-like body to writing prose and poetry. Seneca was as good a philosophy writer as any modern, Shakespeare as good a playwright as any contemporary. That's not to say new and diverse works of literature aren't useful, both for their variety and for keeping up with language and zeitgeist, but they're not game-changing.

Human brains are shit at certain tasks: finding the strongest correlation among the variables of an n-million-by-n-million matrix, or even finding the most productive categories to quantify a spreadsheet with a few dozen categorical columns and a few thousand rows. That's before getting to things like optimizing 3D structures under complex constraints or figuring out probabilistic periodicity in a multi-dimensional time series.
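To make the first of those tasks concrete, here's a minimal sketch (with made-up data and a deliberately planted signal) of the kind of correlation hunt that is trivial for a machine and hopeless for a human eyeballing columns:

```python
import numpy as np

# Hypothetical data: 500 observations of 2,000 variables -- far more
# variable pairs (~2 million) than anyone could inspect by hand.
rng = np.random.default_rng(0)
n_samples, n_vars = 500, 2000
X = rng.normal(size=(n_samples, n_vars))

# Plant a strong relationship between two columns so there is something to find.
X[:, 1234] = 0.9 * X[:, 42] + rng.normal(scale=0.1, size=n_samples)

# Correlation matrix over all variable pairs (columns are variables).
corr = np.corrcoef(X, rowvar=False)
np.fill_diagonal(corr, 0.0)  # ignore trivial self-correlations

# The strongest correlated pair, found in one line.
i, j = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
print(i, j, corr[i, j])
```

The column indices and sizes are arbitrary illustrations; the point is that the machine checks every pair exhaustively, which is exactly the move a human analyst can't make.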

The latter sort of problem is where machine learning has found the most practical use: problems that look "silly" to a researcher but are intractable to a human mind. By contrast, ten years in, human-mimicking computer vision (e.g. classification, segmentation) is still struggling to find any meaningfully large market fit outside of self-driving. There are a few interesting applications, but they have limited impact and a small market. The most interesting ones, in bioimaging, happen to be things people are quite bad at; they diverge sharply from the goal of creating human-like vision, since the results you want are anything but human-like.

Even worse, there's the problem that human-like "AI" will be redundant the moment it's implemented. Self-driving cars are a real challenge precisely until they become viable enough that everybody uses them; after that, every car is running on software, and we can replace all the fancy vision-based decision making with simple control structures that rely on very constrained and "sane" behaviour from every other car. Google Assistant being able to call a restaurant or hospital and make a booking for you, or act as the receptionist taking that call, is relevant right until everyone starts using it; after that, everything will already be digitized and we can switch to better, much simpler booking APIs.

That's not to say all human-like "AI" will be made redundant, but its applications are mostly well known and will diminish over time, giving way to simpler automation as people are replaced with algorithms. I say its applications are "well known" because they boil down to "the stuff humans can do right now which is boring or hard enough that we'd like to stop doing it". There's a huge market for this, but it's huge the way the whale-oil market was huge in the 18th century: a market without much growth potential.

On the other hand, the applications of "inhuman" algorithms are boundless, or at least bounded only by imagination. I've argued before that science hasn't yet caught up to the last 40 years of machine learning. People prefer designing equations by hand and validating them with arcane (and easy to fake, misinterpret, and misuse) statistics, rather than using algorithmically generated solutions and validating them with simple, rock-solid methods such as cross-validation. People like Horvath are hailed as genius-level polymaths in molecular biology for calling four scikit-learn functions on a tiny dataset.
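The "boring" workflow being praised here fits in a dozen lines. A minimal sketch, using scikit-learn's bundled diabetes dataset purely as a stand-in for any tabular scientific dataset: fit a boosted-tree model and judge it with plain k-fold cross-validation instead of hand-crafted statistics.

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.model_selection import cross_val_score

# Any tabular dataset works here; diabetes is just a convenient built-in.
X, y = load_diabetes(return_X_y=True)
model = GradientBoostingRegressor(random_state=0)

# Five held-out folds; the mean out-of-sample R^2 is the whole
# validation story -- no p-value gymnastics required.
scores = cross_val_score(model, X, y, cv=5, scoring="r2")
print(scores.mean())
```

The rock-solid part is that every score comes from data the model never saw during fitting, which is much harder to fool than in-sample goodness-of-fit statistics.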

Note: Horvath's work is great and I in no way want to pick on him specifically; the world would be much worse without him, and I hope epigenetic clocks predict he'll live and work well into old age. I don't think he has personally ever claimed the ML side of his work is in any way special or impressive; this is just what I've heard other biologists say.

This is not to say that the scientific establishment is doomed or anything; it's just slow to adopt new technologies, especially those that shift what a researcher ought to be doing. The same goes for industry: a lot of high-paying, high-status positions involve work that algorithms are better at, precisely because it's extremely difficult for people, and thus you need the smartest people for it.

However, market forces and common sense are at work, and usage is steadily ticking up. While I don't believe this can bring about a singularity, so to speak, it will accelerate research and open up new paradigms (mainly around data gathering and storage) and new problems that will let ML take centre stage.

So in that sense, it seems obvious to postulate a limited and decreasing market for human-like intelligence and a boundless and increasing market for "inhuman" intelligence.

This is mainly why I like to focus my work on the latter, even if it's often less flashy and more boring. One entirely avoidable issue with this is that the bar for doing better than a person is low, and the state of benchmarking is poor enough to make head-to-head comparison between techniques difficult. Though that is itself the problem I'm aiming to help solve.
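For what it's worth, the head-to-head comparison lamented here isn't hard in principle. A hedged sketch, again on a stand-in dataset: evaluate two "boring" algorithms on identical cross-validation splits, so any score difference reflects the models rather than the data shuffle.

```python
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor, RandomForestRegressor
from sklearn.model_selection import KFold, cross_val_score

X, y = load_diabetes(return_X_y=True)
# A fixed KFold object guarantees both models see the exact same splits.
cv = KFold(n_splits=5, shuffle=True, random_state=0)

results = {}
for model in (GradientBoostingRegressor(random_state=0),
              RandomForestRegressor(random_state=0)):
    scores = cross_val_score(model, X, y, cv=cv, scoring="r2")
    results[type(model).__name__] = scores.mean()

print(results)
```

The hard part in practice isn't this loop; it's agreeing on shared datasets and protocols across papers, which is where benchmarking actually falls down.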

That's about it. So I say: go grab a spreadsheet and figure out how to get the best result on a boring economics problem with a boring algorithm. Don't worry so much about making a painting or a movie with GANs; we're already really good at that, and we enjoy doing it.

Published on: 2021-10-20









