installing supercomputers in the boardroom

Before you let IBM sell you a computer to help lead your company, consider the downsides of outsourcing your strategy to a machine learning algorithm.

With all the buzz about IBM’s claims of simulating neuron clusters on the scale of a cat’s cortex, and the furious rebuttals from a rival brain modeling team with good reason to be mad, you may be wondering what the big deal is. Why are so many computer scientists interested in creating models of the brain on a giant supercomputer, and what practical use is it supposed to have? It’s not like most of us need to run some sort of climate model or simulate nuclear blasts in our spare time while paying a few million per month for a warehouse-sized array of servers, hard drives and processors. And that’s true. These experiments in cortical modeling aren’t intended for your benefit just yet. However, in the long-term plans of some researchers, your company will rely on computers which follow logical patterns and actually think through complex problems.

And here’s the most interesting and confusing part. These experiments are supposed to make a smarter and more nimble computer by creating processors or operating systems which function like mammalian brains, a move that some call essential to overcome the limitations of today’s machines and vital for the future…

> Businesses will simultaneously need to monitor, prioritize, adapt and make rapid decisions [from] ever-growing streams of critical information. A cognitive computer could quickly and accurately put together the disparate pieces of a complex puzzle, while taking into account context and previous experience, to help business decision makers come to a logical response.

If you’ve worked on business systems, this passage should send chills down your spine. We’re talking about not just automating data gathering and processing to reduce human error, but automating and outsourcing a critical part of business decision-making to a nascent artificial intelligence. Instead of enterprise solutions we use to keep track of the basic day-to-day humdrum, we’d have to build an OmniApp that would essentially become a management team in its own right, its output consulted for every major strategic choice. And considering that it would be based on mammalian intelligence, we’d basically be making fallible computers to help equally fallible humans with complex choices. The whole reason we even build enterprise apps is to take routine, repetitive tasks out of human hands as much as possible, and then present managers with a cold, hard readout of what’s going on in their business. Building a recommendation engine which uses context and personal experience to come up with solutions to problems runs counter to the whole concept.

Personal experience isn’t always the best guide to handling a new situation, and encouraging managers to lean on a hypothetical OmniApp as a crutch in decision-making would only amplify the problems of relying on what worked in the past to plot your way into the future, doing companies everywhere an immense disservice. Sure, we could program a contextual algorithm of some sort, since we’re dealing with a limited number of options in very specific scenarios, something even today’s von Neumann architecture handles very well. Many business applications already do this to some extent when they go through certain types of claim or order processing all by themselves. But if we also have the application make decisions based on what it did before, we’d end up with a machine which makes the same decisions again and again. We’d basically end up with exactly what we have now, only with a fancier engine under the hood: a kind of cybernetic Ferrari doing the work of a pickup truck while a perfectly good truck stands by. It’s a fun challenge for computer scientists, but the idea itself is highly impractical.
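The precedent-driven loop described above is easy to sketch. Here is a minimal, hypothetical Python toy (the class name and scenarios are invented for illustration, not any real product) showing how a recommender that consults its own past decisions converges on repeating them:

```python
from collections import Counter

class PrecedentEngine:
    """Toy decision engine that recommends whatever worked before:
    it tallies past decisions per scenario and always returns the
    most frequent one, regardless of the options on the table."""

    def __init__(self):
        self.history = {}  # scenario -> Counter of past decisions

    def record(self, scenario, decision):
        self.history.setdefault(scenario, Counter())[decision] += 1

    def recommend(self, scenario, options):
        past = self.history.get(scenario)
        if past:
            # "Experience" dominates: the engine repeats its most
            # common prior decision, ignoring the current options.
            return past.most_common(1)[0][0]
        # No precedent yet -- fall back to the first option offered.
        return options[0]

engine = PrecedentEngine()
engine.record("supplier late", "switch supplier")
engine.record("supplier late", "switch supplier")
engine.record("supplier late", "renegotiate terms")

# The majority precedent wins every time from now on.
print(engine.recommend("supplier late", ["wait", "renegotiate terms"]))
# -> switch supplier
```

Once seeded, the engine’s output is frozen by its own history, which is the Ferrari-doing-truck-work problem in miniature: the fancy machinery produces exactly the behavior a static rule table would.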

In the tech world, we walk a fine line between pushing technology as far as we can, exploring the absolute extremes of computing systems, and providing practical solutions to clients. A good example of extreme computing meeting the practical is Sandia’s models of nuclear blasts, based on the mechanics of the warheads themselves. They allow scientists and engineers to figure out how well the bombs should perform under certain conditions, and to see what deterioration would do to a weapon, all without actually testing anything in violation of treaties. But does a business that needs to automate its logistics and keep track of sales numbers require a full-blown artificial intelligence project? Probably not. If anything, businesses are already starting to suffer from the sheer overflow of data generated by countless dashboards and fancy reporting software of very little use. With an overabundance of data comes a condition known as “analysis paralysis,” in which managers can’t make a decision because they keep running models and analyzing data again and again. Having computers either do the same thing for them, or function as decision crutches, won’t help.

# tech // cognitive computing / computer models / computer science
