
The other day I was helping a 79-year-old artist friend of ours who was having computer problems. A few months earlier, I had set up her new wireless printer. Then her Internet service provider replaced her router, and after that she wasn't able to print anything. On my latest visit, I found that the only driver software available for the printer would no longer work with her laptop's old operating system. But it had worked when I set it up for her just a few months earlier.

This incident is a small example of a big problem that we are going to be dealing with more and more in the future: our need to develop and use systems that are so complicated that nobody truly understands them. Our artist friend was caught between three complex systems: the world of laptops, the world of printers, and the world of Internet service providers. And each of these worlds is so complex and many-faceted that no one individual exhaustively understands everything about them. 

Samuel Arbesman, author of Overcomplicated: Technology at the Limits of Comprehension, calls our time of living with incomprehensibly complicated systems the Age of Entanglement. Arbesman is a complexity theorist working at the interface between computer science and philosophy, and he says we'd better get used to situations like the one my artist friend experienced. The systems we use are so complex, both in themselves and in their interactions with other complex systems, that the old ways of understanding them no longer work.

Early in the book, Arbesman cites the case a few years back of Toyotas that mysteriously accelerated out of control despite their drivers' attempts to stop them. The problem caused several deaths and was investigated thoroughly, yet no one was able to explain definitively what went wrong in the baroquely complicated control software of the vehicles involved. This type of frustrating conclusion, Arbesman says, is something we may have to get used to in the years to come.

Historically, the ideal of engineering thinking has been the physics approach of simplifying a model until it can be predicted with mathematical precision. A steel bridge, for example, can be analyzed by considering the stresses in each of its members, and if none are overstressed, the bridge won't fall down.
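To make that member-by-member style of analysis concrete, here is a minimal sketch of the kind of check the bridge example describes, applied to the simplest possible case: a symmetric two-bar truss carrying a vertical load at its apex. All of the numbers (load, bar angle, cross-sectional area, steel yield stress, safety factor) are assumptions for illustration, not data from any real bridge.

```python
import math

# Assumed parameters for a hypothetical two-bar truss.
W = 100_000.0             # applied vertical load at the apex, newtons
THETA = math.radians(30)  # angle of each bar from the vertical
AREA = 0.002              # cross-sectional area of each bar, m^2
YIELD = 250e6             # yield stress of ordinary structural steel, Pa
SAFETY_FACTOR = 2.0       # design margin below yield

# Static equilibrium at the loaded joint: the vertical components of
# the two bar forces must balance the load, so 2 * F * cos(theta) = W.
member_force = W / (2 * math.cos(THETA))

# Axial stress in each member, compared against the allowable stress.
stress = member_force / AREA
allowable = YIELD / SAFETY_FACTOR

print(f"member stress: {stress / 1e6:.1f} MPa")
print(f"allowable:     {allowable / 1e6:.1f} MPa")
print("OK" if stress <= allowable else "overstressed")
```

The point of the physics approach is exactly this: each member's state reduces to one number, and if every such number is within limits, the whole structure is deemed safe. That reduction is what stops working when the "members" are millions of interacting lines of software.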

But what if you're dealing not with a bridge, but with a mind-bogglingly complex network like the Internet, or a power grid, or even the control systems for a single large artifact such as a jetliner? There are so many parts, each of which has its own layers of complexity, that it is literally impossible for one person to wrap his or her head around the whole thing.

In such a situation, Arbesman says, we should act more like biologists than like physicists. Biologists don't try to model a whole ecosystem exhaustively. Instead, they might pick a single species and tinker with it, changing its diet or turning off a gene: doing one simple thing that is easy to control and understand on its own. Then they sit back and see what happens. Sometimes the result is entirely unexpected, as when researchers trying to produce a darker-purple petunia got a white petunia instead. In their attempts to understand this counterintuitive result, they discovered a new gene-manipulation technique that is now a standard part of biology's repertoire.

Having been trained in the old-school physics-model abstraction approach to engineering, I am uncomfortable with the idea that sufficiently complex systems are not only very hard to understand, but that we shouldn't even try to understand them in the conventional sense. Instead, we should act like biologists, poking and prodding them and hoping that we can understand just enough to get the system to do what we want without its going off the deep end into radical misbehavior.

This approach may be adequate for non-life-critical systems. If our artist friend can't print anything for a while, it's not the end of the world. But when I realize that some driverless vehicles now being tested are operated by artificial-intelligence systems that even their designers can't explain in detail, it gives me pause.

How do you feel about designing or using systems that are beyond any one individual's comprehension? Have you had technical problems with systems that no one seems to understand? Send responses to kdstephan@txstate.edu.
