Respect for Trial & Error, & Success
There are many ways we transform questions and uncertainty into confidence, new products, or innovative solutions. Experimentation and trial is an approach many seem compelled to avoid. Give respect to the power of trial and experimentation, and to their risks.
Right now I’m working on a project with a friend of mine and it strikes me how differently we approach the development of our vision. He finds it simpler and quicker to run calculations and probabilities. I prefer to lay out a scenario and try it to see what happens.
Throughout the various realms of product development, innovation, and process improvement we experience similar differences in preference. It seems that many prefer to find ways to model the problem or the solution and run simulations to arrive at an answer; the minority will prototype, test, and experiment.
Even in the realm of Six Sigma, where designed experiments are not only taught as a powerful tool but experience with them is mandated for many to earn their Black Belt certification, experimentation seems to be disfavored because of the expense and time required to develop and conduct careful, scientific experiments. We try to find simulations instead.
So, which preference is best? Which is most effective? Are those who prefer experimentation and trial holdouts from a bygone era before computing power enabled rapid and reliable simulation of complex and dynamic scenarios?
Of course, we all know the answer intuitively or experientially. Both simulations and experimentation have their place. One is not better than the other in general, just in specific instances. At the risk of insulting a reader, or a reader’s manager, those who take a one-sided view probably have little experience with either.
The challenge, and the primary point of contention between camps, is not acknowledging that simulations and trials are both valid methods, but knowing which one is the right one to use at a given time. Let’s explore some real observations and turn those into a guideline for us to follow.
Some years ago, when I was the manager of a product design group, our engineering function was turned over to a new engineering director. The new director came from a different industry and brought many different ideas with him.
One of those ideas was to “strongly encourage” us to stop using trial and experiment as a development tool, and to rely on simulation instead. I recall being one of several voices that tried to explain that we did not believe his direction was correct, or at least that it was given without complete understanding. Well, when driving change, one must stop allowing old behaviors to persist. Our arguments were not welcomed.
So, we did as directed and began our best efforts to build simulations to supplant trials as a means of determining if our developing designs were valid. Immediately, and not coincidentally, one of our most important and urgent projects fell drastically behind schedule. We were forced into a design review even though the engineers knew we did not have the critical questions answered.
Of course, the design review revealed how many unanswered questions we had, and when the collective functional leaders drilled into the cause, it was observed that we were developing our models to simulate product and component performance, so we had not progressed on the product itself. It turned into a very embarrassing moment for our new functional leader because he had already explained to his peers that simulations were faster and more efficient than trials and that under his leadership our engineering organization would become faster and more predictable.
Instead we were slower, his leadership was questioned, and our engineering organization appeared incompetent. The reasons were simple. Simulating complex mechanical designs, integrated systems, or even complex software systems with a multitude of input combinations takes a great deal of thought, time, and work, especially if you want the models to be reusable. Also, simulation is not always faster, less expensive, or more dependable than trials.
The components and designs we needed to validate were complex mechanical and electronic systems to be used in dynamic ways; however, the components were small and relatively simple. We could build and destroy many samples in much less time and for less expense than we could develop meaningful simulations.
Building dynamic computer simulations is a difficult science, especially if data is lacking. The models must also be validated in order to provide credibility, and model validation requires experiments and data. Alternatively, we could have sped up the process by purchasing some very exclusive software, but the expense and the time required to learn how to use it properly were an equal deterrent.
Our leader, in this example, came from the automotive industry, an environment where the products are extremely complex, the samples are very costly to build (in fact, they aren’t samples or prototypes at all; the test samples are production-run vehicles), and the models and simulations have been developed and evolved over decades with vast data. He didn’t anticipate the start-up cost and time for his vision.
Another example that sheds light on the value of trials and experiments occurred when I was a member and leader of an engineering team that developed security devices. Some clever individuals in the world had developed a relatively quick and simple means of thwarting almost every device of a common basic format, by every manufacturer. Because of the Internet, the method had become common enough knowledge to be a serious problem.
Naturally, we set out to modify our products to counter the method others had devised to thwart our products’ security. It wasn’t just a business need; it was an ethical demand. We began by researching what the experts had to say about the method and the phenomenon that took place that enabled the method to bypass our device’s designed security.
The assertions of the experts matched our own theories, and they made perfect sense. There was one doubt, however. If our theories were correct, then certain designs should not be vulnerable to the method, but they were. We constructed computer simulations that modeled our theories and the expert assumptions, and they showed that the designs in question should not be vulnerable, but our experience dictated otherwise.
We decided we needed to know for sure. We bought and rented some data collection and sensor control equipment and a high-speed camera. We modified our products so we could install the sensors and the camera, and we conducted some trials. We witnessed, with the high-speed camera, a phenomenon that no one thought likely or even possible. It was not the phenomenon that everyone believed took place.
The sensors gave us data to update and improve our models and simulations. Why, though, did our computer simulations not show us the phenomenon that actually occurred? Because our models were built from our assumptions.
All models are built from assumptions. Some assumptions we get from textbook formulas. Some we get from direct observation and data. Some are simply our best guess at what really influences, or doesn’t influence, the outcome.
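A toy sketch can make this concrete (the numbers and the physics here are hypothetical, not from the project above): simulate a projectile's range twice, once under the textbook assumption of no air resistance and once with a simple linear-drag assumption added. The model's answer changes because an assumption changed, not because reality did.

```python
import math

def simulated_range(v0, angle_deg, drag_coeff=0.0, dt=0.001):
    """Step a projectile forward in time and return where it lands.

    drag_coeff=0.0 encodes the textbook assumption of no air resistance;
    any positive value encodes a different assumption about the same system.
    """
    theta = math.radians(angle_deg)
    vx, vy = v0 * math.cos(theta), v0 * math.sin(theta)
    x, y, g = 0.0, 0.0, 9.81
    while True:
        # Simple Euler integration: update velocity, then position.
        vx += -drag_coeff * vx * dt
        vy += (-g - drag_coeff * vy) * dt
        x += vx * dt
        y += vy * dt
        if y <= 0.0:          # projectile has returned to the ground
            return x

ideal = simulated_range(30.0, 45.0)             # no-drag assumption
with_drag = simulated_range(30.0, 45.0, 0.05)   # drag assumption added
```

Both runs are internally consistent and both look credible; only an actual trial could tell you which set of assumptions matches the real system.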
In the end, after scouring the patent databases, conducting extensive teardowns of competitor systems, and thoroughly researching the information available on the Internet, it became clear that our group was the only one that really knew or understood what happened inside the systems that allowed them to be thwarted. We could tell because what happened didn’t make intuitive sense, and the designs others produced to prevent thwarting would not stop what we observed.
Now this could have gone two ways: We could have built numerous designs based on concepts and tested them all to discover that none of them stopped the bypass method. That would have been trial and error with a great deal of waste. In the end we would have had an unsolved mystery. Instead, we used trials and experiments to directly observe what truly happened.
The trick with successful trials is to identify what you need to learn and design an experiment that will show it. Tests that produce only a works/doesn’t-work outcome are of very little value. If we want to use trial-and-error as a means of furthering our designs, then our trials must teach us something. All other testing is generally wasteful.
Last week, when my friend and I met to compare notes and share our progress on our shared project, his method of mathematical calculation trumped my experimental one. He had accurately predicted, in a few minutes, the same outcomes that I observed with several trials. The trials took much longer than his calculations. At that stage of development and with the questions we needed to answer, his calculations and models were a superior method to my trials.
There is a very good reason why models and simulations might be preferred over trials. If we already have the software, knowledge, and skills to build a meaningful model, then it is often much less costly and less time consuming. Models operate in the perfect laboratory environment because we dictate so. Even when we try to control the variables a trial might experience, we sometimes get anomalous results and we must re-do a trial.
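A trivial illustration of that trade-off (a made-up example, not our actual project): estimating the chance that two fair dice sum to seven. The calculation is exact and instant; the trial-based estimate needs many rolls and still carries noise.

```python
import random
from fractions import Fraction

# Analytic answer: 6 of the 36 equally likely pairs sum to 7.
analytic = Fraction(6, 36)

def trial_estimate(n_trials, seed=0):
    """Estimate the same probability by running repeated trials."""
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_trials)
               if rng.randint(1, 6) + rng.randint(1, 6) == 7)
    return hits / n_trials

estimate = trial_estimate(100_000)  # close to 1/6, but never exactly
```

When the model is this well understood, running trials is pure overhead; the interesting cases in this essay are the ones where no one is sure the model is right.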
Trials can be very costly when they don’t produce meaningful results. I’ve often seen a trial produce an outcome that no one expected and then everyone sits around trying to make sense of a new mystery. This happens most often when the trial or experiment is not carefully planned, when an influence we did not respect or anticipate drives results.
Trials that generate more questions than they answer can be very important because they inform us what we need to address or understand about our system, but they are not always a good outcome. They lead to more trials, which means more time and expense. Trials can be dangerous and wasteful if they don’t produce the information and knowledge we need. For this reason, trials should be respected, not only for their potential to produce knowledge, but also for their potential to cost us greatly.
If we decide to use trial-and-error, experimentation, or testing to help us progress our designs and solutions, we must plan those trials, experiments, and tests very carefully. Spend the extra time it takes to thoroughly think through the approach and what we expect or need to learn, and what can influence the outcome, before spending resources to conduct the exercise.
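One concrete form that planning can take is a designed experiment. As a sketch (the factors and levels below are invented for illustration), a full-factorial design enumerates every run and its factor settings before any resources are spent, so the cost and coverage of the experiment are known up front.

```python
from itertools import product

# Hypothetical factors and levels for a small designed experiment.
factors = {
    "temperature": [150, 180],
    "pressure": [1.0, 2.5],
    "material": ["A", "B"],
}

# Enumerate every run in the full-factorial design: one run per
# combination of factor levels (2 x 2 x 2 = 8 runs here).
runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]

for i, run in enumerate(runs, start=1):
    print(f"run {i}: {run}")
```

Seeing the run list, and its size, before the first prototype is built is exactly the kind of up-front thinking that keeps a trial from devouring the budget.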
While my friend perceives that calculations and simulations are far less effort than trials, and while I still find trials far more entertaining than mathematical drudgery, our conversation last week culminated in the following observations. I’ll paraphrase:
Models and simulations are useful when we know enough about the phenomenon to credibly reproduce it in an artificial environment. They are faster and less expensive to conduct than trials when we already have the tools and methods established. Simulations can be very useful in eliminating questions and minimizing the number of trials we should perform.
Data and observations from trials can be used to develop models for future use to further reduce time and expense on future trials. Investing time in developing better, more versatile or accurate models can certainly be time well spent if it will credibly reduce, or eliminate, expensive testing or experimentation.
Models lose value if we do not have the tools or information we need to construct them. It can take extraordinary resource dedication to develop complex or specialized models from scratch. Models are not always quicker and less expensive than prototypes and trials.
Trials, experiments, and testing are excellent ways to decipher and observe what truly happens. They are especially important when customer perceptions and product or solution success depend upon how a customer feels, perceives, or experiences the product or solution. Simulations will not provide meaningful information about such things.
Trials and tests are risky. They waste resources if they do not produce results we can use. Therefore, they should be conducted with respect for their potential to waste time and money, and should be carefully planned and conducted in a manner to mitigate that risk.
In short, do not summarily subscribe to one camp, either that of simulation or that of experimentation. It is wise to use both tools at the correct time, for the proper need. Consider, for each case, whether testing or modeling will be faster or less expensive. Which one will reveal what you need to know?
Trial-and-error, experimentation, and testing are important tools for developing new solutions and for innovation or product development. Do not forget to consider their importance or usefulness. Do not conduct them without proper respect for their potential to devour your development time and budget.
Stay wise, friends.