Top Hats

I saw Steven Spielberg’s new movie, “Lincoln,” and thought it was terrific. Daniel Day-Lewis gives a career-defining performance as Abraham Lincoln, and the supporting cast is outstanding. For slightly over two hours I felt as if I had been transported 148 years back in time, living through all the political turmoil at the end of the Civil War. But this isn’t a movie review. It’s a story about a hat. In particular, it’s a story about the formal hat that almost all of the men in Washington, DC in early 1865 were wearing–the tall, stovepipe-shaped hat that we know as the top hat.

The history of the top hat doesn’t have a clear-cut beginning. One story has the top hat invented in Florence around 1760. Another has a Chinese hatter making the first top hat for a Frenchman in 1775. What is known for certain is that an English haberdasher named John Hetherington caused a riot the first time he wore a top hat in London in 1797. According to a contemporary newspaper account, passersby panicked at the sight. Several women fainted, children screamed, dogs yelped, and an errand boy’s arm was broken when he was trampled by the mob. Hetherington was hauled into court for wearing “a tall structure having a shining luster calculated to frighten timid people.”

It was much ado about nothing, really; Hetherington’s top hat was simply a silk-covered variation of the contemporary riding hat, which had a wider brim, a lower crown, and was made of beaver. There was initial resistance to Hetherington’s silk top hat from those who wanted to continue wearing beaver hats. But around 1850 Prince Albert started wearing top hats made of “hatter’s plush” (a fine silk shag), and that effectively settled the beaver versus silk fashion question. Of course, the fact that trappers’ demand for beaver pelts to sell to hatters in America and Europe had all but wiped out the American beaver population may also have had a lot to do with the switch from beaver to silk. Whatever the case, by Lincoln’s time the silk top hat was the de rigueur headwear for both informal and formal occasions. [The photo at left shows President Lincoln meeting his generals at Antietam in 1862.]

Throughout the nineteenth century, men wore top hats for business, pleasure and formal occasions–pearl gray for daytime, black for day or night. The height and contour of the top hat fluctuated with the times, reaching a pinnacle [pun intended] with the French dandies known as “the Incroyables,” who wore top hats of such outlandish dimensions that there was no room for them in overcrowded Parisian cloakrooms–until Antoine Gibus invented the collapsible opera hat in 1823. Nearly a century later, financier J. P. Morgan approached the same problem from another angle, ordering a limousine built with an especially high roof so he could ride around without taking his hat off.

My personal favorite top hat milestone was achieved in 1814 by a French magician named Louis Comte, when he became the first conjurer on record to pull a white rabbit out of a top hat. But by the early decades of the 20th century, the top hat was no longer everyman’s hat; it had become a symbol of the aristocratic and powerful, most famously evidenced by Rich Uncle Pennybags from the Monopoly game, and America’s Uncle Sam, a symbol of US power who is always shown with a top hat.

Every US President since Lincoln wore a top hat to his inauguration, until Dwight D. Eisenhower broke with the tradition, which was briefly reinstated by John F. Kennedy at his inauguration in 1961, and then abandoned by Lyndon Johnson and all the presidents who followed. Alas, in spite of its storied history, the top hat has largely gone out of fashion. There are, of course, a few exceptions, like the iconic top hat that Slash, the guitar player from Guns N’ Roses, adopted as part of his persona. But what was once commonplace has now become a rarity. It is, of course, still possible to purchase a silk plush top hat, though you’ll likely be buying a reconditioned model, re-conformed to fit your head, since very few silk top hats have been made since French production largely ceased in the 1970s.

I’m not sure what, exactly, led to the demise of the top hat. Perhaps top hats were simply too much trouble to take care of, what with the need to find suitable places to store them wherever you went. Can you even imagine a gentleman walking onto an airplane with a top hat, and trying to find space in the overhead compartment to store it so that it wouldn’t be damaged? I think that our world today is, in a rather uncomfortable way, too crowded to allow for men wearing top hats. It’s too bad. I think they’re pretty cool. I can picture myself, swirling my cape, with a silver knobbed walking stick in one hand, doffing my top hat as I head off to take on the world….then, poof, the image evaporates as I remember that, most days, I wear my pajamas into my home office in the morning.

Landing in the Fog

While Albert Hammond was almost correct in his 1972 song, “It Never Rains in Southern California,” that doesn’t mean the sun is always shining. We call our own special brand of fog the “marine layer.” It’s a dense layer of fog that rolls in off the Pacific, drawn inland when warm air over the desert pulls cold, damp air in from over the ocean. When the desert air warms up in the spring and the ocean water is still very cold, the resulting fog layer lasts late into the morning and returns early in the evening, giving rise to the phrases “May Gray” and “June Gloom” to describe the weather pattern. After a few years you get used to it, even if you don’t like it. After all, it’s simply fog, and it’s nothing more than a minor inconvenience–unless you are trying to land an airplane. Then it can be very scary.

Flying home to Orange County, California from San Francisco on Thanksgiving weekend, we were thrilled to be upgraded to first class on a United 757-200 jet, even for the brief, 60-minute flight south. We’d spent an enjoyable few days with our daughters and son-in-law, and were ready to get home and begin preparing the house for the holidays. When we took off shortly after 9:00 p.m., we drank a glass of red wine, then buried our noses in our eReaders [Kindle for me, iPad for my wife]. I calculated that we’d be home around 10:30, barring any delays picking up our luggage at baggage claim. All was right with the world, at least for the moment.

The flight to Orange County had been relatively uneventful, with only a few minutes of very mild turbulence. But everything changed when we made our approach to John Wayne Airport. As the airplane descended below 2,000 feet on final approach to the runway, we entered an area of very thick fog, so dense that I couldn’t see a single light shining up from the streets of Irvine, the crowded city where John Wayne Airport is located. I searched for the 405 freeway, which runs perpendicular to the runways at John Wayne, but literally couldn’t see even one pair of headlights–nothing but fog.

Even the noise from the aircraft’s engines didn’t sound right to my ears; I’d made that landing over two hundred times in the past fifteen years, and this definitely sounded different. Then, at last, the runway lights came into view as we appeared to be no more than 100 feet or so from the ground, but instead of landing the aircraft, the captain powered up the engines and took us back up to a safe altitude. The First Officer apologized over the intercom, and advised that we were going to circle around and make a second attempt to land the plane. If we couldn’t get down at John Wayne, we’d divert to Ontario, CA Airport, about 40 miles further inland.

Around we went, and the second attempt led to a similar result–my wife clenching the armrests and bracing for a crash, but too much fog to see the runway clearly, so back up in the air we went. Surprisingly, the First Officer announced we were going to make a third attempt, since the visibility through the fog seemed to have improved slightly from the first to the second attempt. This time, passengers throughout the cabin seemed to be either hyper-alert or quietly resigned to their fates, but–much like in the movie “Groundhog Day”–the pilots started the approach to the runway, then pulled up and flew away, finally landing at Los Angeles International Airport around 11:00 p.m. I won’t bore you with the details of our long wait for the airline-provided bus to take us the 40 miles back to John Wayne Airport, where we got our car and drove home through the dense fog, arriving tired and emotionally depleted around 2:00 a.m.

Okay, it was a bit unsettling and a lot annoying, but we were safe. We landed safely, in spite of the fog, even if the plane didn’t arrive at the airport we expected. Most importantly, it was not an accident [no pun intended] that our landing happened the way it did. The entire landing process was guided by the FAA’s Instrument Flight Rules (IFR), a set of regulations that dictate how aircraft are to be operated when the pilot is unable to navigate using visual references–exactly the situation we encountered that night at John Wayne Airport in Orange County.

When flying under IFR, once the plane is established on its final approach, it is guided by a highly sophisticated Instrument Landing System (ILS), which provides precision guidance to help the pilot get the aircraft properly aligned for a landing. [If you are interested, you can find out everything you wanted to know but were afraid to ask about Instrument Landing Systems and how they work by clicking here.] The pilot is not permitted to descend below a specified minimum “decision altitude” unless the visibility requirement is met and the pilot has the required visual references in sight for the runway where he intends to land. The decision altitude will vary, depending upon the runway length and location, and the capabilities of the specific aircraft. At 5,701 feet, the runway at John Wayne is the shortest of any major airport in the United States. Accordingly, the decision altitude for John Wayne is set at 200 feet, the highest minimum decision altitude under IFR.

From the pilot’s standpoint, the decision process was very straightforward. He couldn’t establish satisfactory visual references, so, at 200 feet altitude, he aborted the landing at John Wayne, and pulled away to try again, and again, and yet again, finally diverting to LAX, with much longer runways, ranging from 10,285 to 12,091 feet. Plan A was always, of course, to land at John Wayne. But the decision to go to Plan B–the contingency plan to abort the landing and try again or divert to another airport–wasn’t left to the judgment of the pilot. It was automatic, triggered by the failure to see the runway lights from at or above 200 feet.
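That automatic trigger amounts to a very small decision rule. Here is a minimal sketch in Python; the function names, the three-attempt limit, and the way visibility is modeled are my own illustrative assumptions, not actual FAA procedure:

```python
# Toy sketch of an automatically triggered Plan B, modeled loosely on the
# 200-foot decision altitude rule. All names and limits are illustrative.

MAX_ATTEMPTS = 3  # after this many failed approaches, divert to the alternate


def attempt_landing(runway_in_sight: bool) -> str:
    """At the decision altitude: land only if the required visual
    references (the runway lights) are in sight; otherwise go around."""
    return "land" if runway_in_sight else "go_around"


def fly_approaches(visibility_by_attempt: list) -> str:
    """Fly up to MAX_ATTEMPTS approaches; divert if none succeeds.
    The trigger is automatic -- no judgment call is involved."""
    for attempt, runway_in_sight in enumerate(visibility_by_attempt[:MAX_ATTEMPTS], 1):
        if attempt_landing(runway_in_sight) == "land":
            return f"landed on attempt {attempt}"
    return "diverted to alternate"


# The night described above: three approaches, no runway in sight at 200 feet.
print(fly_approaches([False, False, False]))  # -> diverted to alternate
```

The point of the sketch is the last line of `fly_approaches`: Plan B fires because a pre-set condition fails, not because someone finally judges that things look bad enough.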

When you think about it, it is surprisingly difficult to come up with other examples of an automatically triggered Plan B, especially in the business world. (If you can come up with some good business examples of the automatically triggered contingency plan, please send a Comment and share them.) All too often there is only Plan A. Moreover, when there is a contingency plan, it’s not at all clear when to abort or give up on Plan A and activate Plan B. No specific decision rule exists to tell the business leader that it is time to put the contingency plan into operation, before it is too late. Left to individual judgment, too many times Plan B isn’t activated in time, and the business crashes.

The scarcity of decision triggers for contingency plans should seem surprising, given the difficulty, some would say the impossibility, of accurately predicting the flow of future events. But we are generally supremely overconfident about our plans, whether or not that confidence is justified. We also tend to disregard any evidence that would suggest that our plan is in trouble, so any contingency plans we may have made are rarely taken seriously. I’m not sure that any amount of pleading will improve that situation. In the meantime, I’m glad that we didn’t attempt an overly risky landing, and got down safe and sound, even if we were a few hours late. It’s a trade-off I’d make every time.


The Single Greatest Risk to Our Economy?

The 2012 election was about the economy. Much was said about the government’s role in supporting the context for job creation via tax, debt and trade policies. Most of the focus, however, was on the role of large companies–whether they will ship jobs overseas or keep them here. Even less attention was paid to small business, and none at all was given to a threat that has less to do with government and more to do with entrepreneurial initiative. We think this was a mistake.

We frequently hear that small business is the engine for job creation and economic growth, and the data supports that proposition. According to the Bureau of Labor Statistics, there are 30 million small businesses in the US, collectively accounting for over $10 trillion in personal wealth. These small businesses employ 53% of all US workers, and are responsible for creating 64% of all net new jobs [70% of all jobs created in the last decade]. This sector offers a huge opportunity for the market-led economic recovery we need, but it is threatened: not by trade or tax policy (though getting these factors right is important for the sector to thrive), but by the very entrepreneurs who created all this wealth, all these jobs, all this economic growth and innovation. And unless they get this right, the loss of wealth and economic vitality that took place as a result of the 2008-09 “recession” could pale in comparison. We’re talking trillions of dollars of squandered wealth.

Here’s what we mean. Sixty percent of small business owners were born before 1964–they are part of the Baby Boomer Generation. So–get ready for this–a Baby Boomer small business owner is turning 65 every 57 seconds, and that will continue for the next 17 years.

That is a startling number. Every 57 seconds another small business owner gets ready to retire, or at least to scale back on their involvement in the business. You’d like to think that, similar to large, Fortune 1000-type companies, these business owners are thinking ahead, making plans for the continuation of their businesses, with someone else at the helm after they exit. Old age will inevitably mean that even the hardiest of us start to slow, and health issues will take their toll as we age. Though many of us are motivated to keep earning to support adult children and even grandchildren, we only last so long. Yet, even though 95% of small business owners acknowledge the importance of exit and succession planning, only one in eight has a written plan for leadership continuity; and without such plans the odds that the business will disappear along with the current owner are far too high.

What accounts for this failure to protect and secure what it took a lifetime to build? How is it that the men and women who took risks, learned hard lessons, and displayed ingenuity and tenacity their entire lives, act like fearful procrastinators when it comes to managing the business risk of retirement and succession? Have they all gone mad?

We don’t think a bout of mass hysteria is a plausible explanation.

Many business owners simply refuse to quit, and this takes many forms. Some would like to quit but don’t trust anyone to do the job well after them. Some are tangled up by unrealistic expectations from family and business partners. Others wonder how they will be treated once they retire, and what they will do with themselves. Many simply don’t see how they can attract a buyer or arrange a buyout or realize other acceptable transitions.

It all stems from one thing: they don’t act because they don’t really know what they want to do next. So rather than tackle it like any other challenge in their business, they let it slide, because business is business, but this… this is personal. Besides, there are always issues more urgent in a thriving business (though none more important). But successful transition of a business across many generations is possible. The owners of the Montreal Canadiens NHL team are the Molson family; they are on the 12th generation of Molsons and have been central to Montreal’s economy for two centuries.

Figuring out what to do is difficult because no cookie cutter solution speaks adequately to the intricacies of each business and each ownership situation. The owners’ usual advisors tend to see things as accountants, as lawyers, as financial planners, etc.  They often don’t see the whole picture or will discount aspects for which they don’t have a ready solution.

If politicians aren’t looking at this issue, others are. For example, in our book Changing Places: Making a Success of Succession Planning for Entrepreneurs and Family Business Owners,  co-author Moss A. Jackson, PhD and I provide a path for business owners to develop and execute their own customized succession plans to secure the future for themselves, their families and their employees. Jack Beauregard, founder of the Successful Transition Planning Institute, in his book Finding Your New Owner: For Your Business, For Your Life, tackles the same issues with a structured transition planning approach for Baby Boomer business owners.

We encourage Baby Boomer business owners to wake up to the reality of their situation, and face up to the challenge. There is too much at risk to simply do nothing and hope for the best. If you own a business, start planning for the “life-after-exit” you want to lead, making sure that your business, with all that it creates for your family, employees and the economy, survives for generations to come.

[This post was co-authored with Alan Engelstad and Karl Moore. It first appeared, in somewhat shorter form and with a different title, on December 7, 2012 in Karl Moore’s blog. Alan Engelstad designs innovative management transitions with Designed Outcomes and is an Adjunct Professor at the Desautels Faculty of Management, McGill University. Karl Moore is an Associate Professor at McGill University, Montreal, Canada, and teaches and writes about how leadership must be rethought.]


Seeing With Fresh Eyes

French novelist Marcel Proust wrote that, “The real act of discovery consists not in finding new lands but in seeing with new eyes.” To me, this sounds like the exact opposite of déjà vu. We all know that déjà vu feeling. It’s the distinct feeling that, even though we are in a completely unfamiliar place, somehow, we’ve been here before. The noun déjà vu (from the French: “already seen,” also called paramnesia) first appeared in English in 1903, and the dictionary gives it two meanings:

  1. the illusion of having previously experienced something being encountered for the first time
  2. disagreeable familiarity or sameness: The new television season had a sense of déjà vu about it–the same plots and characters with new names.

The late comedian George Carlin, in one of his comedy routines, coined the phrase vuja de to describe the feeling that “none of this has ever happened before.” Vuja de hasn’t made its way into the formal lexicon, but the Urban Dictionary provides a few attempts at a definition, all of them a variation on Carlin’s theme of coming to a familiar place and finding something new and different that you’ve never seen before. Proust might have balked at being linked with George Carlin, but I think Carlin’s phrase captures the essence of Proust’s quote, since vuja de must be akin to having a fresh set of eyes: seeing the same thing as everyone else, but understanding it in a unique way. As an aside, Carlin’s ability to bring that sense of vuja de to his observations of everyday life was the essence of his comedic genius.

Abraham Wald’s Airplanes

An often-quoted example of this “seeing with fresh eyes” is the story of the Hungarian statistician Abraham Wald, who worked during World War II with the UK Air Ministry. British bombers were being shot down over Germany, and it made sense to reinforce the planes with armor. You can’t armor plate the entire aircraft, because the plane would be too heavy to get off the ground. Wald was asked to perform a statistical study to answer the question, “Where should we place the armor?” Records of the planes returning from Germany showed where they had been hit, sometimes with very large holes in the aircraft extremities. So the Air Ministry wanted to put armor plate on all of the areas that showed heavy damage. But Wald pointed out that there was no data on bombers that didn’t return from Germany. He then carefully noted the few areas of the bombers where holes were NEVER found. These were the areas that Wald said needed heavy armor, because any bomber hit in those areas must not have been able to make it back to England. Obvious–but only after someone looked at the data with fresh eyes and pointed out that what had previously seemed “obvious” was obviously wrong. The folks at the UK Air Ministry were fooled because they saw what looked like a pattern and made a wrong interpretation. They ignored the pattern that they couldn’t see: the pattern of bullet holes in all of the bombers that were shot down.
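Wald’s point, that the missing bombers are the missing data, is easy to reproduce with a toy simulation. Everything below (the section names, the hit counts, which sections are fatal) is invented purely for illustration:

```python
import random
from collections import Counter

random.seed(0)
SECTIONS = ["wings", "fuselage", "tail", "engines", "cockpit"]
FATAL = {"engines", "cockpit"}  # toy assumption: a hit here downs the plane


def fly_mission():
    """Each bomber takes 1-4 hits, spread uniformly over its sections."""
    hits = [random.choice(SECTIONS) for _ in range(random.randint(1, 4))]
    survived = not any(h in FATAL for h in hits)
    return survived, hits


observed = Counter()  # the only holes the analysts can count: the returners'
for _ in range(10_000):
    survived, hits = fly_mission()
    if survived:
        observed.update(hits)

# Gunfire was uniform, yet the fatal sections show zero holes in the
# surviving sample -- exactly the gap Wald said to armor.
print(observed.most_common())
```

The holes cluster on the wings, fuselage, and tail not because those areas are hit more often, but because every plane hit anywhere else never came home to be counted.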

Our Pattern-Matching Minds

The human mind is an amazing pattern-matching machine. Our ability to recognize and act upon patterns from a small sample of data allows us to navigate the demands of our daily lives without having to consciously think about everything that we do. We can develop heuristics–rules of thumb–that help us make decisions almost without conscious thought. Remember what it was like as a child learning to brush your teeth, or as a teenager learning to drive. Every action was strained and difficult, requiring careful thought. These are actions we perform today almost automatically. We’ve learned the physical patterns of handling the toothbrush and steering the vehicle.

We do the same kind of thing in our thinking processes. We observe patterns, and make fast decisions based on what those patterns tell us. Unfortunately, as with Wald’s airplanes, our pattern matching isn’t always correct. In fact, it’s often disastrously off the mark. Our minds are subject to a broad array of conceptual and perceptual errors that assail our ability to truly think straight. (If you are really interested in exploring how well, and how poorly, our thinking processes work, I highly recommend you take a look at Nobel Prize winner Daniel Kahneman’s latest book, Thinking, Fast and Slow. If that sounds familiar, it may be because I mentioned his book in a previous post.)

But what’s maybe even more disconcerting than what we get wrong in our thinking is what we fail to see that is right there in front of us–the things that are so obvious once they’ve been pointed out. Studies show that airport security staff miss a huge proportion of the weapons that authorities send through x-rays as a test. Should we be horrified? How could something as unambiguous as a gun not stand out? Psychologists like Kahneman know the answer: because guns are so rare, the screeners don’t see them; the bags with the weapons simply merge with the “pattern” of bags without guns. So, beguiled by pattern, the screeners can’t see with fresh eyes, and they fail to notice the gun.

But sometimes, people find ways to look with fresh eyes. Companies were making cell phones and PDAs for years, but Apple, with the iPhone and apps, looked beyond the communications functions of the device and saw it as the smartphone, personalized and loaded with our apps and data until it has become almost indispensable, more like a part of our body than an appliance.  The Amazon Kindle and its copycat eReaders are a similar phenomenon, as are wheels on suitcases, though we put a man on the moon before we figured that one out. Obvious once you’ve seen it.

A vuja de Switch

Perhaps what we need is a “vuja de switch” that we could turn on in our brains when we get too bound up in conventional thinking. That seems to be a major theme of the book Practically Radical, by Bill Taylor, co-founder of Fast Company magazine. Perhaps, as Oliver Burkeman wrote in his review of Taylor’s book, published in The Guardian, “‘Think outside the box’ has been put back in its box. Vuja dé is in.”

Unfortunately, as Burkeman points out, recommendations for vuja de thinking are essentially replacements of old thinking routines with new and different routines. The key word here is routines. Again quoting Burkeman, “The point of vuja dé is to think outside preworn grooves, but a book telling you how to think is to some extent by definition a preworn groove.”

There are some possibilities, though. Thinking about a problem in different physical circumstances seems to help, which perhaps explains why BFOs (Blinding Flashes of the Obvious) often strike when we leave our desks and step outside for a walk in the fresh air. Selecting the reference frame of a different person–an engineer, an actor, a cook–and considering what they would do can also be a very powerful way to bring a fresh sense to our own eyes. But if the key is to randomly shift perspective to trigger new outlooks, we are in trouble without that vuja de switch. Without an outside intervention to jolt us, our pattern-seeking brains will follow the familiar and well-worn pathways, and ignore what is right in front of us.

The best advice on this subject may be found in the book Zen Mind, Beginner’s Mind, where Zen teacher Shunryu Suzuki wrote: “In the beginner’s mind there are many possibilities. In the expert’s mind there are few.” If we could approach the world with the wonder of a child, unburdened by the weight of our worldly experience and the numbing patterns it triggers, what might we be able to discover?

Stomping Grapes or Making Wine?

If you’re anywhere near my age, you’ll remember a very funny episode of the “I Love Lucy” show, where, en route to Rome by train, Lucy is spotted by a famous Italian cinema director and chosen to play a part in his new movie “Bitter Grapes.” Lucy sets out to immerse herself in the role. When she nonchalantly wanders into a vineyard inhabited by a motley assortment of Italian-speaking women, she is dispatched to the wine-making area to crush grapes with her feet. [Here’s a link to the episode–Season 5, Episode 23, Lucy’s Italian Movie. The grape stomping scene starts around 19:50 into the episode.]

Grape stomping is part of a method of maceration used in traditional winemaking, wherein the grapes and stems are mashed together, releasing not only the juice from the grapes, but also the phenols and tannins that provide color and acidity. Rather than using a wine press or other mechanized method, grapes were crushed by foot in open vats to release their juices and begin fermentation. The French word pigeage is also often seen in connection with grape stomping, but pigeage, which literally means “punching down the cap,” describes the pushing down of the grape skins that float to the surface of the fermentation vats, forming a “cap.”

Grape stomping probably goes back to the very beginnings of winemaking. Historical evidence shows that grapes were stomped at least as far back as Rome in 200 BC. One of the earliest existing visual representations of the practice appears on a Roman sarcophagus which depicts a group of demigods harvesting and stomping grapes at a rural Roman festival.

For centuries grapes were picked by hand, and grape stomping was the universal method used to extract juice from the grapes to make wine. In America, most grape stomping by human feet was legislated out of existence by the end of the twentieth century, the concern for public health outweighing tradition. Most other countries eventually banned grape stomping too, but there are still places where you can stomp grapes. If you are really serious about grape stomping, you can compete in the World Championship Grape Stomp at the Sonoma, CA Harvest Fair.

But there is a lot more involved in making wine than stomping grapes. A vintner starts by deciding which type or types of grapes she wants to grow. She has to consider soil, geology, topography, and climate/microclimate. Praying for good weather–the right blend of warm sun and invigorating rain–the vintner selects the optimal time for harvest, when the sugar level in the grapes is exactly where she wants it to be. The grape crop is then harvested, usually by machine, but sometimes by hand in carefully selected bunches. Then the grapes are rushed from the vineyard to the winery.

In the winery, the grapes are crushed and the crush is placed in fermentation vats, where the vintner adds yeasts, carefully selected to deliver the desired flavors. During fermentation, which can last from a few days to a few months, the winemaker carefully monitors acidity and alcohol levels, and when she determines that the wine is ready, it is transferred to barrels for storage and aging. Even the barrels are selected with care, since the different woods–French oak, American oak, old oak or new oak–will impart different flavor elements to the final product. Finally, after three months to as much as three years of barrel aging, the wine will be bottled and distributed, finding its way to store shelves and wine cellars around the world.

It’s obvious that making wine isn’t easy; making a good wine takes hard work and a bit of luck; and a great wine is the result of hard work, luck, and a high degree of both skill and artistry. There’s a huge gap between a grape stomper and a winemaker.

Unfortunately, we often fail to see the difference. In almost every field of endeavor we encounter people who actually know very little about their real work, but do know how to put on a good show–filled with all the right buzzwords and catch phrases, posing as experts. But unlike true experts, they haven’t put in the time and effort practicing their craft. Unlike true experts, they haven’t learned hard lessons from failed efforts, then come back to try again with improved techniques. Unlike true experts they haven’t mastered their craft in the crucible of experience. They are grape stompers posing as winemakers.

Winemakers love their craft; most of them aren’t in the game for the money. [You’ve heard the old joke: “How do you get a million dollars making wine? Start out with five million.”] They study; they experiment; they ask lots of questions. And if they make smart choices and have a bit of luck, they sometimes produce a truly great wine.

It doesn’t “just happen” for winemakers. It won’t just happen for you. Are you willing to work for it? Do you want to be a wine maker, or will you settle for being a grape stomper?


Four Rules for Building Powerful Teams

Last week, my business partner Moss Jackson and I finished up a leadership development program that we had created for a long-time client. The program brought together senior leaders from different business units and corporate functions within the company, who worked in small teams on a variety of challenges. All but one of the challenges were short-fused, requiring the team members to quickly formulate an approach to the problem, adopt appropriate roles, then execute smoothly to complete the assigned tasks. We added an immediate feedback loop, wherein the teams were scored by a set of judges on how well they completed each of the assigned challenges and presented their results. During the course of the program we mixed up the teams, so that each individual program participant worked with every other participant at least once. We also added an element of personal competition to the program, with each participant’s score being the total of the points earned on each of his or her team assignments.

During the final debrief, several participants commented on how well they had been able to work on the team challenges with individuals from other parts of the business with whom they otherwise rarely interacted. Members of the company’s Executive Committee, who served as judges for the program’s final business challenge, noted that they were both pleased and a bit surprised at the quality of the work product delivered, considering that the challenge had been assigned less than a day earlier. As the discussion continued, I was particularly encouraged that the question the group asked itself was not the negative formulation, “Why don’t we behave like this and perform like this all of the time?” but rather the learning-based inquiries: “What happened here? How did we behave? How can we act as role models for our own work teams so that we improve our level of team performance?”

We identified four significant factors at work that led to the extremely strong performance of these ad hoc teams, which we converted into rules for quickly building high-performing teams.

Rule 1: Trust others first. The fastest way to build trust is to show that you trust the other members of the team. While it may be difficult to offer up your own vulnerability, in most situations it works a whole lot better than asking, “Trust me.” Our program participants emphasized the critical importance of trust building to their ability to work together on team projects. What really hit me was how often they said that they now felt that they had strong connections with people throughout the organization; people they could go to for help at any time, and on any issue where they needed help.

Rule 2: Defer judgment. Getting quick results required the teams to generate a large number of possibilities in a short period of time. They seemed to understand that nothing slows down problem-solving in a team environment faster than having ideas shot down, so they withheld judgment until lots of ideas were on the table. Even after the teams had generated many ideas, their analytic approach allowed them to gravitate toward solutions with the greatest likelihood of success. Further, because they built on each other’s ideas rather than promoting a personal agenda, they all felt invested in and committed to the solution path they chose.

Rule 3: Build and share a common language. This one was the trickiest to implement, but turned out to be extremely powerful. The language was less a technical jargon than a language of purpose and vision that the company had created a year earlier during a senior leadership retreat to re-examine its mission and vision. The process used to develop the specific phrasing of the company mission, vision and values statements and roll them out to the entire organization was thoughtful and inclusive, so that language became part of the everyday work context of the organization. All of the program participants, regardless of business unit or functional affiliation, were connected by the common language of mission, vision and values, enabling them to quickly forge a team identity around each of the challenges presented to them.

Rule 4: Create a sense of urgency. Nothing creates task focus better than a hard deadline. When the clock is ticking down against a deadline, everyone’s attention is sharpened. The rush of adrenaline makes it easier to block out distractions. No one wants to stand around idly, so each team member searches for ways to use her skills and abilities to complement the skills and abilities brought to the task by other team members.

These rules may not be universal, but they certainly worked well in this lab-like environment, and are likely to prove successful when implemented more broadly throughout the organization.

What other rules would you suggest to people interested in quickly building high-performing teams?


Predicting Progress

I read something the other day that appealed to the techno-geek in me. It has to do with techniques for predicting technological progress. The go-to rule for predicting progress in the technology realm–at least what I always thought was The Rule–is attributed to Gordon Moore, co-founder of Intel Corporation, and is commonly known as Moore’s Law. Moore’s Law has been the benchmark measurement for technical progress in electronics for decades.

Moore’s Law was a prediction that the number of transistors the industry would be able to place on a computer microchip would double every year. While originally intended as a rule of thumb in 1965, it has become the guiding principle for the industry to deliver ever-more-powerful semiconductor chips at proportionately lower cost. In 1975, Moore updated his prediction to a doubling once every two years. The doubling of transistors on a chip translates to a doubling of computing power, and so–it was believed–Moore’s Law “explains” why people today can carry a computer in their pocket, the ever-present smartphone, that is far more powerful than the computers used to control the Apollo moon missions. The rub is that, since Moore’s Law applies only to electronics, it can’t be used to forecast technological progress in other areas.

Researchers from the Santa Fe Institute now argue that a theory, proposed by Theodore Wright in 1936, called Wright’s law, is actually a better reflection of technological progress than is Moore’s law. In their working paper, “Statistical Basis for Predicting Technological Progress”, the Santa Fe Institute researchers detail how they looked at technological progress rates from 62 different technologies, including chemical compound manufacture, mechanical engineering, etc., and found key similarities. In effect, they found that economies of scale trump time in the race to drive down costs.

“Moore’s law says that costs come down no matter what at an exponential rate. Wright’s law says that costs come down as a function of cumulative production. It could be production is going up because cost is going down,” Santa Fe Institute lecturer Doyne Farmer told The Futurist magazine in a recent interview.
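The contrast Farmer describes can be written down in a few lines. Here is a minimal sketch of the two functional forms; the forms themselves are the standard ones, but the starting cost, 20% learning rate, and other parameter values are invented purely for illustration:

```python
import math

def moore_cost(initial_cost, years, doubling_period=2.0):
    """Moore's law: cost per unit of capability halves on a fixed
    calendar schedule, regardless of how much gets produced."""
    return initial_cost * 0.5 ** (years / doubling_period)

def wright_cost(initial_cost, cumulative_units, learning_rate=0.20):
    """Wright's law: each doubling of cumulative production cuts unit
    cost by a fixed fraction (the 'learning rate')."""
    b = -math.log2(1 - learning_rate)  # progress-curve exponent
    return initial_cost * cumulative_units ** -b

# Under Moore's law only the calendar matters: four years is two
# doubling periods, so a $100 cost falls to $25.
print(moore_cost(100.0, years=4))

# Under Wright's law only cumulative output matters: quadrupling
# production (two doublings) at a 20% learning rate cuts the $100
# cost to $64.
print(wright_cost(100.0, cumulative_units=4))
```

The practical difference is exactly Farmer’s point: in the first function, time alone drives cost down; in the second, cost only falls if somebody actually produces more units.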

More importantly, Wright’s law can be applied to a much wider variety of engineering areas, not just transistors. That will give technological forecasters a new way to measure and predict progress and cost for everything from airplane manufacturing (its original use in 1936) to the costs of building better photovoltaic panels, used to provide solar energy. This is what got me excited about Wright’s Law.

“It means that if investors or the government are willing to stimulate production, then we can bring the cost down faster. In the case of global warming, for instance, I think that a massive stimulus program has the potential to really bring the arrival date for having solar energy beat coal a lot sooner,” said Farmer.

This argument mirrors my own view, which was previously unsupported by scientific evidence, that if we are serious about America developing alternative energy resources, like solar power, we need some type of significant stimulus to production. The stimulus could take a number of forms. For example, a tax on fossil fuels would make alternatives, like solar, more cost competitive; and if fossil fuel and solar options were offered at the same price, I have to believe that the demand for solar would increase exponentially. If Wright’s Law holds true, the increased production, driven by demand, would have the happy result of lowering the cost of solar, making it even more attractive when compared to fossil fuels.

Farmer and his colleagues are expanding their working paper into a more expansive study that further details the relationship between costs and the rate of progress. In the interview with The Futurist, he indicated that he and his colleagues are trying to make solid, probabilistic forecasts for where costs for solar will be with and without stimulus, as well as a probabilistic distribution of time frames for the cost reductions that will occur with a business-as-usual approach, compared to various stimulus scenarios.

I wish them luck.


Too Many Choices?

While my wife and I were visiting my father-in-law in Pittsburgh, we did some grocery shopping at the local Giant Eagle supermarket. I enjoy walking around the store, which is huge, checking out the almost unbelievable variety of  products available in virtually every category of grocery. There is an entire aisle–50 to 60 feet of four-high shelves on both sides–devoted to nothing but cookies. Another aisle holds snack crackers and chips. You name it, they have an aisle for it; and every aisle is stocked with a dazzling array of choices for each and every product.

In the bread aisle, the rep from Thomas’ English Muffins was busy restocking the shelves with the 15  different varieties currently baked by Thomas’. Although I generally favor the Original, with all of the “Nooks and Crannies” which, when properly toasted to an almost-burnt crisp,  hold delicious gobs of melted butter, this day I selected the Limited Edition Cranberry. Thomas’ also had a Limited Edition Pumpkin-Spice English Muffin available that day, but somehow, the thought of a pumpkin-flavored English Muffin didn’t work for me, though the Thomas’ rep told me that they are very popular and he has a hard time keeping the shelves stocked with Pumpkin-Spice.

The experience got me to thinking about choices, and the wide array of choices we are presented with as consumers. I couldn’t help but be reminded of the scene in the 2006 movie “Borat: Cultural Learnings of America For Make Benefit Glorious Nation of Kazakhstan,” where Sacha Baron Cohen‘s Borat character is in an American grocery store examining the–to Borat–unbelievable number of different types of cheeses and packages of cheese for sale. [Here’s a link to the entire 4-minute cheese scene, which was reduced to about one-minute in the theatrical release of the movie.]

Giant Eagle is simply following the  conventional wisdom among retailers, supported by scientific research,  that consumers prefer large selections and are lured by more options and greater variety.  But not every store believes that variety of product offerings is the surest path to success. Unlike most retailers, Trader Joe’s drastically restricts customers’ choices. They don’t carry household-name brands,  and within any particular product category—pasta, for example—there are only a few options, [mostly house brands like Trader Giuseppe Pasta], compared to the myriad choices offered at the Safeway or Albertson’s down the street. As an aside, if you are interested in the behavioral economics explanation of TJ’s success–over 300 stores and growing, with the highest sales per square foot of any grocery chain in the US–check out this story, “Trader Joe’s, Where Less is More,” by Kay-Yut Chen and Marina Krakovsky, authors of the book, Secrets of the Moneylab: How Behavioral Economics Can Improve Your Business. Clearly though, when it comes to providing consumers with a plethora of choices, Trader Joe’s is an exception to the rule.

Moving beyond the world of grocery shopping, into the larger domain of choices in general, we have to ask: why is it that sometimes, presented with a wide array of choices, rather than being exhilarated by the degree of choice-freedom provided, we become paralyzed by the task of assessing options, and choose not to choose at all? It seems odd that choice aversion might ever happen in America, where, from the nation’s inception, autonomy, individuality, and self-determination have been foundational values; where, from the time of Washington and Jefferson to today with Bush and Obama, politicians and the voting public generally assume and act under the presumption that choice always provides social benefits. After all, don’t we strongly resist attempts to use public policy to restrict our personal choices? [Curiously, this doesn’t seem to stop interest groups of all persuasions from attempting to push their personal/ideological views as public policy that would constrain the behavioral choices of others with differing views, for example: Pro-Choice/Pro-Life, Gay Marriage/Sanctity of Marriage, ObamaCare/Free-Market Healthcare.]

There is, indeed, some evidence that a presumption that choice is always good may be incorrect, at least in the realm of public policy. “The Dark Side of Choice: When Change Impairs Social Welfare,” an article by Simona Botti and Sheena S. Iyengar published in 2006 in the Journal of Public Policy & Marketing, points out the sometimes detrimental effects of choice in the public policy realm. The authors ascribe three elements to the “dark side” of choice: information overload, a higher likelihood of dissatisfaction with the choice made, and a propensity to overpay for options that don’t have a commensurate increase in personal happiness.

Another recent study, conducted at Washington University in St. Louis, “Choosing Here and Now Versus There and Later,” corroborates the notion that when choosing among products, we [in our role as consumers] prefer having more options over having fewer options. But that’s not the case when we are making choices about the more distant future, such as when we are considering insurance, retirement plan options, or vacation plans six months out. The authors of the study, Joseph K. Goodman, PhD, and Selin A. Malkoc, PhD, both assistant professors of marketing at Olin Business School, suggest that psychological distance is what accounts for the difference in behavior. Psychological distance, both temporal and geographical, increases the similarity of the options in a category, making them appear more substitutable. For decisions related to the future, the authors conclude, “… consumers tend to focus on the end goal and less about how to get there and this leads to predictable changes in consumer behavior.”

I found my own favorite take on the “too many choices” problem in the book, The Paradox of Choice – Why More Is Less, published in 2004 by American psychologist Barry Schwartz. Schwartz ranges far and wide in his study of choice behavior, but I was drawn to his views on  the ideas of psychologist Herbert A. Simon from the 1950s, and how Schwartz relates Simon’s theories to  the psychological stress that  consumers face today when confronting seemingly unlimited choices. He notes some important distinctions between what Simon termed maximizers and satisficers.

A maximizer is like a perfectionist when it comes to making a choice, someone who needs to be assured that every purchase choice was the best that could be made. The only way a maximizer can know for sure that his choice is the perfect choice is to consider all the alternatives he can imagine. This can’t avoid being a psychologically difficult task, one that becomes even more daunting as the number of choices increases. The alternative to maximizing is to be a satisficer. A satisficer has criteria and standards, but, rather than searching for the perfect choice, satisficers are perfectly happy with an option that is “good enough.” A satisficer doesn’t worry about the possibility that there might be something better available, somewhere. Ultimately, Schwartz agrees with Simon’s conclusion, that satisficing–looking for good enough rather than demanding perfect–is, in fact, the maximizing strategy. Interestingly, it seems that a satisficer–willing to be happy with good enough–is much less likely to second-guess her own choices than the perfectionistic maximizer.
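Simon’s two strategies map onto two simple search procedures. Here is an illustrative sketch (the options, their “scores,” and the good-enough threshold are all invented): the maximizer must evaluate every option before choosing, while the satisficer stops at the first option that clears the bar.

```python
def maximize(options, score):
    """Maximizer: evaluate every option, keep the single best.
    The work grows with the number of options on the shelf."""
    best, evaluated = None, 0
    for opt in options:
        evaluated += 1
        if best is None or score(opt) > score(best):
            best = opt
    return best, evaluated

def satisfice(options, score, good_enough):
    """Satisficer: take the first option that clears the bar,
    and stop looking."""
    for evaluated, opt in enumerate(options, start=1):
        if score(opt) >= good_enough:
            return opt, evaluated
    return None, len(options)

# Hypothetical choices with made-up satisfaction scores.
mustards = [("brand A", 6), ("brand B", 8), ("brand C", 7), ("brand D", 9)]
score = lambda m: m[1]

print(maximize(mustards, score))                   # best option, 4 looks
print(satisfice(mustards, score, good_enough=8))   # good-enough option, 2 looks
```

The satisficer here settles for brand B after two evaluations; the maximizer finds the top-scoring brand D, but only after examining everything, and the gap widens as the shelf gets longer.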

In this regard, I think about my own recent purchase of a new mobile phone. With so many brands to choose from, and so many model choices within each brand, each with a dizzying array of features, and the choice-decision further complicated by carrier incentives and complex rate plans, I opted for simplicity. No change of carrier or mobile operating system for me; simply an upgrade to the best available Android phone [Samsung’s Galaxy SIII, in case you were wondering]. No doubts. No second thoughts. I’m convinced I made a good enough choice.

So, what do you do when you want some mustard, and the mustard shelf in your grocery store has 55 varieties of mustard? Probably best to act like a satisficer–set guidelines for price, style of mustard, type of container, and acceptable brands to filter through the possibilities–and make a fairly quick and painless choice. Not so easy though, if you are looking for a movie to watch, a book to read, or a song to listen to. Millions of possibilities. Hard to even imagine what filters to apply. Personalized recommendations are particularly valuable in these situations. That’s why we love Netflix and Amazon and Pandora; they do the filtering for us and present us with a set of movies, books and songs that should satisfy our tastes.

But even if the recommendation suits our tastes, there’s no guarantee that the choice will make us happy; that we’ll be fully satisfied by the choice. Perhaps, as Kevin Kelly suggested in his blog post “The Satisfaction Paradox,” ultimate choice–in the form of virtually unlimited options–may be ultimately unsatisfying.

Fast Food is Really Fast!

Fast food. It’s fast. You order it in a hurry; it’s ready in a hurry; you usually eat it in a hurry. Other than the risk of indigestion–and, if you believe some of the medical studies, obesity–you don’t expect fast food to have any effect on you at all. But that may not be the case. In fact, eating fast food may have some interesting, and certainly unintended, consequences for those speed diners who frequent McDonald’s, Burger King, Wendy’s, KFC, Taco Bell, et al.

Now, while burgers and fries are prototypical fast foods, the real essence of fast food is not what you eat but how you eat. Everything about fast food is designed with the goal of saving time. Fast food allows us to fill our stomach as quickly as possible and move on to other things–and the other things are always things that we deem to be important, even urgent. It shouldn’t surprise us that the concept of fast food is considered by many to be representative of  a culture that emphasizes efficiency and immediate gratification, and places a high value on our time.

To shed some light on fast food and its effects on our behavior, we turn to the work  of Sanford DeVoe, an Assistant Professor of Organizational Behavior and Human Resource Management at the Rotman School of Management, University of Toronto. DeVoe’s current research is focused on the psychological dimensions of incentives within organizations, including looking at the tradeoffs between time and money, and how each is valued.

In a  2006 paper co-authored with Jeffrey Pfeffer at the Graduate School of Business, Stanford University, titled “When time is money: The effect of hourly payment on the evaluation of time,” the authors discuss the effects of hourly pay on the way people value time and money.

It is a common belief that people value time differently than they value money. Lots of reasons for this are posited, but the most likely explanation is that people simply have more difficulty accounting for time than for money. This difficulty is less apparent for individuals working for hourly pay. DeVoe and Pfeffer cite earlier research by Evans, Kunda, and Barley, an ethnographic study of technical contractors–engineers, software developers, technical writers, and information technology specialists–who sold their services to firms in exchange for an hourly wage. Being paid by the hour, and the corresponding requirement to bill client firms for the number of hours spent working (i.e., billable hours), led these technical contractors to develop an appreciation for the microeconomics of time. Billing for their hours provided these contractors with extensive practice in accounting for their time and its value. By being paid by the hour, unlike salaried employees, contractors could put a precise value on every hour of the day—their hourly billing rate.

DeVoe and Pfeffer’s experiments went further, examining the effects of hourly pay practices for non-contract employees on the time/money tradeoffs those employees make. Unconsciously, this can have a pernicious effect on other aspects of life: like working hours, leisure time gets a value put on it. It isn’t hard to imagine a lawyer asking himself, “Is it worth $450 times three hours for me to see my son’s soccer game?” Even more troubling, hourly pay can begin to reverse the delayed gratification response; as individuals make more per hour, they want to bill more hours, and unconsciously become more impatient. Perhaps that’s why more and more organizations are moving away from hourly wage pay plans–to remove the secondary negative effects.

But it is the research paper You Are How You Eat: Fast Food and Impatience, published in 2010, that provides real cause for concern about how we are dealing with time. In it, DeVoe and fellow researcher Chen-Bo Zhong describe a series of experiments they conducted to test the theory that even incidental exposure to fast food can lead to impatient behaviors and choices outside of the eating domain.

Simplifying the theoretical structure, their argument worked as follows:

  • Social behaviors can be primed or set up by environmental cues. Much recent research in this area, known as behavioral priming, supports this concept. For example, people who cast their votes in school buildings are more likely to support school funding initiatives than people who vote in other polling places.
  • Because fast food embodies time-saving as a goal, behavioral priming research suggests that exposure to fast food related concepts may automatically increase speed and time preference.
  • The results of the fast food to speed/time preference link are not context sensitive, so the speed/time preference may not always be positive. As an example, walking faster is time efficient when you are late for an appointment; it is a sign of impatience when you are taking a stroll along the beach.
  • So, even though fast food has contributed to a culture of time efficiency, exposure to fast food might also promote impatience.

The first experiment examined whether a subliminal exposure to fast food logos can increase reading speed. They found that even an unconscious exposure to fast-food symbols can automatically increase participants’ reading speed when they are under no time pressure. In the experiment, participants exposed to subliminal flashes of fast food logos performed a reading task 20% faster than a control group.

In the second experiment, the researchers manipulated exposure to fast food related concepts and examined time-saving preference and impatience in consumer choices after the exposure. They found that thinking about fast food increases preferences for time-saving products, or time-saving features of products, despite the existence of potentially many other product dimensions to consider.

Finally, the third experiment examined whether priming behavior via exposure to fast food logos induces impatience in financial decisions–an activity about as far from eating a Big Mac as you can get–as reflected by people’s unwillingness to postpone immediate gains in order to receive greater future returns. This time the researchers found that the participants primed by exposure to  fast food logos were much more likely to accept a smaller payment now rather than waiting for a bigger payment in a week, compared to those in the control condition. Fast food priming seems to have made people impatient in a manner that could put their economic interest at risk.

DeVoe and Zhong’s research clearly indicates that the way people eat has far-reaching (often unconscious) influences on behaviors and choices unrelated to eating. Other research experiments have shown that exposure to fast food logos made it harder for participants to enjoy music and photographs–they felt that the experiences lasted too long and were boring.

I’d like to say that we have easy ways to defend ourselves against these unconscious environmental influences. But it appears that the effects of hourly pay rates, fast food symbols, and who knows what other factors, are all driven below the level of conscious thought. We probably have to learn to expect continuing exposure to various stimuli that speed us up and make it harder and harder for us to simply “smell the roses.”


Why Are We in Such a Hurry to Make Up Our Minds?

Perhaps you’ve wondered: Why did banks and traders make such bad decisions leading up to and during the 2007-2008 financial crisis? Frank Partnoy [a former derivatives trader and current professor of law and finance at the University of San Diego] was apparently wondering the same thing. In Professor Partnoy’s case, it led to the writing of his most recent book, Wait: The Art and Science of Delay.

Answering the question about banks’ and traders’ bad decision-making leading up to and during the financial crisis turns out to be rather complicated. It wasn’t because the bankers were ignorant about decision-making, or about the decision-making flaws embedded deeply in the human psyche. In Wait, Partnoy describes how the top executives at the now-defunct investment bank Lehman Bros. commissioned a special training course in decision making, inviting some of the top writers and researchers on the subject, including Malcolm Gladwell, the author of Blink: The Power of Thinking Without Thinking, to come to their offices in New York and teach them how to make more good decisions while avoiding the really bad ones.

Lehman’s top two dozen execs listened to what the highly-paid speakers had to say, then marched off–taking their copies of Blink to the trading floor–and proceeded to make a series of disastrous snap decisions that led to the firm’s downfall. Apparently, the Lehman execs only read the first two-thirds of Gladwell’s book. Blink has been interpreted as saying that making snap decisions is a good thing. The first two-thirds of Blink makes the case for that idea, but the last one-third, which people typically don’t read, is about the problematic aspects of snap decision making. Oops!

Looking deeper, Partnoy found that little research had been done about the timing of decisions: when is the best time to actually make any decision? It seems that for every situation and every person there is an optimal amount of delay. I kid you not. The very idea of delay in the context of decision-making is, for me, rather unsettling. But the science behind the concept is compelling.

So, what is it about decision making in different time frames that is good and bad? Partnoy starts this journey of discovery in the world of high frequency trading (“HFT”). HFT is the use of sophisticated technological tools to trade securities like stocks or options. It has a number of interesting characteristics, but for our purposes, the most significant is that HFT is very sensitive to the processing speed of markets and of the HFT traders’ own access to the market. I’m personally not a big fan of HFT, for a number of reasons. Here’s a link to a recent NY Times article that might make you think twice–or three times–about HFT and its impact on financial markets. Regardless of whether you love, hate, or are indifferent about HFT, you’d think that in such an environment, faster would always be better. That proves to be a bad assumption; in HFT faster isn’t necessarily better. Getting faster can actually make you worse.

A high frequency trading firm called UNX in Burbank, CA, had been at the bottom of the league tables, which rank investment firms’ relative performance. UNX brought in a new CEO, Scott Harrison, who instituted a new HFT algorithm, flipped the switch, and moved to the top decile of the league tables. Their execution model allowed for a trade execution time of 65 milliseconds (“m-sec”). Harrison then thought: if 65 m-sec is good, faster would be even better. The only way to get much faster, given that electronic trading instructions travel at the speed of light, was to move the trading computers closer to the stock exchanges in New York City [Southern California to NYC at the speed of light is about 30 m-sec]. UNX packed up and moved its computer trading operations to New York, to co-locate next to the New York Stock Exchange, flipped the switch, and, as expected, lowered execution time to 35 m-sec. What they didn’t expect was that with a 35 m-sec execution time, their rank on the league tables would drop to the bottom decile. Harrison then rigged the trading computers to slow them down to 65 m-sec; performance leaped back to the top of the league tables.
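That 30 m-sec figure is easy to sanity-check with a back-of-the-envelope calculation. The distance below is an approximate great-circle figure, and real fiber routes are longer and more circuitous, so treat these numbers as ballpark only:

```python
C_VACUUM_KM_S = 299_792   # speed of light in vacuum, km/s
FIBER_FACTOR = 2 / 3      # light in optical fiber travels at roughly 2/3 c
LA_TO_NYC_KM = 3_940      # approximate great-circle distance, LA to NYC

one_way_vacuum_ms = LA_TO_NYC_KM / C_VACUUM_KM_S * 1000
one_way_fiber_ms = one_way_vacuum_ms / FIBER_FACTOR

print(f"one way, vacuum:   {one_way_vacuum_ms:.0f} ms")   # ~13 ms
print(f"one way, fiber:    {one_way_fiber_ms:.0f} ms")    # ~20 ms
print(f"round trip, fiber: {2 * one_way_fiber_ms:.0f} ms")  # ~39 ms
```

That puts the quoted ~30 m-sec somewhere between a one-way trip and a full round trip over fiber–exactly the kind of physics-imposed floor that no amount of faster hardware in Burbank could remove, which is why co-location next to the exchange was the only way to cut it.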

Partnoy suggests that there is an optimal delay time for everything. In another example, consider that humans don’t perceive speech delays of less than 150 m-sec. Telephone companies, at considerable expense, have the technological capability to reduce the speech transmission delay to as low as 1 m-sec. But it wouldn’t be worth it, since the optimal delay–the slowest speed at which speech delay is imperceptible–is just under 150 m-sec; getting faster isn’t necessarily better.

Now, it’s true that the optimal delay for high frequency trading may not stay at 65 m-sec. Trading systems and trading algorithms are always subject to improvement. Human aural systems may also evolve and improve, reducing the optimal delay for voice transmission systems.  Optimal delay is generally going to be a moving target; it may also depend on your strategy.  For example, how should you manage emails? Should you respond immediately, wait an hour, wait 3 hours, set aside a special time every day to deal with email? Your answer will depend on what you are trying to accomplish, and how you try to balance competing needs.

And no one is suggesting that you delay simply for the sake of delay. What you want to do is maximize the time you get to make a decision. For further insight into decision timing, Partnoy turns to the world of super-fast sports: professional tennis players returning 120-mph serves, or Major League Baseball hitters swinging at 98-mph fastballs. In this super-fast world, the entire act is completed in less than 500 m-sec. (As a point of reference, a blink of the human eye averages between 300 and 400 m-sec.) The easy answer is that the best players in super-fast sports simply have faster reflexes. But the best serve returners and batters don’t necessarily have quicker reaction times. So what is it that makes them the best?

In a 500 m-sec world, if you have the ability to execute the baseball swing or tennis service return faster, say in 100 m-sec instead of 200 m-sec, you wind up with much more time to gather information before you execute. You’ll be able to hit that fastball out of the park, or make a great service return. Very small delays can be hugely important. In the world of professional tennis, a 50 m-sec improvement in service-return execution speed can mean the difference between a world-class professional and a very good amateur. Of course, all of this super-fast sports activity is so fast that it is clearly subconscious. What about the slower world that most of us make our way in? How do we determine optimal decision delay in that world?
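The arithmetic behind that advantage is simple: in a fixed window, every millisecond shaved off execution becomes time to gather information before committing. A toy calculation using the numbers above (the 500 m-sec window and the 100/200 m-sec execution times come from the passage; the clean split between “observe” and “execute” is of course an idealization):

```python
TOTAL_WINDOW_MS = 500  # roughly, pitcher's release to bat-on-ball contact

def observation_time(execution_ms, total_ms=TOTAL_WINDOW_MS):
    """Time left to watch the ball before you must commit to the swing."""
    return total_ms - execution_ms

amateur = observation_time(200)  # 300 ms to read the pitch
pro = observation_time(100)      # 400 ms to read the pitch

# The faster executor doesn't react sooner -- he commits later,
# with an extra 100 ms of information in hand.
print(pro - amateur)
```

The point is the inversion: faster execution is valuable not because it speeds up the whole act, but because it lets you delay the decision longer.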

Well, no one studies decision making more than the military, especially tactical decision-making. John Boyd, a brilliant military strategist, developed the jet fighter tactics now taught at the Top Gun school for fighter pilots. Boyd’s tactical decision-making approach was Observe, Orient, Decide, Act, often referred to by its acronym, the OODA Loop. Boyd said that the F-15 fighter jet should be like a switchblade in a knife fight. It is very fast, but the true value of its speed is that it gives the pilot more time to observe, orient and decide, before acting. The F-15 pilot can wait until the last possible moment before committing to action, knowing that his aircraft is so much faster and more agile than his opponent’s aircraft. Execution speed is every bit as valuable in the 60-second world of jet fighter dogfights as it is in the 500 m-sec world of tennis and baseball.

But what about the “normal” world of personal and business decisions? What is the optimal amount of delay in the world where most of us live and make our decisions? If you don’t make a decision as quickly as possible, you’re likely to be labeled a procrastinator. And procrastination has become a very bad word. Yet sometimes procrastinating–managing delay in decision-making–is precisely the right thing to do.

It might be time to suck it up and admit that procrastination is not necessarily a bad thing. Procrastination was once actually associated with wisdom. In its original usage, procrastination simply meant putting off until tomorrow what belonged to tomorrow, and was considered a virtuous and wise path. That is no longer the case. Going back to the days of the Puritans, aspects of American culture have made us feel guilty about procrastinating. The dictionary definition of procrastination today clearly implies that it is a bad thing: delaying or putting off something that urgently needs to be done.

Business culture has become almost obsessively action-oriented, rather than focused on making good decisions. I remember working for an executive in the 1980s who was proud of his Ready-Fire-Aim approach. He was a big fan of the Peters and Waterman book, In Search of Excellence, wherein the authors identified eight attributes that contributed to organizational excellence, with the top attribute, a Bias for Action, considered more important than the other seven combined.

In the end, the discussion about optimal delay times for decision making doesn’t provide any simple answers. I think it becomes, more than anything else, a question of priorities. There are always too many things to do; so what will we do now, what will we do later, and what will we never do? Then we have to strike a reasonable  balance between “do it fast,” and “wait as long as possible to decide.”