Number 1 Way to Retain Knowledge Workers

Manager and knowledge workers

 

Do you have a manager of knowledge workers who cannot keep staff? Did this manager come from a technical field, or do they have a business or finance background or an MBA?  There is one big thing that knowledge workers, especially Information Technology staff, need not only in their immediate superiors but also across several layers of the management chain.  Smart, talented critical thinkers who perform have a great number of choices, and part of your job is to retain them.  Recruiting, hiring and training are expensive and time-consuming.  What has research determined is the most critical factor?  Hint: it is not compensation.

Today I read an article from Harvard Business Review entitled "If Your Boss Could Do Your Job, You're More Likely to Be Happy at Work." To me, this seems obvious, but then again, many things are obvious once you know the answer.  When looking at the job market after exiting my last startup, I found many were confused by my experience and could not categorize me.  With so many years managing a business unit at a public company, owning a P&L and SOX compliance, then successfully launching a startup, I must be an experienced executive.  Any technical skills I list must have been from projects I managed or oversaw, not hands-on experience, and the position du jour needs someone with solid hands-on experience.

The other part of my résumé reads like an Enterprise or Infrastructure Architect, cloud expert, very experienced with distributed systems, high availability and disaster recovery design patterns and strategy.  There is no way this person was performing a "real" management or executive function.

I feel old when I say this, yet I've been recruiting, hiring and managing knowledge workers since 1990.  I had not completed university and was not studying leadership; I learned by watching successful managers and absorbed their lessons, whether through success or failure, then adapted my approach.  The very first thing that I learned as a technical manager at an SI/VAR was that to be successful with recruiting, hiring, and management, I needed to know everything my staff knew plus more.  At the time, that was easy, yet technological innovation happens faster than normal time. This seems contrary to General Relativity, which says that the faster one goes, the slower time flows. Then again, the speed of time is relative.  Moore's law is still going strong.

When I took on turning around a load testing and consulting business unit at Keynote, 2003 had just happened. As I began to evaluate the state of the business to formulate a new strategy, I began retraining the existing consultants. The first thing I told them was that to be a consultant to software engineers at Enterprises, one had to think like an engineer and use engineering principles, but not forget to include the business side of our customer.  As situations arose in consulting, as they always do, my team would come to me for answers.  I would help them through their logjam, not by giving them an answer, but by asking thought-provoking questions or, if there were a time crunch, by doing the work myself.

There are many ways to measure the success of managing knowledge workers, and most are subjective and ambiguous.  I grew the business by 30% CAGR, had almost no employee turnover in eight years, and had the highest revenue output per employee of any unit in the company.  Those all sound like success, but they are neither a ranking nor do they say what my staff thought about me but may not have verbalized.

I like to believe that employees, especially knowledge workers, know that they have a safety net in their manager; instead of berating an employee for a mistake or logjam, a manager should challenge them and help them learn.  I am a life-long learner and I become grumpy (no, not that grumpy) when I do not learn new things every day.  Besides the myriad information and knowledge sources I get each day, I frequently learn from my staff and I coach and mentor them.

Thank you for this research, Benjamin Artz, Amanda Goodall, and Andrew J. Oswald.

 

 

Chris Dixon, DNA and new technology: How do we relate to new things?

I was reminded by Chris Dixon this morning how much I miss Douglas Noel Adams (DNA). Chris's article quoted a snippet of Adams's piece "I've come up with a set of rules that describe our reactions to technologies" and, thankfully, included the link to the original article on DNA's website.

Douglas Adams inspired "The Hitchhiker's Guide to the Galaxy" (H2G2) www.hughes-photography.eu (Photo credit: Wikipedia)

The fun part of it all is that it is as true today as it was in 1999 when DNA wrote it. No matter what generation you happen to be--Boomers, X, Y, Millennials, the forthcoming post-Millennials, etc.--you have the things you're accustomed to, which are your "normal", and things you did not experience at the right time of your life. Those things are abnormal to you.

I think of myself as a "learning machine". I'm addicted to learning new things, refreshing knowledge by triggering on something that was mentally filed away, my linguistic hobby, etc. A day in which I didn't learn something was a day wasted.

Duke vs Rhode Island basketball (Photo credit: Steve Rhode)

Now March Madness is upon us. I've been watching part of the ACC Basketball Championship, I will watch Selection Sunday, and I'll be quite distracted during the NCAA Basketball tourney. Even during those times, I fill the commercial breaks soaking up new, hopefully useful, information.

What does this tell me about being a CEO, an entrepreneur?  I'm not sure.  We focus our message on business users, who are usually over 35 yet not set in their ways.

Google Glass (Photo credit: lawrencegs)

What does this make me? I'm not sure about that either. I turn 44 this month and I've adapted to everything I've come across in life so far. So far I grok the innovations created after 2005, and I hope to keep doing that in the future.


Free chocolate = service outage

M & M Candy

The M&M Mars company offered a coupon for free chocolate, but didn't bother to think through the effects of this promotion on their web site.  Any marketer should know that you create negative brand awareness if you offer something positive, like free chocolate, and then make the experience painful.  That is what Mars did this morning.  The RealChocolate.com site asks you to register to win free chocolate each week for the summer, and if you tried before noon or so PDT today, you probably only got frustrated.  While this promotion may not have as much immediate draw as Oprah, it certainly garners a lot of attention from deal sites such as Consumerist and DealNews.

This sort of negative publicity is easily avoidable.  Proper testing methodology includes flash crowd testing from the open internet, performing an end-to-end transaction.  Their IIS servers can be made to scale, if configured and built out correctly, but it needs to be proven before your customers tell you that it didn't work.

Just as important is timing.  Good testing methodology means good communication between the marketing-communications teams and the web operations teams.  Testing like this needs to be done far enough in advance that you have time to fix or correct an issue before you go live--in other words, testing the day before isn't good enough and is more or less a waste of time and resources.  Start at least a week in advance, find out what happens under potential load scenarios, practice remediation strategies, etc.

So M&M Mars, next time call me first.

Liza Minnelli crashes web site


Liza Minnelli, the famous daughter of the famous Judy Garland, drew more traffic than the Sydney Opera House web site could handle, and it crashed.  The article doesn't say how much traffic they received, only mentioning that the technicians took hours to get the site operational again.  That tells me that the crash wasn't caused by a high traffic spike alone, because otherwise the site would have recovered after the traffic left.  Moreover, they appear not to have had a monitoring service, so they may not even have known that the site was experiencing problems until customers started calling to complain.

It is ironic that firms set up websites to lower costs by reducing the number of operators needed to take calls, yet this crash flooded their call operators and caused negative publicity.

Proper load testing takes time and money.  The Return On Investment is usually rather easy to see when you compare it with the damage caused by a web site crashing during an important event like this.  This was probably one of the most popular events at the Opera House in a while, and I doubt that the Opera House management performed end-to-end load testing as they should have.  I see this so often, and it doesn't have to happen.

Metrics-concurrent users versus rates

I frequently see confusion regarding concurrent VUs (virtual users) versus VUs per hour, or what should be called sessions per hour or transactions per hour. When modeling web traffic on the open Internet, rate-based metrics are better suited to finding out what will really happen. If a site slows down, do your users know that *before* they arrive at your site, or after they get there? The concurrent user model assumes that a new user doesn't arrive until the previous user leaves. If the last user in the queue isn't leaving--that is, the user is stuck on the system trying to perform a task--then no new user arrives. This simply doesn't happen. A user doesn't know, and frankly doesn't care, how many other users are on your site until the user gets to the site and discovers that it is about to die--or that the user would rather die than use this site.

Transaction rate and the number of virtual users concurrently on the system affect the application server differently.  Transaction rate primarily consumes CPU to process the delivered pages, while the number of concurrent users primarily consumes memory.  Both are important, but they are independent variables.  If the site performs well and the scripts are modeled correctly, then the transaction rate and total number of concurrent users will match your web analytics.  If the site degrades, CPU is still maxed out, but memory may not immediately be.  However, as the number of concurrent users increases, memory utilization will also increase, as will database connections, etc.

This means that a rate-based metric, combined with the right scripts and use cases, will generate the most realistic load and let you see how your application actually behaves under high loads.
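The relationship between the two metrics is Little's Law: average concurrency equals the arrival rate multiplied by the average time a user spends on the site. A minimal sketch in Python (the session rates and durations below are illustrative, not from any particular test) shows why concurrency balloons when a site degrades even though the arrival rate never changes:

```python
# Little's Law: L = lambda * W
#   L      = average concurrent users
#   lambda = arrival rate (sessions per second)
#   W      = average session duration (seconds)

def concurrent_users(sessions_per_hour: float, avg_session_seconds: float) -> float:
    """Average concurrent users implied by a rate-based load model."""
    arrivals_per_second = sessions_per_hour / 3600.0
    return arrivals_per_second * avg_session_seconds

# Healthy site: 36,000 sessions/hour with 5-minute sessions.
print(concurrent_users(36_000, 300))  # 3000.0 concurrent users

# Same arrival rate, but pages slow down and sessions stretch to 15 minutes:
# concurrency triples even though the rate never changed.
print(concurrent_users(36_000, 900))  # 9000.0
```

This is why driving load by rate is the realistic model: the arrival rate is set by the outside world, while concurrency is an output of how well (or badly) the site is performing.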

Does Geographic Distribution of load really matter?

The question of geo-distributed testing is really three questions: first, where should you generate the load; second, how many locations are required to generate it; and third, can you use sample agents in some locations instead of having load generators everywhere?

The reasoning behind geographic distribution of load is both simple and complex. On one hand, you are testing outside of the firewall for two primary reasons: 1) that is where your users are located and 2) to do end-to-end testing.

If you are only testing externally to have an end-to-end test, then you could just as easily do the load test in a loop-back scenario, i.e. generate the load on a circuit that sends the traffic out on one interface and brings it back in through the primary ingress point(s). If you have enough bandwidth and load generation, this is pretty simple, and you can even use NISTnet to try to emulate latency. However, end-to-end coverage is really only half of the reason for performing external tests. A loop-back test doesn't really tell you about latency, even if you try to emulate it.  Moreover, you have assumed that your users are sitting in your data center or lab, which is pretty unlikely.

If you wish to discover the customer's experience of the site under load, you need real geo-distribution. For systems under test (SUTs) where there is only one bandwidth provider and the volume of the test is relatively small, two locations will probably suffice. This is especially true when the customer base is concentrated in a small number of locations, for example a local retail chain that is only present in a few states. If customers come to you nationally or internationally, then you need more. Given the demographics of North America, I recommend either of the following options: 2-3 load generation sites distributed across the time zones and on different ISPs, with 5-6 sample locations spread out among the rest of the high-traffic areas; or my standard practice of 9 load generation locations domestically--3 east, 3 central and 3 west. If you are international, then you'd need to think about whether your traffic is European or APAC. This also lets me avoid crashing individual Content Distribution Network POPs, although it still happens. For some reason, they get annoyed at me for this.

So think about the reasons you're even testing outside of the firewall. If it is only to do a simple end-to-end test, then don't bother paying a provider or anyone else and just loop the traffic. If you want a good representation of your traffic, plan properly and distribute the load as well as you can.  Professional load test service providers do more than just deliver some hits.
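To make the planning arithmetic concrete, here is a small sketch that splits a target session rate across the 9-site domestic layout described above (3 east, 3 central, 3 west). The total rate and the regional weights are hypothetical numbers chosen for illustration, not measured traffic shares:

```python
# Split a target session rate across three regions with three load
# generators each. The weights are illustrative assumptions -- in a real
# engagement they would come from the site's web analytics.

SITES_PER_REGION = 3

def plan_load(total_sessions_per_hour: int, region_weights: dict) -> dict:
    """Sessions/hour each load-generation site should drive, by region."""
    plan = {}
    for region, weight in region_weights.items():
        per_region = total_sessions_per_hour * weight
        plan[region] = round(per_region / SITES_PER_REGION)
    return plan

weights = {"east": 0.40, "central": 0.25, "west": 0.35}  # hypothetical
print(plan_load(90_000, weights))
# {'east': 12000, 'central': 7500, 'west': 10500}
```

The point of the exercise is that no single site ever carries the whole rate, which both spreads the load across ISPs and keeps any one CDN POP from taking the full brunt of the test.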

The worst thing you can do to your home page — don’t slow it down on purpose!

This one will be short, because there simply isn't that much to say. Your home page is one of the most important pages on your site in terms of the visitor's experience. If your site requires registration, authentication or identification, nearly all users must go through this page. It is the proverbial front door to your site and application.

A recent load test had to be aborted after 9 minutes, at only 25% of the planned total load level of 385,000 sessions per hour. The site used a LAMJ architecture, and each home page hit generated a long-running SQL query. Even very patient users, tolerant of slowdowns and errors, will not stick around if the home page takes several minutes. However, this site didn't even do that! Two minutes into the test, the pages simply said

" Whoops! The social network is currently down for maintenance. Please be patient, we're working on it! "

As you may imagine, their home page is now very fast--0.07 seconds, in fact. That is a very fast error message that every user sees on the home page, and the site would deliver the same for every other page too, if the user actually made it that far. I don't think I need to mention the usefulness of every user seeing that error message.

What caused this slowdown and crash, you may ask? I'm glad you did. The long-running queries exhausted the JDBC connection pool and maxed out the available number of connections, which is what caused the immediate error page.
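The failure mode is easy to reproduce in miniature. This Python sketch (the pool size, query time and error text are stand-ins for illustration, not the actual site's configuration) models a fixed-size connection pool with a checkout timeout: once long-running queries hold every connection, each new visitor gets an instant error page instead of content:

```python
import threading
import time

POOL_SIZE = 5
pool = threading.Semaphore(POOL_SIZE)  # stand-in for a JDBC connection pool

def handle_request(query_seconds: float, checkout_timeout: float = 0.1) -> str:
    # Try to check a connection out of the pool; fail fast if none is free.
    if not pool.acquire(timeout=checkout_timeout):
        return "Whoops! down for maintenance"  # the instant error page
    try:
        time.sleep(query_seconds)  # the long-running SQL query
        return "page rendered"
    finally:
        pool.release()

# Five slow queries grab every connection in the pool...
workers = [threading.Thread(target=handle_request, args=(2.0,))
           for _ in range(POOL_SIZE)]
for w in workers:
    w.start()
time.sleep(0.2)  # let them all check out a connection

# ...and the next visitor is turned away in milliseconds.
print(handle_request(0.01))  # prints "Whoops! down for maintenance"
for w in workers:
    w.join()
```

Note that the pool itself is behaving exactly as configured; the real defect is the long-running query holding connections. Tuning the pool size without fixing the query just moves the cliff.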

The only good thing I can say is at least they didn't just print stack traces with DSN information contained in them. I've been shocked at the content of some of the stack traces I've seen on production sites when they encounter an error, but that's another post.

Things to avoid if you’re sending me a resume

The job market is in decline.  Some major web sites have either closed completely or are decreasing staff and closing stores.  Fox's Squawkfox article, 6 Words That Make Your Resume Suck, is spot on with what I look for.

Go read the article, then come back here.

Alright, now that you're back, think about what I want to see.

  • Don't tell me you're a good communicator, show me you're a good communicator.  Spelling and grammar errors turn me off faster than Douglas Adams' fetid dingo kidneys. In today's electronic medium, communication happens more often via bits than the spoken word.  A poorly written resume is not worth the time you took to send it.
  • Don't tell me you're experienced.  Especially don't tell me you have "extensive experience" with something.  That could mean your 2 weeks of watching the guy in the next cube code in Mono supposedly equals someone else's 5-8 years.  I don't buy it!  I want lists of your experience, when you gained it and what projects you used it on.
  • The Squawkfox article mentions "Team Player", but I'm interested in prospects who can work independently.  I like that you can work in a team, and I want you to think about the big picture, but I don't have time to babysit, and I cannot stand micromanagers.  If I have to micromanage someone, it will only be for a very short time while I give them careful directions to HR.
  • Be ready to explain gaps in your work history, or even worse, why you worked on so many projects.  If you worked with several customers while at your consulting job, list them under the same job heading.  I want someone who will stick with my project and complete it, not use me to help you find another job or even worse, work on your own company on the side.

Having a good resume is not hard, and I actually read most of the ones I get.  Be concise, be confident and remember that the true purpose of a resume isn't to get you a job but to get you an interview.

Understanding Web Usability

One of the great things about the Web has always been its democratic nature. Anyone can participate. But once you do, your contributions are wide open to public scrutiny. Good or bad, someone will evaluate your Web content.

People Recognize Poor Design
In the Web's early days, when people were adjusting to this new medium, most online critiques applied to a site's design. If you created a really ugly site, before long your handiwork would end up being featured on a site dedicated to bad design, or even included on someone's all-time list of bad sites. Today, the collaborative features of the Web 2.0 environment such as blogs, forums, and widely used folksonomies practically guarantee that truly awful design will receive the recognition it deserves. Such criticism can be constructive; people can learn from good examples of what NOT to do.

Today, blogs and wikis extend their democratic and educational influences beyond site design to site content. This can be tough on the writer who wants to explore new ideas. But as Wikipedia, one of the best examples of online collaboration, advises all contributors:

If you don't want your writing to be edited mercilessly or redistributed by others, do not submit it.

This is excellent advice, because the collaborative search for truth online is not subject to parliamentary or academic niceties. If someone advances a weak argument, people will be quick to point out its flaws. And an unpopular opinion can produce flaming responses.

Is Web Usability a Sham?
For example, last week Ryan Stewart, who has his own blog, and also writes a ZDNet blog about Rich Internet Applications (RIAs), wrote that Usability on the web is a sham, arguing that ...

While accessibility and standards are great for the Web, the concept of usability has been overblown. Usability as we define it is basically the rules for how the Web should behave while in the confines of the Web browser. But Web applications don't have to exist inside the browser and relegating them to these antiquated notions of usability is bad for progress.

To support this argument, he used the Back button as an example of a browser usability problem that RIA developers could eliminate altogether.

I think the central idea behind his argument was that the RIA technologies -- because they offer the developer more options -- can be applied to deliver usability improvements in Web-based applications. But I'm afraid he expressed it very poorly, first by over-generalizing in his criticisms of the concept of Web usability, and second by trying to use the Back button -- one of the most intuitive and widely-understood browser conventions -- as an example of poor usability.

Naturally, his post has been generating a small firestorm of responses ranging in tone from expletive-laden outrage to carefully-argued disagreement. In their different ways, both those examples (along with other responses) argue that Stewart's post is full of opinions and assertions that are not supported by any evidence, and that (to put it bluntly) Stewart doesn't really know what he's talking about.

Marshaling the Right Skills
Unfortunately, this is a common problem in the field of Web design. An effective online application must be available, responsive, easy to navigate, and deliver value to its user. This demands a wide array of skills rarely found in a single individual. As a result many sites are designed and built by people who are aware of only a small subset of the issues they should be considering. And all too often, someone in the site development process -- a business manager, someone in marketing, a Web designer, or a site developer -- makes key design decisions without understanding the consequences. And the challenges are even greater when developing a Rich Internet Application.

Simply getting more people involved isn't the solution. Some really bad Web site designs have been the result of design by committee. Even if you follow a good checklist, like this one by Vitaly Friedman, you will overlook some important aspect -- often site performance. The problem is the sheer breadth of knowledge required to do a good job. For evidence, read the Wikipedia entry on Web Design. It's an unbalanced, poorly organized, collection of information that offers little help with the process of creating an effective site.

The only answer is to get better, more knowledgeable, people involved. Start with a good overview of the process, like Usability for the Web: Designing Web Sites that Work by Tom Brinck, Darren Gergle, and Scott D. Wood. As they say in the book's introduction:

To ensure high usability on our own Web projects, we defined a development process that incorporates proven techniques from software and usability engineering, graphic design, project management, and other disciplines. The process had to be practical and lean. It had to allow us to work on multiple projects of varying sizes with fixed budgets. It had to help us keep track of the details that can kill usability and destroy profitability. This book is about that process.

Then find people who understand what the book is talking about to do the work -- and don't interfere!


Web Design and Mouse Rage Syndrome

Have you ever been frustrated at a Web site that downloads with the speed of an Alaskan glacier? Or become angry when a favorite site, or your Internet connection, is down? Have you experienced any of these symptoms:

  • Faster heart rate?
  • Increased sweating?
  • Furious clicking of the mouse?
  • Simultaneous clicking and cursing the screen?
  • Bashing the mouse?

Come on now -- admit it! Maybe some of them, just once or twice? Because if any of this sounds familiar, you're not alone.

Mouse Rage Syndrome
The consequences of poor Web site performance don't usually make news, unless there's a big outage on Black Friday or Cyber Monday. Then the story invariably focuses on the business lost by companies whose sites were overloaded or down. But what about the effects of poor performance on customers? A recent study by the Social Issues Research Centre (SIRC), an independent, not-for-profit organisation based in Oxford, UK, provides this perspective.

Researchers found that badly designed and hosted websites cause stress and anger, and coined the term "Mouse Rage Syndrome" (or MRS). They concluded that five key IT flaws in the way websites are designed and hosted may lead to harmful health effects.

Those five problems will come as no surprise to any regular user of the Web: slowly loading pages, confusing layouts, excessive pop-ups, unnecessary advertising, and site unavailability. But as I was saying in my previous post, it is not fair to blame IT for these problems, when most of them are the result of poor design choices by Web site designers and developers.

Damaging Customers' Health
The study combined data from a YouGov poll of 2,500 people with physiological tests on a separate sample of Internet users, who were asked to find information from a number of different websites. Tests measured physical and physiological reactions to website experiences, looking at brainwaves, heart-rate fluctuations, muscle tension and skin conductivity. According to the report:

When the test participants came to the 'problem' sites that we had deliberately chosen as comparisons for the 'Perfect Website' evaluation exercise [a prior study], responses changed quite dramatically in most, but not all, cases. While a few managed to stay calm and simply 'rise above' the problems presented by crazy graphics and slow-loading pages, others showed very distinct signs of stress and anxiety.

Some changes in muscle tension were quite dramatic...While this was happening, the participants' faces also tensed visibly, with the teeth clenched together and the muscles around the mouth becoming taut. These are physically uncomfortable situations that reduce concentration and increase feelings of anger.

These reactions, if not managed, can eventually lead to other, more serious, health problems.

Poor Designs Can Kill
According to the SIRC report, "users want Google-style speed, function and accuracy from all of the websites they visit, and they want it now. Unfortunately, many websites and their servers cannot deliver this". The result is consumers seeking alternative websites in a bid to avoid undue stress and Mouse Rage.

Jacques Greyling, managing director of Rackspace Managed Hosting, who commissioned the study, commented:

We believe that businesses that are selling online have a duty to their customers to ensure that the experience is as stress free as possible. The public has shown that it wants to buy online, ... (so) businesses need to provide simple and easy to navigate layouts, whilst focusing on speed and uptime.

If more studies confirm these findings, which will probably seem obvious to anyone who spends a lot of time online, maybe more Web designers will get the message that their poor designs are not just killing the business. They could be killing people -- really.

Performance Matters!
