Posts

Data, and by extension data analytics, are becoming increasingly important for business. At the same time, the data deluge makes it harder every day to make sense of it all.

Here are three trends you should keep in mind for 2016.

1. Don’t shoot from the hip

Numbers are becoming more popular with most people, but the more numbers we get, the more useless most of them seem to be. And if a figure is drawn out of a hat, why would you factor it into your decision-making process at all?

Polling your audiences is fine.
But that is not a statistic that adds up exactly to something like 97%, is it?
Or are you keeping tallies of your straw polls and then doing the statistics?

Comment by DrKPI on Adrian Dayton’s Clearview Social blog

[su_custom_gallery source=”media: 2884″ limit=”7″ link=”image” target=”blank” width=”792px” height=”473px” Title=”Adrian Dayton Clearview Social claims 93% of lawyers… based on straw polls – should we trust this, does it make a difference?” alt=”Adrian Dayton Clearview Social claims 93% of lawyers… based on straw polls – should we trust this, does it make a difference?”]

View slide on Flickr – measure-for-impact – DrKPI

Google Flu Trends is an example that illustrates this problem further. For instance, it:

– looks at historical data – descriptive analytics and research, and
– tries to predict what might happen – predictive analytics – with the help of a model that was developed.

The results are supposed to help us better understand how the flu will spread next winter. Unfortunately, in the Google Flu Trends versus National Institutes of Health (NIH) challenge, the winner is NIH: Google’s estimates are simply far off from the actual data the NIH produces for policy makers and health professionals.
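To make “far off” concrete, here is a minimal sketch, with invented numbers, of how one might quantify the gap between a model’s flu estimates and the official counts, using mean absolute percentage error:

```python
# Hypothetical illustration only: the weekly counts below are made up,
# but Google Flu Trends reportedly overshot official figures by a wide
# margin in several seasons.

def mean_absolute_percentage_error(actual, predicted):
    """Average of |actual - predicted| / actual, expressed as a percentage."""
    errors = [abs(a - p) / a for a, p in zip(actual, predicted)]
    return 100 * sum(errors) / len(errors)

# Official weekly flu-case counts (fictitious) vs. a model's estimates.
official = [1200, 1500, 1900, 2400]
model    = [1700, 2600, 3300, 4100]

mape = mean_absolute_percentage_error(official, model)
print(f"Model is off by {mape:.0f}% on average")
```

An average error of this size is the kind of gap that makes predictive estimates useless for policy makers who need the real figures.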

2. Bad data result in bad decisions

Publishing rankings or product tests is popular. Since some readers devour such rankings, publishers can sell more copies, which keeps advertisers happy.

A real win-win situation, right? Not so. Wrong decisions can result in outcomes that are not desirable. For instance, attending the wrong college or polluting more than the test results indicate (think Volkswagen and #dieselgate) is not something we want.

Lucy Kellaway felt so incensed about the ever-growing acceptance of making errors in corporate circles that she wrote:

…I would be exceedingly displeased to learn that the bankers to whom I was handing over a king’s ransom were being taught that errors were perfectly acceptable.

This mistake-loving nonsense is an export from Silicon Valley, where “fail fast and fail often” is what passes for wisdom. Errors have been elevated to such a level that to get something wrong is spoken of as more admirable than getting it right.

By collecting data and using flawed methods, we produce rankings or test results that can seriously hurt people. For instance, when drug certification tests are done improperly and the regulator has no idea, unknown side effects can kill people.

Using the wrong test results to approve or certify a car can result in dismal effects as well. Volkswagen is accused of manipulating tests, and the public got more pollution than it bargained for. VW is working on fixing the 11 million vehicles affected by the diesel cheat, but this will not un-do the damage to the firm’s reputation and our health.

3. Check before you trust the method used

It is always wise to take 5 minutes to do an acid test with any study report we see, such as:

– what does the methodology tell us (e.g., we asked university deans to rank their competitors); and

– does the measure or measures used make sense (e.g., one question about how university developed / improved study programs – result = ASU is more innovative than Stanford or MIT… who are you kidding?).

The Art Review publishes an annual ranking of the contemporary art world’s most influential figures. In short, it helps if you live in London or New York so the Art Review editors or journalists are aware of who you are.

I asked for an explanation of how these numbers develop:

Dear Sir or Madam
I would like to know more about the methodology you used for the ArtReview’s Power 100 List.
Can you help… this would be great to use with my students in a class.
I could not find anything on the website that I could show my students.
Respectfully
Professor Urs E. Gattiker, Ph.D.

14 days later I got an answer from the makers of the ranking:

Subject: Re: Message from user at ar.com

Hi,
We are not following a grid of criteria per se, and the list emerges from a discussion between a panel of international contributors and editors of the magazine, who each advocate for the people they feel are most influential in their region. The influence of the selected people on the list is based on their accomplishments in the past 12 months. I have attached here the introduction to the Power 100, which might help you in defining our approach.
I hope that helps,
Best, Louise

A grid of criteria – what is that? Of course, as the answer indicates, the office clerk who replied has no clue about the research methodology used. One could start to believe that this top art list emerged from a discussion or a straw poll – a totally chaotic approach.

You can view the attachment that explains this sloppy method below.

[embeddoc url=”http://blog.drkpi.com/download/7/” download=”all” viewer=”google”]

Download the ArtReview criteria with this link.

A friend of mine smiled, and said:

For me this is a great list, Urs. Those on the list rarely if ever represent value for money for serious art collectors. Instead you get buzz and have to pay for their image. The list tells me who we do not need to work with. We use other experts. These give us more value for money. They help us to complement our award-winning collection.

[su_custom_gallery source=”media: 2881″ limit=”7″ link=”image” target=”blank” width=”780px” height=”479″ Title=”Sound research takes plenty of resources” alt=”Sound research takes plenty of resources”]

Bottom line

We all know that data quality is important and frequently discussed. In fact, the trustworthiness of data directly relates to the value it can add to an organisation.

As the image above suggests, doing quality research takes a decent method that results in data that permits careful analysis. Sloppy data are cheap to get, but dangerous if used in decision-making. Such findings are neither replicable nor likely valid.

However, we are increasingly required to present findings in order to attract more readers. Some, like Inc., master this very well. Another example of theirs I came across was:

Though truly quantifying “best” is impossible, the approach Appelo’s team used makes sense, especially when you read the books that made the list.

The 100 Best Business Books of 2015 by Jeff Haden

And here’s the methodology:
The purpose of our work was to find out which people are globally the most popular management and leadership writers, in the English language.
Step 1: Top lists
With Google, we performed a lot of searches for “most popular management gurus”, “best leadership books”, “top management blogs”, “top leadership experts”, etc. This resulted in a collection of 36 different lists, containing gurus, books, and blogs. We aggregated the authors’ names into one big list of almost 800 people.
Step 2: Author profiles
Owing to time constraints, we limited ourselves to all authors who were mentioned more than once on the 36 lists (about 270 people), though we added a few dozen additional people that we really wanted to include in our exploration. For all 330 authors, we tried to find their personal websites, blogs, Twitter accounts, Wikipedia pages, Goodreads profiles, and Amazon author pages.

So you defer to 36 lists and include the authors who are mentioned more than once. Fine – and if that does not include the ones you believe should be on the list because you read and liked their books, no worries: you add a few dozen people (60) and voilà, you have 330 authors (how they were then ranked is totally unclear, but interesting – blog reputation, Twitter followers, etc.).
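The aggregation step described above can be sketched in a few lines of Python. The lists and author names here are invented, and the `> 1` threshold mirrors the “mentioned more than once” rule:

```python
# Sketch of the aggregation step as described: count how often each author
# appears across the collected top lists, then keep anyone mentioned more
# than once. Names and lists are invented for illustration.
from collections import Counter

top_lists = [
    ["Drucker", "Collins", "Godin"],
    ["Collins", "Sinek", "Godin"],
    ["Drucker", "Godin", "Pink"],
]

mentions = Counter(name for lst in top_lists for name in lst)
shortlist = sorted(n for n, c in mentions.items() if c > 1)
print(shortlist)  # authors mentioned on more than one list
```

Note how much depends on which 36 lists were chosen in the first place: the counting is mechanical, but the input is not.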

[su_box title=”3 checks you should undertake before accepting a study’s findings” box_color=”#86bac5″ title_color=”#ffffff” radius=”5″ width=”700px” ]

1. Evidence-based management and policy advice

A sloppy method is like following no method.
Can you find a method section, and does the method make sense to you? For example, did the study use a long-form questionnaire to get employment data? Or was it just based on scans of Internet job boards? If the latter, the problem lies with double counting when relying on websites or job search engines.

If the method section does not instill you with confidence that it was done properly, watch out. And, most importantly, don’t complain about a study before you read it carefully!

Interesting read: CRDCN letter to Minister Clement – Census long-form questionnaire (July 9, 2010) explains why Statistics Canada needs to get the funds to collect data for the census to provide evidence-based policy data.

2. Minestrone: Great soup but wrong research method

So the study has a decent method section that makes sense and explains things accurately. What are the chances that somebody else could follow the methodology and get the same result?

To illustrate: if it was done the same way I put together a minestrone (Italian vegetable soup), you can forget it. I use whatever vegetables are in season; plus, each family seasons its soup differently, guaranteed. This neatly illustrates that if no systematic method is used, it is not science. For the soup, this means it turns out differently each time anyone makes it.

Without a recipe or method followed, you cannot repeat the performance or generalise from your findings.

3. Buyer beware: Clickbait studies using navel-gazing metrics

That usage of Sainsbury’s #ChristmasIsForSharing was higher than John Lewis’ #ManOnTheMoon by just 4% is interesting. However, Social Bro’s verdict is based on 50 votes (26 versus 24) in a Twitter poll. In turn, the analytics company uses these data to crown 2015’s Most Creative Christmas Campaigns. Seriously? Is the rest of their analytics work that sloppy too?

Apparently, even analytics companies like Social Bro defer to such navel-gazing metrics to get more traffic. Such samples are neither representative nor big enough to draw any inferences from.
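A back-of-the-envelope check shows why 50 votes cannot support such a verdict. Assuming, generously, that the poll were a simple random sample (a Twitter poll is not), the 95% margin of error dwarfs the 4-point gap:

```python
# With 50 votes split 26 vs 24, the sampling uncertainty is roughly three
# times larger than the lead itself - the "winner" is statistical noise.
import math

votes_a, votes_b = 26, 24
n = votes_a + votes_b
p = votes_a / n                            # 0.52 share for the 'leader'
moe = 1.96 * math.sqrt(p * (1 - p) / n)    # ~0.14, i.e. about 14 points
print(f"Lead: {100 * (p - (1 - p)):.0f} points, "
      f"margin of error: +/-{100 * moe:.0f} points")
```

A 4-point lead inside a ±14-point margin of error tells you nothing about which campaign was actually more popular.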

Just because something is interesting, or looks a bit better on the strength of a couple of extra votes on Twitter, does not mean you should invest your hard-earned cash that way. Investing your marketing dollar based on such nonsense is plain dumb.
[/su_box]

What is your take?

– what will you change in your data #analytics and #analysis work in 2016?
– what is your favourite example from 2015 illustrating GREAT analytics work and research?
– how do you deal with this data deluge?
– what would you recommend to a novice (pitfalls to avoid)?

More insights about analytics, analysis and big data.

Where do you want to go? Some reflection is needed.

[su_highlight background=”#fffe99″]Summary[/su_highlight]: David Cameron knows that public approval of RAF air strikes against ISIS in Syria has dropped.
We explain what this teaches Migros, Lidl and Tesco about new product research.

CONFIDENCE in measuring ROI of social media and display ads is LOW

Some weeks ago I came across a report (see image) that stated just 29 percent of people feel confident in measuring the ROI (return on investment) of display ads and this drops to just 22 percent for social media marketing.

Accordingly, management is interested in improving its understanding with analyses and analytics when it comes to social media activities. But do managers or politicians understand what we are trying to communicate or convey to them?

If managers read blog entries like this one about how to do surveys, it’s no surprise that they believe it is all easy and cheap to do.

This is the fifth post in a series of entries about big data. Others so far are:

– Data analytics: Lessons learned from Ebola
– Scottish referendum: A false sense of precision?
– Facebook mood study: Why we should be worried!
– Secrets of analytics 1: UPS or Apple?

Confusion abounds

How are management or politicians supposed to understand the difference between analytics, data and analysis? Can we trust polls or should we learn from the Scottish disaster?

For instance, when we go to a dictionary of statistics and methodology from 1993 (Paul Vogt), neither analytics nor business analytics has an entry, never mind data analysis.

Kuhn: Unless we share a vocabulary, we are not a discipline

However, these days, some would claim data analytics is a science (e.g., Margaret Rouse). Still, if something can be called a science (e.g., physics or neuropsychology), its members share a certain set of beliefs, techniques and values (Gattiker 1990, p. 258).

Do people in data analytics or data analysis share a vocabulary and agree to the meaning of basic terms? Not that I am aware of. Therefore, Thomas Kuhn’s (1970) verdict would be: Not a science (yet).

In web analytics, data analytics or data science as well as social media marketing we agree to disagree. But maybe I can clarify some things.

Sign up for our newsletter; this post is the first in a series of entries on business analysis and analytics.

[su_box title=”2 things business, data, financial and web analytics have in common ” box_color=”#86bac5″ title_color=”#ffffff”]

1. All analytics is an art that involves the methodical exploration of a set of data, with an emphasis on statistical analysis.

2. All analytics includes the examination of both qualitative and quantitative data.

[/su_box]

Analytics gives you the numbers, but fails to provide you with insights. For that, we must move from analytics to analysis, and we only gain the necessary insights if we do the analysis correctly.

[su_custom_gallery source=”media: 2649″ limit=”7″ link=”image” target=”blank” width=”508px” height=”552px” Title=”Diagram: Analysis versus Analytics versus Data – why the difference matters” alt=”Diagram: Analysis versus Analytics versus Data – why the difference matters”]

The graphic above illustrates that proper data is the foundation for doing analytics that permit a thorough analysis. Accordingly, using a sample that is not representative of our potential clients or voters is risky.

Nobody would draw any conclusions about attendance at next season’s football matches by asking a sample of baseball aficionados. So, go ahead and ask your social media platform users to vote for this season’s favourite flavoured drink syrup. But such a poll won’t give you an answer that is representative of your customer base.
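A toy simulation, with invented preference figures, illustrates how badly a biased community poll can miss the customer base:

```python
# Hypothetical numbers: suppose 30% of all customers prefer chai, but the
# online community skews young and 70% of its members prefer chai. Polling
# only the community gets the population answer badly wrong.
import random

random.seed(42)
population = ["chai"] * 300 + ["lemon"] * 700   # true preference: 30% chai
community  = ["chai"] * 700 + ["lemon"] * 300   # biased subgroup: 70% chai

poll = random.sample(community, 100)            # poll 100 community members
estimate = poll.count("chai") / len(poll)
truth = population.count("chai") / len(population)
print(f"Poll says {estimate:.0%} chai; the customer base is {truth:.0%} chai")
```

No amount of extra votes fixes this: a bigger sample from a biased pool just estimates the wrong quantity more precisely.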

Nevertheless, this is exactly what Migros did in 2015 (see Migipedia – only a few, mostly very young, users participated in the poll, and fewer than 10 wrote a comment during January 2015). It then published a one-page ad (among many more, see below) in its weekly newspaper (e.g., November 30, 2015), claiming that the chai flavour was the winner.

Making such a decision based on this type of unrepresentative poll is a risky choice. You may actually choose to increase production of the wrong flavour!

[su_custom_gallery source=”media: 2781″ limit=”7″ link=”image” target=”blank” width=”520px” height=”293px” Title=”Polling online community members gives you data from a non-representative sample of your customers – is that good enough to launch a new product?” alt=”Polling online community members gives you data from a non-representative sample of your customers – is that good enough to launch a new product?”]

Collecting data that is based on a representative sample of your customers is a costly exercise.

So why not use your online ‘community’ to do a ‘quick and dirty’ poll?

Surely a Twitter, Facebook or website / corporate blog poll is economical. It is fast and easy to do, and voilà, you have what you need, right? NOT.

Okay, agreed: doing the above will strengthen your hand with a CEO, who might not grasp basic methodological issues of sampling or survey research. Plus, you now have data from your online community, which is another argument for investing more money there.

In the Migros example above, having an online poll on your Migipedia platform achieves 3 things:

1. it allows your marketing folks and community managers to show the platform is useful for something;

2. regardless of which flavour wins and gets produced, you can always push it in your company newspaper. This way you reach 3 million readers in Switzerland – a country that has 7.8 million inhabitants,

3. even if the new product turns out to be a flop, thanks to other marketing channels, you sell 150,000 to 300,000 (or more) 1-liter bottles of chai tea syrup during the Christmas Season.

With its many resources and varied marketing channels (e.g., the weekly Migros Magazin), Migros can ‘afford’ to use shabby research. It is in the enviable position of succeeding in spite of ‘spending’ so much.

The company might never learn that its analysis actually led the team to choose the second or even third best choice. Nonetheless, your marketing clout ensures that you can show it to management as an example of having done the right thing. Of course, we know it was done for the wrong reasons, but since management probably won’t find out, who cares – right?

[su_custom_gallery source=”media: 2793″ limit=”7″ link=”image” target=”blank” width=”530px” height=”308px” Title=”Polling: Opinion on RAF air strikes against ISIS in Syria – up and down each week” alt=”Polling: Opinion on RAF air strikes against ISIS in Syria – up and down each week”]

One poll is worse than none

As the above image from last week regarding air strikes in Syria shows, poll results can change quite a bit within a week.

For starters, no pollster wanting to stay in business will use a non-representative sample to get opinions. Using such data is unlikely to give you the insights you need for Hillary Clinton or any other candidate to succeed during next year’s US election.

[su_custom_gallery source=”media: 2801″ limit=”7″ link=”image” target=”blank” width=”485px” height=”445px” Title=”Polling: YouGov’s Will Dahlgreen never answered this question – so can you trust these results?” alt=”Polling: YouGov’s Will Dahlgreen never answered this question – so can you trust these results?”]

I left the above comment at the end of the blog post (it has not been published by YouGov so far). I asked about things that a good pollster will always publish with the poll results.

For instance, I asked how data were collected, whether the sample is representative, and what the margin of error was. I could not find any information about any of that. Of course, trust is not improved when one fails to publish a reader comment that raises method issues about your poll.

“YouGov draws a sub-sample of the panel that is representative of British adults in terms of age, gender, social class and type of newspaper (upmarket, mid-market, red-top, no newspaper), and invites this sub-sample to complete a survey.”

How exactly this happens with YouGov we do not know, since the methodology outlined on its website is not very detailed.

But David Cameron knows that while 5 million people have joined the ranks of those opposed to airstrikes in Syria in the past seven days, that could change next week. Polls are more interesting when they show a trend, so Mr Cameron can still hope that the opposition will shrink again.
[su_box title=”5 key pointers for explaining the analyst’s work to your management: The case of survey research or polling” box_color=”#86bac5″ title_color=”#ffffff”]

Collecting quality data is followed by analytics, which subsequently require analysis to draw the proper insights. Analysis requires words in addition to looking at the numbers.

To tackle this challenge successfully, we need to do some preparation, as outlined below.

1. Do you have a strategy or a plan?

What is it you want to collect data for and why? This must be explained in a few sentences.

How will these data help you win the election, get the contract or sell more product?

2. How will data help you execute the plan?

You must know what data you need or the rationale for wanting them (see point 1).

What three steps will you take in the next quarter or six months to execute your strategy?

3. Are the numbers complete?

Most monitoring services can tell you everything about Facebook or Twitter.

But what about smaller websites from climate change activist groups, ISIS sympathisers or peace activists’ blogs?

Make sure you get the data you need. Is your sample representative of those whose opinion you must know?

4. Do you need social media monitoring?

Knowing what people say about your brand or company is a good thing. The Volkswagen emission scandal (remember #dieselgate) teaches us that in a crisis, simply monitoring the flood of tweets and status updates on Facebook or LinkedIn is of little use.

You can, like Volkswagen, decide to ignore the social media noise. Or you can change your behaviour and communicate openly and directly (click for German-language radio report).

Unless you use social media monitoring to take action after the data are in, why collect it?

5. Do you have data from your customers?

If you have fewer than 1,000 employees, don’t make a big fuss about social media monitoring.

Focus on things that matter, such as what your clients report regarding warranty service, and the quality of phone support or user manuals. A tweet matters little.

Feedback can be collected in many ways, including customer surveys, discussions with clients or comments on your corporate blog.

Analysing these data provides insights that help improve product, service and so forth.

What it means

Focus on collecting data that help you serve your customers better. Getting a daily digest about the most important key words regarding your brand (e.g., we use DrKPI, #DrKPI, DrKPI BlogRank, #metrics #socbiz) is probably all you need. Instant data may not be needed unless you are a FT Global 500 company.

Restrict yourself to collecting only those data you absolutely and definitely must have.

Make sure that they meet some minimum quality standards. Only this will enable you to trust the analytics and analysis resulting from that work.

Actionable metrics are what matters

Unreliable or invalid data from clients, social media monitoring and opinion polls are a waste of resources.

Please keep in mind, just collecting data without taking action is a navel-gazing exercise.

[/su_box]

Bottom line

Always ensure that analytics leads to analysis that goes beyond navel-gazing metrics. Answer these questions truthfully:

A. What will be done with the findings: Unless you take action based on your data, why measure and collect information at all?

B. What kind of data was collected: Make sure you understand how data were collected. Can this polling data be trusted to be representative of the population (e.g., consumers in my country)?

How was something like influence (e.g., Klout) measured (what kind of proxy measure was used)?

If it is not transparent to you, move on and do not waste your time with such a measure or index.

Keep points A and B in mind before you collect data and / or use somebody else’s findings.

‘Total X’ combines xyz Labs’ proprietary Rambo social media measurement tool, and WalkBack®, the leading measurement source of WOM marketing from the Sambo Group, a Laughing Stock company.

Okay, what does the above mean? Who would want to trust this gobbledygook? If marketers or pollsters cannot explain things clearly and precisely, they tend to cover it up in jargon that tells you nothing.

Regardless, 2016 will mark the year in which Lidl, Migros and Tesco do more of these utterly useless polls to find another ‘winner’ for a new flavour of drink syrup, mustard or soft drink.

Even though social media, community and marketing managers will claim a victory this year, with so much additional marketing around, who is surprised? Put differently, regardless of which syrup Migros had produced, I dare claim it would have flown off the shelves anyway.

Combine all the ads and marketing push, and if it tastes okay, success is in the bag. Unfortunately, those that hate research will attribute part of this success to a useless online poll.

Next time you read something like the above, claiming to rank something, check the methodology. Cannot find anything? Just move on because it is probably hogwash.

Interesting reading

Vogt, Paul W. (1993). Dictionary of statistics and methodology. Newbury Park, CA: Sage Publications. For information see https://uk.sagepub.com/en-gb/eur/dictionary-of-statistics-methodology/book233364 (5th edition 2016).

2 great reading lists for additional resources about research, polls, survey data and much more:

1. http://guides.library.cornell.edu/c.php?g=31819&p=201525
2. http://www.lse.ac.uk/methodology/study/Preliminary-Reading-List.aspx

Join the conversation

  1. Do you have an example of a great poll / study?
  2. What is your favourite marketing measure?
  3. What research methodology would you recommend?
  4. If you have other ideas or concerns about marketing research, please state them here.

Of course, I will answer you in the comments. Guaranteed.

It is again the time of year when parents and prospective students pore over recently published university rankings.

“…US News asked top college officials to identify institutions in their Best Colleges ranking category that are making the most innovative improvements in terms of curriculum, faculty, students, campus life, technology or facilities.”

But should we recommend such rankings, like the one above from the US News & World Report?  Are college rankings good, bad or ugly as Yale’s former Dean of Admissions suggests?

Or could it be the single worst advice we could possibly give a high school student?

Fact 1: College rankings generate revenue for publishers

Media houses know very well that university rankings are of great interest to prospective students and parents. So, newspapers like the Financial Times (FT) feature a weekly special section on education. In addition, the paper publishes numerous rankings throughout the year.

Then there are the various feature reports (see the 2015-11-03 FT Special Report on Innovations in Education). Of course, they also carry advertising, like the one below from Thunderbird.

[su_custom_gallery source=”media: 2517″ limit=”7″ link=”image” target=”blank” width=”519px” height=”380px” alt=”FT Special Report Innovations in Education – Thunderbird promotes itself as innovative leader – half page – front page color ad $120,000″]

Looking at the FT Special Reports Ad rates shows that getting involved with educational institutions pays well for media houses. Universities are under growing pressure to become better known in order to attract more resources and qualified students. In turn, advertising in special editions about education is a sure way to reach more of your target audience.

Marketing 101

Publishing college rankings makes sense from a publisher’s perspective. Advertising brings in the revenue needed (FT Special Reports Ad rates), and people read the stuff because as Langville and Meyer (2012, p. 1) suggest:

In America, especially, we are evaluation-obsessed, which thereby makes us ranking-obsessed given the close relationship between ranking and evaluation.

Fact 2: Schools love to use rankings

Everyone certainly loves rankings when they place in the top 10. And regardless of whether we agree with the findings, if we are the top dog, we let the whole world know about it.

The great thing is that such rankings are based on a third party’s opinion, which lends credibility when we advertise our achievement. Arizona State University (ASU) continues to tout its top ranking in the US News & World Report list of most innovative schools.

[su_custom_gallery source=”media: 2514″ limit=”7″ link=”image” target=”blank” width=”519px” height=”381px” alt=”US News & World Report – Most innovative school ranking 2015 – an advertising bonanza”]

Marketing 101

You don’t have to be brilliant or innovative, you just have to convince others that you are. Of course, if you have an external reference point that ranks you highly, such as a well-known publication, so much the better for your recruiters.

Can prospective students trust these school rankings?
Are they useful when choosing a university/college or program of study?

Fact 3: This stuff is less useful than you think

It is best to look at the methodology used in a ranking. What measures were used to conclude that ASU should be considered more innovative than Stanford and MIT? Fair question – let’s see.

US News & World Report asked deans and presidents to rank their peers. The magazine wants to compare apples with apples. Hence, national schools such as ASU, Stanford and MIT are ranked with their peers.

By the way, did you know that the US News & World Report puts the United States Naval Academy into the category of national liberal arts colleges?

So how does one measure the innovativeness of a university? We are told:

“…2015 survey that received the most nominations by top college officials for being the most innovative institutions. They are ranked in descending order based on the number of nominations they received. A school had to receive seven or more nominations to be listed.”

In plain English, this means you need to get as many high level university administrators as possible to nominate your university for innovation.

Accordingly, if you manage to make everyone perceive you as innovative, you are. That is all there is to it. Isn’t that wonderful?
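The ranking rule quoted above boils down to a count, a cut-off and a sort. The school names and nomination counts below are made up for illustration:

```python
# The US News rule, as quoted: count nominations, drop schools below seven,
# rank the rest in descending order of nominations. Data invented.
nominations = {
    "State U": 12,
    "Tech Institute": 9,
    "Coastal College": 7,
    "Valley College": 5,   # below the 7-nomination threshold
}

ranked = sorted(
    (school for school, n in nominations.items() if n >= 7),
    key=lambda s: nominations[s],
    reverse=True,
)
print(ranked)  # ['State U', 'Tech Institute', 'Coastal College']
```

Notice that nothing in the procedure measures innovation itself; it only measures how often administrators mention you.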

Of course, we have no idea whether a product innovation or a process innovation helped you rank highly. In either case, to claim an invention, and thereby count as an innovative university, you should be able to answer questions such as: why is this curriculum change an invention? Inventions can be evaluated according to:

  • novelty (new),
  • inventiveness (i.e. must involve a non-obvious inventive step), and
  • industrial applicability (can be used).

Of course, in this case we have no clue what makes a curriculum change a simple change and what makes it an invention.

Marketing 101

US News & World Report rankings illustrate very well that how you measure things matters little. It just has to come across as making sense because 90 percent of readers do not bother to read about your methods or the fine print.

However, if you invest several years of your life in attending a school, while paying through the nose for tuition, fees and so on, you are well-advised to ensure the ranking makes sense to you.

Of course, even if the measure is bad, this does not necessarily mean ASU and Stanford are bad schools. They’re great, but

the US News & World Report’s attempt to measure innovativeness is a useless vanity exercise, to put it politely.

Fact 4: Using just one ranking is the worst

You basically have to do the homework. The five points spelled out in the table below will help you make better sense out of any ranking.

Please keep in mind – the perfect ranking does not exist. Each one has strengths and weaknesses, but you can only learn what those are by following these steps.

[su_box title=”5 critical things to do before trusting a college ranking.” box_color=”#86bac5″ title_color=”#ffffff”]

1. Take the time and make the effort to learn about the methodology. Where is the description, and how thorough is the ranking we are looking at? An example of a good method section is PEW Research‘s study on multiracial Americans, which explains how data were collected, weaknesses of the study, etc. This is also easy for the uninitiated to understand.

If you have done this homework, you know better how much weight you should give the rankings in front of you. That is a great start.

2. Does the study measure what it is supposed to (also called validity)? What criteria were used to make up a component in the ranking? Do these make sense to you?

3. Are there components of the ranking that particularly interest you? Costs may be an important factor. It could also be interesting to understand how a degree (e.g., undergraduate or graduate) affects one’s career prospects and/or income 10 years after graduation.


4. Come up with a set of criteria that are important to you (see also image below), such as:

4.1 – location (e.g., which country and what area of the country/city), and
4.2 – costs (e.g., tuition, fees, health insurance, accommodation).

5. Write down a set of criteria that are not that critical to you, such as:

5.1 – GPA of incoming class,
5.2 – number and value of student scholarships, and
5.3 – diversity of faculty (e.g., gender, race, country and language)

The above makes it clear that using just one ranking is plain stupid. Using two is risky and using three or more allows you to pick and choose, thereby empowering you to make the decision that best suits you.

If a ranking uses mostly criteria that are of limited importance to you (see point 5), you know what to do – ignore it.

[/su_box]

[su_custom_gallery source=”media: 2536″ limit=”7″ link=”image” target=”blank” width=”520px” height=”414px” alt=”Balancing the worth of education with outcomes.”]

We must balance the resources we put in against the outcomes we hope for. This also indicates that we need to look at several rankings to choose the right university.

Who is number 1? Create the best ranking

Of course, in addition to US News & World Report and the Financial Times, others do not want to be left out of this lucrative business. For instance, The Economist (a weekly magazine) also produces a ranking of MBA programs. So does the Wall Street Journal. Of course, even more rankings exist, such as the best 100 Employers to Work For or the Best Consulting Companies (German-language Handelsblatt).

In the case of the Best Consulting Companies, participants are asked three questions about the firm and voilà, we have the 2015 rankings. This may say more about how much a firm advertises (which increases brand recognition) than about how satisfied clients are with its work.

These examples illustrate that anybody can create a college ranking. However, to avoid becoming a laughingstock, I urge you to follow the nine steps outlined below.

Join the 3,000+ organizations using the DrKPI Blog Benchmark to double reader comments in a few months while increasing social shares by 50 percent - register now!

How exact and thorough we are when addressing each step will, in turn, affect the overall quality of our rankings.

[su_box title=”9 steps to develop your favorite ranking system for just about anything.” box_color=”#86bac5″ title_color=”#ffffff”]

1. Write a one-page summary of why this ranking is needed and explain its purpose (to help readers… lose weight, pass the certification exam, purchase the best car, etc.).

2. What can readers do with these data? For example, does studying these data help improve performance? Does it show one’s weaknesses? Does it outline how one can improve (see DrKPI BlogRank)?

3. Come up with some indicators or measures that allow the collection of data from individuals (e.g., salary three years after graduation), the institution (e.g., faculty with doctorate), and possibly other indicators (e.g., inflation rate, purchasing power parity (PPP) data from the International Monetary Fund (IMF) to adjust salaries).

4. Use the indicators to make up components that make sense to the uninitiated (e.g., career progress, quality of faculty).

5. Add up the indicators to obtain the overall score each school, firm or student achieved on that component.

To illustrate, the FT uses three indicators to make up the “idea generation” component of its MBA ranking:

  • percentage of faculty with doctorates,
  • number of doctoral students who graduated in the last three years, and
  • research output created using a set of 45 journals (no Chinese or Spanish research journals need apply).

6. Convert each component score to a common scale, such as 0 to 100, whereby the best gets the top score and average performers hover around 50.

7. Determine the importance of each component.

In many cases, some components are weighted higher than others. That is a value judgment that warrants an explanation. The same goes if you weigh each component the same! Explain your decision to the uninitiated reader.

8. Compute the aggregate score as the weighted sum of the previously calculated scaled component scores.

9. Present the aggregate score on the desired scale, such as 0 to 100.
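Steps 5 to 9 can be sketched in a few lines of Python. This is a minimal illustration, not any publisher’s actual method; the component names, raw scores and weights below are made up:

```python
def scale_to_100(raw_scores):
    """Rescale raw component scores so the best performer gets 100 (step 6)."""
    best = max(raw_scores.values())
    return {school: 100.0 * score / best for school, score in raw_scores.items()}

def aggregate(components, weights):
    """Weighted sum of scaled component scores (steps 7-9)."""
    schools = next(iter(components.values())).keys()
    scaled = {name: scale_to_100(scores) for name, scores in components.items()}
    return {
        school: round(sum(weights[name] * scaled[name][school] for name in components), 1)
        for school in schools
    }

# Hypothetical raw indicator totals per component (step 5).
components = {
    "career progress": {"School A": 40, "School B": 32},
    "idea generation": {"School A": 10, "School B": 25},
}
# The weights are a value judgment (step 7) - explain yours to the reader!
weights = {"career progress": 0.7, "idea generation": 0.3}

print(aggregate(components, weights))  # → {'School A': 82.0, 'School B': 86.0}
```

Note how School B wins overall despite a weaker “career progress” score, purely because of the chosen weights. That is exactly why step 7 deserves an explanation.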

Thanks to Fung (2013, pp. 22-23) for inspiring me to write up this list.

Whenever looking at a university or any other ranking, keep the above in mind. Is the methodology spelled out, explaining the issues raised above? If these things are not made transparent, caution is called for.

[/su_box]


What is your take?

What’s your favorite ranking (e.g., sports) AND why do you like it?

Which university ranking did you use when you applied for college?

What do you like the most about rankings?

What advice would you give a high school student regarding college rankings?

FT Global MBA Ranking

As I pointed out above, each ranking has something we might be able to use for our own purposes. The one below shows which business school provides you with the best value (i.e. current income minus tuition, books, lost wages while attending the program, etc.).

Surprising, is it not? The best known schools rank low. But maybe you want to use different criteria to rank… Check it out yourself.

[su_custom_gallery source=”media: 2552″ limit=”7″ link=”image” target=”blank” width=”497px” height=”650px” alt=”Financial Times Global MBA Ranking – Value for money”]

FT Global MBA Ranking – the winner based on value is the University of Cape Town – Graduate School of Business.

Things worth reading

1. Fung, Kaiser (2013). Number Sense. How to use big data to your advantage. New York: McGraw-Hill. Available on http://www.mheducation.co.uk/9780071799669-emea-numbersense-how-to-use-big-data-to-your-advantage

2. Kenrick, Douglas T. (September 30, 2014). When statistics are seriously sexy. Sex, lies and big data. Psychology Today online. Retrieved November 2, 2015 from https://www.psychologytoday.com/blog/sex-murder-and-the-meaning-life/201409/when-statistics-are-seriously-sexy

3. Kenrick, Douglas T. (June 20, 2012). Sexy statistics: What’s the one best question to predict casual sex? The science of sex, beer and enduring love. Psychology Today online. Retrieved November 3, 2015 from https://www.psychologytoday.com/blog/sex-murder-and-the-meaning-life/201206/what-s-the-one-best-question-predict-casual-sex

4. Langville, Amy N. & Meyer, Carl D. (2012). Who’s #1? The science of rating and ranking. Princeton, NJ: Princeton University Press. Available from http://press.princeton.edu/titles/9661.html

5. Rudder, Christian (September 2015). Dataclysm: Love, sex, race, and identity – what our online lives tell us about our offline selves. New York: Broadway Books. Available on http://www.penguinrandomhouse.com/books/223045/dataclysm-by-christian-rudder/9780385347396/

6. Stake, Jeffrey Evans and Alexeev, Michael (October 30, 2014). Who Responds to U.S. News & World Report’s Law School Rankings? Indiana University School of Law-Bloomington Legal Studies Research Paper No. 55. Available at SSRN: http://ssrn.com/abstract=913427

Single worst advice – the answer

After reading this blog entry, it is obvious that using a single ranking is not smart.
Use a few and be aware of each one’s weaknesses and strengths.

Choose the component that helps you the most. If by any chance two rankings use the same component (e.g., salary), compare the numbers and smile.

Nothing is perfect. And since you read all the way to the end, why not write a comment and subscribe to our newsletter?

Do Dolce & Gabbana’s recent statements about gay adoption strengthen their reputation as fashion’s aging enfants terribles?
Are Madonna and Elton John right to be raising hell or just ignorant of the full statements (made in Italian)?
Will all this help sales, while further building Domenico Dolce and Stefano Gabbana’s reputations?
We define the difference between reputation and brand and discuss cases to better illustrate the matter.

Amazon founder Jeff Bezos is credited with this statement:

Your brand is what people say about you when you are not in the room.

Virgin Group founder Richard Branson is credited with this statement:

Build brands not around products, but around reputation.

What do you think? Do you agree with Jeff Bezos or Richard Branson?
Should we care about brands, or should we focus on reputation instead? Leave a comment below.

Define or stay confused

Before we can answer the above questions, we need to define what these terms mean. A while back I wrote Brand versus reputation: Jeff Bezos, Richard Branson, in which I pointed out that brand and reputation are two sides of the same coin: closely related, but nevertheless different concepts. I also disagree with people who say “reputation is part of the brand”. The two are related, not the same.

Richard Ettenson and Jonathan Knowles (2008) pointed out the typical factors for a company’s top-notch reputation:

The company has integrity and is reliable, accountable, responsible and quality-conscious.

More formally, reputation is the collective representation of multiple constituencies’ perceptions of the corporation’s behaviour. Accordingly, reputation is about how the company’s brand efforts and what it has done or delivered are seen by its various stakeholders (e.g., investors, customers, employees and consumer advocates).

Heads or tails, let us define the terms below.

[su_box title=”Brand is a ‘public-centric’ concept” box_color=”#ff9900″ title_color=”#ffffff”]
It is about relevance and differentiation (with respect to the customer, public opinion, supplier). Brand focuses on what a product, service or firm has promised to its clients.
Brand is what the corporation tells the public or its investors, the news it shares about itself or the product, and most importantly, what it wants and aspires to be.
A brand helps reduce uncertainty for a client. The customer knows what they get, such as a hotel chain’s rooms offering the same features (make-up mirror, good hair dryer) as standard around the globe.[/su_box]
So, what is reputation, then? Glad you asked.

[su_box title=”Reputation is an attitudinal construct and ‘word of mouth- / experience-centric’ concept” box_color=”#ff9900″ title_color=”#ffffff”]
Attitude denotes a subjective, emotional and cognition-based mindset (see Schwaiger, 2004, p. 49), which implies splitting the construct of reputation into affective and cognitive components.
The cognitive component of the construct can be described as the rational outcomes of high reputation. Examples include high performance, global reach and one’s perception of the company (e.g., great employer).
The affective component of reputation is the emotions that respondents have towards a company. Thus, people talk about these things with friends (word of mouth). Media coverage can also influence how we feel toward a company.[/su_box]
Based on an extensive literature review, Schwaiger (2004) proposed an approach to measuring corporate reputation. He tested it in a preliminary qualitative study. Out of these findings he developed a survey to test his measures with a data set. The findings suggest four indices that explain reputation, namely:

1. quality (e.g., product or service),
2. performance (e.g., has vision, well managed, performs well),
3. responsibility (e.g., sustainability, being a good corporate citizen), and
4. attractiveness (e.g., offices, buildings, as an employer).

The above can be used to explain reputation as measured with performance and sympathy toward the company. Your reputation precedes you. It significantly influences your chances of doing business with somebody.


Does company size matter?

Size definitely matters when it comes to brand. You might have a brand in your neck of the woods, but Coca-Cola or Nespresso are still in a different league; they are global. What about your brand? If your company employs fewer than 250 full-time employees (what the European Commission calls a small- and medium-sized enterprise or SME), you are unlikely to have a global brand.

Your resources will surely not allow you to splash your logo all over the place, so spending money on brand is hard to justify. Spending resources on keeping your clients happy while maintaining a good reputation, however, is a no-brainer (i.e. go for it). As Emil Heinrich points out, even an SME has a brand in the region where it does business. Hence, this might help recruitment within a radius of about 100 km.


Small shopkeepers do have a local brand.

Are consumer brands becoming less important?

That remains to be seen. Nevertheless, here are two industries with interesting trends.

Food: Craft versus Kraft

In a recent Financial Times article (March 17, 2015 – Craft versus Kraft), Gary Silverman discusses food business trends, in particular how Kraft or Campbell’s Soup are losing market share to small food producers (retrieved March 18, 2015 from http://www.ft.com/intl/cms/s/2/2a238422-c7e0-11e4-8210-00144feab7de.html).

There is a general disinterest in brands.

The millennial generation wants products that are low in salt, sugar or fat. These must also be free of artificial flavors and rich in protein or antioxidants. Older American consumers are more prone to obesity, heart disease and other maladies; the article insinuates that millennials do not want to follow the same path.

The article also points out:

“…how important it has become for food companies to tell consumers an interesting story, replete with details about their products’ ingredients and health benefits. Such narratives give brands the coveted — and elusive — quality of ‘authenticity’.”

[su_box title=”YES – food brands are becoming less important.” box_color=”#ff9900″ title_color=”#ffffff”] In the US, the companies that are winning the game for natural, organic, protein-rich and unprocessed food are quite small.

Accordingly, one’s reputation for being quality-conscious and accountable is increasingly important (remember the neighborhood shopkeeper).[/su_box]

Clothing: #DolceGabbana or #BrandyMelville

The Dolce & Gabbana label came under fire in 2007 for an ad that many felt depicted the gang rape of a woman. The ad was pulled soon after, but unfortunately, Domenico Dolce and Stefano Gabbana were accused of referring to people who were offended as ‘a bit backward’. Of course, belittling those who took offense is neither acceptable nor in good taste.


Dolce & Gabbana do it wrong – AGAIN!

The above image is from Kelly Cutrone’s tweet about the ad, posted on March 15, 2015. It got a lot of attention in the US, Canadian, UK and German media, partly because of an interview the two fashion icons gave Panorama, an Italian magazine.

According to Dolce & Gabbana, as stated in the printed interview, they believe in “la famiglia tradizionale, fatta di mamma, papà e figli” (the traditional family, made up of a mother, father and children). Of course, if one reads the interview more closely, it is clear that the two are referencing their own upbringing and Sicilian traditions in general. There, this family model is paramount.

What got people like Elton John and Madonna upset was that the fashion designers dared to voice some scepticism about in vitro fertilization and surrogate mothers. Whilst we may disagree, a democracy thrives on allowing people to state their opinions; castigating them afterwards on social media is an increasing – but worrisome – trend.

Of course we have to forgive Madonna. She is pushing her latest album Rebel Heart, which debuted earlier this month. Sales were lagging until Madonna posted this on Instagram.


Did Madonna “think before she wrote this Instagram post”? SURE – helping her latest album Rebel Heart to push up its lagging sales….

Domenico Dolce and Stefano Gabbana are also the guys who drew applause for sending a pregnant model down the runway as part of their tribute to mothers.

Similarly, some people got rather miffed earlier this year at Brandy Melville, a clothing brand that offers only size small. It clearly discriminates against people of other sizes. Of course, it is unlikely you will fit into a size small dress if you are over forty. I do not :-) Again, a social media backlash ensued. Questions about the viability of the brand continue (see DrKPI and #BrandyMelville). Can such a brand survive, or will it simply fade, as Abercrombie & Fitch seems to be doing?


Size Small does not fit all of us, does it?

[su_box title=”Dolce & Gabbana: Social media talk is cheap” box_color=”#ff9900″ title_color=”#ffffff”]Social media poses a substantial risk that opinions communicated by company officials (e.g., as spelled out in documents or stated during interviews) are taken out of context and spread widely.

Using Twitter and Facebook to share news is fine. But please Madonna and Elton John, check the facts before you share.

(Mr. Dolce: “I am gay. I cannot have a child… I am not convinced by what I call children of chemistry, or synthetic children. Uteruses for rent, sperm chosen from a catalogue.” – see Fashion’s ageing enfants terribles).

Finally, talk is cheap. As consumers, let our actions speak louder than words: Don’t buy!

By the way, negative press and social media coverage is better than none – see Benetton below. And here is a sucker’s bet: most of those who feel outraged or miffed today will likely be shopping for Dolce & Gabbana and Brandy Melville stuff again as early as next month! It is all so superficial…[/su_box]


More brands than Dolce & Gabbana or Brandy Melville have courted controversy: in 1994, Benetton used a fallen Bosnian soldier’s blood-stained, bullet-riddled uniform for an ad campaign.

Interesting read: Henry A. Giroux (2014). Benetton’s “World without Borders”: Buying Social Change

Source: Dolce & Gabbana: When reputation damages brand

Bottom Line


The above examples offer two insights as spelled out below.
[su_box title=”Brand versus Reputation” box_color=”#ff9900″ title_color=”#ffffff”]1 — Corporate brand reflects what the corporation aspires to be, while the personal brand reflects what I as an individual aspire to.
Reputation – the other side of the coin – is how people feel about the company or the person.
SMEs should focus on reputation, spending little on building a brand beyond their geographical territory.
Unfortunately, in practice brand and reputation are rarely, if ever, treated as separate BUT related constructs. This is a dangerous mistake to make.

2 — Corporate reputation is based almost exclusively on perceptions, not real knowledge. Hence, while managing corporate reputation is primarily a corporate communications task, that is not where it ends. Yes, doing good things and talking about them is great, but remember the goal.
To illustrate, companies sometimes appear to spend more money on advertising their good deed than providing money to the cause itself. Not really conducive to a good reputation…
Finally, if you don’t like a brand, its reputation or the owners’ behaviour, don’t just tweet about it, stop buying the product![/su_box]

What is your opinion?
Do you trust your clothing label’s reputation?
Do you care about your brand’s reputation when you shop?

We address 3 questions: 1. What data do we really need answers for? 2. Why is a sound methodology critical? 3. Do metrics that focus on small but useful improvements make sense?

With business analytics, the toughest challenge is collecting the data needed to answer the questions that must be answered. My emphasis here is on must-have answers, not merely desired answers!

This is the third post in a series of entries about big data. Others so far are:

– Facebook mood study: Why we should be worried!
– Secrets of analytics 1: UPS or Apple?

New techniques will not do

Often, we focus on predicting or forecasting the future. However, in management it is more important to understand the analytic HOWs and WHYs. These matter more than the promise of prediction. In the past we did not call things predictive analytics but forecasts instead. We used:

  • time series, as economists still often do, and
  • multivariate analysis (both part of what is called parametric statistics).

These days, we still use the above methods. However, new ones have come to the fore, such as:

  • k-means clustering, and
  • random graphs.


A random graph is obtained by randomly sampling from a collection of graphs.
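To make this concrete, the classic Erdős–Rényi G(n, p) model can be sketched in a few lines of Python, using only the standard library. The values of n, p and the seed below are illustrative, not taken from any particular study:

```python
import random

def erdos_renyi(n, p, seed=None):
    """G(n, p) random graph: each of the n*(n-1)/2 possible edges
    between n nodes is included independently with probability p."""
    rng = random.Random(seed)
    return [(i, j) for i in range(n) for j in range(i + 1, n) if rng.random() < p]

# As p grows from 0 to 1, the graph 'evolves' from empty to complete.
empty = erdos_renyi(10, 0.0)
complete = erdos_renyi(10, 1.0)
print(len(empty), len(complete))  # → 0 45
```

With 10 nodes there are 45 possible edges, so p = 0 yields none and p = 1 yields all of them; intermediate values of p produce the “evolution” the image refers to.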


Continue reading “Scottish referendum: A false sense of precision?” »

KEY INSIGHTS
Why little data mean a lot: Incremental innovation is key.
Google Trends shows a spike in searches – iPhone6: Remember the flu trends? Increased searches do not make something a fact…
Constant experimentation and rapid implementation: Strive for lots of small and frequent advances, because that is good enough.

We address three questions

1. What does it mean when Google Trends shows a spike in searches?
2. Should we aim for lots of small wins from ‘big data’ that add up to something big?
3. Do metrics that focus on small but useful improvements make sense?


Caution – things may not be as they appear. Check the methods.

1. ‘iPhone slow’ and Google Trends

There are three types of business analytics:

Descriptive analytics look at historical data,
Predictive analytics try to determine what might happen, and
Prescriptive analytics give us different options, from which we choose what suits us best, given time and money constraints.

The question remains whether we have the right data. To illustrate this challenge, consider Google Flu Trends (GFT). Using Google search results, GFT supposedly indicates how the flu spreads and affects people in various countries.

Continue reading “Data analytics: UPS or Apple?” »

Facebook Likes tell a lot about you, such as whether you drink beer, have sex regularly and are happy.

Facebook engaged in a large study to see if users’ emotional states could be affected by their news feed content.
Consent of Human Subjects: Subjects not asked for permission first.
Findings: Extremely small effects.
Research methodology: Poor algorithms used, questionable findings.

Key finding: A reduction in negative content in a person’s newsfeed on Facebook increased positive content in users’ posting behavior by about 1/15 of one percent!

We address 3 questions

1. Why did some of the checks and balances possibly fail?
2. Should we worry about the study’s findings?
3. What benefits do Facebook users get out of this study?

Non-techie description of study: News feed: ‘Emotional contagion’ sweeps Facebook

1. Some checks and balances failed

Following the spirit as well as the letter of the law is the key to successful compliance. In turn, any governance depends upon the participants doing their job thoroughly and carefully.

In this case, the academics thought this was an important subject that could be nicely studied with Facebook users. They may not have considered how much it might upset users and the media.

Cornell University has a procedure in place for getting approval for research with human subjects. As the image below illustrates, the researcher is expected to reflect on the project and, if in doubt, ask for help.

Why does the media not get the facts right about the Facebook study? #BigData

The university points out that it did not review the study. Specifically, it did not check whether it met university guidelines for doing research with human subjects. The reasons given were that its staff:

Continue reading “Facebook mood study: Playing with your mind” »