ROMI Analysis

Students often ask me, after completing an MBA in marketing, how a Business Analytics (BA) program is going to help them. Finance and marketing are two pillars of any business. Understanding customer behavioral patterns, predicting sales, and devising marketing and online strategy are among the focus areas of any BA program. Such a program helps you increase sales and reduce marketing costs through customer segmentation and well-targeted promotional activities.


A caselet to explain the concept:

ABC Inc.'s client engagement group is frequently engaged to help customers understand how their marketing efforts affect lead generation and sales. These requests come in many shapes and sizes but tend to coalesce around:

  • Which media are “moving the needle” and at what spending levels?
  • How do the different media work together?
  • How can I improve targeting for my direct marketing efforts?

The first two questions are typically answered through what we call a Return on Marketing Investment (ROMI) analysis. The third question focuses on understanding customer behavioral patterns, predicting sales, and devising marketing and online strategy.


The Solution:

What is a ROMI Analysis?

A Return on Marketing Investment (ROMI) analysis is a fairly new metric that helps organizations understand the effectiveness of their marketing spending. A ROMI analysis examines business results in relation to specific marketing activity. With online marketing becoming the primary source of lead generation and sales, it has become even more critical to analyze and streamline marketing expenses. The benefit of this knowledge is that it allows marketers to focus their spending on activities that provide the greatest return.


When would you use a ROMI Analysis?

The findings of ROMI analyses can help determine:

  • Which marketing activities are generating substantial leads, and which are redundant? (e.g., single-page flyers no longer perform well these days)
  • Which marketing areas provide substantial revenue but also require high levels of spending, and where should funds be reallocated? (e.g., for Warner Bros., a particular movie may require more promotional funds in Europe than in Asia)
  • Which external market conditions (e.g., customer spending capacity, which varies from city to city) affect marketing's ability to generate results? How does competitive activity impact the required level of marketing investment?
  • How should incremental funds be allocated?

A ROMI analysis, using statistical analysis and data mining tools and techniques, can uncover patterns in how and when customers purchase. These insights can be extremely valuable in predicting sales and formulating relevant marketing strategies.
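As a minimal illustration of this kind of pattern mining (assuming a hypothetical transactions.csv file with customer_id, order_date and amount columns; none of these names come from the caselet itself), a short Python sketch using pandas might look like this:

import pandas as pd

# Hypothetical transaction log: customer_id, order_date, amount
df = pd.read_csv("transactions.csv", parse_dates=["order_date"])

# When do customers buy? Aggregate revenue by day of the week.
by_weekday = df.groupby(df["order_date"].dt.day_name())["amount"].sum()

# Who spends the most? Split customers into quartiles by total spend.
spend_per_customer = df.groupby("customer_id")["amount"].sum()
segments = pd.qcut(spend_per_customer, q=4, labels=["low", "mid", "high", "top"])

print(by_weekday.sort_values(ascending=False))
print(segments.value_counts())

Patterns like these (peak buying days, high-value customer segments) are what feed the budget reallocation decisions discussed above.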

How Return on Marketing Investment (ROMI) Differs from Return on Investment (ROI)

Return on marketing investment (ROMI) is the contribution attributable to marketing (net of marketing spending), divided by the marketing ‘invested’ or risked. It is unlike other ‘return on investment’ metrics because marketing is not the same kind of investment: rather than being tied up in plant or inventory, marketing funds are typically spent, and therefore ‘risked’, in the current period.

ROI vs. ROMI

Return on investment (ROI) is a measure of the profit earned from an investment. It's typically expressed as a percentage, so multiply your result by 100. In simple terms, the calculation is:

ROI = (Return − Investment) / Investment × 100
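For example, a campaign that costs 10,000 and brings back 15,000 yields ROI = (15,000 − 10,000) / 10,000 × 100 = 50% (figures made up purely for illustration).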

 

ROMI

ROI calculations for marketing campaigns (return on marketing investment) can be complex — you may have many variables on both the profit (return) side and the investment (cost) side. The tricky part is determining what constitutes your "return" and what your true investment is. For example, different marketers might consider the following for return:

 

  • Total revenue generated for a campaign (the top line sales generated from the campaign)
  • Gross profit, or a gross profit estimate, which is revenue minus the cost of goods (COG) to produce/deliver a product or service. Many organizations simply apply the company's COG percentage (e.g., 30%) and deduct it from the total revenue.
  • Net profit, which is gross profit minus expenses.

On the investment side, it's easy for marketers to count only the media costs as the investment. But you should also include the other costs incurred to execute your campaign:

  • Creative costs
  • Distribution costs (such as PAYG email credits)
  • Printing costs
  • Technical costs (such as email platforms, website coding, hosting etc)
  • Management time
  • Cost of sales

 

Three Common and Proven ROMI Formulas:

1. Use gross profit for units sold in the campaign and the marketing investment for the campaign:

ROMI = (Gross Profit − Marketing Investment) / Marketing Investment

2. Use Customer Lifetime Value (CLV) instead of gross profit. CLV is a measure of the profit generated by a single customer or set of customers over their lifetime with your company:

ROMI = (Customer Lifetime Value − Marketing Investment) / Marketing Investment

3. Subtract overhead allocation and incremental expenses as well, to isolate the truly incremental return:

ROMI = (Profit − Marketing Investment − Overhead Allocation − Incremental Expenses) / Marketing Investment
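As a quick sketch of how these three formulas play out (all figures below are hypothetical, chosen only to make the arithmetic concrete), the calculation might look like this in Python:

# Illustrative campaign figures (hypothetical)
revenue = 50_000.0
cog_pct = 0.30                        # cost-of-goods percentage, e.g. 30%
gross_profit = revenue * (1 - cog_pct)

# True investment: media plus the other execution costs listed earlier
media, creative, distribution = 8_000.0, 1_500.0, 500.0
marketing_investment = media + creative + distribution

def romi(profit, investment):
    # (Profit - Marketing Investment) / Marketing Investment, as a ratio
    return (profit - investment) / investment

print(romi(gross_profit, marketing_investment))    # formula 1: gross profit
clv = 60_000.0                                     # hypothetical lifetime value
print(romi(clv, marketing_investment))             # formula 2: CLV
overhead, incremental = 2_000.0, 1_000.0
print(romi(gross_profit - overhead - incremental,
           marketing_investment))                  # formula 3: net of overhead

Note how the same campaign can look very different depending on which definition of return you choose, which is why that choice should be agreed upon before the campaign runs.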

How Analytics Can Transform the U.S. Retail Banking Sector

No matter how you slice it, banking is a data-heavy industry. But despite the proliferation of data, effective mining of insights has remained elusive. Given the tremendous advances in analytics software and the processing power generated by cloud-based utility computing architectures, the banking industry is ripe for change. As the industry works its way out of the financial crisis (amid continued uncertainty over the future), retail banks, in particular, must seriously consider using analytics to improve decision-making, uncover unseen innovation opportunities and improve compliance within the more stringent regulatory environment that is emerging through the Dodd-Frank Act and other impending mandates.

 

These regulations place a high priority on transparency and are pushing banks toward enterprise-wide data architectures. This will command a significant (and much-needed) move away from the siloed approach to computing that has defined banking since the dawn of the digital age, toward a more integrated model in which a single version of the truth is needed to drive business effectiveness and efficiency.

 

Such an approach will power the industry’s push to reinvigorate its relationship with customers. In today’s rapidly changing competitive landscape, regaining customer trust is a top priority for banks as they look to boost revenues and profitability to survive and thrive in uncertain times.

 

Following the economic crisis of 2007–2008, consumers have become more frugal. The age of conspicuous consumption has been replaced by needs-based pragmatic purchasing, a transformation that pundits interpret as a return to traditional American values. The personal savings rate, which had decreased dramatically in the 1990s, is now showing a small but steady rise.

 

Despite shrinking discretionary spending budgets, consumers (especially those in the millennial demographic) have eagerly adopted new technology, especially smartphones. They have also embraced social networks in large numbers, in some cases replacing expensive physical-world interactions with a free social variant.


Their rapidly evolving behavior and preferences cannot be ignored. For banks looking to boost their top lines, these channels offer a simple and powerful way to spread their gospel and build tighter relationships with customers.

 

At the center of this ongoing change is pervasive data — information that banks have possessed all along but never quite figured out how to exploit. Given that the quality and quantity of data varies greatly, banks need to prioritize the unique information they hold to accelerate time to insight.

 

By applying new analytical tools and service delivery methods, banks can more quickly convert data into knowledge to acquire market- and service-differentiating capabilities. Such an effort requires the backing of the organization's leaders and a cultural shift toward evidence-based decision making.

 

New regulations require banks to provide data that is predictive and risk based. This will require deployment of analytical tools on data aggregated from various business units. Reaching customers effectively via new channels and enhancing the multichannel banking experience will require continuous analysis of the structured customer data residing inside traditional databases and the unstructured bits of data created by customers via mobile phones and social media.

 

In our view, the winners in this unfolding scenario will be those financial institutions that realize the value of their data and capitalize on it by employing advanced analytics. We believe that banks should seek to achieve the following through their analytics deployment:

  • Predict future scenarios and enhance compliance.
  • Gain insights into what makes them unique and put this insight to use to gain a competitive edge.
  • Drive a customer-centric strategy and improve customer-focused activities.
  • Improve decision-making.
  • Enhance process efficiencies and operating margins by analyzing data to identify inefficiencies.
  • Leverage the emerging analytics-as-a-service model to better manage risk and tap three key resources: people, processes and infrastructure, bundled together to serve as a utility.

Ref: Cognizant Reports, August 2011

MBA vs. Business Analytics

“Every two days now we create as much information as we did from the dawn of civilization up until 2003” – Eric Schmidt (Executive Chairman, Google), Techonomy Conference – 2010

The turn of the millennium saw a paradigm shift in the engine of growth, sustenance and innovation. The advancements in computing systems, electronics and social media have transformed how decisions are taken in running a business. Information readily available over the internet has provided businesses with immense insights into consumer behavior and needs. A seemingly innocuous “like”, “tweet” or a “click” on a link becomes a binary code in a database that companies employ to provide targeted marketing, personalized services and better goods for the consumers.

The implications of the availability of data are, however, not limited to industries such as retail, consumer goods and advertising. They extend to telecommunications (improving services and customer experience), financial services (identifying stock trends), healthcare (formulating drugs), security systems (identifying crime-prone areas), automotive (developing robust mechanical systems), government, and energy (developing smarter electricity grids), among others. Moreover, the efficiency gains and financial impact of the timely and correct interpretation of data are estimated to improve the operating margins of companies by around 25 per cent, and this is increasingly turning out to be the chief differentiator between organizations.

We are currently in the zettabyte era (a zettabyte is approx. 10^12 GB), and a full 90 per cent of this data has been created over the last two years. Organizations face an immense challenge in sifting through these enormous volumes of data to identify and exploit meaningful relationships between seemingly uncorrelated data points. It is the role of a data scientist to filter the noise and identify the useful relationships that aid the organization in its daily and strategic decision making. The exponential increase in the availability of data and the dearth of qualified professionals to assimilate, scrutinize and derive meaning from it have created an opportunity like never before.

Main challenges with big data projects (share of respondents citing each):

  • Security – 51%
  • Budget – 47%
  • Lack of talent to implement big data – 41%
  • Lack of talent to run big data and analytics on an ongoing basis – 37%
  • Integration with existing systems – 35%
  • Procurement limitations on big data vendors – 33%
  • Enterprise not ready for big data – 27%

McKinsey Global Institute has estimated that by 2018 the US will face a shortage of 140,000 to 190,000 professionals with the deep analytical skills needed to fill the demand for Big Data jobs. Further, in a survey conducted by EMC, a leading US-based data management corporation, 31 per cent of respondents said that over the next five years the demand for data scientists will significantly outpace the supply. Additionally, a survey of IT leaders conducted by Accenture on the challenges of big data projects pegged the lack of talent to implement big data as their third-highest concern.

Major Information Technology companies in India have already identified the opportunity to offer these services and have begun building capacities for such an eventuality. The lack of professionals with the requisite qualifications has pushed up demand and subsequently the salaries.

[/cs_column]

For perspective, a recent survey pegged the average salary for an MBA graduate at around 300 thousand per annum, whereas a data scientist is estimated to earn a salary upward of 600 thousand per annum. A further analysis of MBA salaries across the popular domains of finance and marketing shows startling results, tabulated below.

[Salary comparison table – Source: Payscale.com]

 

Entry-level data scientists with skills that include SAS, SQL and R could potentially earn twice as much as a financial analyst with an MBA, the second-highest-paying entry-level job in the peer set used for comparison.

An analysis of incremental salaries over the career graphs of a data scientist, an MBA in marketing and an MBA in finance shows that a data scientist earns substantially more than his or her peers over the course of their careers.

[Salary progression chart – Source: Analytics India Magazine and Payscale.com]

Comparing the salary that an entry-level MBA receives with the cost of an MBA degree, which averages around 6.00 to 9.00 lakhs at a Tier-II B-school and 12.00 to 15.00 lakhs at a Tier-I B-school, the return on investment is abysmally low on average and completely unjustified.

Contrast that with a degree in data analytics, where the returns are far more substantial. Moreover, the skills developed during the Business Analytics course offered by Data Panacea transcend industry constraints and domain expertise, opening a plethora of opportunities for you.

With our robust course curriculum, superior teaching methods and experienced instructors, we at Data Panacea would like to help you develop the skill sets for a professionally fulfilling career as a data scientist and ride the wave into the Information Age.

BIG DATA & HADOOP

It is good to know that 73% of online adults now use a social networking site of some kind. In addition, Instagram users are nearly as likely as Facebook users to check in to the site on a daily basis. Want to know more? The list goes on:

E-mail:

  • 2.2 billion – Number of email users worldwide.
  • 61% – Share of emails that were considered non-essential.
  • 4.3 billion – Number of email clients worldwide in 2012.
  • 425 million – Number of active Gmail users globally, making it the leading email provider worldwide.

Social media:

  • 85,962 – Number of monthly posts by Facebook Pages in Brazil, making it the most active country on Facebook.
  • 47% – Percentage of Facebook users that are female. (Wooh!!)
  • 40.5 years – Average age of a Facebook user.
  • 200 million – Monthly active users on Twitter, a milestone passed in December 2012.
  • 37.3 years – Average age of a Twitter user.
  • 123 – Number of heads of state that have a Twitter account.
  • 44.2 years – Average age of a LinkedIn user.

Where does this information come from? Even more than the raw information, it is the insight that makes this information useful and immensely important to organizations. Every day, we create 2.5 quintillion bytes of data — so much that 90% of the data in the world today has been created in the last two years alone. This is big data.

Why Big Data?

Big data is a buzzword used to describe massive volumes of both structured and unstructured data that are difficult to process using traditional database and software techniques. It has become a staple of doing business in today's world. With colossal volumes of data, both structured and unstructured, flooding into every organization on a daily basis, proper management and meaningful insights have become a necessity. Wikipedia defines big data as data sets "so large and complex that they are difficult to process using traditional data processing applications."

Big data analytics examines these huge amounts of data to uncover hidden patterns, unknown correlations and other useful information. This helps companies strategize business decisions with the help of data scientists, who analyze chunks of data and extract meaningful insights that are quite often left untapped by conventional business intelligence programs. However, more than 70% of data all over the world is unstructured. As a result, a new class of big data technology has emerged and is being used in many big data analytics environments.

Apache Hadoop is an open source software framework that supports the processing of large data sets across clustered systems.

Origin of Hadoop

Hadoop is named after a stuffed toy elephant that belonged to a young boy! Is that all there is to Hadoop? The answer is definitely no!

In the 2000s, Google faced a serious challenge in handling the exploding volume of data coming from an ever-increasing number of websites. Google's engineers designed a new data processing infrastructure consisting of the Google File System (GFS), which provided fault-tolerant, reliable and scalable storage, and MapReduce, a data processing system that allowed work to be split among large numbers of servers and carried out in parallel.

In 2004, a well-known open source software developer named Doug Cutting used these techniques to rebuild his data collection and processing infrastructure on MapReduce, naming the new software Hadoop after the stuffed toy elephant that belonged to his young son. Three trends — the shift to scalable, elastic computing infrastructure; the capacity to handle the most complex and varied data; and the power of deciphering disparate data for comprehensive analysis — make Hadoop a critical new platform for data-driven enterprises.
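To make the MapReduce idea concrete, here is a minimal, self-contained Python sketch that simulates the map, shuffle and reduce phases of the classic word-count job (the sample lines are invented for illustration; on a real Hadoop cluster the map and reduce functions would run in parallel across many servers):

from itertools import groupby

def mapper(line):
    # Map phase: emit a (word, 1) pair for every word in the line.
    for word in line.split():
        yield (word, 1)

def reducer(word, counts):
    # Reduce phase: sum the counts collected for a single word.
    return (word, sum(counts))

lines = ["big data needs hadoop", "hadoop splits big jobs"]

# Shuffle phase: sort all mapped pairs so that equal keys sit together.
pairs = sorted(pair for line in lines for pair in mapper(line))

for word, group in groupby(pairs, key=lambda pair: pair[0]):
    print(reducer(word, (count for _, count in group)))

Because each mapper call and each reducer call touches only its own slice of the data, the same logic distributes naturally over thousands of machines, which is precisely what Hadoop manages for you.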

Why Hadoop?

Since Hadoop scales linearly on low-cost commodity hardware, it removes the limitations of storage and compute from the data analytics equation. Instead of pre-optimizing data in the traditional ETL, data warehouse and BI architecture, Hadoop stores all of the raw data and applies whatever transformations and analytics are needed on demand. The platform is now used to support an enormous variety of applications, with three key properties.

Hadoop is a single, consolidated storage platform for all kinds of data. It complements the numerous file storage products available in the market today by delivering a new repository where structured data and complex data may be combined easily. Hadoop is also an excellent alternative to the redundant and time-consuming ERP systems that organizations use to store huge volumes of data.

Being open source software, Hadoop provides more storage at a much lower cost. One of its cost advantages is that, because it relies on an internally redundant data structure and is deployed on industry-standard servers rather than expensive specialized data storage systems, you can afford to store data that it was not previously viable to keep.

Hadoop can consolidate all data types on a low-cost, reliable storage platform that delivers fast parallel execution of powerful analytical algorithms. Hadoop offers data-driven organizations ways to exploit data that they have simply never had before.

Hadoop has become one of the most widely accepted data storage and processing hubs because it has been able to overcome various bottlenecks of traditional analytical solutions.

Career as a Data Scientist

Job opportunities for data scientists and Hadoop specialists are emerging across industries, from web companies and e-retailers to financial services, healthcare, energy, utilities and media. A big data scientist is a business employee who is responsible for handling and statistically evaluating large amounts of data. The success of a big data scientist lies in an impactful and comprehensible presentation of the bulk data he or she works upon. A data scientist must have a set of technical skills like Hadoop; visualization skills like PowerPoint, Excel and Tableau; and business domain expertise for one's workplace, including an understanding of business needs and a knowledge of risk analysis.

Career Prospects with Hadoop

Hadoop is mentioned in 612 of 83,122 job listings on Dice.com. Among the companies looking to hire Hadoop software engineers and big data scientists are AT&T Interactive, Sears, PayPal, AOL and Deloitte. Hadoop “is an emerging skill,” says Alice Hill, managing director of Dice.com. Hill says Hadoop is also a good skill for IT professionals with relational database management experience to pursue. “If you really understand data structure and queries, there’s going to be a lot of job opportunities,” she adds.