
Data capital, levelling the playing field for Goliath and David

Issue: Asia-Pacific III 2015
Article no.: 10
Topic: Data capital, levelling the playing field for Goliath and David
Author: Chris Chelliah
Title: Group VP & Chief Architect, Core Technology & Cloud, APAC
Organisation: Oracle

About author

Chris Chelliah is group VP and chief architect for core technology and cloud for Oracle Asia Pacific. He has more than 22 years of experience in the information systems industry, of which 20 years are with Oracle in Asia Pacific, Europe, and North America.

Chelliah is accountable for the core technology (database) license business for Oracle Asia Pacific. Oracle is the #1 market-share leader in this space, and he is responsible for growing the business through the adoption of Oracle’s emerging innovations: in-memory, multitenancy, software in silicon, database security, data-lake integration, unstructured data (NoSQL, Hadoop), and more. He is also responsible for growing the adoption of Oracle’s platform-as-a-service (PaaS) cloud offering throughout the region.

Chelliah has held other roles within the company, where he focused on technology consulting and implementation services in large, global projects across the telecommunications, financial services, and government sectors. In these roles, he was instrumental in meeting the business needs of customers through innovative software solutions.

Chelliah holds a degree in computer and mathematical sciences, with first-class honors, from the University of Western Australia. He has been published in a number of trade and industry journals and is the co-author of Professional Oracle Programming, a Wiley publication. He is a featured speaker on Oracle’s vision, strategy, and architecture and consults on all critical and key customer projects throughout Asia Pacific.

Article abstract

The term ‘data capital’ isn’t being used as a metaphor. As my colleague Paul Sonderegger, Oracle’s Big Data Strategist, describes it: “Data is, in fact, a new kind of capital, on par with financial capital for creating new products and services. What we are seeing is that data actually fulfils the textbook definition of capital, in that it is both produced by a process and then is a necessary factor of production for goods and services, without which they cannot occur.” In effect, you use data to make data; it becomes a virtuous cycle, and as a result it becomes very hard for newcomers to catch up.

Full Article

There is no doubt that the move to big data decision-making is inevitable and compelling. Data is an essential resource in the modern economy. Indeed, according to Gartner, by 2020 information will be used to reinvent, digitalize or eliminate 80 per cent of business processes and products from a decade earlier.
Already, data-centric decision-making sits behind many facets of our everyday lives, driving the patterns of traffic-light signals and trading systems, and increasingly sitting at the heart of new, disruptive companies like Amazon, Pandora Internet Radio and Uber.
The challenge many ‘ordinary’ companies face in seeking to embrace similar practices is that ‘Big Data’ is not a discrete technology or prescribed solution set, and is therefore less easily understood and harnessed. Big data is better seen as a phenomenon: while in simple terms it describes the capture and use of data in daily activities, behind this description lies a much larger economic story, around the rise of data capital.
Data capital: what is it and why is it important?
The term ‘data capital’ isn’t being used as a metaphor. As my colleague Paul Sonderegger, Oracle’s Big Data Strategist, describes it: “Data is, in fact, a new kind of capital, on par with financial capital for creating new products and services. What we are seeing is that data actually fulfils the textbook definition of capital, in that it is both produced by a process and then is a necessary factor of production for goods and services, without which they cannot occur.” In effect, you use data to make data; it becomes a virtuous cycle, and as a result it becomes very hard for newcomers to catch up.
Take Uber as an example. Uber is currently the poster child for disruption, having upended the entire taxi industry worldwide. It has gone from nothing, six years ago, to being worth US$51 billion as of the start of August 2015. A key factor in its success is the dynamic algorithm behind its ‘surge pricing’ model, which balances supply and demand. Its goal is to raise prices during peak demand to a point that makes drivers want to come out and work, but not so high that the service turns users away.
Running this type of algorithm, and getting that delicate balance right, needs both current and historical data. Past information on performance to date from its operations around the world is continuously improved by the feed of new data. A newcomer seeking to challenge Uber, or any company operating this type of business model, would literally need to go back in time to collect all the data it has missed.
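To make the mechanics concrete, here is a minimal sketch of the kind of calculation involved. To be clear, this is not Uber’s actual algorithm: the function, inputs and parameters below are all hypothetical, illustrating how a live demand/supply ratio might be blended with a historical norm so that today’s data feeds tomorrow’s pricing.

```python
# Illustrative sketch only -- not Uber's actual algorithm.
# A surge multiplier rises as open ride requests outstrip available
# drivers, and is damped by historical data so prices do not spike
# on momentary noise. All names and parameters here are hypothetical.

def surge_multiplier(open_requests: int,
                     available_drivers: int,
                     historical_ratio: float,
                     smoothing: float = 0.7,
                     cap: float = 3.0) -> float:
    """Return a price multiplier >= 1.0 for the current zone."""
    if available_drivers == 0:
        return cap  # no supply at all: charge the maximum allowed
    current_ratio = open_requests / available_drivers
    # Blend the live demand/supply ratio with the historical norm:
    # the model "uses data to make data", since today's outcome
    # becomes part of tomorrow's historical_ratio.
    blended = smoothing * current_ratio + (1 - smoothing) * historical_ratio
    return max(1.0, min(cap, blended))

# Example: 120 open requests, 40 drivers, historical norm of 1.1
print(surge_multiplier(120, 40, 1.1))  # ~2.4x
```

The point of the historical term is precisely the moat described above: a rival can copy the formula in an afternoon, but not the years of accumulated ratios that make its output trustworthy.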
That is the power of data capital at work: it both boosts and disrupts. However, it doesn’t always disrupt in the way you expect it to.
For example, look at Amazon, which now accounts for one in every four book sales. While it has disrupted booksellers by taking share from the likes of Waterstones and Borders, its impact goes far beyond the high street: behind the scenes it has changed the whole basis of negotiating power in the publishing industry.
Similarly, Amazon’s move into selling groceries and fast-moving consumer goods isn’t just disrupting the retail store itself; it is affecting supplier discount negotiations and, as importantly, who gets and ‘owns’ the customer data, and therefore the relationship.
Defending the customer relationship
Increasingly, it is the customer relationship that really matters. IT, through a combination of big data, cloud and mobile technologies, is changing the customer experience irrevocably. Impatience is the order of the day. We now expect personalised, faster, 24/7 interactions with companies, and if service levels don’t meet those expectations, we are very quick to take our business elsewhere.
So does this mean a corporate apocalypse and the demise of the world’s largest enterprises? No. Data capital disrupts, but it also defends: while new entrants can outmanoeuvre established organisations, most often David doesn’t beat Goliath. Just as often, the incumbent can use data to create barriers to entry.
This is because existing businesses have an advantage in the large volumes of historical customer information and transactional data they hold. The biggest challenge they face is the need to digitise, or datafy, all of this before their rivals do.
Datafication: the key to creating unique data capital
For a start, companies need to get genuinely excited about data and see the opportunities that harnessing it will afford. However, this excitement needs to be tempered with a good measure of practicality to ensure the effort delivers real benefits.
This is particularly critical given that there’s not just one way to do big data. As I’ve already said, ‘Big Data’ is not a discrete technology or prescribed solution set. On top of that, enterprise IT environments are already complex, with a range of applications and systems from a variety of vendors, and often a mix of on-premises and cloud systems. And as IDC points out in a recent white paper, most organisations already have multiple data warehouses, data marts, data caches and operational data stores, which they are struggling to get value from.
To capitalise on and datafy the unprecedented amount and variety of information flowing into an organisation today, a holistic approach to information architecture is required. This should start with strategy and the desired business outcome, and then lay out the information architecture vision and the technology blueprint accordingly.
There are a number of additional reasons why this structured approach is particularly important.
IT is no longer the gatekeeper. As a recent Gartner report highlighted, initiatives increasingly originate from financial, marketing and other business-unit leaders, who are pressuring CIOs to collaborate with them to make sure the technology aligns with the company’s strategy.
Furthermore, cloud applications are making it easier than ever for line-of-business teams to launch technology initiatives specifically geared to their area, and to accumulate huge amounts of data, without IT’s involvement or the need for massive infrastructure investments. IDC recently reported that over the next five years spending on cloud-based big data and analytics (BDA) solutions will grow three times faster than spending on on-premises solutions, so this situation could get out of control. And as hybrid on-premises/cloud deployments become a requirement, closely coupled integration will be key, which in turn makes tight IT involvement a prerequisite.
Another critical component of any successful big data deployment is establishing data governance processes and capabilities, and ensuring compliance with security standards and regulations throughout the company.
Given that there is little end in sight to the constraints many companies face around the availability of budget and skills, the decision on whether to buy or build is key. As Hadoop is an open-source technology, currently available for free, that can run on low-cost equipment, the temptation to do it yourself is understandable. However, the decision should be taken in the context of the strategic information architecture and whether a DIY approach will deliver the desired business outcomes.
A whitepaper commissioned by Oracle and prepared by the Enterprise Strategy Group, Inc. found that, based on its validation of an Oracle model for a medium-sized Hadoop-oriented big data project, a ‘buy’ infrastructure option like Oracle Big Data Appliance will yield approximately 21 per cent lower costs than an equivalent ‘build’ do-it-yourself infrastructure. Using a preconfigured appliance can also greatly reduce the complexity of engaging staff from many IT disciplines for extensive platform evaluation, testing, development and integration.
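As a back-of-envelope illustration of what that differential means in practice, the sketch below applies the 21 per cent figure to a hypothetical project budget; only the percentage comes from the whitepaper, and the dollar amounts are invented for the example.

```python
# Back-of-envelope illustration of the ~21% 'buy vs build' saving
# cited above. The dollar figures are hypothetical; only the 21%
# differential comes from the ESG whitepaper.

BUY_DISCOUNT = 0.21  # ESG-validated saving for the 'buy' option

def buy_cost(build_cost: float) -> float:
    """Estimated cost of a preconfigured-appliance deployment,
    given the cost of an equivalent do-it-yourself build."""
    return build_cost * (1 - BUY_DISCOUNT)

build = 1_000_000  # hypothetical DIY infrastructure cost (USD)
print(f"build: ${build:,.0f}  buy: ${buy_cost(build):,.0f}  "
      f"saving: ${build - buy_cost(build):,.0f}")
# build: $1,000,000  buy: $790,000  saving: $210,000
```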
Additionally, when most established organisations think about data, they focus on the transactional data they produce, the system-of-record material, which they are typically good at working with and analysing. The reality is that there is a whole range of other data out there that isn’t necessarily generated by the organisation, that isn’t highly structured, and that could offer real value and insight.
Companies need to start thinking broadly about any information that could add value, about the technologies and techniques they could bring to bear on it, and about where they can source it. In fact, 70 per cent of large organisations already purchase external data, and 100 per cent will do so by 2019, according to IDC predictions reported by Information Management.
Addressing any of these issues outside the context of the existing IT environment will limit an organisation’s ability to maximise its data capital in the long run. In short, the David and Goliath battle will be won by those CIOs who make the best use of their data capital by digitising and datafying key activities with customers, suppliers and partners before their rivals do.
For more information on Oracle and big data, visit https://www.oracle.com/big-data/index.html, or stop by our stand at Strata + Hadoop in Singapore.
