Posts tagged ‘analytics’
We’re back with the fourth post in our series on how to get value from your data, including how to ensure that new “data” and “analytics” products are designed for successful delivery to new and existing customers.
In the previous posts in this series, we discussed our methodology and what is required in terms of understanding your target customer—who they are and what they need—as well as making sure you have the right team in place to work on the project. In this post, we are going to discuss how you build your data ecosystem:
- What is needed to ensure that data processes will support the new product(s)?
- How do you identify appropriate data partners and enhancements?
- What privacy- and security-related issues must you be aware of and address?
Unless you’ve been asleep for the past couple of years, you, like us, have heard this phrase again and again: Data is the new oil. It certainly sounds great, but what exactly does it mean? Here’s our take: Getting the most value out of your data can make you better at what you do as well as enable you to do more with what you have. In other words, there’s unrealized value in those data silos that all companies have. But make no mistake: the road to realizing data value is paved with good intentions and, oftentimes, poor execution and results.
Today, most companies are drowning in data—there’s historical data from operations, data from public sources, data from partners and acquisitions, data you can purchase from data brokers, etc. These companies have read all the research and want to leverage their data assets to make “better” operational decisions, to offer their existing customer base more insights, and to pursue new revenue opportunities. Of course, the real value in that data is derived from the business analytics that deliver the insights that drive better decisions. As we’ve said quite often on this blog: Data, without the proper use of analytics, is meaningless. If data is the new oil, think of analytics as the oil drills—you need both to be successful.
A Sneak Peek at Our New HTML5 UI and Geek Love for Some of the Libraries Used in Building AnalyticsPBI for Azure
Drumroll please! After nearly a year of development work, we are about to offer early access to the first real-time/streaming analytics software appliance for the cloud – AnalyticsPBI for Azure. There will be more forthcoming on the product launch, but the new UI is so cool I had to show it off a bit.
We will be following up with a formal launch and Early Access Program (EAP) signups in the next couple of weeks, so watch this space and patternbuilders.com for details – the big data analytics market is about to change in a big way! Here’s a sneak peek at what we’ve been working on.
For the geek part of my post, I am going to give a shout-out to three libraries that we are using – all have made a huge difference in the product’s performance, scalability, and usability. The first two libraries come from Microsoft: Reactive Extensions and TPL Dataflow. The third is the open source math and statistics library, Math.NET.
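To give a flavor of what these libraries do for us: Reactive Extensions and TPL Dataflow let you express push-based, windowed computations over live streams declaratively. Here is a minimal plain-Python sketch of that pattern (a sliding-window moving average over an incoming feed); the function name and sample values are invented for illustration, and the real libraries add composition, scheduling, and concurrency on top of this idea.

```python
from collections import deque

def sliding_mean(stream, window=3):
    """Yield the mean of the last `window` values seen so far.

    A plain-Python sketch of the push-based, windowed computation
    that libraries like Reactive Extensions and TPL Dataflow make
    declarative and concurrent.
    """
    buf = deque(maxlen=window)  # old values fall off automatically
    for value in stream:
        buf.append(value)
        yield sum(buf) / len(buf)

# Example: a (pretend) real-time feed of measurements.
ticks = [10.0, 12.0, 11.0, 13.0, 15.0]
means = list(sliding_mean(ticks, window=3))
# Each output reflects only the most recent three values.
```

In a streaming system the input would be an unbounded feed rather than a list, which is why the sketch is written as a generator: each new value produces an updated result immediately instead of waiting for a batch to complete.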
A top-level view of our data project over a series of posts.
By Mary Ludloff
Welcome to the third post in our series on a big data project. Our goal is to walk you all the way through a big data project from its inception through its completion (or, depending on the project, through deployment and maintenance). Those of you familiar with our series know that we include our Big Data Playbook rules as we address specific topics—we may repeat some as we go along, but if you need to refresh your memory on where we are, go to Part 1 and Part 2.
You now know that we are working with the University of Sydney on a project that looks at the impact social media comments have on a company’s stock price and whether this mediates the influence of primary news. Specifically: Is a company’s stock price influenced by both, and can we isolate and study the impact of those distinct sources on that price?
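The question above is a classic mediation design: does the effect of news (X) on price (Y) flow partly through social media buzz (M)? A minimal sketch of the standard approach, using entirely invented synthetic data and ordinary least squares, looks like this; the variable names and effect sizes are illustrative assumptions, not the project's actual model or data.

```python
import numpy as np

# Synthetic illustration only: news coverage (X) drives social-media
# buzz (M), and both feed into stock-price movement (Y).
rng = np.random.default_rng(0)
n = 500
news = rng.normal(size=n)                               # X
social = 0.8 * news + rng.normal(size=n)                # M
price = 0.5 * social + 0.1 * news + rng.normal(size=n)  # Y

def ols_coefs(X_cols, y):
    """OLS coefficients (intercept first) via least squares."""
    A = np.column_stack([np.ones(len(y))] + X_cols)
    return np.linalg.lstsq(A, y, rcond=None)[0]

total_effect = ols_coefs([news], price)[1]           # Y ~ X
direct_effect = ols_coefs([news, social], price)[1]  # Y ~ X + M

# If social media mediates, the direct effect of news shrinks
# toward zero once social-media buzz is controlled for.
```

Comparing `total_effect` and `direct_effect` is the core of the analysis: a large drop when the mediator is included is evidence that social media carries part of the news effect, which is exactly the isolation question the project asks.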
Marilyn Craig (Managing Director of Insight Voices, frequent guest blogger, marketing colleague, and analytics guru) and I have been watching the big data “V” pile-on with a bit of bemusement lately. We started with the classic 3 V’s, codified in early 2001 (yes, that’s correct, 2001) by Doug Laney, a META Group analyst who is now at Gartner. Doug puts it this way:
“In the late 1990s, while a META Group analyst (Note: META is now part of Gartner), it was becoming evident that our clients increasingly were encumbered by their data assets. While many pundits were talking about, many clients were lamenting, and many vendors were seizing the opportunity of these fast-growing data stores, I also realized that something else was going on. Sea changes in the speed at which data was flowing mainly due to electronic commerce, along with the increasing breadth of data sources, structures and formats due to the post Y2K-ERP application boom were as or more challenging to data management teams than was the increasing quantity of data.”
Doug worked with clients on these issues and spoke about them at industry conferences. He then wrote a research note (February 2001) entitled “3-D Data Management: Controlling Data Volume, Velocity and Variety,” which is available in its entirety here (PDF too).
I had to miss Strata due to a family emergency. While Mary picked up the slack for me at our privacy session, and by all reports did her usual outstanding job, I also had to cancel a Tuesday night Strata session sponsored by 10Gen on how PatternBuilders has used Mongo and Azure to create a next-generation big data analytics system. The good news is that I should have some time to catch up on my writing this week, so look for a version of what would have been my 10Gen talk shortly. In the meantime, to get me back in the groove, here is a very short post inspired by a Forbes post written by Dan Everett of SAP on “Hadoopla.”
As the CEO of a real-time big data analytics company that occasionally competes with parts of the Hadoop ecosystem, I may have some biases (you think?). But I certainly agree that there is too much Hadoopla (a great term). If our goal as an industry is to move big data out of the lab and into mainstream use by anyone other than the companies that thrive on, and have the staff to support, high-maintenance, very-high-skill technologies, Hadoop is not the answer – it has too many moving parts and is simply too complex.
To quote from a blog post I wrote a year ago:
“Hadoop is a nifty technology that offers one of the best distributed batch processing frameworks available, although there are other very good ones that don’t get nearly as much press, including Condor and Globus. All of these systems fit broadly into the High Performance, Parallel, or Grid computing categories and all have been or are currently used to perform analytics on large data sets (as well as other types of problems that can benefit from bringing the power of multiple computers to bear on a problem). The SETI project is probably the most well known (and IMHO, the coolest) application of these technologies outside of that little company in Mountain View indexing the Internet. But just because a system can be used for analytics doesn’t make it an analytics system…”
Why is the industry so focused on Hadoop? Given the huge amount of venture capital that has been poured into various members of the Hadoop ecosystem and that ecosystem’s failure to find a breakout business model that isn’t hampered by Hadoop’s intrinsic complexity, there is ample incentive for a lot of very savvy folks to attempt to market around these limitations. But no amount of marketing can change the fact that Hadoop is a tool for companies with elite programmers and top-of-the-line computing infrastructures. And in that niche, it excels. But it was not designed for, and in my opinion will never see, broad adoption outside of that niche, despite the seemingly endless growth of Hadoopla.
Let me tell you a little secret: I always know when I am talking (and working) with a company that has successfully launched big data initiatives. There are three characteristics that these companies share:
- A C-level executive runs the “[big] data operations.”
- The Chief Data Officer (even if they are the CIO) has a heavy business/operations background.
- The data team is focused on the “business,” not the data.
Did you notice that technology and data science are not reflected in any of the characteristics? Some of you may consider this sacrilege—after all, we are operating in a world where technology (and I happily work for one of those companies) has changed the data collection, usage, and analysis game. Colleges and universities are now offering master’s degrees in analytics. The role of the data scientist has been pretty much deified (I refer you to Part 1 of this series). And we all need to be very worried about the “talent shortage” and our ability to recruit the “right analytical team” (I refer you to Part 2 of this series).
Yes—technology has had a tremendous impact on how much data we can collect and the ways in which we can analyze it, but not everyone needs to be a senior computer programmer. Yes—we all should strive to be more mathematically inclined, but not all of us need a master’s or PhD in statistics or analytics. Yes—some companies, based on their business models, may have a staff of data scientists, but others may get along just fine without one (with the occasional analytics consultant lending a hand).