Archive by Author

The more things change, the more they stay the same…

16 Jun

Very amusing article – totally agree that the addition of a CDO role, if too broadly defined, begins to confuse things. Is the CDO part of an integrated strategy to move things ahead, or a tactical band-aid response to an organizational problem?

 

Self Service BI

29 May

Good article on Self Service BI. The term has been around a while, but never seems to get old.

Interesting thought process to identify analytical approaches

29 Jan

Courtesy of a colleague in the medical data management world – check out this graphic. It is missing a few approaches, but lays out the thought process well.

Machine Learning – Cheat Sheet

The Booz Allen Field Guide to Data Science has a similar linkage that is useful. That book can be downloaded here.

While I am at it, I found a good book on managing research data: Managing Research Data, edited by Graham Pryor. I continue to be surprised at the approaches taken by “traditional” data management folks to feed the analytical processes. The old-school way of dealing with analytics data did not work well, which has created some of the organizational workarounds that exist in companies. This only gets worse when dealing with large amounts of data, and with data that must work across systems and sources.

Interesting observations on the healthcare system implementation

9 Dec

Many thanks to http://www.bespacific.com/ for forwarding this post. I often get funny looks when I tell people that the expected outcome may not be the desired outcome. With one’s analyst hat on, it is easy to say this – one has a hypothesis, and one tests it. If the hypothesis proves false, then we have identified a place not to go, or a refinement in thinking. For an analyst, “failure” (as defined below) is an option. For program managers, it must be an option, but one that is very hard to manage – generally the inability to address this issue starts at the top and is framed within the culture of the organization.

From a project management perspective, one would think it is an option – identified as a “risk” in PMP-speak, and addressed and managed as such. We will not know the details of what happened for a while, but the article below sheds some light.

HealthCare.gov and the Gulf Between Planning and Reality
By Irving Wladawsky-Berger
Guest Contributor, WSJ

It’s way too early to know what really happened with the botched launch of HealthCare.gov.  We don’t know how it will all play out in years to come and what its impact will be on the evolution of the Affordable Care Act, on election results over the next few years, or on President Obama’s legacy. Depending on how it all turns out over time, this will be just a chapter in future books on the history of the ACA and the Obama administration, or the subject of major books and investigative reports.

 Most everyone who’s been involved with the development of complex IT systems knows how wrong things can sometimes go.  So, when serious problems do happen, we are eager to learn the lessons that might help us avoid similar problems in the future. It’s quite possible that HealthCare.gov and the ACA’s overall IT system are such complex outliers–technically, organizationally and politically–that any lessons learned might apply to few other projects.  But, given the increasing complexity of private and public sector IT systems, the lessons are worth thinking about.

I like the way Clay Shirky, NYU faculty member as well as author and consultant, framed the problem in a very interesting blog post, Healthcare.gov and the Gulf Between Planning and Reality.  He writes about the gulf between those charged with planning the overall rollout of the ACA and health care exchanges and the realities of trying to get such a complex system designed, built and launched in a short amount of time. It’s essentially a tale of “failure is not an option” versus the messy world of highly complex IT systems. While the post is focused on the launch of HealthCare.gov, it can also be read as a more general discussion of the kinds of problems often encountered with highly complex IT-based projects when a management decision to win a deal at all costs comes back to haunt the implementation of the project.

“For the first couple of weeks after the launch, I assumed any difficulties in the Federal insurance market were caused by unexpected early interest, and that once the initial crush ebbed, all would be well,” he writes.  “The sinking feeling that all would not be well started with this disillusioning paragraph about what had happened when a staff member at the Centers for Medicare & Medicaid Services [CMS], the department responsible for Healthcare.gov, warned about difficulties with the site back in March.”

The paragraph responsible for Mr. Shirky’s sinking feeling was part of an October 12 NY Times article, From the Start, Signs of Trouble at Health Portal. According to the article, the warnings came from CMS deputy CIO Henry Chao, the chief digital architect for the new online insurance marketplace. In response, his superior told him:

 “. . . in effect, that failure was not an option, according to people who have spoken with him. Nor was rolling out the system in stages or on a smaller scale, as companies like Google typically do so that problems can more easily and quietly be fixed. Former government officials say the White House, which was calling the shots, feared that any backtracking would further embolden Republican critics who were trying to repeal the health care law.”

“The idea that failure is not an option is a fantasy version of how non-engineers should motivate engineers,” adds Mr. Shirky. “Failure is always an option. Engineers work as hard as they do because they understand the risk of failure.” In his opinion, neither technology, talent, budgets nor the government’s bureaucratic processes are the main culprits here.  Rather, this is a management and a cultural problem.  As a result of the huge political pressures they were under, top administration officials did not feel that they could seriously address the possibility that things might go wrong.

 Other articles paint a similar picture, such as this recent one in the WSJ’s CIO Journal:

 “It was on a cold, sunny day in Baltimore last January that Curt Kwak, chief information officer of the Washington Health Benefit Exchange, first realized that the signature feature of President Obama’s Affordable Care Act could be in trouble.  That day, at a status review meeting of CIOs of state health exchanges, he learned that many of his peers were far behind where they should have been.  According to Mr. Kwak, several of his peers hadn’t yet selected a systems integrator – tech vendors who play crucial roles in fitting together the multiple components of health insurance exchanges that allow consumers to select and enroll in health plans.”

Why did the administration, as well as several states, wait so long to start the planning of the ACA system including the health care exchanges?  Ezekiel Emanuel – oncologist, vice provost and professor at the University of Pennsylvania and former White House advisor on health policy – said in a good article on the subject that the administration did not want to release detailed regulations and specifications on the exchange while in the middle of the 2012 election campaign, in order to avoid political controversies. “This may have been a smart political move in the short term, but it left the administration scrambling to get the IT infrastructure together in time, robbing it of an opportunity to adequately consult with independent experts, test the site and fix any problems before it opened to the public.”

But then came the reality, which Mr. Shirky describes as the painful tradeoff between features, quality and time.

 “When a project cannot meet all three goals–a situation Healthcare.gov was clearly in by March–something will give.  If you want certain features at a certain level of quality, you’d better be able to move the deadline. If you want overall quality by a certain deadline, you’d better be able to simplify, delay, or drop features.  And if you have a fixed feature list and deadline, quality will suffer. . . You can slip deadlines, reduce features, or, as a last resort, just launch and see what breaks. . . That just happened to this administration’s signature policy goal.”

The inability of a troubled project to meet all three goals simultaneously almost feels like the complex systems equivalent of the Heisenberg uncertainty principle; that is, it’s impossible to determine the exact position and velocity of an atomic particle simultaneously, no matter how good your measurement tools are. While this is clearly not a scientific principle, but rather a set of guidelines based on decades of experience, there seem to be intrinsic limits to our ability to fix troubled IT projects no matter how hard we try.

In The Mythical Man-Month, noted computer scientist and software engineer Fred Brooks introduced one of the most important concepts in complex IT systems: adding manpower to a late software project makes it later. Brooks’ Law, as his concept became known, remains as true today as when it was first formulated almost 40 years ago.

Over the years, we have learned that there are limits to our ability to plan complex IT projects fully in advance. You need a good design, architecture and overall project plan, but you also need the flexibility to learn as you go and make trade-offs as appropriate. Most such projects are therefore released in stages, with alpha and beta phases that start testing the system with a select and relatively small number of users. Such early testing uncovers not only software bugs, but also design flaws that users have trouble with.

 Another important lesson is that all parties involved in a complex, high-risk project must have a good working relationship. All available information on the status of the project should be shared, so there are few last-minute surprises. Tradeoff decisions and project adjustments should involve all key members of the team. Behind most seriously troubled projects lies not only a gulf between planning and reality, but a lack of the close collaboration and overall good will necessary to make the project succeed.

 It’s hard to imagine a more politically contentious project than the ACA.  The administration was worried that any glitches uncovered while testing the system as part of the usual staged release cycle would give further ammunition to those trying to kill the ACA altogether. They may have felt that slipping deadlines and reducing features prior to the October 1 launch was not politically feasible, and that they therefore had no choice but to launch anyway and hope for the best.  Did they make the right decisions?  We’ll find out in the fullness of time.

 Irving Wladawsky-Berger is a former vice-president of technical strategy and innovation at IBM. He is a strategic advisor to Citigroup and is a regular contributor to CIO Journal.

 http://blogs.wsj.com/cio/2013/12/06/healthcare-gov-and-the-gulf-between-planning-and-reality/?mod=wsj_nview_latest

The different aspects of BI

5 Dec

http://www.martinsights.com/?p=774

I like the recognition that approaches need to be integrated in order to create useful insights. Valuable insights come from balancing the needs and capabilities of business strategy, business analysis, business intelligence and advanced analytics.

Big Data and Marketing – Geoffrey Moore

15 Nov

Geoffrey Moore does a good job of explaining big data and marketing – always a cogent explanation of things.

The addition of analytical functions to databases

14 Nov

The trend has been for database vendors to integrate analytical functions into their products, thereby moving the analytics closer to the data (versus moving the data to the analytics). There are interesting comments on this in the article below, from Curt Monash’s excellent blog.

What was interesting to me was not the central premise of the story, that Curt does not “think [Teradata’s] library of pre-built analytic packages has been a big success”, but rather the BI vendors that are reportedly planning to integrate with those libraries: Tableau, TIBCO Spotfire, and Alteryx. These are the rapid risers in the space, who have risen to prominence on the basis of data visualization and ease of use – not on the basis of their statistical analytics or big data prowess.

Tableau and Spotfire specifically focused on ease of use and visualization as an extension of Excel spreadsheets. They have more recently started to market themselves as being able to deal with “big data” (i.e. being Hadoop buzzword compliant). With integration into a Teradata stack, and presumably front-end functionality built on some of these back-end capabilities, one might expect to see some interesting features. TIBCO actually acquired an analytics company. Are they finally going to integrate the lot on top of a database? I have said it before, and I will say it again: TIBCO has the ESB (Enterprise Service Bus), the visualization tool in Spotfire and the analytical product (Insightful); hooking these all together on a Teradata stack would make a lot of sense – especially since Teradata and TIBCO are both well established in the financial sector. To be fair to TIBCO, they seem to be moving in this direction, but it has been some time since I used the product.

Alteryx is interesting to me in that they have gone after SAS in a big way. I read their white paper and downloaded the free product. They keep harping on the fact that they are simpler to use than SAS, and the white paper is fierce in its criticism of SAS. I gave their tool a quick run-through and came away with two thoughts: 1) the interface, while it does not require coding or scripting as SAS does, cannot really be called simple; and 2) they are not trying to do the same things as SAS. SAS occupies a different space in the BI world than these tools have traditionally occupied. However,…

Do these tools begin to move into the SAS space by integrating with foundational data capabilities? The reason SAS is less easy to use than the products of these rapidly growing players is that those players have not tackled the really tough analytics problems in the big data space. The moment they start to tackle big data mining problems requiring complex and recursive analytics, will they start to look more like SAS? If you think I am picking on SAS, swap out SAS for the IBM Cognos, SPSS, Netezza, Streams, and BigInsights stack, and see how easy that is! Not to mention the price tag that comes with it.

What is certain is that these “new” players in the Statistical and BI spaces will do whatever they can to make advanced capabilities available to a broader audience than traditionally has been the case with SAS or SPSS (IBM). This will have the effect of making analytically enhanced insights more broadly available within organizations – that has to be a good thing.
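To make the “move the analytics to the data” idea concrete, here is a minimal sketch in Python. It uses the standard sqlite3 module purely as a stand-in for an analytic database, and the table and column names are made up; the point is simply the contrast between pulling every row back to the client and pushing the aggregation down to the database engine.

    import sqlite3

    # Stand-in for an analytic database; the table and column names are made up.
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE sales (region TEXT, amount REAL)")
    conn.executemany("INSERT INTO sales VALUES (?, ?)",
                     [("east", 100.0), ("east", 250.0), ("west", 75.0)])

    # Moving the data to the analytics: every row comes back to the client,
    # and the aggregation happens in application code.
    totals = {}
    for region, amount in conn.execute("SELECT region, amount FROM sales"):
        totals[region] = totals.get(region, 0.0) + amount

    # Moving the analytics to the data: the database does the aggregation,
    # and only the small summary result crosses the wire.
    pushed_down = dict(conn.execute(
        "SELECT region, SUM(amount) FROM sales GROUP BY region"))

    assert totals == pushed_down  # same answer, very different data movement

Nothing in the sketch is specific to Teradata or Aster – it is just the generic shape of the tradeoff these vendors are selling, scaled down to a toy example.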

Article Link and copy below

October 10, 2013

Libraries in Teradata Aster

I recently wrote (emphasis added):

My clients at Teradata Aster probably see things differently, but I don’t think their library of pre-built analytic packages has been a big success. The same goes for other analytic platform vendors who have done similar (generally lesser) things. I believe that this is because such limited libraries don’t do enough of what users want.

The bolded part has been, shall we say, confirmed. As Randy Lea tells it, Teradata Aster sales qualification includes the determination that at least one SQL-MR operator be relevant to the use case. (“Operator” seems to be the word now, rather than “function”.) Randy agreed that some users prefer hand-coding, but believes a large majority would like to push work to data analysts/business analysts who might have strong SQL skills, but be less adept at general mathematical programming.

This phrasing will all be less accurate after the release of Aster 6, which extends Aster’s capabilities beyond the trinity of SQL, the SQL-MR library, and Aster-supported hand-coding.

Randy also said:

  • A typical Teradata Aster production customer uses 8-12 of the prebuilt functions (but now they seem to be called operators).
  • nPath is used in almost every Aster account. (And by now nPath has morphed into a family of about 5 different things.)
  • The Aster collaborative filtering operator is used in almost every account.
  • Ditto a/the text operator.
  • Several business intelligence vendors are partnering for direct access to selected Teradata Aster operators — mentioned were Tableau, TIBCO Spotfire, and Alteryx.
  • I don’t know whether this is on the strength of a specific operator or not, but Aster is used to help with predictive parts failure applications in multiple industries.

And Randy seemed to agree when I put words in his mouth to the effect that the prebuilt operators save users months of development time.

Meanwhile, Teradata Aster has started a whole new library for relationship analytics.
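Back to my own commentary for a moment: nPath, which comes up repeatedly above, is essentially a path-matching operator over ordered event data. The sketch below is only a conceptual illustration in plain Python – it is not Aster’s SQL-MR syntax, and the clickstream events are invented – but it shows the kind of “did this user do A, then B, then C” question such an operator answers without hand-coded joins.

    from itertools import groupby

    # Hypothetical clickstream events: (user_id, timestamp, event_name).
    events = [
        ("u1", 1, "search"), ("u1", 2, "view"), ("u1", 3, "purchase"),
        ("u2", 1, "search"), ("u2", 2, "exit"),
    ]

    def users_matching_path(events, pattern):
        """Return the users whose time-ordered events contain `pattern` as a subsequence."""
        matched = set()
        for user, user_events in groupby(sorted(events), key=lambda e: e[0]):
            ordered = [name for _, _, name in sorted(user_events, key=lambda e: e[1])]
            it = iter(ordered)
            # Membership tests consume the iterator, so the pattern must appear in order.
            if all(step in it for step in pattern):
                matched.add(user)
        return matched

    print(users_matching_path(events, ["search", "view", "purchase"]))  # {'u1'}

The value of having this packaged as a database operator is that the sequencing logic runs next to the data, across all users at once, instead of being rewritten in application code for every analysis.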

Gartner Magic Quadrant for Operational Database Management Systems is out

11 Nov

http://www.gartner.com/technology/reprints.do?id=1-1M9YEHW&ct=131028&st=sb

I had a conversation with someone the other day, and we agreed that there is no “front end” for Hadoop/NoSQL-type data environments. This seems to be a big issue in terms of these systems moving front and center from an operational perspective. More to follow on this.

Analytics keeps moving closer to the data!

18 Oct

http://feedproxy.google.com/~r/dbms2/feed/~3/QOuK0EQFRzs/

Note the list of partners – all have a background in visualization and analyst-driven capabilities, not big data munging. Where does this leave the companies that are neither visualization nor database companies? Companies like SAS.

Competitive Intelligence – A Selective Resource Guide – Completely Updated – September 2013

24 Sep

A compendium of links helpful for web-based research, from LLRX.com.

http://www.llrx.com/features/ciguide.htm