Wednesday, November 13, 2013

Decision CAMP 2013 Gathered Decision Management Rules, Tools, Drools, and Schools (of Thought Leaders)

I had to check out Decision CAMP 2013 down at eBay / PayPal headquarters to see what has changed in the decision science discipline since I studied decision modeling in my MBA program way back in 2001.  I'm all about making good decisions and I usually make better decisions than most people.  Decision science practitioners and theoreticians came to share knowledge on how large enterprises optimize decisions.



Carole-Ann Matignon, the CEO of Sparkling Logic, informed us in her welcome address that what used to be known as "decision rules" and "expert systems" back in the 1990s are now part of the "decision management" (DM) discipline.  I began building a theory throughout this conference that knowledge management (KM) and DM should be linked in an enterprise.  Her talk on the links between Big Data and "big knowledge" clarified that DM should automate decisions that will improve profitability and decrease the costs and time dedicated to routine operations.  I get that DM practitioners should document the soft knowledge in domain experts' heads so it can be codified into business rules.  I still think the KM people who manage taxonomies for storing and retrieving said knowledge need to be involved in transforming that knowledge so users can make sense of the finished rules.  KM people don't always have the technical skills the DM people use to mine Big Data.  They have to work together.

Neil Raden from Hired Brains showed us how to compete on analytics in his keynote.  I was relieved to see that Neil broke down analysis into four types, only two of which require advanced degrees in math.  Types III and IV are the kinds of data assembly and filtering that the rest of us ordinary business types can execute.  The uber-quants in Types I and II will refine business rules based on input from the rest of us, provided those rules are focused on the business's key leverage points.  Read all about this typology in Neil's blog article on "Understanding Analytics Types and Needs."  Neil is excellent at breaking down the hard work of analytics into something that business domain experts can use, and his presentation on "Big Data Analytics:  The Art of the Data Scientist" is a must-read.  I agree with Neil that there's a huge opportunity for analytics in social benefit analysis, and DataKind has project examples the social capital community can use for inspiration.  Neil's request that managers think probabilistically won't go over well with most humans, who are wired to act first and rationalize afterwards, but that's what we have to do to strip away the layers of abstraction that separate data from reality and lead to faulty analysis.  Neil showed us Anscombe's quartet to demonstrate that applying the same decision rule to four different business cases just because they share the same summary statistics and regression line will cause real-world problems.  Neil also advised us to use A/B testing within an adaptive model (i.e., one that applies continuous improvement) to update models because they degrade over time.  His bottom line is that "type shifting" data work from quant scientists down the chain to relevant domain experts makes it more useful because business domain knowledge matters.  This requires mentoring within the organization so that the analytics typology does not become a lifetime caste assignment.  I'm pretty sure the KM people will be doing much of that mentoring.  Neil recommended studying Daniel Kahneman's Thinking, Fast and Slow along with the Abilene paradox to grok the human factors in DM.  I'll make a mental note to read his book with James Taylor, Smart Enough Systems.
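
My own minimal sketch of the Anscombe point, not anything Neil actually showed: fit an ordinary least-squares line to two of the quartet's published datasets and both come back as roughly y = 3 + 0.5x with the same R-squared, even though one relationship is basically linear and the other is clearly curved.  That's exactly why a decision rule keyed only to regression output can't tell the business cases apart.  I'm using plain Java here and throughout since the BRMS world lives on the JVM.

```java
// Minimal sketch: ordinary least-squares fit for two of Anscombe's published datasets.
// Both fits come out to roughly y = 3.00 + 0.50x with R^2 ~ 0.67, despite the
// underlying data looking nothing alike.
public class AnscombeDemo {
    static void fit(String label, double[] x, double[] y) {
        int n = x.length;
        double sx = 0, sy = 0, sxx = 0, sxy = 0, syy = 0;
        for (int i = 0; i < n; i++) {
            sx += x[i]; sy += y[i];
            sxx += x[i] * x[i]; sxy += x[i] * y[i]; syy += y[i] * y[i];
        }
        double slope = (n * sxy - sx * sy) / (n * sxx - sx * sx);
        double intercept = (sy - slope * sx) / n;
        double r = (n * sxy - sx * sy)
                / Math.sqrt((n * sxx - sx * sx) * (n * syy - sy * sy));
        System.out.printf("%s: y = %.2f + %.2fx, R^2 = %.2f%n", label, intercept, slope, r * r);
    }

    public static void main(String[] args) {
        double[] x  = {10, 8, 13, 9, 11, 14, 6, 4, 12, 7, 5};
        double[] y1 = {8.04, 6.95, 7.58, 8.81, 8.33, 9.96, 7.24, 4.26, 10.84, 4.82, 5.68};
        double[] y2 = {9.14, 8.14, 8.74, 8.77, 9.26, 8.10, 6.13, 3.10, 9.13, 7.26, 4.74};
        fit("Anscombe I ", x, y1);
        fit("Anscombe II", x, y2);
    }
}
```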

Speaking of James Taylor, he was up next on the agenda to discuss the DM journey.  I opted for this talk as opposed to the other track because I'm a domain dude, not a scientist.  I'm not ready for the double black diamond Olympic downhill ski run so I'm staying on the bunny slope.  James breaks down the DM process into discovery, services, and analytics.  Discovery means decomposing the KPIs that require decisions into identifiable points that can be mapped into matrices using scoring sheets.  I suspect that there are not many good decision templates in most verticals and that a market will emerge for flexible decision models, much as one emerged for data models.  The services stage of the DM journey introduces the business rule management system (BRMS) that works like an IT event processing architecture.  The BRMS "rules engine" archives a log of how rules execute, allowing transparency.  A lot of BRMS design starts with data mining that produces decision trees as output, and each tree branch becomes a rule.  The journey into analytics IMHO looks like the hardest part because it requires tolerance of probabilities.  That echoes Neil Raden's thoughts above.  I asked my first question of the conference about who owns the DM process, because I had a suspicion that the KM team would end up with ownership if it wasn't clearly defined.  James answered that KM owns the rules portion and IT's analytics team owns the processes.  He added that in the financial services sector, the risk management business group is often tasked to handle the processes.  I get the part about the analytics team assembling data mining tools, but I say the Chief Knowledge Officer (CKO) will have to construct and monitor the analytics workbench.  I also think the CKO will have to monitor whoever owns the enterprise's top-level SWOT analysis so that the BRMS isn't just blindly dumping a "big bucket of rules" and producing GIGO.  I agree with James that DM's value comes from being able to change its processes without changing its inputs, but I'll add that the BRMS processes must support the firm's strategic direction revealed by that SWOT process.  My question scored me a free copy of James' book Decision Management Systems.  Thanks James!
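
To make the "each tree branch becomes a rule" idea concrete, here's a hypothetical sketch of my own, not something from James' talk: two branches of a mined credit-decision tree rewritten as explicit if-then rules that a BRMS could log and audit.  The attribute names and cutoffs are invented.

```java
// Hypothetical sketch of "each tree branch becomes a rule": two branches of a
// mined decision tree rewritten as explicit, auditable if-then rules.
public class BranchToRule {
    record Applicant(int creditScore, double debtToIncome) {}

    // Branch 1: score >= 700 AND debt-to-income <= 0.35 -> approve
    // Branch 2: score <  600                            -> manual review
    static String decide(Applicant a) {
        if (a.creditScore() >= 700 && a.debtToIncome() <= 0.35) {
            return "APPROVE";        // rule fired: low-risk branch
        }
        if (a.creditScore() < 600) {
            return "MANUAL_REVIEW";  // rule fired: high-risk branch
        }
        return "STANDARD_PROCESS";   // default path when no branch rule fires
    }

    public static void main(String[] args) {
        System.out.println(decide(new Applicant(720, 0.30)));  // APPROVE
        System.out.println(decide(new Applicant(580, 0.20)));  // MANUAL_REVIEW
    }
}
```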

Tuesday's keynote on writing concise rules was mostly over my head.  I learned about Red Hat's Drools BRMS but I noticed that Red Hat is now encouraging migration from Drools to its more robust platform.  I guess Drools is the developer community's DIY solution.  I'll leave the parsing of refraction for rule firing to those even geekier than me.  Drools code makes it easy to write set-oriented rules that are more concise and easier to execute.  The biggest point I got out of this talk is that writing concise rules reduces the time interval between firings.  This implies that more efficient rules can screen more data.  I think that matters for DM programs that screen many Small Data feeds.
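
The talk's examples were in the Drools rule language, so the best I can offer in plain Java is an analogy of my own: a set-oriented check that reaches one conclusion over a whole collection versus an item-by-item check that fires once per transaction.  The transactions and the threshold are invented.

```java
import java.util.List;

// Analogy only (the talk was about Drools, not Java): a set-oriented condition
// evaluated once over a collection is shorter to write and cheaper to run than
// an item-by-item rule that fires for every transaction.
public class SetOrientedSketch {
    record Txn(String account, double amount) {}

    public static void main(String[] args) {
        List<Txn> feed = List.of(
                new Txn("A-100", 25.00),
                new Txn("A-100", 9_800.00),
                new Txn("B-200", 12_000.00));

        // Item-oriented style: one "firing" per transaction over the threshold.
        for (Txn t : feed) {
            if (t.amount() > 10_000) {
                System.out.println("Flag (per item): " + t.account());
            }
        }

        // Set-oriented style: one pass, one conclusion about the whole feed.
        long flagged = feed.stream().filter(t -> t.amount() > 10_000).count();
        System.out.println("Flagged in batch: " + flagged);
    }
}
```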

The CTO panel's discussion of technical issues made it clear to me that the KM and business domain people who define rules must know some basic coding, at least until providers create BRMS products that are comprehensible to non-statisticians and non-coders.  KM pros will have to learn rules languages like Drools just to make sure domain experts can communicate with IT's analytics teams.  Building rules gives granular insights into how data is collected and stored in data warehouses.  Analytics tasks and traffic are orders of magnitude larger than rules execution, making rules the tail that wags the dog in DM.  Decisions proliferate at the routine operational level and their effects add up in the aggregate.  Optimizing DM won't get them all correct the first time, so continuous process improvement is necessary.  I had never seen rule families displayed before but that's how rules engines group rules for execution, so the panel argued for structured tables where less-technical users could build rules in business language rather than code.  You know something, that's almost like pressing the "easy button."  It's too bad that building and running decision tables that deliver such user empowerment is still so hard.  I wonder whether big ERP providers like SAP and IBM have built such simple solutions.  I'll bet easy-to-use DM products will be disruptive in Big Data and their makers in the startup world will be good buyout targets.  Graphic DM products will fill ERP back-end gaps and manage Small Data streams from IoT deployments.  The panel mentioned that automated rules generation from data is a future possibility, but it can be dangerous without deep business knowledge.  Poor data quality makes poor rules with GIGO.  I do not share the panel's pessimism that the lack of generic rules templates stems from a lack of generic object databases, or that firms will refuse to share templates for fear of giving away proprietary advantages.  Facebook and Google have published tons of white papers on how they solve technical problems.  I think there's more disruptive opportunity available in building abstract logic templates for industry verticals.  This will probably work best initially on open source systems like Drools or Hadoop to prove the approach is viable.

KPI's talk on using The Decision Model (TDM) in BRMS was more bunny-slope material for non-coders like Yours Truly.  Here's my translation.  Domain people build business process models separately from the business logic that IT analysts build.  These process models produce complex flowcharts, and DM consolidates their many choice points into simpler flows that manage sequences.  We design DM to identify those choice points that route a business process into a completely automated channel.  The rule family is a two-dimensional table where several conditions lead to a conclusion.  Business logic sets the conditions within that table.  These rule families link to inferential relationships that lead to logic-driven decisions that are worth managing.  Each business unit tasks its domain subject matter experts to whiteboard skeletal decision models until the unit's business rules are built.  The models are finished when all data needed to fill them is available.  The IT analytics people will then populate the model with Small Data streams.  TDM is a descriptive way of governing this whole process by assigning responsibility, determining workflows, and seeking managerial approval with change documents.
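
Here's my own toy version of a rule family as a two-dimensional table, with each row holding conditions and the last column holding the conclusion.  The claim-routing conditions and conclusions are hypothetical, not taken from the KPI talk.

```java
import java.util.List;

// Toy rule family: each row is a set of conditions, the last column is the
// conclusion.  A lookup walks the table and returns the first matching row.
public class RuleFamilySketch {
    record Row(String riskTier, boolean priorClaims, String conclusion) {}

    static final List<Row> CLAIM_ROUTING = List.of(
            new Row("LOW",  false, "AUTO_APPROVE"),
            new Row("LOW",  true,  "DESK_REVIEW"),
            new Row("HIGH", false, "DESK_REVIEW"),
            new Row("HIGH", true,  "INVESTIGATE"));

    static String route(String riskTier, boolean priorClaims) {
        return CLAIM_ROUTING.stream()
                .filter(r -> r.riskTier().equals(riskTier) && r.priorClaims() == priorClaims)
                .map(Row::conclusion)
                .findFirst()
                .orElse("MANUAL_TRIAGE");  // fall-through when no row matches
    }

    public static void main(String[] args) {
        System.out.println(route("LOW", false));  // AUTO_APPROVE
        System.out.println(route("HIGH", true));  // INVESTIGATE
    }
}
```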

OpenRules presented on building smarter decision models.  The philosophy behind rules-based optimization reminds me of the PERT/CPM coursework I completed many years ago as a US Army second lieutenant.  I didn't quite get the stuff about constraint satisfaction problems or the JSR 331 standard, but people more skilled than me use them to solve cost functions in resource allocation problems.  If only I knew some Java, I could use these tools to calculate simple data points from automatic feeds that I could display live on my Alfidi Capital site.  OpenRules' Rule Solver has an Excel component that may be more suitable for my needs.
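
I won't pretend to show JSR 331 itself, but here's a brute-force stand-in I sketched for the kind of resource allocation cost function a real constraint solver handles at scale: assign two crews to two sites so the total cost is minimized.  The crews, sites, and costs are all made up.

```java
// Brute-force stand-in for a constraint solver: enumerate the one-to-one
// assignments of crews to sites and keep the plan with the lowest total cost.
public class AllocationSketch {
    public static void main(String[] args) {
        String[] crews = {"crew-A", "crew-B"};
        String[] sites = {"site-1", "site-2"};
        // cost[c][s] = cost of crew c working site s (invented numbers)
        double[][] cost = {{4.0, 7.0}, {6.0, 3.0}};

        // plans[p][s] = index of the crew assigned to site s under plan p
        int[][] plans = {{0, 1}, {1, 0}};
        double best = Double.MAX_VALUE;
        String bestPlan = "";
        for (int[] plan : plans) {
            double total = cost[plan[0]][0] + cost[plan[1]][1];
            if (total < best) {
                best = total;
                bestPlan = crews[plan[0]] + "->" + sites[0] + ", "
                         + crews[plan[1]] + "->" + sites[1];
            }
        }
        System.out.println("Cheapest plan: " + bestPlan + " at cost " + best);
    }
}
```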

Sparkling Logic's adaptive decision management talk revealed that decision goals are sometimes contradictory.  The tradeoff between constraints and speed requires DM to either predict and optimize or learn and adapt.  The version of A/B testing known as "champion/challenger" testing determines whether an existing model should adapt to revised conditions.  Well-designed challengers move the champion towards its optimal value.  IMHO Big Data will generate huge amalgamated streams.  Adding an adaptive DM layer to all algorithms underlying decision trees and other products will enable the algorithms to adapt in real time to changes in the underlying data.  This moves the DM paradigm closer to the automated rules generation that one of the earlier panels felt was unfeasible.  I also think that identifying errors from adaptive learning shows how analysis can get out of tolerance and produce sub-optimal solutions.  Correcting mistakes keeps business processes on track to an optimal state.
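
Here's a rough sketch of my own of champion/challenger testing, not Sparkling Logic's implementation: route a small share of decisions to a challenger rule, track a success metric for both, and promote the challenger only if it beats the champion.  The rules, traffic split, and simulated outcomes are all invented.

```java
import java.util.Random;

// Champion/challenger sketch: 10% of traffic goes to the challenger rule,
// both rules are scored against simulated outcomes, and the winner is kept.
public class ChampionChallenger {
    interface Rule { boolean approve(double score); }

    public static void main(String[] args) {
        Rule champion   = score -> score > 0.60;  // current production cutoff
        Rule challenger = score -> score > 0.55;  // candidate tweak under test

        Random rng = new Random(42);
        int champOk = 0, champN = 0, challOk = 0, challN = 0;
        for (int i = 0; i < 10_000; i++) {
            double score = rng.nextDouble();
            boolean goodOutcome = rng.nextDouble() < score;  // stand-in for reality
            if (rng.nextDouble() < 0.10) {                   // 10% challenger traffic
                challN++;
                if (challenger.approve(score) == goodOutcome) challOk++;
            } else {
                champN++;
                if (champion.approve(score) == goodOutcome) champOk++;
            }
        }
        double champRate = (double) champOk / champN;
        double challRate = (double) challOk / challN;
        System.out.printf("Champion %.3f vs challenger %.3f -> %s%n", champRate, challRate,
                challRate > champRate ? "promote challenger" : "keep champion");
    }
}
```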

I missed the final Tuesday panels due to a prior commitment, but I returned Wednesday for the financial services presentation track.  The brave new world in finance for DM started when the GSEs issued multiple changes to their automated underwriting systems and guidelines in recent years.  Using DM methodologies can help alleviate bottlenecks in mortgage underwriting and enforce quality control standards the GSEs will accept.  When I heard someone advocate a rules center of excellence in an organization to break silos, I immediately thought of the CKO's role.  KM reps will have to play a role by reviewing rule documentation standards so that rules are speedily adopted into engines.

PayPal had something to say about detecting and stopping fraud.  Different risk sources (credit cards, account hijacking) imply that different rule families are needed for each logic engine that screens each Small Data stream.  PayPal promoted its rule writers from its business domains and gave them technical training, which proves my earlier point that domain experts must have some technical skills like coding to serve on cross-functional teams.  IMHO no matter where you work, you must apply ROI to DM rules.  Comparing the money spent on DM to the revenue loss it is likely to avoid determines whether it's worth the effort to catch fraud in an uncontrolled domain.
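
Here's the back-of-the-envelope version of that ROI test, with every number invented purely for illustration: compare what a fraud-screening DM effort costs against the losses it is likely to prevent.

```java
// Illustrative ROI arithmetic for a fraud-screening DM effort; all figures invented.
public class FraudRoiSketch {
    public static void main(String[] args) {
        double annualDmCost = 500_000;             // licenses, analysts, rule upkeep
        double expectedFraudExposure = 4_000_000;  // likely losses with no screening
        double expectedCatchRate = 0.30;           // share of exposure the rules stop

        double lossAvoided = expectedFraudExposure * expectedCatchRate;  // 1,200,000
        double roi = (lossAvoided - annualDmCost) / annualDmCost;        // 1.4 = 140%
        System.out.printf("Loss avoided: $%,.0f, ROI: %.0f%%%n", lossAvoided, roi * 100);
    }
}
```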

The talk about insurance claim fraud once again proved that companies traditionally deploy a BRMS solution separately from an analytics solution.  The new trend is that DM increasingly consolidates rules and analytics into linked suites of solutions.  I infer that this is a prerequisite for integrated DM suites to plug into the next generation of ERP systems.  IMHO there must be disruptive gaps where DM solutions link to ERP solutions.  If Salesforce and other SaaS providers don't offer DM solutions, entrepreneurs can create tech that closes these gaps and solves those pain points.

The scheduled Fannie Mae presentation on rule management was cancelled, which is fitting because Fannie Mae and all other GSEs deserve to be cancelled out of existence as punishment for their roles in blowing up the housing sector.  I wandered into the health care track's presentation on complex event processing (CEP).  I discovered that Drools Guvnor is considered to be a KM system.  Local groups' rule writers can upload their own knowledge definitions.  This kind of knowledge engineering, transferring human understanding into AI systems, is the gold standard for the KM-DM interface.  IT pros need tools like Apache Maven and Git to track project workflows as the AIs are built.  I asked whether Drools Guvnor could interface with SharePoint, the only KM tool family I've used.  The answer I got was that there's no direct connection, but users can build web-based interfaces.  I sure would like to see the two linked and used in tandem.

I returned to the finance track to hear NASDAQ OMX talk about how they use BRMS to obtain a competitive edge among stock exchanges.  Their customer segments have different trading needs, and the exchange used multiple steps to determine fee changes before implementing BRMS.  It's good that they compared the ROIs of the old and new ways of doing things, but I think a more appropriate apples-to-apples comparison would measure the old ROI from the old data and old rules together.  Historical data does matter, but the new rules generate their new ROI from new data, not old data.  New data/rules versus old data/rules is how I would have framed the comparison, but I don't run an exchange.  The NASDAQ guys think rules can identify their most profitable clients and potential growth segments because new pricing strategies drive frequent rules changes.  This talk made me reflect on my fixation with SharePoint and why I still think it's relevant.  Domain rule builders can post their updated rules tables to SharePoint.  That's how DevOps people can easily pull them and re-map data flows into the rules.  See folks, it really does work when you plug KM tools into the DM development process.

The finance track ended with a panel of all of the track's speakers.  Compliance really matters because the exchanges must maintain archives of their rules and metadata on the rules' authorship histories.  Moving data storage to the cloud saves money by allowing the IT department to decommission old data warehouses.  They shared good stories, but I came away thinking about how a financial services firm would use DM to innovate.  I gather that DM enables reactions to competitors' moves by driving rules changes.  My concern is how external scanning translates into a strategy pivot and business unit directives.  Who in the enterprise is charged with using SWOT as a scanning matrix?  Who updates the balanced scorecard?  These are integral to ensuring that DM reacts appropriately to environmental changes.  Where should the DM center of excellence (COE) reside?  Under the COO or CKO?  It's mainly an optimizer for internal operations but it needs KM input and governance.  In financial services, the DM COE probably belongs in the Risk/Compliance business group.  Is the CKO important enough to be on equal footing in the C-suite with the COO and CIO, or should the CKO be subordinate to the COO?  What are the professional associations for DM practitioners?  Decision CAMP may be the first purely DM body.  DM combines operations research, KM, quality assurance, and IT functions, so it may warrant its own professional home somewhere.  Check out the BPM Institute, the Decision Analysis Society of INFORMS, the Decision Sciences Institute, the Business Rules Group, and FICO's Decision Management Community for the foundation knowledge of this discipline.

The final speaker discussed technical work in Grailog visualization that was way beyond my understanding.  The bottom line is that Grailog combines semantic expressions and social expressions.  In Web 2.0 usage it connects people to data and creates contexts for both of those sets.  I couldn't begin to explain graph inscribed logic if I tried so I let my mind wander back to business topics during the presentation.  Someone should write the DM version of Cloudonomics so the KM and IT communities can see basic ROI calculations for setting up rule families and templates.  I let my mind wander some more and started thinking about all the hot chicks who will be attracted to DM careers once they find out that Yours Truly, the epitome of manliness, can speak articulately about the topic.  After all, gorgeous women just can't help themselves around me.

This was a pretty mind-bending conference.  I got exposure to subjects that have really evolved since I first calculated posterior probabilities for a decision tree and worked out utility functions over a decade ago.  I may not need to learn to code after all if these DM tools keep evolving to the point where tables and charts allow domain inputs in plain language.  Decision CAMP kept me well-informed for three whole days and well-fed most of that time with free food.  I'm definitely adding the DM sector to the bodies of knowledge I actively track.